• rottingleaf@lemmy.world · 19 hours ago

    The moral aspect is also resolved if you approach building human systems correctly.

    There is a person or an organization making a decision. They may use an “AI”, Tarot cards, or the applicant’s f*ckability as judged from photos. But they are somehow responsible for that decision, and it is judged afterwards by technical, non-subjective criteria.

    That’s how these things are done properly. If a human system is not designed correctly, it really doesn’t matter which particular technology or social situation exposes that.

    But I might have too high expectations of humanity.

    • hsdkfr734r@feddit.nl · 7 hours ago

      Accountability of a human decision-maker is the way to go. Agreed.

      I see the danger when the accountable person’s job demands high throughput, which forces fast decision-making, and the tool (an LLM) offers fast and easy decisions. What is that person going to do if they just see cases, not people and fates?

      • rottingleaf@lemmy.world · 7 hours ago

        If the consequence for a mistake follows regardless, then it doesn’t matter.

        Or, if you mean the person checking others: you can add several levels of review, with checkers interested in different outcomes, as in criminal justice (the way it’s supposed to work, anyway).