Incompetent half-assing is rarely this morally righteous an act, either, since your one act of barely-competent-enough incompetence is transmuted into endless incompetence by becoming training data / QC feedback.

  • LambdaRX@sh.itjust.works · 16 hours ago

    I didn’t ask to solve captchas. If someone wants accurate data, they’d better hire someone to train their AI.

      • supersquirrel@sopuli.xyz (OP) · edited · 7 hours ago

        Also, expect your AI to be engaged in some heady and deep forms of self-hatred that are going to take decades to unravel.

        Sad angry people in, sad angry robots out.

        • TranquilTurbulence@lemmy.zip · 5 hours ago

          If you use internet discussions as training data, you can expect to find all sorts of crazy biases. Completely unfiltered data should produce a chatbot that exaggerates many human traits while completely burying others.

          For example, on Reddit and Lemmy, you’ll find lots of clever puns. On Mastodon, you’ll find all sorts of LGBT advocates or otherwise queer people. On Xitter, you’ll find all the racists and white supremacists. There are also old-school forums that amplify things even further.