A feature Google demoed at its I/O confab yesterday, which uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, has sent a collective shiver down the spines of privacy and security experts. They warn that the feature is the thin end of the wedge: once client-side scanning is baked into mobile infrastructure, it could usher in an era of centralized censorship.

Apple abandoned a plan to deploy client-side scanning for CSAM (child sexual abuse material) in 2021 after a huge privacy backlash. However, policymakers have continued to heap pressure on the tech industry to find ways to detect illegal activity taking place on its platforms. Any industry move to build out on-device scanning infrastructure could therefore pave the way for all sorts of content scanning by default, whether government-led or tied to a particular commercial agenda.

Meredith Whittaker, president of the U.S.-based encrypted messaging app Signal, warned: “This is incredibly dangerous. It lays the path for centralized, device-level client side scanning.

“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”

  • chunkystyles@sopuli.xyz · 8 months ago

    I’m not advocating for this, but I could see it effectively ending phone scams that often prey on the elderly.

    • hasnt_seen_goonies@lemmy.world · 8 months ago

      Yeah, I don’t want this for my phone right now, but I do think it would help me sleep easier given that my grandfather has already fallen for multiple scams.