  • Right – all privacy-positive practices worth adopting, but none of them help against fingerprinting. In fact, some can make you more susceptible to it, because they make you more unique (like running a custom OS). Fingerprinting is all about your browser: what it sends with each HTTP request, and how it answers queries about your device and browser specs (via JavaScript). Your OS, system architecture, hardware details, browser type, plugins, and so on combine into a highly unique profile tied to your device. It’s especially nefarious because all those bits are cross-referenced across accounts and devices to build a global profile of you. Even if you’ve never used Facebook, you probably have a shadow profile. And if you’ve ever logged into the same service or website account on your de-Googled GrapheneOS device and on another machine that Google services do track, your new device is likely already tied to your identity. (A rough sketch of the signals involved is below.)

    Try this with different browsers – it tests the uniqueness of your device.
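
    To make that concrete, here’s a minimal sketch, in TypeScript against standard browser APIs, of the kind of attributes a page script can read and fold into an identifier. Real trackers gather far more (canvas rendering quirks, WebGL details, installed fonts, plugin lists); the attribute set here is illustrative only, not any particular tracker’s code.

    ```typescript
    // Minimal illustration: combine a few freely readable browser/device
    // attributes and hash them into a compact identifier.
    async function roughFingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,           // browser, version, and OS
        navigator.platform,            // architecture hint
        navigator.language,
        navigator.hardwareConcurrency, // CPU core count
        screen.width, screen.height, screen.colorDepth,
        new Date().getTimezoneOffset(),
      ].join("|");

      // Hash so the combined signals become one opaque token.
      const bytes = new TextEncoder().encode(signals);
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    roughFingerprint().then((id) => console.log("fingerprint:", id));
    ```

    The point: nothing here requires a login or a cookie – every value is handed over just by loading the page.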

  • Thanks for the response! It sounds like you had access to a higher-quality system than the worst, to be sure. Based on your comments, I feel you’re projecting your confidence in that system onto the broader topic of facial recognition in general: you’re looking at a good example, and people here are (perhaps cynically) pointing at the worst ones. Can you offer any perspective from your career experience that might bridge the gap? Why shouldn’t we treat all facial recognition implementations as unacceptable if only the best – and presumably most expensive – ones are acceptable?

    A rhetorical question, aside from that: is determining someone’s identity an application where anything short of an unachievable 100% success rate is acceptable?


  • Can you please start linking studies? I think that might actually turn the conversation in your favor. I found a NIST study (pdf link); on page 32, in the discussion portion of section 4.2, “False match rates under demographic pairing,” it says:

    The results above show that false match rates for imposter pairings in likely real-world scenarios are much higher than those measured when imposters are paired with zero effort.

    This seems to say that the false match rate climbs as the subjects become more demographically similar; the highest error rate on the heat map below that is roughly 0.02 – about one false match per fifty imposter comparisons (see the sketch at the end of this comment for what that means at scale).

    Something else no one here has talked about yet: nobody is actively trying to get identified as someone else by facial recognition algorithms so far. This study was done on public mugshots – no effort to fool the algorithm – and the error rates between similar demographics are still atrocious.

    And my opinion: entities using facial recognition are going to choose the lowest bidder for their system unless there’s a higher security need than, say, a grocery store’s. So we have to judge the technology by its weakest-performing algorithms.
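
    As promised above, a quick back-of-envelope sketch of what a 0.02 false match rate means at scale. It assumes every comparison is independent, which is a simplification (thresholds are tunable and demographics correlate), so read it as illustrative arithmetic, not a model of any real deployment.

    ```typescript
    // How a per-comparison false match rate (FMR) compounds when one probe
    // face is searched against a gallery of enrolled identities.
    // Assumes independent comparisons -- a simplification for illustration.
    function pAtLeastOneFalseMatch(fmr: number, gallerySize: number): number {
      return 1 - Math.pow(1 - fmr, gallerySize);
    }

    const fmr = 0.02; // worst demographic pairing on the NIST heat map

    for (const gallery of [10, 50, 1000]) {
      const p = pAtLeastOneFalseMatch(fmr, gallery);
      console.log(`gallery of ${gallery}: P(false match) ≈ ${p.toFixed(3)}`);
    }
    // gallery of 10:   ≈ 0.183
    // gallery of 50:   ≈ 0.636
    // gallery of 1000: ≈ 1.000
    ```

    At that rate, a watchlist of only fifty people already makes a false match more likely than not.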