The Technology Facebook and Google Didn’t Dare Release

One afternoon in early 2017, at Facebook’s headquarters in Menlo Park, Calif., an engineer named Tommer Leyvand sat in a conference room with a smartphone standing on the brim of his baseball cap. Rubber bands helped anchor it in place with the camera facing out. The absurd hat-phone, a particularly uncool version of the future, contained a secret tool known only to a small group of employees. What it could do was remarkable.

The handful of men in the room were laughing and speaking over one another in excitement, as captured in a video taken that day, until one of them asked for quiet. The room went silent; the demo was underway.

Mr. Leyvand turned toward a man across the table from him. The smartphone’s camera lens — round, black, unblinking — hovered above Mr. Leyvand’s forehead like a Cyclops eye as it took in the face before it. Two seconds later, a robotic female voice declared, “Zach Howard.”

“That’s me,” confirmed Mr. Howard, a mechanical engineer.

An employee who saw the tech demonstration thought it was supposed to be a joke. But when the phone started correctly calling out names, he found it creepy, like something out of a dystopian movie.

The person-identifying hat-phone would be a godsend for someone with vision problems or face blindness, but it was risky. Facebook’s previous deployment of facial recognition technology, to help people tag friends in photos, had caused an outcry from privacy advocates and led to a class-action lawsuit in Illinois in 2015 that ultimately cost the company $650 million.

With technology like that on Mr. Leyvand’s head, Facebook could prevent users from ever forgetting a colleague’s name, give a reminder at a cocktail party that an acquaintance had kids to ask about or help find someone at a crowded conference. However, six years later, the company now known as Meta has not released a version of that product, and Mr. Leyvand has departed for Apple to work on its Vision Pro augmented reality headset.

In recent years, the start-ups Clearview AI and PimEyes have pushed the boundaries of what the public thought was possible by releasing face search engines paired with millions of photos from the public web (PimEyes) or even billions (Clearview). With these tools, available to the police in the case of Clearview AI and the public at large in the case of PimEyes, a snapshot of someone can be used to find other online photos where that face appears, potentially revealing a name, social media profiles or information a person would never want to be linked to publicly, such as risqué photos.

What these start-ups had done wasn’t a technological breakthrough; it was an ethical one. Tech giants had developed the ability to recognize unknown people’s faces years earlier, but had chosen to hold the technology back, deciding that the most extreme version — putting a name to a stranger’s face — was too dangerous to make widely available.

In the last few years, though, the gates have been trampled by smaller, more aggressive companies, such as Clearview AI and PimEyes. What allowed the shift was the open-source nature of neural network technology, which now underpins most artificial intelligence software.

Understanding the path of facial recognition technology will help us navigate what is to come with other advancements in A.I., such as image- and text-generation tools. The power to decide what such tools can and can’t do will increasingly rest with anyone with a bit of tech savvy, who may pay no heed to what the general public considers acceptable.

How did we get to this point where someone can spot a “hot dad” on a Manhattan sidewalk and then use PimEyes to try to find out who he is and where he works? The short answer is a combination of free code shared online, a vast array of public photos, academic papers explaining how to put it all together and a cavalier attitude toward laws governing privacy.

The Clearview AI co-founder Hoan Ton-That, who led his company’s technological development, had no special background in biometrics. Before Clearview AI, he made Facebook quizzes, iPhone games and silly apps, such as “Trump Hair” to make a person in a photo appear to be coifed like the former president.

In his quest to create a groundbreaking and more lucrative app, Mr. Ton-That turned to free online resources, such as OpenFace — a “face recognition library” created by a group at Carnegie Mellon University. The code library was available on GitHub, with a warning: “Please use responsibly!”
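Libraries like OpenFace work by converting each detected face into a numeric embedding vector, so that two photos of the same person produce vectors that sit close together; identifying a stranger then reduces to a nearest-neighbor search against a gallery of known faces. Below is a minimal, hypothetical sketch of that matching step. The four-dimensional embeddings and the threshold value are made up for illustration; real systems use embeddings of 128 or more dimensions produced by a neural network.

```python
import numpy as np

def euclidean_distance(a, b):
    # Distance between two face embeddings; smaller means more alike.
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def identify(probe, gallery, threshold=0.6):
    """Return the name of the closest gallery face, or None if no
    embedding falls within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        d = euclidean_distance(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy gallery of enrolled faces (embeddings are invented stand-ins).
gallery = {
    "Zach Howard": [0.1, 0.9, 0.3, 0.5],
    "Someone Else": [0.8, 0.1, 0.7, 0.2],
}

# Embedding computed from a new photo of the same person.
probe = [0.12, 0.88, 0.31, 0.49]
print(identify(probe, gallery))  # prints "Zach Howard"
```

The threshold is what separates "same person" from "no match": a probe face far from every enrolled embedding returns nothing, which is why such systems only name people whose photos already exist somewhere in the gallery.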

And then the square filled with photos of him, a caption beneath each one. I scrolled through them using the touch pad. I tapped to select one that read “Clearview CEO, Hoan Ton-That”; it included a link that showed me that it had come from Clearview’s website.

I looked at his spokeswoman, searched her face, and 49 photos came up, including one with a client that she asked me not to mention. This casually revealed just how intrusive a search of someone’s face can be, even for a person whose job is to get the world to embrace this technology.

I wanted to take the glasses outside to see how they worked on people I didn’t actually know, but Mr. Ton-That said we couldn’t, both because the glasses required a Wi-Fi connection and because someone might recognize him and realize immediately what the glasses were and what they could do.

It didn’t frighten me, though I knew it should. It was clear that people who own a tool like this will inevitably have power over those who don’t. But there was a certain thrill in seeing it work, like a magic trick successfully performed.

