Zuck’s ‘Eye of Sauron’ Smart Glasses Are Reportedly Streaming Naked Neighbors & Bank Records to Kenya

Posted For: Egyptian My Ass

A growing number of people now wear Ray-Ban Meta smart glasses in everyday situations: on commutes, at work, or around the neighborhood. At first glance they look like ordinary sunglasses, but a recent investigation by two Swedish newspapers raises serious questions about what may be happening behind the scenes when the glasses are used.

The investigation, published February 27 by Svenska Dagbladet and Göteborgs-Posten, found that when users activate the glasses’ built-in AI assistant by saying “Hey Meta,” the images and video the device captures may be sent to human reviewers at Sama, a data-annotation company in Nairobi, Kenya. The review process exists to label and categorize footage so Meta can train and improve its artificial intelligence systems. While human review of AI training data is common across the tech industry, workers interviewed for the investigation say the material they are asked to review often contains extremely private moments.

Several contractors, who spoke anonymously because they feared losing their jobs, described reviewing video that appeared to have been captured unintentionally. According to the report, workers said they had seen people in bathrooms, individuals getting dressed, and even explicit sexual activity. One reviewer described watching a video where someone set the glasses down in a bedroom before leaving, only for another person to enter and change clothes without realizing they were being recorded. Others said they encountered footage of people watching pornography while wearing the glasses.

Workers also reported reviewing transcripts of conversations between users and the AI assistant that sometimes included personal confessions or explicit comments. Many of the contractors believe the people appearing in the recordings likely had no idea the footage was being captured or reviewed. One worker told the Swedish outlets that if users understood what was happening with the data, they probably would not have recorded it.

The contractors also described a work environment in which questioning the material they reviewed could put their jobs at risk. According to the report, employees were expected to process the data without raising ethical objections. Many of these workers are young college graduates in Nairobi who depend on the income and fear losing their jobs if they speak up.

Meta’s user agreements technically disclose that interactions with its AI systems may be reviewed by humans. The company’s terms say that Meta may examine conversations and other content involving its AI tools and that the review may be automated or conducted manually by people. The same policy warns users not to share sensitive information with the system. Critics say the warning does little to address situations where sensitive details appear in the background of recordings made without the knowledge of others nearby.

Tests conducted by Swedish reporters also showed that the glasses rely on an internet connection to function. When the connection was disabled, the AI features stopped working entirely. Any request made to the assistant — such as asking it to identify an object or read a sign — results in an image being captured and sent to Meta’s servers for analysis. There is no option to run those AI features locally on the device.

Meta took two months to respond to the journalists’ questions, and when it finally did, the company pointed reporters toward its privacy policies rather than addressing the specific concerns raised by the investigation. A spokesperson later told Business Standard that the company filters data to protect people’s privacy and takes the protection of user information seriously. The accounts from the workers reviewing the footage, however, suggest that sensitive material still makes its way into the datasets.

The company reviewing the data, Sama, is headquartered in California and operates offices in Nairobi. It has previously faced criticism over the nature of the work assigned to its contractors. In 2021, while working with OpenAI, the company handled large amounts of disturbing content, including text describing abuse and violence. Reports at the time said workers were paid roughly $1.32 to $2 per hour, and some employees said the job exposed them to emotionally difficult material.

Further controversy followed when workers alleged poor working conditions and attempts to prevent union organizing. Sama later ended its content moderation contract with Meta in 2023 and moved toward computer-vision data labeling — the same type of work now tied to the smart glasses. The company has defended its practices, saying it provides meaningful employment opportunities in lower-income communities.

Concerns about the glasses are not entirely hypothetical. In October 2025, the University of San Francisco issued a safety notice after several women reported being approached by a man wearing the devices. Officials believed he was recording the interactions and posting them to a social media account. The university said it could not identify everyone who might have appeared in the videos.

Privacy advocates have long pointed out that the only signal the glasses give to bystanders is a small white recording light. In bright daylight it can be difficult to notice, and it can potentially be covered. Some people have even started using a smartphone app designed to detect the Bluetooth signals emitted by the glasses so they know when one might be recording nearby.
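
For readers curious how such detection apps work, here is a minimal sketch in Python using the open-source bleak library. It simply scans nearby Bluetooth Low Energy advertisements and flags devices whose advertised names resemble smart glasses. The name fragments below are hypothetical placeholders, not confirmed identifiers; actual advertised names vary by model and firmware, and real detection tools likely rely on more robust fingerprints such as manufacturer-specific advertisement data.

    import asyncio
    from bleak import BleakScanner  # pip install bleak

    # Hypothetical name fragments; real advertised names differ by device and firmware.
    GLASSES_NAME_HINTS = ("ray-ban", "meta")

    async def scan_for_glasses(timeout: float = 10.0) -> None:
        # Collect nearby BLE advertisements for `timeout` seconds.
        devices = await BleakScanner.discover(timeout=timeout)
        for device in devices:
            name = (device.name or "").lower()
            # Flag any device whose advertised name resembles smart glasses.
            if any(hint in name for hint in GLASSES_NAME_HINTS):
                print(f"Possible smart glasses nearby: {device.name} ({device.address})")

    if __name__ == "__main__":
        asyncio.run(scan_for_glasses())

A scan like this can only suggest that a pair of glasses is nearby and powered on; it cannot tell whether the device is actively recording.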

Another issue raising alarms involves Meta’s plans to add facial recognition capabilities to the glasses. Reports indicate the company is developing a feature internally referred to as “Name Tag,” which would allow users to identify people they see through the device. Internal documents obtained by The New York Times suggest Meta timed the rollout for what it described as a “dynamic political environment,” meaning a period when privacy advocates may be focused on other issues.

The company previously shut down Facebook’s facial recognition system in 2021, citing public concerns. That technology had already drawn years of regulatory scrutiny, including a $5 billion fine from the Federal Trade Commission and biometric privacy settlements exceeding $2 billion in Illinois and Texas.

Legal protections surrounding the glasses vary by location. In California, the California Consumer Privacy Act gives residents certain rights over their personal data and requires companies to disclose how information is collected and used. In February 2026, the Electronic Privacy Information Center asked the California Privacy Protection Agency to investigate Meta’s glasses under the state’s biometric privacy rules.

However, enforcement can be inconsistent, and some critics argue the disclosures in lengthy user agreements may not provide meaningful notice to consumers. European privacy experts have also warned that once personal data becomes part of a training dataset for artificial intelligence systems, individuals may effectively lose control over how that information is used.

Civil liberties groups are particularly concerned about the potential for facial recognition in a device designed to be worn in public spaces. The ACLU has warned that the technology could threaten the everyday anonymity people expect when moving through public places. Reports that a U.S. Customs and Border Protection agent was photographed wearing the glasses during an immigration raid last year have added to those concerns.

The broader issue highlighted by the Swedish investigation is the hidden labor behind modern AI systems. While companies often describe their products as being powered by advanced algorithms, the technology frequently relies on large numbers of human workers reviewing and labeling data. These contractors — often located in countries such as Kenya, Colombia, and India — are responsible for preparing the information that trains AI models.

For many consumers, the idea of artificial intelligence conjures images of automated systems running on distant servers. The reality, as the investigation suggests, may involve human reviewers watching footage captured by devices that people wear every day, sometimes revealing moments that were never meant to be shared.

Original Source
