Ray-Ban Meta and the Bystander: Consent in the Age of Wearable AI

There is a specific moment, the first time you slip on a pair of AI smart glasses, when the world acquires a faint second skin. The lenses look ordinary. The frames are heavier than the acetate you are used to, but not by much. A small LED on the rim glows for a second and then settles into something almost imperceptible. You catch your reflection in a shop window and you look, more or less, like yourself. And yet the air around your face has changed. Somewhere between the bridge of your nose and the inside of your temples, a pair of cameras, a cluster of microphones, an inertial measurement unit, a set of open-ear speakers and a small language model are quietly waking up and beginning to take in the afternoon.
You are wearing the glasses. The glasses are wearing you back.
That sentence is the whole argument of this piece, and if you already believe it to be obviously true, you can stop reading and go outside. But the question it raises is not actually obvious, and it is not solved by cynicism. When you put on a pair of Ray-Ban Meta glasses, or the rumoured successors from Google, Samsung, Apple, Amazon, Snap, ByteDance and the long tail of Shenzhen white-label manufacturers racing to ship before the 2026 Christmas window, who exactly is the customer of the transaction? Are you the user of a personal computing device you have paid for, whose sensors serve your interests and whose outputs belong to you? Or are you the product: a walking data-collection node, monetised through advertising, training corpora and the slow accumulation of an intimate behavioural dossier that no earlier generation of hardware has ever been able to gather?
The honest answer is that you are both, in proportions that shift minute by minute, and the proportions are not set by you.
The Second Coming of the Face Computer
It is worth remembering, before anything else, that the face computer has been tried before and has failed publicly enough to leave scars. Google Glass launched its Explorer programme in 2013, with a price tag of fifteen hundred dollars and a reputation that collapsed inside eighteen months. The word Glasshole entered common use. Bars in San Francisco banned the device; one woman had hers ripped off her head in one of them. By early 2015 Google had quietly shelved the consumer version and retreated into the enterprise market, where workers on assembly lines wore the devices under management mandate and the question of social consent did not arise.
The lesson the industry took from the Glass debacle was not, as many hoped, that cameras on faces in public were intrinsically creepy. The lesson was that the camera must not be visible. It must look like glass. It must look, in particular, like the kind of glass people have been wearing on their faces for seven hundred years without any of the recording apparatus that sits behind the lens.
That is why the Ray-Ban Meta collaboration, launched in its first generation in 2021 under the Ray-Ban Stories brand and relaunched with materially better hardware in 2023, has succeeded where Glass failed. The frames are designed by Luxottica, the Italian eyewear conglomerate that also owns Oakley, Persol and a large slice of the global spectacles market through EssilorLuxottica. They look like Wayfarers because they are Wayfarers. The cameras are tucked inside the hinge. The microphones are invisible. The only external signal that the device is active is a small LED on the front rim, a concession Meta made after privacy regulators in Ireland and Italy pressed the company in 2021 to provide some mechanism by which the people around a wearer might notice they were being filmed.
The LED is, depending on whom you ask, either a meaningful safeguard or a fig leaf. It is small. In bright sunlight it is close to invisible. In a crowded bar at night it is easy to miss. And the firmware that drives it has, in past generations of the product, been modifiable by sufficiently determined users. When the second-generation Ray-Ban Meta launched in late 2023 with integrated multimodal AI, the LED stayed. The camera resolution improved. The on-device compute expanded. The cloud pipeline that carries the audio and images back to Meta's servers for processing thickened considerably. And the question of who owns the resulting data moved from a footnote in the privacy policy into the centre of the product itself.
The Four Data Streams You Are Now Emitting
To understand the user-or-product question clearly, you need a concrete picture of what a modern pair of AI smart glasses actually captures. Generic arguments about privacy collapse into vagueness very quickly. The specifics do not.
A contemporary pair of AI glasses, using the Ray-Ban Meta as the reference design because it is the only mass-market product of its kind currently on sale in most jurisdictions, emits four distinct streams of data. The first is visual. The forward-facing camera captures stills and video at the wearer's command, and in the multimodal AI mode it captures frames continuously in short bursts whenever the wearer triggers the assistant with a spoken wake word or a tap on the temple. The images are transmitted to Meta's servers for processing by the company's Llama family of models. The second stream is audio. The array of microphones captures not only the wearer's voice but the ambient acoustic environment, which means the voices of anyone within several metres of the wearer's head. When the assistant is active, this audio is also transmitted for processing. The third stream is motion and orientation, from the inertial measurement unit, which records how the wearer's head moves through space at a granularity sufficient to distinguish walking from running, sitting from standing, attentive listening from distracted scanning. The fourth stream, and the one least often discussed, is inferred. It is the collection of downstream signals that the first three streams make possible: the identities of the people the wearer encounters, the places the wearer visits, the products the wearer looks at, the faces the wearer lingers on, the texts the wearer reads, the emotions the wearer's gaze betrays.
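For the technically minded, the four streams can be sketched as a single uplink record. The field names, types and groupings below are illustrative assumptions, not Meta's actual schema; the point is only to make concrete how much distinct material one session of wear emits.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of the four streams described above.
# Every name and rate here is an assumption for illustration.

@dataclass
class VisualFrame:
    timestamp: float        # seconds since epoch
    jpeg_bytes: bytes       # still or burst frame sent for cloud processing

@dataclass
class AudioChunk:
    timestamp: float
    pcm_samples: bytes      # captures the wearer AND anyone within earshot

@dataclass
class ImuSample:
    timestamp: float
    accel: Tuple[float, float, float]   # m/s^2: walking vs sitting, etc.
    gyro: Tuple[float, float, float]    # head orientation over time

@dataclass
class InferredEvent:
    timestamp: float
    kind: str               # e.g. "place_visited", "text_read"
    detail: str             # the downstream signal the first three enable

@dataclass
class SessionUplink:
    visual: List[VisualFrame] = field(default_factory=list)
    audio: List[AudioChunk] = field(default_factory=list)
    motion: List[ImuSample] = field(default_factory=list)
    inferred: List[InferredEvent] = field(default_factory=list)

# One assistant invocation might populate all four at once:
uplink = SessionUplink()
uplink.visual.append(VisualFrame(0.0, b"\xff\xd8"))
uplink.inferred.append(InferredEvent(0.0, "text_read", "menu in Portuguese"))
```

The fourth list is the one to watch: it is derived entirely from the first three, and it is the one the business model is built on.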
Meta's current terms of service for the Ray-Ban Meta, updated in late 2024, state that images and audio captured by the glasses while the AI assistant is active may be used to train the company's AI models. Users can opt out, but the control is buried inside a settings menu, and the sharing is switched on by default. The European Data Protection Board issued a statement of concern in the summer of 2024 noting that the default-on posture sat uneasily with the consent requirements of the General Data Protection Regulation, particularly in relation to bystanders who had not agreed to anything and whose faces and voices were being swept into a training corpus they knew nothing about.
That last point is the one that keeps coming back. The user of smart glasses can, in principle, read the terms of service, understand them, and make a considered choice about whether to accept the trade. The bystander cannot. The child in the park whose face is captured by a jogger wearing Ray-Ban Metas has consented to nothing. The barista whose voice is recorded as she takes an order has consented to nothing. The friend who confides in a pub, unaware that the frames opposite her contain a microphone array streaming to a data centre in Virginia, has consented to nothing. And in every one of those cases, the data captured is not only being processed for the immediate convenience of the wearer. It is being stored, classified, and in many configurations fed into the training pipeline of a foundation model whose outputs will shape the digital environment for everyone.
The User Illusion
The marketing language around AI smart glasses is careful to frame the device as an instrument of personal agency. The promotional reels show travellers asking the glasses to translate a menu in Lisbon, cyclists receiving turn-by-turn directions without taking their hands off the bars, parents capturing hands-free videos of their toddler's first steps. The verb is always active. You ask. You request. You capture. The glasses respond.
This is the user illusion: the carefully engineered sense that the direction of agency flows from the human to the machine, when in reality a substantial fraction of the machine's work is directed at the human and at the social field the human inhabits. It is the dynamic the social psychologist Shoshana Zuboff anatomises in her 2019 book The Age of Surveillance Capitalism. Zuboff was writing about search, social media and the smartphone. The argument generalises to wearables with unusual force, because wearables collapse the distance between the sensor and the body to essentially zero. You are never not in frame.
Consider what the four data streams above actually enable, taken together and processed by a competent foundation model. The visual stream, combined with on-device or cloud-based face recognition, yields an identifiable log of every person you have looked at in a given day. Meta has stated publicly that it does not perform face recognition on Ray-Ban Meta imagery, a position the company has held since the original launch. But the technical capability exists in the imagery itself. The restriction is a policy choice, and policy choices are revisable. In late 2024 an internal Meta document reported by The Information indicated that the company had been exploring limited face-recognition features for the glasses, framed as a memory aid for users who struggle to recall the names of acquaintances. The feature was not shipped. The capability was not removed.
The audio stream, run through a contemporary speech model, yields a transcript of every conversation within range of the wearer's head. Even if Meta does not retain full transcripts, the company retains the embeddings: the compressed numerical representations that capture the semantic content of speech in a form that is smaller to store and, crucially, more difficult for regulators to audit. An embedding is not a transcript in any sense a lawyer would recognise, but it is a transcript in every sense a machine-learning engineer would.
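The transcript-versus-embedding point can be made concrete with a toy. The sketch below is a deliberately crude bag-of-words embedding, a character-sum hash into sixty-four buckets, nothing remotely like a production speech model, but it exhibits the two properties the paragraph describes: the literal words cannot be read back out of the stored vector, yet utterances with overlapping content still land measurably closer together than unrelated ones.

```python
DIM = 64  # toy dimensionality; production speech embeddings are far larger

def embed(text: str) -> list:
    """Crude bag-of-words embedding: each word is hashed (by character sum)
    into one of DIM buckets. The vector is small to store and useless for
    reading the transcript back out -- but it preserves comparability."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % DIM] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

confided = embed("she told me about the new job")
confided2 = embed("she mentioned the new job")
unrelated = embed("grilled sardines and green wine")

# Overlapping conversations score higher than unrelated ones, even though
# no actual words survive in the stored vectors.
assert cosine(confided, confided2) > cosine(confided, unrelated)
```

That final comparison is the lawyer's problem in miniature: nothing in the stored vectors is a transcript, and everything a machine needs from a transcript is still there.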
The motion stream, combined with location data from the paired phone, yields a behavioural signature: a vector of how you move through the world that is, in aggregate, as identifying as a fingerprint. A 2013 study by Yves-Alexandre de Montjoye and colleagues at MIT, published in Scientific Reports, showed that four spatiotemporal points were sufficient to uniquely identify ninety-five per cent of individuals in a mobile phone dataset of one and a half million users. The Ray-Ban Meta produces spatiotemporal points at a density de Montjoye's team could not have imagined.
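The de Montjoye result is easy to reproduce in miniature on synthetic data. Every parameter in the sketch below, five hundred users, four hundred location cells, hourly samples over a week, five favourite places per person, is an assumption chosen for illustration, not a property of the MIT dataset; the qualitative effect, uniqueness climbing steeply with the number of known points, is the point.

```python
import random

# Toy re-run of the "Unique in the Crowd" idea on synthetic traces.
N_USERS, N_CELLS, N_HOURS = 500, 400, 24 * 7

rng = random.Random(1)

def make_trace() -> tuple:
    """One user's week: each hour they sit in one of five favourite cells."""
    favourites = [rng.randrange(N_CELLS) for _ in range(5)]
    return tuple(rng.choice(favourites) for _ in range(N_HOURS))

traces = [make_trace() for _ in range(N_USERS)]

def fraction_unique(k: int, seed: int) -> float:
    """Fraction of users pinned down by k randomly chosen (hour, cell)
    points from their own trace -- i.e. no other user matches them all."""
    pick = random.Random(seed)
    unique = 0
    for trace in traces:
        hours = pick.sample(range(N_HOURS), k)
        points = [(h, trace[h]) for h in hours]
        matches = sum(
            1 for other in traces
            if all(other[h] == cell for h, cell in points)
        )
        unique += (matches == 1)  # only the user themselves matched
    return unique / N_USERS

for k in (1, 2, 4):
    print(f"{k} known points -> {fraction_unique(k, seed=k):.0%} unique")
```

Even with these generous toy numbers, a handful of points suffices to single most users out of the crowd; real mobility data is far more distinctive than a uniform draw over favourite places.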
The inferred stream is where the product becomes, in the commercial sense, a product. It is the stream that is worth money. An advertiser does not particularly care what you ate for lunch. An advertiser cares deeply about the inference that can be drawn from your having eaten it: that you are the kind of person who eats at that kind of place, at that kind of hour, with that kind of company, for that kind of price. Multiply by every meal, every shop, every interaction, every glance, and you have the substrate of what the industry politely calls behavioural targeting and what everyone else calls a dossier.
The Regulatory Hairline Fracture
The legal architecture around this bargain is in the early stages of a rupture that will take years to play out. The European Union's Artificial Intelligence Act, which entered into force in August 2024 with a phased application schedule running through 2027, classifies certain uses of biometric categorisation and emotion recognition as prohibited or high-risk. A literal reading of the act suggests that a pair of glasses continuously capturing the faces of bystanders for the purpose of training a general-purpose foundation model sits uncomfortably close to several of the act's red lines. A more industry-friendly reading holds that the glasses themselves are not performing the prohibited processing, and that the liability, if it exists anywhere, sits with the downstream model developer rather than the device manufacturer.
The two readings cannot both be right. The tension will be resolved through enforcement action, and enforcement action takes years. In the meantime, the devices are being sold, and the data is being collected, and the models are being trained.
In the United States, the position is weaker still. There is no federal privacy statute that speaks meaningfully to wearable biometric capture. Illinois has the Biometric Information Privacy Act, known as BIPA, which has generated a steady stream of class-action settlements against companies that scraped or stored facial geometry without consent, including a six-hundred-and-fifty-million-dollar settlement Facebook paid in 2021 over its photo-tagging feature. BIPA is a state statute. It protects Illinois residents. Its reach to smart-glasses capture in other jurisdictions is contested and, at the time of writing, untested in an appellate court.
The United Kingdom occupies an interesting middle ground. The Information Commissioner's Office issued guidance in 2023 noting that wearable cameras sit within the scope of UK GDPR where the footage is processed for anything other than purely domestic purposes, and that the domestic exemption is construed narrowly once material is uploaded to a commercial platform. The guidance has not yet been tested against Ray-Ban Meta specifically. Industry lawyers expect the first test case within the next eighteen months.
What unites all these regulatory regimes is that they were written for a world in which a camera was a thing you had to pick up, aim and operate consciously. The smart glasses dissolve all three of those verbs. The camera is worn. The aiming is done by the direction of the wearer's gaze. The operation is handed, increasingly, to an AI assistant that decides for itself when a frame is worth capturing. The legal concept of a deliberate act of recording, which underpins most privacy case law, becomes harder to locate.
The Bargain You Cannot Read
Every AI smart-glasses product on the market is accompanied by a terms-of-service document. The documents are long. The Ray-Ban Meta terms, in the consolidated version current at the end of 2024, run to somewhere in the region of fourteen thousand words across the main agreement, the Meta AI supplemental terms, the privacy policy and the cookie policy. Reading them all carefully takes about ninety minutes. Comprehending them at the level required to make a genuinely informed consent decision takes considerably longer, because several of the key clauses incorporate by reference other documents, and because the definitions of terms like personal data, processed, and for the purpose of improving our services are not always consistent across documents.
A study by Jonathan Obar of York University and Anne Oeldorf-Hirsch of the University of Connecticut, published in 2020 in the journal Information, Communication and Society, found that when users were presented with a fictitious social networking service, ninety-eight per cent agreed to terms of service that included clauses requiring them to surrender their first-born child and to share all their data with the US National Security Agency. The finding was comic, and then, once you stopped laughing, it was not. Obar and Oeldorf-Hirsch called the phenomenon the biggest lie on the internet, which is the lie users tell when they tick the box confirming they have read and understood the terms.
If that lie is already load-bearing for social networks, for shopping sites, for streaming services, it becomes structurally unsustainable for a device that sits on your face and captures the faces of everyone around you. The consent of the wearer is at least notionally retrievable, however compromised by length and legalese. The consent of the bystander is not retrievable at all. There is no box for them to tick. There is only the LED on the rim of someone else's glasses, which they may or may not notice, which they may or may not recognise, and which, even if they do notice and do recognise, gives them no mechanism to decline.
This is the point at which the user-or-product framing starts to feel insufficient. The wearer, whatever the quality of their consent, at least had the opportunity to say no at the point of sale. They chose the frames. They downloaded the app. They accepted the terms. The bystander is neither user nor product in any sense they had the chance to shape. They are raw material. They are the training set.
The Assistant That Knows You Too Well
Set aside, for a moment, the bystander problem and focus on the wearer. Even within the relationship between the person paying for the device and the company selling it, the user-or-product question refuses to resolve cleanly. Because the economic logic of AI smart glasses is not the economic logic of an iPhone.
An iPhone is sold at a margin. Apple's hardware business is its primary profit engine, and the data the device collects is, compared to the industry average, relatively loosely monetised. The company's marketing positions privacy as a competitive differentiator, and although this claim has been contested around specific features, the structural incentive is clear enough: Apple makes more money if you buy another iPhone than if you are profiled more accurately for advertising.
Meta's hardware business is not Apple's. The Reality Labs division of Meta, which builds the smart glasses along with the Quest VR headsets, has lost tens of billions of dollars since it was established. The Ray-Ban Meta itself is reported to sell at or near break-even once development costs are amortised. The company is not in the face-computer business to sell Wayfarers. It is in the business to build a successor platform to the smartphone, one that does not route through the App Store toll booths of Apple and Google, and whose data flows enrich the advertising engine that still generates close to ninety-eight per cent of Meta's revenue.
In that business model, the user is never the customer in any meaningful sense. The user is the feedstock. The customer is the advertiser. This is not a moral judgement about Meta specifically. It is a straightforward reading of the company's 10-K filings with the Securities and Exchange Commission, which have described advertising as the company's overwhelmingly dominant revenue source every year since the company went public in 2012.
If that is the structure of the business, then the AI assistant running on your glasses is not, despite what the marketing suggests, a tool that belongs to you. It is a tool that belongs to the advertising engine, leased to you for the duration of the session. Its job is to be helpful enough that you keep wearing the device. Its deeper job is to generate the behavioural signal that the advertising engine requires. These two jobs are not in direct conflict most of the time, which is why the device feels like a gift rather than an extraction. But when they do conflict, which job wins is not, structurally, your decision.
The Asymmetry of Knowing
The most disorienting feature of the smart-glasses bargain is the asymmetry between what the wearer learns about the world and what the world learns about the wearer. This is the asymmetry that Zuboff's book returns to again and again, and it is sharper here than in any previous consumer device.
When you ask your glasses to translate the menu in Lisbon, you receive a translated menu. The exchange feels even: you give a question, you get an answer. But the answer is not the whole of what you received, and the question is not the whole of what you gave. You also received an implicit model of what the assistant thinks a menu is, what it thinks a translation is, and what it thinks you wanted. And you also gave the image of the menu, the audio of your voice asking, the location of the restaurant, the time of day, the fact that you are travelling, the inference that you do not speak Portuguese, the further inference that you are probably eating alone or in a small group, and the ability to fold all of these data points into a model of you that will be consulted the next time you or someone like you makes a similar request.
The assistant becomes, over time, quite good at predicting what you will want. This is usually experienced as magical. It is in fact the visible surface of a much larger iceberg of inference, and the rest of the iceberg is not yours. It is the company's. It is the model's. It is the advertising engine's. You do not get a copy of it. You cannot audit it. You cannot request deletion in any form that the system cannot reconstruct from adjacent data. When Meta deletes your account, under the terms of its current privacy policy, it does not delete the training signal your data contributed to the model. Training signal is considered, for legal purposes, to have been absorbed into the weights of a general-purpose system, and general-purpose systems are not subject to individual deletion requests under any currently enforced reading of GDPR. The UK ICO and the European Data Protection Board have both issued statements acknowledging this as an open question. It has not been closed.
So the bargain, in its cleanest form, is this. You hand over a continuous stream of everything you see and hear and many of the things you feel. In exchange, you receive a helpful assistant that is measurably less knowledgeable about you than the model behind it is, and whose helpfulness is calibrated not by your interests alone but by the commercial interests of the company that built it. The asymmetry is not a bug. It is the feature that makes the economics work.
What Would a Fair Version Look Like
It is possible, in principle, to build AI smart glasses whose bargain with the wearer is symmetrical, or at least less grotesquely asymmetrical. The ingredients are known. On-device processing, so that the visual and audio streams never leave the frames unless the wearer explicitly sends them. Local storage under the wearer's cryptographic control. A clear visible indicator that the rest of the world can recognise as reliably as a red recording light on a television camera. Opt-in rather than opt-out data sharing. A legal structure in which training-corpus contribution is an affirmative choice compensated in some meaningful way rather than a default buried in the settings. An audit mechanism that allows both wearers and bystanders to know what was captured and what was done with it.
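The opt-in ingredient, in particular, is not hard to express in software. The sketch below is a design illustration, not any vendor's actual API: a policy object whose every sharing flag defaults to off, and a gate that logs each attempted transmission so that the audit mechanism described above has something to read.

```python
from dataclasses import dataclass

# Design sketch of an opt-in uplink gate: nothing leaves the device unless
# the wearer has affirmatively enabled that stream. All names hypothetical.

@dataclass
class ConsentPolicy:
    # Every flag defaults to False: opt-in, not opt-out.
    share_visual: bool = False
    share_audio: bool = False
    share_motion: bool = False
    contribute_to_training: bool = False

class UplinkGate:
    def __init__(self, policy: ConsentPolicy):
        self.policy = policy
        self.audit_log = []      # inspectable by wearer and regulator alike

    def try_send(self, stream: str, payload: bytes) -> bool:
        allowed = getattr(self.policy, f"share_{stream}", False)
        self.audit_log.append((stream, len(payload), allowed))
        if not allowed:
            return False         # data stays on-device
        # ... transmit payload to the cloud pipeline here ...
        return True

gate = UplinkGate(ConsentPolicy())       # fresh device: everything off
assert gate.try_send("visual", b"frame") is False

gate.policy.share_visual = True          # wearer explicitly opts in
assert gate.try_send("visual", b"frame") is True
assert gate.try_send("audio", b"chunk") is False
```

The inversion of the default is the entire design choice: the shipped products express the same flags with the booleans flipped, and everything else in this essay follows from that flip.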
None of these ingredients is technically exotic. Several of them have been demonstrated in research prototypes and niche enterprise products. What they lack is a commercial sponsor of sufficient scale to ship them at consumer price points. Apple, whose business model could in principle support such a device, has so far held back from mass-market AI glasses, although the Vision Pro headset and the rumoured lightweight glasses project widely reported in 2024 and 2025 suggest the company is circling the category. If Apple ships, and ships with a privacy-centric design consistent with its iPhone positioning, the competitive pressure on Meta and the rest of the field will be substantial. If Apple does not ship, or ships something that compromises its stated principles, the window for a fair version may close before it opens.
There are also regulatory interventions that could force the shape of the bargain. A mandatory hardware recording indicator, visible at a defined distance under defined lighting conditions, would at least give bystanders a fighting chance of knowing they were being recorded. A prohibition on the use of bystander-captured data for training general-purpose models would remove the most egregious asymmetry. A requirement that terms of service be expressed in a form comprehensible to a non-lawyer at the point of purchase, rather than buried inside a forty-page document, would restore some fragment of meaningful consent. None of these interventions are unprecedented. All of them have been proposed, in various forms, by regulators and academics working on wearable privacy over the past decade. None of them have been implemented at the scale the problem requires.
The Face in the Window
Return, for a moment, to the scene at the beginning of this piece. You are standing in front of a shop window, wearing your new glasses, and you catch your reflection. You look, more or less, like yourself. And yet something has shifted. The reflection is not only yours anymore. It is also, in a small but non-trivial way, the property of a company you have a contract with, whose terms you have not fully read, whose obligations to you are narrower than its claims on you, and whose servers will hold a record of this moment long after you have forgotten it.
The question of whether you are the user or the product does not have a single answer, because the answer changes with each function the device performs. When the glasses translate a menu for you, you are the user. When the capture of that translation trains the next version of the model, you are the product. When the ambient audio sweep picks up the voice of the stranger at the next table, that stranger is neither user nor product but raw material, whose participation in the transaction was not asked and could not be refused. These three roles coexist inside the same hardware, in the same second, on the same face, and the software does not distinguish between them because the software does not need to. The business model is indifferent to the distinction. All three roles generate the signal it requires.
What the wearer can still control, and what the framework of this argument tries to make legible, is the conscious recognition of which role they are in at any given moment. That recognition does not undo the bargain. But it does restore something the marketing language works very hard to suppress, which is the sense that a bargain is being struck at all. The glasses, whatever else they are, are not neutral. The LED on the rim is not decorative. The assistant that knows your name is not your friend. The frames are a piece of commercial infrastructure, worn on the most personal surface of the body, and the question of whose infrastructure it really is has not yet been answered in any way the wearer should find comforting.
The honest posture, until the answer is clearer, is the posture of someone who has agreed to a deal they do not fully understand, with a counterparty whose interests are not aligned with theirs, in a legal environment that has not caught up with the technology, surrounded by people who did not sign the contract and cannot see its terms. That is not a reason to throw the glasses in the nearest bin. It is a reason to take them off occasionally. To notice, when you put them back on, that the act of putting them on is an act with consequences beyond your own convenience. To remember that the second skin you are wearing is not only yours. And to treat the quiet hum of its intelligence, if you listen for it, as a reminder that in the oldest bargain of the attention economy, the party who pays nothing and receives something is not always the party who thinks they are getting the better deal.
You are the user. You are the product. You are, most of the time, both at once. And the frames on your face, beautiful as they are, are not only yours.
References and Sources
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
- European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 on Artificial Intelligence (the Artificial Intelligence Act). Official Journal of the European Union, 12 July 2024.
- European Data Protection Board. (2024). Statement on the processing of personal data in the context of wearable AI devices. Brussels.
- Information Commissioner's Office (United Kingdom). (2023). Guidance on the use of personal devices with integrated cameras and microphones. ICO, Wilmslow.
- Meta Platforms Inc. (2024). Ray-Ban Meta Smart Glasses Terms of Service and Supplemental Meta AI Terms. Available at meta.com.
- Meta Platforms Inc. (2024). Annual Report on Form 10-K for the fiscal year ended 31 December 2023. Filed with the United States Securities and Exchange Commission.
- de Montjoye, Y.-A., Hidalgo, C. A., Verleysen, M., and Blondel, V. D. (2013). Unique in the Crowd: The privacy bounds of human mobility. Scientific Reports, volume 3, article 1376.
- Obar, J. A., and Oeldorf-Hirsch, A. (2020). The biggest lie on the internet: ignoring the privacy policies and terms of service policies of social networking services. Information, Communication and Society, volume 23, issue 1.
- Illinois General Assembly. (2008). Biometric Information Privacy Act, 740 ILCS 14.
- In re Facebook Biometric Information Privacy Litigation, settlement approved by the United States District Court for the Northern District of California, 2021.
- The Information. (2024). Reporting on Meta's internal exploration of face-recognition features for Ray-Ban Meta smart glasses.
- Luxottica Group and EssilorLuxottica. (2023). Press release on the second-generation Ray-Ban Meta collaboration.
- Irish Data Protection Commission and Garante per la protezione dei dati personali (Italy). (2021). Joint correspondence with Meta Platforms regarding recording indicators on Ray-Ban Stories.

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
