ChatGPT Ads Are Not Contextual: Why Persistent Memory Changes Everything

OpenAI began serving advertisements inside ChatGPT on 9 February 2026. Within six weeks, the pilot had crossed $100 million in annualised revenue, with more than 600 advertisers on board and expansion into Canada, Australia, and New Zealand already under way. The company insists it will “never” sell user data to advertisers, that ads will never influence the chatbot's responses, and that the entire system runs on contextual matching rather than behavioural profiling. The language is careful, the assurances are firm, and the underlying question is enormous: does the distinction between contextual relevance and behavioural profiling survive contact with a system that remembers everything you have ever told it?
That question matters because ChatGPT is not a search engine with a text box. It is a conversational interface layered on top of a persistent memory system. Since April 2025, ChatGPT has referenced not only explicit “saved memories” but also the full archive of a user's past conversations to shape its responses. Memory is enabled by default. The system stores your preferences, your interests, your recurring concerns, your tone, your habits. It knows your dog's name and your dietary restrictions. It knows you have been asking about anxiety management every Thursday evening for the past three months. And now, adjacent to those responses, it serves advertisements that are “matched to conversation topics, past chat history, and previous interactions with ads.”
The privacy implications of this arrangement deserve scrutiny that goes well beyond whether OpenAI is technically compliant with its own terms of service. What is at stake is a fundamental question about what “contextual” means when the context never resets.
The Architecture of Remembering
To understand what makes conversational AI advertising fundamentally different from traditional web advertising, you need to understand how memory works in large language models, and how OpenAI has extended that architecture.
A standard LLM does not, on its own, remember anything between sessions. Each conversation is processed within a context window, a fixed-length buffer of tokens that the model uses to generate its next response. When the conversation ends, the context window is cleared. There is no persistent state, no long-term storage, no continuity. This is the architecture that makes the “contextual advertising” framing feel plausible: if the system only knows what you are saying right now, then matching an advertisement to that topic is no different from placing a kitchen appliance ad next to a recipe article.
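To make that statelessness concrete, here is a minimal sketch of how a context window behaves. The class and method names are illustrative only, not any vendor's implementation:

```python
from collections import deque

class StatelessSession:
    """Toy model of a stateless LLM session: a bounded token buffer
    that is discarded in its entirety when the conversation ends."""

    def __init__(self, max_tokens: int = 8192):
        self.max_tokens = max_tokens
        self.window: deque[str] = deque()  # the context window

    def add_turn(self, text: str) -> None:
        # Append the new turn, evicting the oldest tokens once the
        # window is full. Nothing exists outside this buffer.
        for token in text.split():
            if len(self.window) >= self.max_tokens:
                self.window.popleft()
            self.window.append(token)

    def end_session(self) -> None:
        # The defining property: closing the session erases all state.
        self.window.clear()

session = StatelessSession()
session.add_turn("What's a good recipe for mushroom risotto?")
session.end_session()  # a new session starts from zero; nothing is retained
```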
But ChatGPT has not operated this way for some time. OpenAI introduced its memory feature in early 2024 and expanded it significantly in April 2025. The system now maintains two parallel layers of persistence. The first is “saved memories,” which are explicit facts the model has been asked to retain or has inferred should be retained. The second, and more consequential, is “chat history,” a mechanism that allows the model to reference the full archive of a user's prior conversations when generating new responses. The system does not retain every word verbatim, but it extracts patterns, preferences, and contextual signals that persist indefinitely.
This is not a context window. It is a profile. It may not be stored in a traditional database as a structured dossier, but functionally, it serves the same purpose. The model knows who you are, what you care about, what you have asked about before, and how those interests have evolved over time. When OpenAI says it matches advertisements to “conversation topics, past chat history, and previous interactions with ads,” it is describing a system that uses longitudinal personal data to determine what commercial messages a user is shown. The fact that this data is processed by a neural network rather than a relational database does not change what it is.
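What that looks like functionally can be sketched in a few lines. The field names and the extraction step below are assumptions for illustration, not OpenAI's actual schema; the point is that once signals extracted from conversations persist and accumulate, the result behaves like a profile regardless of how it is stored:

```python
from dataclasses import dataclass, field

def extract_topics(transcript: str) -> list[str]:
    # Stand-in for a model-driven classifier; here, a crude keyword match.
    keywords = {"anxiety", "mortgage", "running", "parenting"}
    return [w.strip("?.,") for w in transcript.lower().split()
            if w.strip("?.,") in keywords]

@dataclass
class UserProfile:
    """Sketch of the two persistence layers described above.
    Field and method names are hypothetical, not OpenAI's schema."""
    saved_memories: list[str] = field(default_factory=list)           # explicit facts
    inferred_signals: dict[str, float] = field(default_factory=dict)  # patterns from chat history

    def ingest_conversation(self, transcript: str) -> None:
        # Verbatim text is not retained; topical signals are extracted
        # and their weights accumulate across sessions, indefinitely.
        for topic in extract_topics(transcript):
            self.inferred_signals[topic] = self.inferred_signals.get(topic, 0.0) + 1.0

profile = UserProfile(saved_memories=["dog is called Hugo", "vegetarian"])
profile.ingest_conversation("How do I manage anxiety before a mortgage meeting?")
print(profile.inferred_signals)  # {'anxiety': 1.0, 'mortgage': 1.0}
```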
OpenAI has stated that ChatGPT is “actively trained not to remember sensitive information, such as health details,” unless explicitly asked. But critics have pointed out the inadequacy of this safeguard. If health details are excluded, what about financial stress? What about relationship difficulties? What about political leanings inferred from a pattern of questions about immigration policy or housing costs? OpenAI's public documentation offers little granular clarity about which categories of sensitive data are eligible for storage and which are not. And the system's own judgement about what counts as sensitive is itself opaque.
The Contextual Alibi
OpenAI's public framing leans heavily on the word “contextual.” The company describes its advertising model as a “contextual retrieval engine” that matches ads to “real-time user queries rather than historical behavioral tracking.” This framing is strategically important because contextual advertising occupies a privileged position in privacy regulation. Under the GDPR, contextual advertising, which targets based on the content a user is currently viewing rather than their historical behaviour, generally does not require the same level of consent as behavioural profiling. It does not involve tracking across sites or building persistent profiles. It is, in regulatory terms, the clean option.
But OpenAI's system does not fit neatly into that category. Traditional contextual advertising operates on a stateless model: a user visits a page about running shoes, and the page displays an ad for running shoes. The advertiser knows nothing about the user beyond the fact that they are currently reading about running shoes. There is no memory, no history, no cross-session inference. In principle, contextual advertising treats every consumer who requests the same content identically, showing the same message to every visitor of a page.
ChatGPT's advertising layer operates on a stateful model. The system has access to a user's saved memories, their full conversation history, and their prior interactions with advertisements. When it selects an ad to display, it is not merely responding to the current query in isolation. It is drawing on a rich, persistent, and deeply personal dataset that has been accumulated over months or years of intimate conversational interaction. Two users asking the same question may see different advertisements, not because of the question itself, but because of everything else the system knows about them.
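The difference between the two models is easy to see side by side. A minimal sketch follows, with hypothetical inventories, product names, and scoring weights; this is not OpenAI's actual ad-selection logic:

```python
def contextual_ad(query: str, inventory: dict[str, str]) -> str | None:
    """Stateless matching: the only input is the current query.
    Two users asking the same question see the same result."""
    for topic, ad in inventory.items():
        if topic in query.lower():
            return ad
    return None

def stateful_ad(query: str, profile: dict[str, float],
                ad_history: list[str], inventory: dict[str, str]) -> str | None:
    """Stateful matching: the current query is scored alongside
    accumulated signals and prior interactions with ads."""
    scores = {
        topic: (1.0 if topic in query.lower() else 0.0)  # current context
        + profile.get(topic, 0.0)                        # longitudinal signal
        + (0.5 if ad in ad_history else 0.0)             # prior ad engagement
        for topic, ad in inventory.items()
    }
    best = max(scores, key=scores.get)
    return inventory[best] if scores[best] > 0 else None

inventory = {"running": "TrailShoe Pro", "anxiety": "CalmApp Premium"}
query = "any tips for a beginner 5k plan?"
print(contextual_ad(query, inventory))                      # None -- the query names no topic
print(stateful_ad(query, {"running": 2.0}, [], inventory))  # TrailShoe Pro
print(stateful_ad(query, {"anxiety": 3.0}, [], inventory))  # CalmApp Premium
```

Note what the example shows: the stateless matcher has nothing to go on, while the stateful one serves two different ads to two users who typed the identical question. The selection is driven by the accumulated profile, not the query.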
The distinction matters because the regulatory framework for advertising was built around a binary that no longer holds. Contextual advertising was understood as the privacy-preserving alternative precisely because it did not involve persistent data. Behavioural advertising was understood as the privacy-invasive alternative precisely because it did. When a system uses persistent conversational data to inform ad selection but calls itself “contextual,” it occupies a grey zone that existing regulation was not designed to address.
Researchers at TechPolicy.Press have argued that the line between contextual and behavioural advertising is becoming increasingly blurred as AI-driven systems incorporate ever more sophisticated inference capabilities. As one analysis noted, “privacy violations and privacy concerns are not unique to behavioral advertising. They may also be triggered by novel means put forward as 'contextual.'” The concern is not hypothetical. It describes exactly what is happening inside ChatGPT.
Industry observers have noted that companies claiming to operate contextual advertising systems may rely on session data such as browser and page-level data, device and app-level data, IP addresses, and other highly personal elements. In some cases, this may be combined with contextual information to create a comprehensive picture of the people being targeted. The result is that “contextual” becomes a label of convenience rather than a meaningful description of privacy practice.
What the Regulators See (and What They Miss)
The European Data Protection Board's Opinion 28/2024, adopted in December 2024, provides the most detailed regulatory guidance to date on the intersection of AI models and personal data. The opinion makes several points directly relevant to ChatGPT's advertising model.
First, the EDPB established that personal data used to train AI models does not cease to be personal data merely because it has been transformed into mathematical representations within the model. Even though training data “no longer exists within the model in its original form,” the EDPB considers it still capable of constituting personal data, particularly given that techniques such as model inversion, reconstruction attacks, and membership inference can be used to extract training data. (The simplest of these, membership inference, is sketched after the third point below.)
Second, the EDPB addressed the question of when AI models can be considered anonymous, concluding that anonymity must be assessed on a case-by-case basis and that a model is only anonymous if it is “very unlikely” that individuals can be identified or that personal data can be extracted through queries. The EDPB explicitly rejected the so-called Hamburg thesis, which had proposed that AI models trained on personal data should be treated as anonymous by default. Instead, the Board insisted that anonymity claims require rigorous, case-specific demonstration.
Third, and most relevant to the advertising question, the EDPB clarified that legitimate interest cannot generally serve as the legal basis for processing that involves extensive profiling. This is significant because OpenAI's advertising model, which draws on persistent conversational data to match ads, arguably constitutes a form of profiling under the GDPR's definition: “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's preferences, interests, reliability, behaviour, location or movements.”
The GDPR's definition of profiling does not require that the data be stored in a traditional profile database. It requires that personal data be used to evaluate personal aspects. ChatGPT's memory system does exactly this, continuously and automatically, as a prerequisite for generating personalised responses, and now, as a prerequisite for selecting personalised advertisements.
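Returning to the extraction techniques the EDPB cites: membership inference, the simplest of them, is worth making concrete. In its most basic published form it is a loss-threshold test, resting on the observation that models tend to assign lower loss to examples they were trained on. The sketch below is a toy illustration of that idea, not an attack on any production system:

```python
import math

def token_loss(token_probs: list[float]) -> float:
    """Average negative log-likelihood a model assigns to a text's
    tokens; unusually low loss suggests the model has seen it before."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def likely_in_training_set(token_probs: list[float], threshold: float = 1.0) -> bool:
    # Loss-threshold membership inference: memorised examples tend to
    # receive confident (low-loss) predictions.
    return token_loss(token_probs) < threshold

print(likely_in_training_set([0.95, 0.90, 0.97]))  # True  -- near-certain tokens
print(likely_in_training_set([0.20, 0.40, 0.10]))  # False -- unfamiliar text
```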
The Meta precedent is instructive here. In 2023, the EDPB ruled that Meta could not continue targeting advertisements based on users' online activity without affirmative, opt-in consent. The ban was extended permanently across the entire EU and EEA in October of that year, forcing Meta to adopt a consent-based approach and introduce ad-free paid subscriptions at 9.99 euros per month. The ruling established a clear principle: extensive profiling for advertising purposes cannot rely on legitimate interest and requires explicit consent. If that principle applies to Meta's tracking of likes and clicks, it applies with even greater force to OpenAI's processing of intimate conversational data.
Yet regulatory enforcement has been slow to catch up with the specific case of AI advertising. The EDPB created an AI enforcement task force in February 2025 by extending the scope of its existing ChatGPT task force, but concrete enforcement actions specifically targeting AI advertising remain sparse. The EU AI Act, which entered into force in 2024, adds requirements for transparency and human oversight in AI-powered advertising, but its practical application to systems like ChatGPT's ad layer is still being worked out by national regulators and the European AI Office.
A 2024 EU audit found that 63% of ChatGPT user data contained personally identifiable information, with only 22% of users aware of the settings that would allow them to disable data collection. This gap between the theoretical availability of privacy controls and users' actual awareness of them is not a minor implementation detail. It is the central problem.
The Intimacy Problem
There is a qualitative difference between the data that traditional advertising systems collect and the data that conversational AI systems accumulate. Google knows what you search for. Meta knows what you like, share, and comment on. These are signals derived from discrete, observable actions taken in contexts that most users understand, at least in broad terms, to be commercial environments.
ChatGPT knows what you confide. Users interact with conversational AI in a mode that more closely resembles therapy, journalling, or conversation with a trusted friend than it does browsing a website. They discuss their mental health, their relationship problems, their financial anxieties, their career frustrations, their parenting challenges, their creative ambitions. They do so in natural language, with a level of specificity and emotional openness that no search query or social media post would typically capture.
Marketing professor Scott Galloway, commenting on Anthropic's February 2026 Super Bowl advertisement (which carried the tagline “Ads are coming to AI, but not to Claude”), called it a “seminal moment” in the AI industry. Galloway argued that the ad resonated because “the number one use case for AI is therapy, with users routinely sharing their most intimate fears, anxieties, and personal struggles with chatbots.” When the system that receives those disclosures also serves advertisements informed by them, the power asymmetry between platform and user reaches a level that traditional ad-tech never achieved.
A recent controversy involving Meta AI underscored these risks in vivid terms. Users discovered that their private prompts to Meta's AI assistant had been posted to Meta's public “Discover” feed, revealing that people had been sharing deeply personal information with the system under the assumption of confidentiality. The incident demonstrated that users often interact with AI systems as though they are private, even when the platform's architecture does not treat them that way. The chasm between how people actually use these systems and what they understand about the consequences is vast.
The tragic case of Adam Raine, a 16-year-old whose suicide prompted a lawsuit against an AI companionship platform, illustrates the extreme end of this risk. Among the design elements alleged to have contributed to his death was the system's persistent memory capability, which purportedly “stockpiled intimate personal details” about his personality, values, beliefs, and preferences to create a psychological profile that kept him engaged. While ChatGPT's advertising system is not a companionship platform, the underlying mechanism, persistent memory used to build an ever-deepening model of a user's inner life, is architecturally similar.
As TechPolicy.Press observed, “an AI system that gets to know you over your life” is worrisome precisely because “even in human relationships, it is rare for any one person to know us across a lifetime. This limitation serves as an important buffer, constraining the degree of influence that any single individual can exert.” When that buffer is removed, and when the system that knows you most intimately is also the system that serves you commercial messages, the conditions for manipulation become structurally embedded. If long-term memory enhances personalisation, and personalisation increases persuasive power, then the boundary between usefulness and manipulation becomes perilously thin.
The Consent Fiction
OpenAI offers users several mechanisms for controlling how their data is used. Memory can be disabled. Individual memories can be deleted. Chat history can be turned off. Temporary Chat mode allows conversations that are not stored, not used for training, and not referenced by memory. Users on ad-supported tiers can, according to OpenAI, “control the use of memories for ads personalization.” These controls exist. They are documented. They are, in principle, available to anyone who knows where to find them.
The problem is that meaningful consent requires more than the theoretical availability of controls. It requires that users understand what they are consenting to, that they can realistically assess the consequences of their choices, and that the default configuration respects their interests rather than the platform's commercial objectives.
On every one of these criteria, ChatGPT's current design falls short. Memory is enabled by default. Chat history referencing is enabled by default. Ad personalisation, for users on ad-supported tiers, draws on these systems by default. The user who simply opens ChatGPT and starts talking, which is to say the vast majority of ChatGPT's 800 million weekly users, is automatically enrolled in a system that accumulates their personal data, builds a persistent model of their preferences and concerns, and uses that model to select commercial messages.
Disabling these features requires navigating settings menus that most users will never visit. Deleting a chat does not remove saved memories from that conversation. Turning off saved memory does not delete anything already remembered. OpenAI retains logs of deleted saved memories for up to 30 days. The architecture is designed for accumulation, and opting out is an effortful, incomplete, and poorly understood process.
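The asymmetry can be stated in a few lines. The setting names below are hypothetical, not OpenAI's actual configuration schema, but the behaviour they describe, permissive defaults and one-way extraction, matches the design discussed above:

```python
# Hypothetical setting names; the defaults mirror those described above.
DEFAULT_SETTINGS = {
    "memory_enabled": True,          # on unless the user finds the toggle
    "reference_chat_history": True,  # on by default
    "ads_personalisation": True,     # on by default for ad-supported tiers
}

def delete_chat(chats: list[str], memories: list[str], chat: str) -> None:
    """Deleting a chat removes the transcript, but not the memories
    already extracted from it: extraction is one-way."""
    chats.remove(chat)
    # `memories` is deliberately left untouched.

chats = ["thursday-anxiety-chat"]
memories = ["user manages anxiety on Thursday evenings"]
delete_chat(chats, memories, "thursday-anxiety-chat")
print(chats)     # [] -- the conversation is gone
print(memories)  # the inference extracted from it survives
```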
This is not a new problem in technology. The entire history of digital privacy regulation is, in some sense, a response to exactly this pattern: defaults that favour data collection, controls that are technically available but practically invisible, and consent mechanisms that function as legal cover rather than genuine expressions of user preference. But the conversational AI context intensifies the problem in two important ways.
First, the nature of the data is more sensitive. Users disclose things to ChatGPT that they would not type into a Google search bar or post on Facebook. The expectation of privacy in a conversational interface is higher, and the gap between that expectation and the reality of data use is correspondingly wider. Mozilla's Privacy Not Included project has warned that “storing more of your personal information in a tech product is just never a great move for your privacy,” urging users to approach AI memory features with scepticism regardless of how conveniently they are marketed.
Second, the mechanisms of inference are less visible. When Google shows you an ad based on your search history, you can, with some effort, reconstruct the chain of inference. You searched for “best running shoes,” and now you see ads for running shoes. The logic is legible. When ChatGPT shows you an ad based on patterns extracted from months of conversation, the chain of inference is opaque. You do not know which conversations contributed to the selection. You do not know what the system inferred from them. You do not know how those inferences were weighted or combined. The system's reasoning is, by design, not transparent to the user. Users on Hacker News and OpenAI's own community forums have reported that even after disabling all personalisation and memory, ChatGPT appeared to “know things” about them, raising questions about whether the platform's data practices fully match its public documentation.
The Competitive Landscape and What It Reveals
OpenAI is not operating in isolation. Google reportedly told advertisers in late 2025 that it planned to introduce ads into Gemini in 2026. Microsoft's Copilot already serves sponsored results in certain contexts. Perplexity, the AI-powered search engine, has introduced labelled promotional placements. The movement towards advertising in conversational AI is industry-wide, and it is driven by the same economic logic that has governed the internet for two decades: the marginal cost of serving free users is high, subscription conversion rates are low, and advertising is the proven mechanism for monetising attention at scale.
Anthropic's decision to position Claude as an ad-free alternative is commercially significant but strategically ambiguous. Its Super Bowl campaign framed the absence of advertising as a core value proposition. The broadcast version softened the online tagline, settling on “there is a time and place for ads, and AI chats aren't it.” Sam Altman responded publicly, calling the original framing “dishonest” and “deceptive,” arguing that OpenAI would “never run ads in the way Anthropic depicts them.” The exchange revealed a genuine disagreement about the future of AI monetisation, but it also revealed something more important: neither company has fully addressed the underlying privacy question.
Anthropic does not serve ads. But Claude also has memory features and persistent context capabilities. If the absence of advertising is the only privacy safeguard, then the question of what happens to the data accumulated through persistent memory remains unanswered. The risk is not limited to what is monetised today. It extends to what could be monetised tomorrow, or what could be compromised, subpoenaed, or repurposed at any point in the future. OpenAI itself states that user data is not sold or shared for advertising, yet acknowledges that it “may disclose your information to affiliates, law enforcement, and the government.”
OpenAI's financial trajectory makes the expansion of advertising virtually certain. Despite achieving $12.7 billion in annual recurring revenue in 2025, the company posted cumulative losses exceeding $13.5 billion in the first half of that year alone. Internal documents project that free-user monetisation will generate $1 billion in 2026 and nearly $25 billion by 2029. Truist analysts have called 2026 an “inflection year” for LLM-powered ads, projecting that within several years, “LLM-powered ad channels will become one of the most important pillars of the digital ad industry.” These are not the projections of a company that plans to keep its advertising footprint modest.
The hiring pattern tells the same story. OpenAI appointed Fidji Simo, the former Meta executive and Instacart CEO who built Instacart's advertising business, as CEO of Applications. Kate Rouch, formerly of Meta and Coinbase, became the company's first Chief Marketing Officer. David Dugan, another former Meta ads executive, was named to lead global advertising solutions in March 2026. Kevin Weil, OpenAI's Chief Product Officer, previously built ad-supported products at Instagram and X. CFO Sarah Friar, hired from Nextdoor in 2024, told the Financial Times that the company would be “thoughtful” about implementing ads, before tempering those expectations. Within fourteen months, the ads were live. This is not a leadership team assembled to keep advertising peripheral.
Where Contextual Becomes Profiling
The core question is not whether OpenAI is acting in bad faith. It may well be sincere in its commitment to keeping ads separate from responses, to never selling conversation data directly, and to giving users controls over memory and personalisation. The core question is whether those commitments are sufficient to prevent contextual advertising from functioning as behavioural profiling when the context is a persistent, intimate, and ever-expanding conversational archive.
The answer, under any honest assessment, is no. The GDPR defines profiling as automated processing that uses personal data to evaluate personal aspects including preferences, interests, and behaviour. ChatGPT's memory system does exactly this. The fact that ad selection happens in real time, based on the current conversation plus the accumulated context, does not make it contextual in the regulatory sense. It makes it a hybrid that combines the real-time matching of contextual advertising with the persistent data accumulation of behavioural profiling. This hybrid is, in many respects, more invasive than either model in isolation, because it operates on data that is more intimate, more detailed, and less visible to the user than anything traditional ad-tech has collected.
The European Parliament's research service has warned that “policymakers need to carefully examine this rapidly evolving space and establish a clear definition of what contextual advertising should entail,” precisely because AI-driven systems are incorporating user-level data and content preference insights while still describing themselves as contextual. The Electronic Frontier Foundation has gone further, arguing that “ad tracking, profiling, and targeting violates privacy, warps technology development, and has discriminatory impacts on users,” and that behavioural advertising online should be banned outright.
These are not fringe positions. They reflect a growing recognition that the categories underpinning privacy regulation (contextual versus behavioural, stateless versus persistent, anonymous versus identified) are losing their coherence in the face of systems that operate across all of these boundaries simultaneously.
Towards Structural Accountability
The path out of this impasse is not more granular privacy settings or more detailed terms of service. Users cannot be expected to manage the boundary between contextual relevance and behavioural profiling through toggle switches in a settings menu. The asymmetry of information is too great. The mechanisms of inference are too opaque. The defaults are too permissive.
What is needed is structural accountability: regulatory frameworks that recognise the unique risks of advertising in conversational AI and impose constraints that do not depend on user vigilance. Several principles should guide this effort.
First, the definition of “contextual advertising” in privacy regulation must be updated to exclude systems that draw on persistent user data, regardless of whether that data is processed by a neural network or a traditional database. If ad selection is informed by anything beyond the current session, it is not contextual. It is profiling. (A minimal sketch of this test appears after the fourth principle below.)
Second, memory systems in ad-supported AI products should be opt-in rather than opt-out. The current default, where memory is enabled automatically and users must actively navigate settings to disable it, reverses the burden of privacy protection. Users who choose to enable memory for the benefits of personalisation should do so with clear, specific, and genuine informed consent.
Third, regulators should require transparency about the inference chain. When a user sees an advertisement in ChatGPT, they should be able to understand, in concrete terms, what data contributed to its selection, which conversations were referenced, and what preferences or interests were inferred. The current “why am I seeing this ad” mechanism, which OpenAI says it will provide, must go beyond the vague category labels that have characterised similar features on other platforms.
Fourth, independent auditing of AI advertising systems should be mandatory. The opacity of neural network inference means that neither users nor regulators can verify claims about how ad selection works without access to the underlying systems. Third-party audits, conducted by entities with genuine independence and technical capability, are essential.
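As flagged under the first principle, the proposed bright line reduces to a one-line subset test. A minimal sketch, with hypothetical input labels:

```python
def is_contextual(ad_inputs: set[str], session_inputs: set[str]) -> bool:
    """Bright-line test: ad selection counts as contextual only if every
    input to it originates in the current session."""
    return ad_inputs <= session_inputs  # subset check

session = {"current_query", "current_conversation"}
print(is_contextual({"current_query"}, session))                     # True  -- contextual
print(is_contextual({"current_query", "saved_memories"}, session))   # False -- profiling
print(is_contextual({"current_query", "ad_click_history"}, session)) # False -- profiling
```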
The stakes are not abstract. OpenAI's advertising system is, as of March 2026, a $100-million-and-growing commercial operation that serves ads to hundreds of millions of users based on the most intimate data any technology platform has ever accumulated. The company's assurances about contextual matching and user control are, at best, an incomplete description of a system that blurs the line between relevance and surveillance. At worst, they are a privacy fig leaf draped over the most sophisticated profiling engine ever built.
The question is not whether contextual advertising in conversational AI is acceptable. It is whether the concept of “contextual” retains any meaningful content when the context is your entire conversational history, your persistent memories, your evolving preferences, and your most private thoughts, all held by a system that has every commercial incentive to remember.
Sources and References
- OpenAI, “Our approach to advertising and expanding access to ChatGPT,” OpenAI Blog, January 2026.
- CNBC, “OpenAI to begin testing ads on ChatGPT in the U.S.,” 16 January 2026.
- CNBC, “OpenAI ads pilot tops $100 million in annualized revenue in under 2 months,” 26 March 2026.
- OpenAI, “Memory and new controls for ChatGPT,” OpenAI Blog, 2024.
- OpenAI Help Center, “Memory FAQ,” updated 2025.
- OpenAI Help Center, “What is Memory?,” updated 2025.
- European Data Protection Board, “Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models,” 18 December 2024.
- EDPB, “EDPB opinion on AI models: GDPR principles support responsible AI,” press release, December 2024.
- EDPB, “EDPB adopts statement on age assurance, creates a task force on AI enforcement,” February 2025.
- European Parliament Research Service, “Regulating targeted and behavioural advertising in digital services,” Study, 2021.
- TechPolicy.Press, “What We Risk When AI Systems Remember,” 21 October 2025.
- TechPolicy.Press, “Is So-Called Contextual Advertising the Cure to Surveillance-Based 'Behavioral' Advertising?,” 2024.
- Electronic Frontier Foundation, “A Promising New GDPR Ruling Against Targeted Ads,” December 2022.
- Benzinga, “'Ads Are Coming To AI But Not To Claude:' Anthropic's Super Bowl Spot Challenges OpenAI,” February 2026.
- The Wrap, “OpenAI Considers Ads, Wants to Be 'Thoughtful' About Serving Them With Chat Responses,” December 2024.
- eWeek, “OpenAI's CFO Discusses Potential ChatGPT Ads While CEO Calls It 'Last Resort',” December 2024.
- The Information, “Exclusive: OpenAI Surpasses $100 Million Annualized Revenue From Ads Pilot,” March 2026.
- European Business Magazine, “OpenAI's ChatGPT Embraces Advertising for Revenue Growth,” 2026.
- Mozilla Foundation, “How to Protect Your Privacy from ChatGPT and Other Chatbots,” Privacy Not Included, 2025.
- OpenAI, “ChatGPT Privacy Settings,” Consumer Privacy page, 2026.
- European Data Protection Supervisor, “Revised Guidance on Generative AI,” October 2025.
- Regulation (EU) 2024/1689, the EU AI Act, entered into force 2024.
- GDPR, Article 4(4), definition of profiling; Article 6(1)(a) and (f), lawful bases for processing; Article 22, automated individual decision-making.
- Private Internet Access, “Contextual Advertising Should Be Great for Privacy, But It Risks Being Undermined,” 2025.
- DLA Piper Privacy Matters, “EU: EDPB Opinion on AI Provides Important Guidance though Many Questions Remain,” January 2025.
- EDPB ruling on Meta behavioural advertising, October 2023; Meta consent-based advertising rollout, EU/EEA.
- CNBC, “ChatGPT's ad pilot has the industry excited, but some insiders are frustrated with the slow rollout,” 20 March 2026.
- OpenAI Community Forum, “Privacy Concerns in ChatGPT's Memory System,” 2025.
- Hacker News discussion, “Why is OpenAI lying about the data it's collecting on users?,” 2025.

Tim Green
UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk