Your Phone Heard Everything: AI Assistants and the Privacy Reckoning

Every morning, roughly two billion people wake up and talk to their phones. They ask about the weather. They dictate messages to lovers, colleagues, and therapists. They request directions to clinics they would rather not name aloud. They ask questions about symptoms they have not yet mentioned to a doctor. They do all of this without pausing to consider a simple, uncomfortable fact: every one of those queries is now processed by artificial intelligence systems so vast and so opaque that not even the engineers who built them can fully explain what happens to the data once it enters the pipeline.

In January 2026, Apple and Google formalised a partnership that sent tremors through the technology industry. Apple would pay Google approximately one billion dollars per year to license a custom version of Gemini, Google's 1.2-trillion-parameter large language model, to power the next generation of Siri. The announcement was framed as a triumph of engineering collaboration. Apple's chief executive, Tim Cook, declared during the company's first-quarter 2026 earnings call that Google's AI technology would “provide the most capable foundation for Apple Foundation Models.” What neither company dwelt on were the extraordinary privacy implications of routing the intimate queries of more than a billion iPhone users through a model built by the world's largest advertising company.

Meanwhile, in the United Kingdom and Ireland, regulators were already mobilising against a different AI assistant gone rogue. Elon Musk's Grok, the chatbot integrated into X (formerly Twitter), had sparked a global backlash after users discovered they could instruct it to generate sexualised images of real people, including children. By February 2026, the UK's Information Commissioner's Office, Ofcom, and Ireland's Data Protection Commission had all launched formal investigations. The question was no longer hypothetical. It was legal, political, and deeply personal: how much of your private life are you unknowingly handing over every time you ask your phone a question?

The Billion-Dollar Handshake

To understand the stakes of the Apple-Google deal, you first need to understand the architecture. When you ask the new Siri a complex question, your device determines whether it can handle the request locally. Simple tasks remain on the iPhone. But anything requiring deeper reasoning, summarisation, or multi-step planning gets routed to Apple's Private Cloud Compute infrastructure, where the Gemini model now sits at the core. Apple's previous cloud-based models used 150 billion parameters. The jump to 1.2 trillion represents not just an increase in scale but a qualitative shift in what the system can do with your data.
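For readers who think in code, the division of labour is easier to see as a sketch. The snippet below is purely illustrative: the type names, the word-count threshold, and the routing logic are assumptions made for the sake of the example, not Apple's published implementation, which is not public.

```swift
// Hypothetical sketch of the on-device versus cloud routing decision described
// above. Names, fields, and thresholds are illustrative assumptions.
enum ExecutionTarget {
    case onDevice              // simple tasks stay on the iPhone
    case privateCloudCompute   // deeper reasoning is escalated to Private Cloud Compute
}

struct AssistantRequest {
    let text: String
    let needsMultiStepPlanning: Bool
    let needsSummarisation: Bool
}

func route(_ request: AssistantRequest, localWordLimit: Int = 64) -> ExecutionTarget {
    // Anything requiring deeper reasoning, summarisation, or multi-step
    // planning gets escalated; short, simple requests are handled locally.
    if request.needsMultiStepPlanning || request.needsSummarisation {
        return .privateCloudCompute
    }
    let wordCount = request.text.split(separator: " ").count
    return wordCount <= localWordLimit ? .onDevice : .privateCloudCompute
}

// A timer request stays local; a long document summary would not.
let timer = AssistantRequest(text: "Set a timer for ten minutes",
                             needsMultiStepPlanning: false,
                             needsSummarisation: false)
print(route(timer))  // prints "onDevice"
```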

Apple has built Private Cloud Compute around five core principles: stateless computation, meaning no data is stored after the task completes; enforceable guarantees, meaning only designated code touches user data; no privileged access, meaning not even Apple employees can see requests; non-targetability, meaning requests cannot be traced to individuals; and verifiable transparency, meaning security researchers can inspect the system. The servers run on Apple silicon, use the same Secure Enclave architecture found in iPhones, and process data ephemerally in memory only. Apple has opened its Private Cloud Compute software to external researchers and offered significant security bounty payouts for anyone who can demonstrate a privacy breach.
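The first of those principles, stateless computation, is the one that carries most of the weight. The sketch below is a conceptual model of the idea only, with hypothetical names and no claim to reflect Apple's actual code: a request exists in memory for exactly as long as it takes to answer it, and nothing is retained afterwards.

```swift
// Conceptual sketch of "stateless computation": the request is decrypted,
// processed entirely in memory, and nothing survives once the reply is
// returned. This models the principle, not Apple's implementation.
struct EphemeralSession {
    func handle(encryptedRequest: [UInt8],
                decrypt: ([UInt8]) -> String,
                infer: (String) -> String) -> String {
        // The transcript is a local value: it is never written to disk,
        // never logged, and goes out of scope when the function returns.
        let transcript = decrypt(encryptedRequest)
        return infer(transcript)
    }
    // Deliberately no stored properties: nothing is retained between
    // requests, so there is nothing to leak, subpoena, or breach later.
}
```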

On paper, this is formidable. Apple has published a comprehensive security guide, released source code for key components, and created a Virtual Research Environment that allows anyone with a Mac to test the system. No other major technology company has offered anything comparable in terms of transparency around cloud AI processing. The system is, by any reasonable measure, the most sophisticated privacy architecture ever deployed for cloud AI at scale.

But paper guarantees and real-world guarantees are different things entirely. The structural tension in the deal is inescapable. Google, whose core business depends on data collection and targeted advertising, is now providing the intelligence layer for the world's most privacy-focused consumer technology company. Apple insists that Siri interactions sent to Gemini are anonymised and that data is never stored or used to train Google's future models. Google has confirmed it will not receive Apple user data under the arrangement. Cook himself stated during the earnings call that Apple is “not changing our privacy rules.”

Security experts remain sceptical. The concern, articulated by multiple researchers in the weeks following the announcement, centres on what has been called the “weakest link problem.” Private Cloud Compute is only as private as its most vulnerable component. If Google retains any pathway to usage data, whether for model improvement, debugging, or quality assurance, the privacy guarantee fundamentally breaks down. And crucially, Apple has declined to release the full details of its agreement with Google. Cook confirmed during the same earnings call that Apple would not be “releasing the details” of the deal to the public. For a company that has made transparency a cornerstone of its privacy messaging, the refusal to disclose the terms of its most significant AI partnership is a striking omission.

There is also a subtler concern about what researchers have termed “behavioural sovereignty.” Once Siri's cognitive engine comes from Gemini, the question shifts from where data sits to who controls the behaviour of the model that hundreds of millions of people talk to every day. Apple does not control the biases embedded in Google's model architecture, the training data Google used, or the value judgements encoded in the model's responses. This creates what one analysis described as a potential for “problematic experiences that do not align with Apple's core values.” When the model that shapes how your phone responds to your most personal questions was built by a company whose business model depends on knowing everything about you, the architecture of privacy matters less than the architecture of incentives.

The irony is not lost on privacy advocates. Apple regularly runs advertising campaigns contrasting its approach to privacy with competitors who monetise user data. It has updated its App Store guidelines to require apps to disclose and obtain user permission before sharing personal data with third-party AI systems. Yet its most significant AI partnership is with the very company that epitomises the data-driven advertising model Apple claims to oppose. Apple also already pays Google approximately 20 billion dollars per year to be the default search engine on iPhones. The Gemini deal deepens an entanglement that privacy advocates have long viewed with suspicion.

What Your Voice Actually Reveals

The privacy risks of AI assistants extend far beyond the question of whether your specific query reaches a particular server. The deeper issue is what AI systems can infer from the patterns of your behaviour, even when individual requests appear innocuous.

A landmark study published in 2025 by researchers at Northeastern University and the University of Southern California, titled “Echoes of Privacy: Uncovering the Profiling Practices of Voice Assistants,” examined exactly this question. Led by Northeastern's Mon(IoT)r Research Group, the research team conducted 1,171 experiments involving nearly 25,000 voice queries over 20 months across Google Assistant, Amazon Alexa, and Apple's Siri. They created fresh user accounts, trained them with curated sets of voice queries designed to simulate various user personas, and then examined what profiling labels each platform assigned. The lead authors, Tina Khezresmaeilzadeh and Elaine Zhu, along with their colleagues, published their findings in the Proceedings on Privacy Enhancing Technologies, Volume 2025, Issue 2.

The findings were striking in their divergence. Google Assistant exhibited the most aggressive profiling behaviour, compiling information on users based on their queries, including inferred gender, age range, relationship status, and income bracket. Profiling occurred even without direct user interactions, with arbitrary and sometimes inaccurate labels appearing at different times for identical queries. Amazon Alexa showed more moderate profiling, though the researchers found that Amazon provided no tools for users to selectively remove or correct mislabelled profiling data. When users opted out of profiling on Amazon's platform, it worked as expected and limited further label creation, but existing labels could not be rectified. Apple's Siri produced no profiling labels whatsoever, making it the least invasive platform in the study.

But even Apple's relatively clean record on profiling does not eliminate risk. Voice assistants continuously listen for their wake words. Despite assurances that devices only record after detecting the trigger phrase, instances of accidental activation have been well documented, resulting in the capture of private conversations that users never intended to share. And the data that voice assistants do collect intentionally is remarkably revealing. Siri's “request history” includes transcripts, audio for users who have opted in to the Improve Siri programme, contact names, names of installed apps, device specifications, and approximate location. Each of these data points, individually unremarkable, creates a mosaic of personal information when aggregated over weeks and months.
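To make the mosaic concrete, here is a hypothetical record of the kind of fields the request history is said to contain. The field names and the helper function are invented for illustration; the point is how quickly innocuous entries, aggregated over weeks, begin to describe a life.

```swift
import Foundation

// Hypothetical record of the kinds of fields listed above for Siri's
// request history. Field names are invented for illustration.
struct AssistantRequestRecord {
    let timestamp: Date
    let transcript: String            // what was asked
    let audioRetained: Bool           // only for Improve Siri opt-ins
    let contactNamesMentioned: [String]
    let installedAppNames: [String]
    let deviceModel: String
    let approximateLocation: String   // coarse, but revealing when aggregated
}

// Even one coarse field, aggregated over weeks, sketches a routine.
func mostFrequentLocation(_ records: [AssistantRequestRecord]) -> String? {
    let counts = Dictionary(grouping: records, by: { $0.approximateLocation })
        .mapValues { $0.count }
    return counts.max(by: { $0.value < $1.value })?.key
}
```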

The economic value of this data is immense and growing. Google's advertising revenue per user rose from roughly $1.07 in 2001 to $36.20 by 2019, a more than thirtyfold increase, and the figure has climbed further since. According to multiple surveys conducted in 2025, 92 per cent of internet users are tracked by Google's behavioural data collection systems. And as Consumer Reports noted in a 2025 analysis, Google's privacy controls affect data sharing between platforms, not collection itself. The settings restrict targeting precision, not profiling capability. Many data streams do not require "Web and App Activity" to be enabled; they form the baseline substrate on which Google's entire business model depends.

The shift to trillion-parameter models makes this dynamic significantly more concerning. Earlier AI assistants could handle only simple pattern matching and keyword routing. A model with 1.2 trillion parameters can draw inferences across vast contextual landscapes. It can connect a medical query from Tuesday morning with a pharmacy search that afternoon and a life insurance question the following week. It can identify emotional states from word choice and sentence structure. It can infer relationships, financial situations, and health conditions from the texture of ordinary conversation. The International AI Safety Report, published in January 2025 by 96 experts led by Yoshua Bengio and commissioned by the 30 nations attending the 2023 Bletchley Park AI Safety Summit, explicitly identified these inference capabilities as a significant privacy risk, noting that “several harms from general-purpose AI are already well established, including privacy violations” and that “no combination of techniques can fully resolve them.”

A Ledger of Broken Promises

The history of AI assistant privacy violations reveals a pattern that should give any user pause. In July 2019, a whistleblower revealed that Apple employed third-party contractors to review Siri audio recordings as part of a quality evaluation process called the Voice Grading Programme. The contractors, the whistleblower told journalists, “regularly hear confidential medical information, drug deals, and recordings of couples having sex.” The recordings were accompanied by user data showing location, contact details, and app data. Apple had not disclosed this practice in its consumer terms and conditions.

Apple suspended the programme, issued a formal apology, and laid off more than 300 contractors who had been working on Siri grading in Europe. The company implemented new policies requiring explicit user opt-in for audio review and restricted the work to Apple employees rather than third-party contractors. But the damage was lasting. In January 2025, a federal judge approved a 95-million-dollar class action settlement in the case of Fumiko Lopez v. Apple. The plaintiffs alleged that Siri had been activated without the “Hey Siri” trigger, recording private conversations and sharing data with advertisers. Two plaintiffs reported receiving targeted advertisements for products they had only discussed verbally, including Air Jordan trainers and Olive Garden restaurants. A third said he received adverts for a surgical procedure he had discussed privately with his doctor. Apple denied wrongdoing but agreed to permanently delete all individual Siri audio recordings collected before October 2019.

The settlement covered approximately 138.5 million potentially eligible devices, though 97 per cent of eligible users never filed a claim. A separate case under Illinois's Biometric Information Privacy Act, with a class of 2.6 to 3.9 million users, was certified in January 2026 and remains ongoing. That law provides statutory damages of 1,000 to 5,000 dollars per violation.

Amazon's track record is similarly troubled. In May 2023, the Federal Trade Commission and the US Department of Justice charged Amazon with violating children's privacy laws by retaining Alexa voice recordings indefinitely and using them to improve its algorithms, even after parents explicitly requested deletion. The FTC found that when parents requested data deletion, Amazon deleted files in some databases while maintaining them in others, keeping the information available for the company's own purposes. Amazon paid a 25-million-dollar civil penalty. In a separate case, Amazon paid an additional 5.8 million dollars over Ring doorbell camera privacy violations after it emerged that employees and contractors had full access to customers' video streams. In the most disturbing instances, hackers broke into Ring's two-way video streams to sexually proposition people, call children racial slurs, and physically threaten families for ransom.

These are not edge cases. They represent systematic failures at three of the largest technology companies in the world. And they occurred with AI systems that were orders of magnitude less capable than the trillion-parameter models now being deployed.

Grok and the Regulatory Reckoning

If the Apple-Google deal represents the sophisticated end of the AI privacy spectrum, the Grok controversy represents the catastrophic failure mode. And the regulatory response to Grok is already reshaping the legal landscape that all AI assistants will have to navigate.

The crisis began in late December 2025, when users on X discovered that Grok's image generation capabilities could be weaponised. The chatbot's “Spicy Mode” allowed users to instruct it to “undress” images of women, generating AI deepfakes with no consent and no meaningful safeguards. A study by AI Forensics, based on 50,000 tweets mentioning Grok published between 25 December 2025 and 1 January 2026, found that over 53 per cent contained individuals in minimal attire. Researchers reported that some of the generated images appeared to include children.

The UK's Ofcom moved first. On 5 January 2026, the regulator urgently contacted X and set a firm deadline of 9 January for the company to explain what steps it had taken to comply with its duties under the Online Safety Act. By 12 January, Ofcom had opened a formal investigation examining whether X conducted the required risk assessments before deploying Grok's image generation features, whether it took adequate steps to prevent the distribution of non-consensual intimate imagery and child sexual abuse material, and whether it implemented age verification measures to protect children. Ofcom's enforcement powers include fines of up to 18 million pounds or 10 per cent of a company's qualifying global revenue, whichever is higher. In the most serious cases, it can seek court orders requiring internet service providers or payment firms to withdraw services or block access in the UK.

The Information Commissioner's Office followed on 3 February 2026, launching its own investigation focused specifically on data protection. William Malcolm, the ICO's head of regulatory risk and innovation, stated that “the reports about Grok raise deeply troubling questions about how people's personal data has been used to generate intimate or sexualised images without their knowledge or consent.” He added: “Losing control of personal data in this way can cause immediate and significant harm, particularly where children are involved.”

Ireland's Data Protection Commission opened a parallel investigation under the GDPR, given that X holds its European Union operations in Ireland. The DPC's investigation focuses on the processing of personal data and Grok's potential to produce harmful sexualised images involving Europeans, including children. Under the GDPR, the DPC can levy fines of up to four per cent of a company's global revenue.

The regulatory net extends further still. French police raided X's Paris offices in early February as part of a widening criminal inquiry. Both Elon Musk and former X chief executive Linda Yaccarino were summoned for questioning. Governments and regulators in at least eight countries confirmed action against X and xAI. And the UK government fast-tracked provisions under section 138 of the Data (Use and Access) Act 2025, which came into force on 6 February 2026, creating new criminal offences for creating or requesting the creation of non-consensual intimate images, including AI-generated deepfakes of adults. The legislation also criminalises requesting someone else to create such images, closing a significant gap in English law that had previously left the initial creation of non-consensual intimate images outside the scope of criminal liability.

X responded by limiting the image editing feature to paid subscribers and announcing it would no longer allow users to edit images of real people in revealing clothing in jurisdictions where it is illegal. But by mid-January, reports indicated that the images were still being produced on X in the UK, France, and Belgium. The gap between corporate promises and technical reality is precisely what regulators are now probing.

Regulatory Fragmentation and the Enforcement Gap

The regulatory landscape for AI privacy is evolving rapidly but remains deeply fragmented, and that fragmentation itself is a privacy risk. In the European Union, the AI Act, adopted in 2024, creates a risk-based framework that subjects high-risk AI systems to specific obligations. It works in concert with the GDPR, which remains the world's most comprehensive data protection regulation. But in November 2025, the European Commission proposed a Digital Omnibus package that would amend both the GDPR and the AI Act, introducing changes that critics describe as significant deregulation driven by industry lobbying.

Among the most contentious proposals is a provision that would explicitly recognise the processing of personal data for AI training as a “legitimate interest” under the GDPR, removing the need for explicit consent. Another would narrow the definition of personal data, potentially stripping many pseudonymous identifiers, such as advertising IDs and cookies, of GDPR protection entirely. The deadlines for compliance with the AI Act's requirements for high-risk systems would be pushed back to December 2027 and August 2028. The obligation for AI providers and deployers to teach AI literacy to their users would be dropped altogether. Analysis published by Corporate Europe Observatory in January 2026 traced the influence of major technology companies on these proposals, characterising them as a systematic rollback of EU digital rights shaped “article by article” by Big Tech lobbying.

In the United Kingdom, the regulatory framework is being shaped in real time by the Grok investigations. The Online Safety Act 2023 created new duties for platforms to protect users from illegal content, and the Data (Use and Access) Act 2025 introduced criminal offences for creating non-consensual intimate images. But enforcement remains a challenge. Ofcom acknowledged that “because of the way the Act relates to chatbots,” it is currently unable to investigate the creation of illegal images by the standalone Grok service, only its distribution on X. The government has signalled it will table an amendment to the Crime and Policing Bill to require AI chatbot providers not currently in scope of the Online Safety Act to protect their users from illegal content. But legislation moves slowly, and AI moves fast.

In the United States, there is still no comprehensive federal privacy law. By February 2025, 19 states had enacted their own privacy laws, creating a patchwork of regulations that technology companies must navigate. The FTC has used its existing enforcement powers aggressively, securing settlements from Amazon and others, but its authority remains limited compared to European regulators. The structural problem is clear: AI systems operate globally, processing data across jurisdictions with incompatible legal frameworks. A query made by a user in London might be processed on servers in the United States using a model trained on data from dozens of countries. The legal protections that apply depend on where the user sits, where the server sits, where the company is incorporated, and which regulators choose to act.

The Invisible Bargain

What makes AI assistant privacy so difficult to address is that the bargain is almost entirely invisible to the user. When you install a social media app, you are at least dimly aware that you are exchanging personal information for a service. When you ask Siri to set a timer or check the weather, the transactional nature of the interaction is hidden. The service feels like a utility, not a data exchange.

But it is a data exchange. Every query you make generates metadata: when you asked, where you were, what device you used, what you asked before and after. Even if your specific words are anonymised and deleted, the patterns they create persist. In the AI era, privacy risks are increasingly metadata risks. As one legal analysis noted, AI makes inference cheaper and more accurate, meaning that even seemingly innocuous data points can reveal sensitive information when processed by a system optimised to find patterns. Your query history reveals your daily routines, your anxieties, your relationships, your health concerns, and your financial worries. Aggregated over months and years, this data constitutes a remarkably detailed portrait of your inner life.
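A toy example shows how little content is needed. Assume, purely for illustration, a log that retains nothing but a timestamp and a coarse category for each query; the categories, names, and threshold below are hypothetical. Even that is enough to surface a weekly routine, such as a recurring appointment, that deleting transcripts does nothing to conceal.

```swift
import Foundation

// Illustrative metadata-only inference: no query text at all, just a
// timestamp and a coarse category per entry.
struct QueryMetadata {
    let timestamp: Date
    let category: String    // e.g. "navigation", "health", "finance"
}

// Flags categories that recur on the same weekday across several distinct
// weeks, the signature of a routine such as a weekly appointment.
func recurringWeekdayPatterns(in log: [QueryMetadata], minWeeks: Int = 4) -> [String] {
    let calendar = Calendar.current
    var weeksSeen: [String: Set<Int>] = [:]   // "category@weekday" -> weeks of year
    for entry in log {
        let weekday = calendar.component(.weekday, from: entry.timestamp)
        let week = calendar.component(.weekOfYear, from: entry.timestamp)
        weeksSeen["\(entry.category)@\(weekday)", default: []].insert(week)
    }
    return weeksSeen.filter { $0.value.count >= minWeeks }.map { $0.key }
}
```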

The International AI Safety Report identified this dynamic explicitly, noting that Retrieval-Augmented Generation, a common technique used to personalise AI responses by feeding systems current and personal data beyond the model's original training set, “creates additional privacy risks” even when the underlying model itself is secure. The report also warned that AI can infer identities from indirect data even after de-identification efforts, and that privacy risks may extend to people who are not users of the system but whose personal information might be inferred through advanced data analysis.

IBM's 2025 data breach report added another dimension to the problem. It revealed that one in five organisations experienced breaches through “shadow AI,” which occurs when employees paste sensitive information, source code, meeting notes, and customer data into unauthorised AI tools. These breaches added an average of 670,000 dollars to breach costs. The risk is not limited to corporate settings. Any user who dictates a sensitive message through Siri, asks a health question through Alexa, or discusses financial details with Google Assistant is feeding data into a system whose ultimate disposition of that information depends on architectural decisions, corporate policies, and regulatory frameworks that the user cannot see and may not understand.

Survey data consistently reflects growing public unease with this reality. According to multiple industry surveys from 2025, 82 per cent of consumers reported being highly concerned about how their data is collected and used. Seventy per cent expressed little to no trust in companies to make responsible decisions about AI in their products. Fifty-seven per cent of people worldwide identified the use of AI in collecting and processing personal data as a serious privacy risk. Yet despite this anxiety, fewer than one in four American smartphone users reported feeling in control of their personal data online. The gap between concern and agency is the defining feature of the AI privacy landscape.

Apple's approach to this challenge is, by industry standards, genuinely ambitious. Private Cloud Compute represents a serious engineering effort to process AI queries without creating a permanent record. The company's willingness to open its systems to external security researchers and to offer bounties for discovered vulnerabilities distinguishes it from virtually every competitor. Users can generate reports of requests their iPhone has sent to Private Cloud Compute through Settings, covering periods of the last 15 minutes or the last seven days. But even the most robust privacy architecture cannot fully eliminate the risks inherent in routing the world's most personal queries through a model that Apple did not build, using training data Apple did not curate, with architectural decisions Apple did not make.

The AI assistant on your phone is no longer a simple voice-activated search engine. It is a system capable of understanding, inferring, and connecting information in ways that previous generations of technology could not. The 1.2-trillion-parameter brain inside your phone is extraordinarily powerful. But power, in the context of personal data, has always been a question of who holds it and what they choose to do with it. Right now, the answer to that question is: you do not hold it, you cannot fully verify what is being done with it, and the regulatory systems designed to protect you are still catching up to a technology that is already inside your pocket, already listening, and already far more capable than most people realise.

That should concern anyone who has ever asked their phone a question they would rather not say out loud.


References and Sources

  1. CNBC, “Apple picks Google's Gemini to run AI-powered Siri coming this year,” 12 January 2026. https://www.cnbc.com/2026/01/12/apple-google-ai-siri-gemini.html

  2. Bloomberg, “Apple Plans to Use 1.2 Trillion Parameter Google Gemini Model to Power New Siri,” 5 November 2025. https://www.bloomberg.com/news/articles/2025-11-05/apple-plans-to-use-1-2-trillion-parameter-google-gemini-model-to-power-new-siri

  3. MacRumors, “Apple Explains How Gemini-Powered Siri Will Work,” 30 January 2026. https://www.macrumors.com/2026/01/30/apple-explains-how-gemini-powered-siri-will-work/

  4. Apple Insider, “Tim Cook: Apple won't change privacy rules with Google Gemini partnership,” 29 January 2026. https://appleinsider.com/articles/26/01/29/tim-cook-apple-wont-change-privacy-rules-with-google-gemini-partnership

  5. Apple Insider, “Google confirms that it won't get Apple user data in new Siri deal,” 12 January 2026. https://appleinsider.com/articles/26/01/12/google-confirms-that-it-wont-get-apple-user-data-in-new-siri-deal

  6. TheStreet, “Apple's new Siri runs on Gemini, and there's an invisible catch,” 2026. https://www.thestreet.com/technology/apples-new-siri-runs-on-gemini-and-theres-an-invisible-catch

  7. Apple Security Research, “Private Cloud Compute: A new frontier for AI privacy in the cloud.” https://security.apple.com/blog/private-cloud-compute/

  8. Apple Security Research, “Security research on Private Cloud Compute.” https://security.apple.com/blog/pcc-security-research/

  9. Khezresmaeilzadeh, T., Zhu, E., Grieco, K., Dubois, D., Psounis, K., and Choffnes, D. “Echoes of Privacy: Uncovering the Profiling Practices of Voice Assistants.” Proceedings on Privacy Enhancing Technologies, Volume 2025, Issue 2, Pages 71-87. https://petsymposium.org/popets/2025/popets-2025-0050.php

  10. Northeastern University News, “Your voice assistant is profiling you, just not in the way you expect, new research finds,” 17 March 2025. https://news.northeastern.edu/2025/03/17/voice-assistant-profiling-research/

  11. Ofcom, “Ofcom launches investigation into X over Grok sexualised imagery,” 12 January 2026. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-launches-investigation-into-x-over-grok-sexualised-imagery

  12. ICO, “ICO announces investigation into Grok,” 3 February 2026. https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/02/ico-announces-investigation-into-grok/

  13. Euronews, “Ireland investigates Elon Musk's Grok AI over sexualised images,” 17 February 2026. https://www.euronews.com/next/2026/02/17/ireland-launches-large-scale-probe-into-elon-musks-grok-over-ai-generated-sexual-images

  14. CNN Business, “Grok AI: Europe's privacy watchdog launches 'large-scale' probe into Elon Musk's X,” 17 February 2026. https://edition.cnn.com/2026/02/17/business/grok-ai-sexualized-images-eu-probe-intl

  15. TechPolicy.Press, “Regulators Are Going After Grok and X – Just Not Together,” 2026. https://www.techpolicy.press/regulators-are-going-after-grok-and-x-just-not-together/

  16. Data (Use and Access) Act 2025, Section 138, UK Parliament. https://www.legislation.gov.uk/ukpga/2025/18/section/138

  17. FTC, “FTC and DOJ Charge Amazon with Violating Children's Privacy Law by Keeping Kids' Alexa Voice Recordings Forever,” 31 May 2023. https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-doj-charge-amazon-violating-childrens-privacy-law-keeping-kids-alexa-voice-recordings-forever

  18. NPR, “Amazon to pay over $30 million to settle claims Ring, Alexa invaded user privacy,” 1 June 2023. https://www.npr.org/2023/06/01/1179381126/amazon-alexa-ring-settlement

  19. IAPP, “European Commission proposes significant reforms to GDPR, AI Act,” November 2025. https://iapp.org/news/a/european-commission-proposes-significant-reforms-to-gdpr-ai-act

  20. Corporate Europe Observatory, “Article by article, how Big Tech shaped the EU's roll-back of digital rights,” January 2026. https://corporateeurope.org/en/2026/01/article-article-how-big-tech-shaped-eus-roll-back-digital-rights

  21. CMS LawNow, “Grok in deep trouble over deepfakes? What Ofcom's recent investigation means for online platforms,” February 2026. https://cms-lawnow.com/en/ealerts/2026/02/grok-in-deep-trouble-over-deepfakes-what-ofcom-s-recent-investigation-means-for-online-platforms

  22. Apple Newsroom, “Improving Siri's privacy protections,” August 2019. https://www.apple.com/newsroom/2019/08/improving-siris-privacy-protections/

  23. NPR, “Apple to pay $95 million to settle Siri privacy lawsuit,” 3 January 2025. https://www.npr.org/2025/01/03/g-s1-40940/apple-settle-lawsuit-siri-privacy

  24. Courthouse News Service, “Judge approves $95 million Apple settlement over Siri privacy case,” October 2025. https://www.courthousenews.com/judge-approves-95-million-apple-settlement-over-siri-privacy-case/

  25. International AI Safety Report 2025, published January 2025. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025

  26. Private AI, “What the International AI Safety Report 2025 has to say about Privacy Risks from General Purpose AI,” 2025. https://www.private-ai.com/en/blog/ai-safety-report-2025-privacy-risks

  27. Ofcom, “Investigation into X Internet Unlimited Company,” January 2026. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/investigation-into-x-internet-unlimited-company-and-its-compliance-with-duties-to-protect-its-users-from-illegal-content-and-child-users-from-harmful-content

  28. Lewis Silkin, “Online safety reforms to be fast-tracked amid rising AI risks,” February 2026. https://www.lewissilkin.com/insights/2026/02/23/online-safety-reforms-to-be-fast-tracked-amid-rising-ai-risks-102mk2r


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
