You Are Not Choosing: AI and the Invisible Architecture of Daily Decisions

You woke up this morning and checked your phone. Before your first cup of tea had brewed, you had already been nudged, filtered, ranked, and sorted by artificial intelligence dozens of times. The news headlines surfaced to your lock screen were algorithmically curated. The playlist that accompanied your commute was assembled by machine learning models analysing your listening history, mood patterns, and the time of day. The product recommendations that caught your eye during a two-minute scroll through an online shop were generated by systems that, according to McKinsey research, already account for roughly 35 per cent of everything purchased on Amazon. And you noticed none of it.

According to IDC's landmark “Data Age 2025” whitepaper, produced in partnership with Seagate, the average connected person now engages in nearly 4,900 digital data interactions every single day. That is roughly one interaction every 18 seconds, averaged around the clock; confined to waking hours, the interval shrinks to about 12 seconds. The figure has grown dramatically from just 298 interactions per day in 2010 to 584 in 2015, climbing through an estimated 1,426 by 2020. Today, more than five billion consumers interact with data daily, and that number is projected to reach six billion, or 75 per cent of the world's population, by the end of 2025. The vast majority of these touchpoints are mediated, shaped, or outright determined by artificial intelligence systems operating beneath the surface of your awareness. The question is no longer whether AI influences your daily life. The question is whether you still recognise the difference between a choice you made and a choice that was made for you.

The Architecture of Invisible Influence

To understand the scale of what is happening, consider the platforms that structure most people's digital existence. Netflix reports that more than 80 per cent of the content its subscribers watch is discovered through its recommendation engine, a figure the company has cited consistently since at least 2017. The platform, which serves over 260 million subscribers globally across more than 190 countries, says its personalisation engine saves users a collective total of over 1,300 hours per day in search time alone. On Spotify, algorithmic features including Discover Weekly, Release Radar, and personalised mixes account for approximately 40 per cent of all new artist discoveries, according to the platform's own Fan Study released in April 2024. Since its launch, users have listened to over 2.3 billion hours of music from Discover Weekly alone. These are not peripheral features bolted onto the side of the product. They are the product.

The sophistication of these systems has advanced well beyond simple collaborative filtering, the technique that once powered the familiar “customers who bought this also bought” prompt. Modern recommendation engines deploy deep learning architectures that analyse hundreds of signals simultaneously: your viewing history, obviously, but also how long you hovered over a thumbnail, whether you watched to completion or abandoned at the 23-minute mark, what time of day you tend to prefer certain genres, and how your consumption patterns correlate with those of millions of other users whose behaviour the system has already mapped. According to McKinsey, effective personalisation based on user behaviour can increase customer satisfaction by 20 per cent and conversion rates by 10 to 15 per cent, while retailers implementing advanced recommendation algorithms report a 22 per cent increase in customer lifetime value.
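To make the multi-signal idea concrete, here is a deliberately simplified sketch of signal-weighted ranking. Everything here is invented for illustration: the signal names, the hand-set weights, and the logistic squash. Production recommenders learn such weightings from behavioural data with deep models at vastly larger scale.

```python
import math

# Hypothetical behavioural signals, each normalised to [0, 1].
# Real systems combine hundreds of learned signals; these three are
# stand-ins chosen to mirror the examples in the text.
SIGNAL_WEIGHTS = {
    "completion_ratio": 0.5,   # fraction of the item the user actually watched
    "hover_seconds": 0.2,      # how long the thumbnail held their attention
    "time_of_day_match": 0.3,  # does the genre fit this user's usual slot?
}

def relevance_score(signals: dict) -> float:
    """Weighted sum of normalised signals, squashed into (0, 1)."""
    raw = sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())
    return 1 / (1 + math.exp(-4 * (raw - 0.5)))  # logistic squash around 0.5

def rank(candidates: dict) -> list:
    """Order candidate items by descending relevance score."""
    return sorted(candidates,
                  key=lambda item: relevance_score(candidates[item]),
                  reverse=True)

catalogue = {
    "thriller_a": {"completion_ratio": 0.9, "hover_seconds": 0.8,
                   "time_of_day_match": 1.0},
    "docu_b":     {"completion_ratio": 0.4, "hover_seconds": 0.2,
                   "time_of_day_match": 0.0},
}
# rank(catalogue) puts "thriller_a" first: the user's own past
# behaviour, not any editorial judgment, decides what surfaces.
```

The point of the toy is the asymmetry it exposes: none of these signals was deliberately given by the user, yet together they fully determine what appears on screen.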

What makes this consequential is not the technology itself but its invisibility. The legal scholar Cass Sunstein, co-author of the influential book “Nudge” with Nobel laureate Richard Thaler, has written extensively about how “choice architecture” shapes human decisions. A nudge, in their definition, is any design element that alters people's behaviour in a predictable way without restricting their options or significantly changing their economic incentives. The critical insight is that choice architecture cannot be avoided. Every interface, every default setting, every ordering of options on a screen constitutes a form of choice architecture. The only question is whether it is designed transparently and in the user's interest, or opaquely and in the interest of the platform.

In the digital realm, that question has taken on extraordinary urgency. A European Commission study published in 2022 found that 97 per cent of the most popular websites and apps used by EU consumers deployed at least one “dark pattern,” a design technique that manipulates users into decisions they might not otherwise make. A subsequent investigation by the United States Federal Trade Commission, published in July 2024, examined 642 websites and apps and found that more than three quarters employed at least one deceptive pattern, with nearly 67 per cent deploying multiple such techniques simultaneously. These are not outlier findings. They describe the default condition of the digital environment in which billions of people make thousands of decisions every day.

Your Feed Is Not a Window; It Is a Mirror

Perhaps the most profound form of invisible AI influence operates through the news and social media feeds that billions of people consult daily. The global number of active social media users surpassed 5 billion in 2024, with the average user spending approximately 2 hours and 21 minutes per day on social platforms, according to DataReportal and Global WebIndex. Mobile devices dominate, accounting for 92 per cent of all social media screen time in 2025. The average user engages with approximately 6.8 different platforms per month. During that time, every piece of content encountered has been selected, ranked, and sequenced by algorithmic systems optimising for engagement.

The consequences of this optimisation have been the subject of intense academic scrutiny. A systematic review published in MDPI's “Societies” journal in 2025 synthesised a decade of peer-reviewed research examining the interplay between filter bubbles, echo chambers, and algorithmic bias, documenting a sharp increase in scholarly concern after 2018.

The distinction between filter bubbles and echo chambers matters. Filter bubbles, a term coined by internet activist Eli Pariser in 2011, describe environments where algorithmic curation immerses users in attitude-consistent information without their knowledge. Echo chambers emphasise active selection, where individuals choose to interact primarily with like-minded sources. A 2024 study in the Journal of Computer-Mediated Communication found that user query formulation, not algorithmic personalisation, was the primary driver of divergent search results. The way people phrase their questions matters more than the algorithm's filtering.

Yet this finding does not absolve the algorithms. A study on “Algorithmic Amplification of Biases on Google Search” published on arXiv found that individuals with opposing views on contentious topics receive different search results, and that users unconsciously express their beliefs through vocabulary choices, which the algorithm then reinforces. The researchers demonstrated that differences in vocabulary serve as unintentional implicit signals, communicating pre-existing attitudes to the search engine and resulting in personalised results that confirm those attitudes. The algorithm does not create the bias, but it amplifies it.
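The mechanism is easy to demonstrate with a toy retriever. In this sketch (the documents and queries are invented for the example), a naive term-overlap ranker returns opposite-leaning documents for two queries about the same topic, purely because of word choice:

```python
# Two documents on the same policy question, written with different
# vocabularies. Real search indexes are vastly larger, but the
# term-matching core behaves the same way.
DOCS = {
    "doc_pro":  "benefits of the policy and why supporters back it",
    "doc_anti": "problems with the policy and why critics oppose it",
}

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q & set(DOCS[d].lower().split())))

# Same topic, different wording, different top result:
retrieve("why do supporters back the policy")  # -> "doc_pro"
retrieve("why do critics oppose the policy")   # -> "doc_anti"
```

Neither user asked for a one-sided answer; their vocabulary did the asking for them, which is exactly the implicit-signal effect the arXiv study describes.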

On TikTok, these dynamics are particularly pronounced. A major algorithmic audit published on arXiv in January 2025 conducted 323 independent experiments testing partisan content recommendations during the lead-up to the 2024 United States presidential election. The researchers analysed more than 340,000 videos over a 27-week period using controlled accounts across three states with varying political demographics. Their findings indicated that TikTok's recommendation algorithm skewed towards Republican content during that period, a result with significant implications given that, according to Tufts University's CIRCLE, 25 per cent of young people named TikTok as one of their top three sources of political information during the 2024 election cycle. The platform has already been fined 345 million euros by the Irish Data Protection Commission because its preselection of “public-by-default” accounts was deemed a deceptive design pattern.

The Quiet Colonisation of Consumer Choice

The influence extends far beyond politics. AI-powered recommendation systems are fundamentally reshaping how people discover, evaluate, and purchase products. A McKinsey survey found that half of consumers now intentionally seek out AI-powered search engines, with a majority reporting that AI is the top digital source they use to make buying decisions. Among people who use AI for shopping, the technology has become the second most influential source, surpassing retailer websites, apps, and even recommendations from friends and family. McKinsey projects that by 2028, 750 billion dollars in United States revenue will flow through AI-powered search, while brands unprepared for this shift may see traditional search traffic decline by 20 to 50 per cent.

The numbers from the Interactive Advertising Bureau (IAB) reinforce this pattern. Their research found that 44 per cent of AI-powered search users describe it as their primary source of purchasing insight, compared to 31 per cent for traditional search, 9 per cent for retailer or brand websites, and just 6 per cent for review sites. Nearly 90 per cent of AI-assisted shoppers report that the technology helps them discover products they would not have found otherwise, and 64 per cent had AI surface a new product during a single shopping session.

What is striking is the degree of satisfaction users express. According to Bloomreach consumer surveys, 81 per cent of AI-assisted shoppers say the technology made their purchasing decisions easier, 77 per cent say it made them feel more confident, and 85 per cent agree that recommendations feel personalised. Over 70 per cent say AI often anticipates their needs before they even articulate them. From the consumer's perspective, the system is working brilliantly. The experience is frictionless.

But “frictionless” is precisely the word that should give us pause. When a system removes all friction from a decision, it also removes the cognitive engagement that constitutes genuine deliberation. A 2025 study on AI's cognitive costs, indexed in PubMed Central, found that prolonged AI use was significantly associated with mental exhaustion, attention strain, and information overload (r = 0.905), while being inversely associated with decision-making self-confidence (r = -0.360). The researchers concluded that while AI integration improved efficiency in the short term, prolonged utilisation precipitated cognitive fatigue, diminished focus, and attenuated user agency.

This is the paradox at the heart of AI-mediated consumer life. The system makes choices easier in the moment while gradually eroding the capacity and inclination to make them independently.

Surveillance Capitalism and the Business of Behaviour Modification

To understand why these systems operate as they do, it is essential to examine the economic logic that drives them. Shoshana Zuboff, the Harvard Business School professor emerita whose 2019 book “The Age of Surveillance Capitalism” has become a foundational text in the field, argues that major technology companies have pioneered a new form of capitalism that “unilaterally claims human experience as free raw material for translation into behavioural data.” The excess data generated by users, what Zuboff terms “proprietary behavioural surplus,” is fed into machine learning systems and fabricated into prediction products that anticipate what users will do, think, feel, and buy.

Crucially, Zuboff's analysis extends beyond mere data collection. She documents how surveillance capitalists discovered that the most predictive behavioural data come not from passively observing behaviour but from actively intervening to “nudge, coax, tune, and herd behaviour toward profitable outcomes.” The goal, she writes, is no longer to automate information flows about people. “The goal now is to automate us.” This represents what Zuboff calls “instrumentarian power,” a form of control that operates not through coercion or ideology but through knowledge, prediction, and the subtle shaping of behaviour at scale. Unlike traditional totalitarian systems based on fear, surveillance capitalism operates through continuous, invisible behavioural guidance towards economically profitable ends.

In 2024, Zuboff and Mathias Risse, director of the Carr Center for Human Rights Policy, launched a programme at Harvard Kennedy School titled “Surveillance Capitalism or Democracy?” The initiative brought together figures including EU antitrust chief Margrethe Vestager, Nobel Prize-winning journalist Maria Ressa, and Baroness Beeban Kidron. Vestager emphasised at the September 2024 forum that “it's not too late” to curb the exploitation of personal data.

A December 2024 research paper published on ResearchGate, drawing on frameworks from both Zuboff and technology critic Evgeny Morozov, examined how AI systems facilitate the extraction, analysis, and commercialisation of behavioural data. The paper concluded that platforms and Internet of Things devices construct sophisticated mechanisms for behavioural modification, and advocated for a balance between technological innovation and social protection.

The relevance of this framework has only intensified as generative AI has matured. In 2025, AI no longer merely analyses clicks or searches. It anticipates needs before individuals are fully aware of them. Large language models and predictive systems function as accelerators of behavioural surplus, capable of absorbing vast quantities of human data to create economic value. Meanwhile, new regulatory initiatives such as the European AI Act confirm one of Zuboff's central contentions: without political regulation, the market does not self-correct.

The Neurological Dimension: How Algorithms Rewire Attention

The invisible influence of AI extends to the most fundamental level of human cognition. Research published in the journal Cureus in 2025 examined the neurobiological impact of prolonged social media use, focusing on how it affects the brain's reward, attention, and emotional regulation systems. The study found that frequent engagement with social media platforms alters dopamine pathways, a critical component in reward processing, fostering dependency patterns analogous to substance addiction. Changes in brain activity within the prefrontal cortex and amygdala suggested increased emotional sensitivity and compromised decision-making abilities.

A key 2024 paper by Hannah Metzler and David Garcia, published in Perspectives on Psychological Science, examined these algorithmic mechanisms directly. The researchers noted that algorithms could contribute to increasing depression, anxiety, loneliness, body dissatisfaction, and suicides by facilitating unhealthy social comparisons, addiction, poor sleep, cyberbullying, and harassment, especially among teenagers and girls. However, they cautioned that the debate frequently conflates the effects of time spent on social media with the specific effects of algorithms, making it difficult to isolate algorithmic causality.

The concept of “brain rot,” named the Oxford Word of the Year for 2024, captures the cultural dimension of this neurological reality. A 2025 review indexed in PubMed Central defined brain rot as the cognitive decline and mental exhaustion experienced by individuals due to excessive exposure to low-quality online materials. The study linked it to negative behaviours including doomscrolling, zombie scrolling, and social media addiction, all associated with psychological distress, anxiety, and depression. These factors impair executive functioning skills, including memory, planning, and decision-making.

The attention economy, as a theoretical framework, helps explain why platforms are designed to produce these effects. A paper published in the journal Futures applied an attention economic perspective to predict societal trends and identified what the authors described as “a spiral of attention scarcity.” They predicted an information environment that increasingly targets citizens with attention-grabbing content; a continuing trend towards excessive media consumption; and a continuing trend towards inattentive uses of information.

This spiral has measurable consequences. Research published in the Journal of Quantitative Description: Digital Media in 2025 documented that 39 per cent of respondents across 47 countries reported feeling “worn out” by the amount of news in 2024, up from 28 per cent in 2019. The phenomenon of “digital amnesia,” whereby individuals forget readily available information due to reliance on search engines and AI assistants, further illustrates how algorithmic mediation is altering basic cognitive processes. A systematic review published in March 2025 concluded that the digital age has significantly altered human attention, with increased multitasking, information overload, and algorithm-driven biases collectively impacting productivity, cognitive load, and decision-making.

The Chatbot in the Room: Large Language Models as New Echo Chambers

The emergence of large language models has introduced an entirely new dimension to the problem of invisible AI influence. A 2025 study published in Big Data and Society by Christo Jacob, Paraic Kerrigan, and Marco Bastos introduced the concept of the “chat-chamber effect,” describing how AI chatbots like ChatGPT may create personalised information environments that function simultaneously as filter bubbles and echo chambers.

The researchers argued that algorithmic bias and media effects combine to create a prospect of AI chatbots providing politically congruent information to isolated subgroups, triggering effects that result from both algorithmic filtering and active user-AI communication. This dynamic is compounded by the persistent challenge of hallucination in large language models. The study cited research indicating that ChatGPT generates reference data with a hallucination rate as high as 25 per cent.

Given the capacity of large language models to mimic human communication, the researchers warned that incorporating hallucinating AI chatbots into daily information consumption may create feedback loops that isolate individuals in bubbles with limited access to counterattitudinal information. The ability of these systems to sound authoritative while producing fabricated content represents a qualitatively different kind of information risk than anything previously encountered in the history of media.

This concern gains additional weight when set alongside the growing use of AI for everyday decision-making. According to Bloomreach surveys, nearly 60 per cent of consumers report using AI to help them shop. Among frequent shoppers (those who purchase more than once a week), 66 per cent regularly use AI assistants such as ChatGPT to inform their purchase decisions. The IAB found that among AI shoppers, 46 per cent use AI “most or every time” they shop, and 80 per cent expect to rely on it more in the future. Research from the California Management Review at UC Berkeley has found that consumers prefer AI recommendations for practical, utilitarian purchases while favouring human guidance for more emotional or experiential ones, suggesting that the boundary between human and algorithmic judgment is becoming increasingly contextual.

The implications are significant. If the tools people use to make decisions are themselves shaped by biases, trained on data reflecting existing inequalities, and prone to generating plausible but inaccurate information, then the decisions emerging from those interactions are compromised at their foundation.

The Regulatory Response: Too Little, Too Late?

Governments and regulatory bodies have begun to respond, though the pace of regulation consistently lags behind the pace of technological deployment. The European Union has been the most aggressive actor in this space. The Digital Services Act (DSA), effective since 2024, explicitly prohibits a range of dark pattern techniques on digital platforms. The Digital Markets Act (DMA) bars designated gatekeepers from using “behavioural techniques or interface design” to circumvent their regulatory obligations.

Most significantly, the EU's Artificial Intelligence Act, adopted in June 2024, represents the world's first comprehensive legal framework for regulating AI. The regulation entered into force on 1 August 2024 and introduces a risk-based classification system. AI systems deemed to pose unacceptable risk, including those that manipulate human behaviour through subliminal techniques or exploit vulnerabilities based on age, disability, or socioeconomic status, are banned outright. The prohibition on banned AI systems took effect on 2 February 2025, with remaining obligations phasing in through 2027.

The EU has also launched consultations for a Digital Fairness Act, following an October 2024 “Fitness Check” in which the European Commission found that consumers remain inadequately protected against manipulative design elements. The proposed legislation would establish a binding EU-wide definition of dark patterns, categorised by severity, functionality, and potential impact on user decision-making. A public consultation was launched on 17 July 2025, with the final legislative proposal expected in the third quarter of 2026.

In the United States, enforcement has been more piecemeal. The FTC has pursued action against individual companies under Section 5 of the FTC Act. Notable cases include the ongoing proceedings against Amazon for allegedly using dark patterns to trick consumers into enrolling in Amazon Prime subscriptions, the December 2023 settlement requiring Credit Karma to pay three million dollars for misleading “pre-approved” credit card offers, and the 245 million dollar refund order against Epic Games for using dark patterns to induce children into making unintended in-game purchases in Fortnite.

At the state level, New York passed the Stop Addictive Feeds Exploitation (SAFE) Act to protect children from addictive algorithmic feeds, and Utah enacted legislation in 2024 to hold companies accountable for mental health impacts from algorithmically curated content.

Yet regulation, by its nature, operates reactively. By the time a law is drafted, debated, passed, and enforced, the technology it targets has typically evolved beyond its original scope. The EU AI Act's phased implementation, which will not be fully operative until 2027, illustrates this temporal mismatch. Legal scholars have noted the inherent difficulty: dark patterns operate in the grey zone between legitimate persuasion and outright manipulation, while EU consumer legislation still largely assumes that consumers are rational economic actors.

What You Do Not Know You Do Not Know

The most insidious aspect of invisible AI influence is not that it exists but that it operates below the threshold of awareness. A 2025 study published in Humanities and Social Sciences Communications introduced a system to evaluate population knowledge about algorithmic personalisation. Using data from 1,213 Czech respondents, it revealed significant demographic disparities in digital media literacy, underscoring what the researchers described as an urgent need for targeted educational programmes.

The research consistently shows that informed users can better evaluate privacy risks, guard against manipulation through tailored content, and adjust their online behaviour for more balanced information exposure. But achieving that awareness requires recognising the influence in the first place, which is precisely what these systems are designed to prevent.

The research also reveals a generational dimension. According to data from DemandSage and DataReportal, Generation Z users spend an average of 3 hours and 18 minutes daily on social media, with United States teenagers averaging 4 hours and 48 minutes. Millennials follow at 2 hours and 47 minutes, while Generation X averages 1 hour and 53 minutes. These are the individuals whose political views, consumer preferences, cultural tastes, and understanding of the world are being most intensively shaped by algorithmic curation, and the youngest among them have never known a world where such curation did not exist.

Trust in AI continues to grow even as evidence of its limitations accumulates. According to the Attest 2025 Consumer Adoption of AI Report, 43 per cent of consumers now trust information provided by AI chatbots or tools, up from 40 per cent the previous year. Trust in companies' handling of AI-collected data rose from 29 per cent in 2024 to 33 per cent in 2025. Among 18 to 30 year olds, 37 per cent trust AI companies with their data, compared to 27 per cent of those over 50. There is also a notable gender dimension: men are significantly more likely than women to use AI for purchasing decisions, at 52 per cent versus 43 per cent.

Reclaiming Agency in an Algorithmic World

The picture that emerges from this research is not one of helpless individuals trapped in algorithmic prisons. It is something more nuanced. The algorithms are not imposing preferences from without; they are amplifying tendencies from within. They do not create desires; they detect, reinforce, and commercialise them. The filter bubble is not a wall erected around you; it is a mirror held up to your existing inclinations, polished and magnified until it becomes difficult to distinguish reflection from reality.

This distinction matters because it shifts the locus of responsibility. If algorithms merely reflected an objective external reality, the solution would be straightforward: fix the algorithm. But if they are amplifying subjective internal states, the challenge requires not only better technology and stronger regulation but also a form of cognitive self-defence that most people have never been taught to practise.

The academic literature offers some grounds for cautious optimism. A commentary published in Big Data and Society explored the concept of “protective filter bubbles,” documenting cases where algorithmic curation has provided safe spaces for feminist groups, gay men in China, and political dissidents in countries with restricted press freedom. The technology is not inherently destructive; its impact depends on the intentions and incentives of those who deploy it.

Researchers are also exploring technical solutions. A 2025 study published by Taylor and Francis proposed an “allostatic regulator” for recommendation systems, based on opponent process theory from psychology. The approach can be applied to the output layer of any existing recommendation algorithm to dynamically restrict the proportion of potentially harmful or polarised content recommended to users, offering a pathway for platforms to mitigate echo chamber effects without fundamentally redesigning their systems.
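The general shape of such an output-layer constraint can be sketched in a few lines. This is a hypothetical simplification, not the published allostatic method: it simply re-emits a scored list while capping the running share of items that some upstream classifier has flagged as polarising, deferring the excess to the bottom.

```python
def cap_flagged(ranked_items: list, flags: dict, max_fraction: float = 0.2) -> list:
    """Re-rank a list so flagged items never exceed max_fraction of the
    output seen so far. A toy post-filter: it sits on top of any existing
    ranker and changes only the ordering, never the candidate set."""
    out, deferred, flagged_count = [], [], 0
    for item in ranked_items:
        if flags.get(item, False):
            # Would admitting this item push the flagged share over the cap?
            if (flagged_count + 1) / (len(out) + 1) > max_fraction:
                deferred.append(item)
                continue
            flagged_count += 1
        out.append(item)
    return out + deferred  # deferred items sink to the bottom of the feed

ranked = ["a", "b", "c", "d", "e"]
flags = {"a": True, "c": True}  # "a" and "c" flagged as polarising
cap_flagged(ranked, flags, max_fraction=0.25)  # -> ['b', 'd', 'e', 'a', 'c']
```

The attraction of this family of approaches is exactly what the study emphasises: the cap wraps around the output of any recommender, so a platform can dampen echo chamber dynamics without retraining or redesigning the underlying model.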

Recommendations from across the research literature converge on several themes. Greater transparency in how algorithms operate and what data they collect is consistently identified as essential. Educational programmes that build digital media literacy, particularly among younger users, are repeatedly advocated. Regulatory frameworks that keep pace with technological development are widely called for. And individual practices, including controlling screen time, curating digital content deliberately, and engaging in non-digital activities, are recommended as personal countermeasures against cognitive overload.

The nearly 5,000 daily digital interactions that now characterise modern connected life are not going to decrease. If anything, as the Internet of Things expands and AI systems become more deeply embedded in everyday objects and services, that number will continue to climb. The challenge is not to retreat from the digital world but to inhabit it with greater awareness of the forces shaping our experience within it.

Every time you open an app, scroll a feed, accept a recommendation, or ask an AI assistant for advice, you are participating in a system designed to learn from you and, in learning, to shape you. The transaction is invisible by design. But the fact that you cannot see it does not mean it is not happening. The first and most essential act of resistance is simply to notice.

References and Sources

  1. IDC and Seagate, “Data Age 2025: The Evolution of Data to Life-Critical” (2017) and “The Digitization of the World: From Edge to Core” (2018). Authors: David Reinsel, John Gantz, John Rydning. Available at: https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf

  2. Statista, “Data interactions per connected person per day worldwide 2010-2025.” Available at: https://www.statista.com/statistics/948840/worldwide-data-interactions-daily-per-capita/

  3. Netflix recommendation statistics. ResearchGate citation: “Statistics show that up to 80% of watches on Netflix come from recommendations.” Available at: https://www.researchgate.net/figure/Statistics-show-that-up-to-80-of-watches-on-Netflix-come-from-recommendations-and-the_fig1_386513037

  4. Spotify Fan Study (April 2024) on artist discovery through algorithmic features. Spotify Research: https://research.atspotify.com/search-recommendations

  5. McKinsey, “New front door to the internet: Winning in the age of AI search.” Available at: https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/new-front-door-to-the-internet-winning-in-the-age-of-ai-search

  6. Amazon recommendation engine and 35 per cent revenue attribution. Firney: https://www.firney.com/news-and-insights/ai-product-recommendations-from-amazons-35-revenue-model-to-your-e-commerce-platform

  7. Cass R. Sunstein, “Nudging and Choice Architecture: Ethical Considerations” (2015). SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2551264

  8. Richard H. Thaler and Cass R. Sunstein, “Nudge: Improving Decisions about Health, Wealth, and Happiness” (2008). Yale University Press.

  9. European Commission, Deceptive Patterns Study (2022), finding 97 per cent of websites and apps used at least one dark pattern.

  10. United States Federal Trade Commission, Dark Patterns Study (July 2024), examining 642 websites and apps. Available at: https://www.ftc.gov

  11. DataReportal and Global WebIndex, social media usage statistics (2024-2025). Available at: https://www.statista.com/statistics/433871/daily-social-media-usage-worldwide/

  12. MDPI Societies, “Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth” (2025). Available at: https://www.mdpi.com/2075-4698/15/11/301

  13. Journal of Computer-Mediated Communication, “It matters how you google it? Using agent-based testing to assess the impact of user choices in search queries and algorithmic personalization on political Google Search results” (2024). Available at: https://academic.oup.com/jcmc/article/29/6/zmae020/7900879

  14. ArXiv, “Algorithmic Amplification of Biases on Google Search” (2024). Available at: https://arxiv.org/html/2401.09044v1

  15. ArXiv, “TikTok's recommendations skewed towards Republican content during the 2024 U.S. presidential race” (January 2025). Available at: https://arxiv.org/html/2501.17831v1

  16. Tufts University CIRCLE, “Youth Rely on Digital Platforms, Need Media Literacy to Access Political Information” (2024). Available at: https://circle.tufts.edu/latest-research/youth-rely-digital-platforms-need-media-literacy-access-political-information

  17. Interactive Advertising Bureau (IAB), “AI Ranks Among Consumers' Most Influential Shopping Sources” (2025). Available at: https://www.iab.com/news/ai-ranks-among-consumers-most-influential-shopping-sources-according-to-new-iab-study/

  18. Bloomreach consumer surveys on AI shopping behaviour (2025). Referenced via: https://news.darden.virginia.edu/2025/06/17/nearly-60-use-ai-to-shop-heres-what-that-means-for-brands-and-buyers/

  19. PMC, “The Cognitive Cost of AI: How AI Anxiety and Attitudes Influence Decision Fatigue in Daily Technology Use” (2025). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC12367725/

  20. Shoshana Zuboff, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power” (2019). PublicAffairs. Harvard Business School faculty page: https://www.hbs.edu/faculty/Pages/item.aspx?num=56791

  21. Harvard Magazine, “Ending Surveillance Capitalism” (September 2024). Available at: https://www.harvardmagazine.com/2024/09/information-civilization

  22. ResearchGate, “Artificial Intelligence and the Commodification of Human Behavior: Insights on Surveillance Capitalism from Shoshana Zuboff and Evgeny Morozov” (December 2024). Available at: https://www.researchgate.net/publication/387502050

  23. Cureus, “Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations” (2025). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11804976/

  24. PMC, “Demystifying the New Dilemma of Brain Rot in the Digital Era: A Review” (2025). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11939997/

  25. Futures, “An attention economic perspective on the future of the information age” (2024). Available at: https://www.sciencedirect.com/science/article/pii/S0016328723001477

  26. Journal of Quantitative Description: Digital Media, news fatigue statistics across 47 countries (2025). Available at: https://journalqd.org/article/download/9064/7658

  27. Christo Jacob, Paraic Kerrigan, and Marco Bastos, “The chat-chamber effect: Trusting the AI hallucination,” Big Data and Society (2025). Available at: https://journals.sagepub.com/doi/10.1177/20539517241306345

  28. Attest, “2025 Consumer Adoption of AI Report.” Available at: https://www.askattest.com/blog/articles/2025-consumer-adoption-of-ai-report

  29. European Parliament, “EU AI Act: first regulation on artificial intelligence.” Available at: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

  30. European Parliament, “Regulating dark patterns in the EU: Towards digital fairness” (2025). Available at: https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/767191/EPRS_ATA(2025)767191_EN.pdf

  31. Humanities and Social Sciences Communications, “Algorithmic personalization: a study of knowledge gaps and digital media literacy” (2025). Available at: https://www.nature.com/articles/s41599-025-04593-6

  32. Metzler, H. and Garcia, D., “Social Drivers and Algorithmic Mechanisms on Digital Media,” Perspectives on Psychological Science (2024). Available at: https://journals.sagepub.com/doi/10.1177/17456916231185057

  33. Jacob Erickson, “Rethinking the filter bubble? Developing a research agenda for the protective filter bubble,” Big Data and Society (2024). Available at: https://journals.sagepub.com/doi/10.1177/20539517241231276

  34. DemandSage, “Average Time Spent On Social Media” (2026 update). Available at: https://www.demandsage.com/average-time-spent-on-social-media/

  35. RSIS International, “A Systematic Review of the Impact of Artificial Intelligence, Digital Technology, and Social Media on Cognitive Functions” (2025). Available at: https://rsisinternational.org/journals/ijriss/articles/a-systematic-review-of-the-impact-of-artificial-intelligence-digital-technology-and-social-media-on-cognitive-functions/

  36. California Management Review, “Humans or AI: How the Source of Recommendations Influences Consumer Choices for Different Product Types” (2024). Available at: https://cmr.berkeley.edu/2024/12/humans-or-ai-how-the-source-of-recommendations-influences-consumer-choices-for-different-product-types/

  37. Taylor and Francis, “Reducing echo chamber effects: an allostatic regulator for recommendation algorithms” (2025). Available at: https://www.tandfonline.com/doi/full/10.1080/29974100.2025.2517191

  38. Irish Data Protection Commission, TikTok fine of 345 million euros for deceptive design patterns affecting children (September 2023). Referenced via: https://cbtw.tech/insights/illegal-dark-patterns-europe


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
