The Browser Wars Are Back: AI Rewrites the Rules of Digital Literacy

The internet browser, that most mundane of digital tools, is having a moment. After years of relative stagnation, the humble gateway to the web is being radically reimagined. At the vanguard sits a new breed of AI-powered browsers that promise to fundamentally alter how we discover information, complete tasks, and navigate digital space. These aren't mere improvements; they represent an entirely different philosophy about what a browser should be and how humans should interact with the internet.
Consider Dia, the AI-first browser from The Browser Company that launched into beta in June 2025. Unlike Chrome or Safari, Dia doesn't centre the URL bar as a simple address field. Instead, that bar functions as a conversational interface to an AI assistant that can search the web, summarise your open tabs, draft emails based on browsing history, and even add products from your email to an Amazon shopping cart. The browser isn't just displaying web pages; it's actively interpreting, synthesising, and acting on information on your behalf.
Dia isn't alone. In October 2025, OpenAI launched Atlas, an AI-powered browser allowing users to query ChatGPT about search results and browse websites within the chatbot interface. Perplexity introduced Comet, placing an AI answer engine at the heart of browsing, generating direct answers rather than lists of blue links. Opera unveiled Browser Operator, promising contextual awareness and autonomous task completion. Even Google is adapting: AI Overviews now appear in more than 50 per cent of search results, up from 25 per cent ten months prior.
These developments signal more than a new product category. They represent a fundamental shift in how information is mediated between the internet and the human mind, with profound implications for digital literacy, critical thinking, and the very nature of knowledge in the 21st century.
From Navigation to Conversation
For three decades, the web browser operated on a consistent model: users input queries or URLs, the browser retrieves and displays information, and users navigate through hyperlinks to find what they seek. This placed the cognitive burden squarely on users, who had to formulate effective queries, evaluate credibility, read full articles, synthesise information across sources, and determine relevance.
AI-powered browsers fundamentally invert this relationship. Rather than presenting raw materials, they serve finished products. Ask Dia to “find me a winter coat” and it activates a shopping skill that knows your browsing history on Amazon and Anthropologie, then presents curated recommendations. Request an email draft and a writing skill analyses your previous emails and favourite authors to generate something in your voice.
This shift represents what analysts call “agentic browsing,” where browsers act as autonomous agents making decisions on your behalf. According to University of South Florida research, users spend 30 per cent more time with AI search engines, not because the tools are less efficient, but because the interaction model has changed from retrieval to dialogue.
The numbers show this isn't a marginal shift. In the six months leading to October 2025, ChatGPT captured 12.5 per cent of general information searches. Google's dominance slipped from 73 per cent to 66.9 per cent. More tellingly, 27 per cent of US users and 13 per cent of UK users now routinely use AI tools instead of traditional search engines, according to Higher Visibility research. Daily AI usage more than doubled from 14 per cent to 29.2 per cent, whilst “never” users dropped from 28.5 per cent to 16.3 per cent.
Yet this isn't simple replacement. The same research found 99 per cent of AI platform users continued using traditional search engines, indicating hybrid search behaviours rather than substitution. Users are developing an intuitive sense of when conversation serves better than navigation.
The New Digital Literacy Challenge
This hybrid reality poses unprecedented challenges for digital literacy. Traditional curricula focused on teaching effective search queries, identifying credible sources through domain analysis, recognising bias, and synthesising information. But what happens when an AI intermediary performs these tasks?
Consider a practical example: a student researching climate change impacts. Traditionally, they might start with “climate change effects UK agriculture,” examine results, refine to “climate change wheat yield projections UK 2030,” evaluate sources by domain and date, click through to papers and reports, and synthesise across sources. This taught query refinement, source evaluation, and synthesis as integrated skills.
With an AI browser, that student simply asks: “How will climate change affect UK wheat production in the next decade?” The AI returns a synthesised answer citing three sources. The information arrives efficiently, but the process bypasses the query refinement that teaches precise thinking, the source evaluation that develops critical judgement, and the synthesis that builds deep understanding. The answer comes quickly; the learning evaporates.
When Google returns links, users examine domains, check dates, look for credentials, and compare claims. When Dia or Comet returns a synthesised answer drawn from multiple sources, that evaluation becomes opaque. You see an answer, perhaps citations, but you never see the retrieval, never evaluate the alternatives, never make the credibility judgements yourself.
Research in Frontiers in Education (January 2025) found that individuals with deeper technical understanding of generative AI expressed more caution towards its acceptance in higher education, recognising limitations and ethical implications. Meanwhile, the study revealed digital literacy frameworks have been “slow to react to artificial intelligence,” leaving a dangerous gap between technological capability and educational preparedness.
The challenge intensifies with AI hallucinations. A 2024 study found GPT-4 hallucinated approximately 3 per cent of the time, whilst GPT-3.5 reached 40 per cent. Even sophisticated retrieval-augmented systems like Perplexity aren't immune; a GPTZero investigation found users encounter AI-generated sources containing hallucinations within just three queries. Forbes and Wired found Perplexity “readily spouts inaccuracies and garbled or uncredited rewrites.”
Most concerning, Columbia Journalism Review research found ChatGPT falsely attributed 76 per cent of 200 quotes from journalism sites, expressing uncertainty in only 7 of its 153 errors. The system got things wrong with confidence, exactly the authoritative tone that discourages verification.
This creates a profound problem: how do you teach verification when the process hides inside an AI black box? How do you encourage scepticism when interfaces project confidence?
The Erosion of Critical Thinking
The concern extends beyond verification to fundamental cognitive processes. A significant 2024 study in the journal Societies investigated AI tool usage and critical thinking, surveying 666 participants across diverse demographics. The findings were stark: a significant negative correlation between frequent AI usage and critical thinking, mediated by increased cognitive offloading.
Cognitive offloading refers to relying on external tools rather than internal mental processes. We've always done this; writing, calculators, and calendars are all forms of cognitive offloading. But AI tools create a qualitatively different dynamic. When a calculator performs arithmetic, you understand what's happening; when an AI browser synthesises information from twenty sources, the process remains opaque.
The 2024 study found cognitive offloading strongly correlates with reduced critical thinking (correlation coefficient -0.75). More troublingly, younger participants exhibited higher AI dependence and lower critical thinking scores, suggesting those growing up with these tools may be most vulnerable.
University of Pennsylvania research reinforces these concerns. Turkish high school students who used ChatGPT to practise maths performed worse on exams than those who didn't. Whilst AI-assisted students answered 48 per cent more practice problems correctly, their scores on a test of conceptual understanding were 17 per cent lower. They got better at producing right answers but worse at understanding the concepts behind them.
Another Pennsylvania university study divided 73 information science undergraduates into two groups: one engaged in pre-testing before using AI; the control group used AI directly. Pre-testing improved retention and engagement, but prolonged AI exposure led to memory decline in both groups. The tools made students more productive immediately but interfered with longer-term learning.
These findings point to what researchers term “the cognitive paradox of AI in education”: a tension between enhancement and erosion. AI browsers make us efficient at completing tasks, but that efficiency may come at the cost of the deeper cognitive engagement that builds genuine understanding and transferable skills.
The Hidden Cost of Convenience
AI-powered browsers introduce profound privacy implications. To personalise responses and automate tasks, these browsers need vastly more data than traditional browsers. They see every website visited, read page content, analyse patterns, and often store information to provide context over time.
This creates the “surveillance bargain” at the heart of AI-powered browsing: convenience in exchange for comprehensive monitoring. The implications extend far beyond cookies and tracking pixels.
University College London research (August 2025) examined ten popular AI-powered browser assistants and found widespread privacy violations. All tested assistants except Perplexity AI showed signs of collecting data for user profiling, potentially violating privacy rules. Several transmitted full webpage content, including any information visible on screen, to their servers. Merlin even captured form inputs, including online banking details and health data.
Researchers found some assistants violated US data protection laws including HIPAA and FERPA by collecting protected health and educational information. Given stricter EU and UK privacy regulations, these violations likely extend to those jurisdictions.
Browser extensions like Sider and TinaMind shared user questions and identifying information such as IP addresses with Google Analytics, enabling cross-site tracking and ad targeting. ChatGPT for Google, Copilot, Monica, and Sider demonstrated ability to infer user attributes including age, gender, income, and interests from browsing behaviour.
Menlo Security's 2025 report revealed shadow AI use in browsers surged 68 per cent in enterprises, often without governance or oversight. Workers integrate AI into workflows without IT knowledge or consent, creating security vulnerabilities and compliance risks organisations struggle to manage.
This privacy crisis presents another digital literacy challenge. Users need to understand not just how to evaluate information, but the data bargain they strike when adopting these tools. The convenience of AI drafting emails from your browsing history means the browser has read and stored that history. Form auto-fill requires transmitting sensitive information to remote servers.
Traditional digital literacy addressed privacy through cookies, tracking, and secure connections. The AI browser era demands sophisticated understanding of data flows, server-side processing, algorithmic inference, and trade-offs between personalisation and privacy. Users must recognise these systems don't just track where you go online; they read what you read, analyse what you write, and build comprehensive profiles of interests, behaviours, and thought patterns.
The Educational Response
Recognising these challenges, educational institutions and international organisations have begun updating digital literacy frameworks. In September 2024, UNESCO launched groundbreaking AI Competency Frameworks for Teachers and Students, guiding policymakers, educators, and curriculum developers.
The UNESCO AI Competency Framework for Students outlines 12 competencies across four dimensions: human-centred mindset, ethics of AI, AI techniques and applications, and AI system design. These span three progression levels: understand, apply, create. Rather than treating AI as merely another tool, the framework positions AI literacy as encompassing both technical understanding and broader societal impacts, including fairness, transparency, privacy, and accountability.
The AI Competency Framework for Teachers addresses knowledge, skills, and values educators must master. Developed with principles protecting teachers' rights, enhancing human agency, and promoting sustainability, it outlines 15 competencies across five core areas. Both frameworks are available in English, French, Portuguese, Spanish, and Vietnamese, reflecting UNESCO's commitment to global educational equity.
Yet implementation remains challenging. Research in Future in Educational Research found that AI integration presents significant obstacles, including the need for comprehensive educator training and curriculum adaptation. Many teachers face limited AI knowledge, time constraints, and scarce resources, especially outside computer science classes. Teachers must also simplify morally complex topics, such as prejudice in AI systems, privacy concerns, and socially responsible AI use, for young learners.
Research also highlighted persistent equity concerns. AI has potential to democratise education but might exacerbate inequalities and limit accessibility for underprivileged students lacking access to AI educational technologies. Opportunity, social, and digital inequities can impede equitable access, creating a new dimension to the long-standing digital divide.
Digital Promise, an educational non-profit, proposed an AI literacy framework (June 2024) emphasising teaching students to understand, evaluate, and use emerging technology critically rather than passively. Students must become informed consumers and creators of AI-powered technologies, recognising both capabilities and limitations.
This represents a crucial shift in educational philosophy. Rather than teaching students to avoid AI tools or to use them uncritically, effective digital literacy in the AI era must teach sceptical and strategic engagement with these tools: understanding when they're appropriate, how they work, where they fail, and what risks they introduce.
The Changing Nature of Discovery
Beyond formal education, AI-powered browsers transform how professionals, researchers, and curious individuals engage with information. Traditional online research involved iterative query refinement, source evaluation, and synthesis across multiple documents. This was time-consuming and cognitively demanding, but it built deep familiarity with a subject and exposed researchers to unexpected connections and serendipitous discoveries.
AI-powered browsers promise dramatic streamlining. Opera's Browser Operator handles tasks like researching, shopping, and writing code, even whilst users are offline. Fellou, described as the first agentic browser, automates workflows like deep research, report generation, and multi-step web tasks, acting proactively rather than responsively.
A user behaviour study of AI Mode found that users never left the AI Mode pane in roughly 75 per cent of sessions, and that 77.6 per cent of sessions involved no external site visits at all. Users got answers without ever visiting source websites. Whilst remarkably efficient, this means users never encountered the broader context, never saw what else those sources published, never experienced the serendipitous discovery that drives innovation and insight.
Seer Interactive research found Google's AI Overviews reduce clicks to publisher websites by as much as 70 per cent. For simple queries, users get summarised answers directly, no need to click through. This threatens publishers' business models whilst altering the information ecosystem in ways we're only beginning to understand.
Gartner predicts web searches will fall by around 25 per cent in 2026 owing to AI chatbots and virtual agents. If accurate, we'll see a significant shift in information discovery, from direct engagement with sources to interaction mediated by AI intermediaries.
This raises fundamental questions about information diversity and filter bubbles. Traditional search algorithms already shape encountered information, but operate primarily through ranking and retrieval. AI-powered browsers make more substantive editorial decisions, choosing not just which sources to surface but what information to extract, how to synthesise, and what to omit. These are inherently subjective judgements, reflecting training data, reward functions, and design choices embedded in AI systems.
The loss of serendipity deserves particular attention. Some of humanity's most significant insights emerged from unexpected connections, from stumbling across information whilst seeking something else. When AI systems deliver precisely what you asked for and nothing more, they optimise for efficiency but eliminate the productive accidents that fuel creativity and discovery.
The Paradox of User Empowerment
Proponents frame AI-powered browsers as democratising technology, making vast web information resources accessible to users lacking time or skills for traditional research. Why should finding a winter coat require clicking through dozens of pages when AI can curate options based on preferences? Why should drafting routine emails require starting from blank pages when AI can generate something in your voice?
These are legitimate questions, and for many tasks, AI-mediated browsing genuinely empowers users. Research indicates AI can assist students analysing large datasets and exploring alternative solutions. Generative AI tools positively impact critical thinking in specific contexts, facilitating research and idea generation, enhancing engagement and personalised learning.
Yet this empowerment is partial and provisional. You're empowered to complete tasks efficiently but simultaneously rendered dependent on systems you don't understand and can't interrogate. You gain efficiency but sacrifice agency. You receive answers but lose the opportunity to develop the skills to find answers yourself.
This paradox recalls earlier technology debates. Calculators made arithmetic easier but raised numeracy concerns. Word processors made writing efficient but changed how people compose text. Each technology involved trade-offs between capability and understanding, efficiency and skill development.
What makes AI-powered browsers different is the scope and opacity of the mediation. Calculators perform defined operations users understand. AI browsers make judgements about relevance, credibility, synthesis, and presentation across unlimited knowledge domains, using processes that even their creators struggle to explain. The black box is bigger and darker than ever.
The empowerment paradox poses particularly acute educational challenges. If students can outsource research and writing to AI, what skills should schools prioritise teaching? If AI provides instant answers to most questions, what role remains for knowledge retention and recall? These aren't hypothetical concerns; they're urgent questions educators grapple with right now.
A New Digital Literacy Paradigm
If AI-powered browsers represent an irreversible shift in information access, then digital literacy must evolve accordingly. This doesn't mean abandoning traditional skills like source evaluation and critical reading, but requires adding new competencies specific to AI-mediated information environments.
First, users need “AI transparency literacy,” the ability to understand, conceptually, how AI systems work. This includes grasping that large language models are prediction engines, not knowledge databases, that they hallucinate with confidence, that outputs reflect training data patterns rather than verified truth. Users don't need to understand transformer architectures but do need mental models sufficient for appropriate scepticism.
Second, users require “provenance literacy,” the habit of checking where AI-generated information comes from. When AI browsers provide answers, users should reflexively look for citations, click through to original sources when available, and verify claims that seem important or counterintuitive. This represents the crucial difference between passive consumption and active verification.
Third, we need “use case discernment,” recognising when AI mediation is appropriate versus when direct engagement serves better. AI browsers excel at routine tasks, factual questions with clear answers, and aggregating information from multiple sources. They struggle with nuanced interpretation, contested claims, and domains where context and subtext matter. Users need intuitions about these boundaries.
Fourth, privacy literacy must extend beyond traditional concerns about tracking and data breaches to encompass AI system-specific risks: what data they collect, where it's processed, how it's used for training or profiling, what inferences might be drawn. Users should understand “free” AI services are often subsidised by data extraction and that convenience comes with surveillance.
Finally, we need to preserve what we might call “unmediated information literacy,” the skills involved in traditional research, exploration, and discovery. Just as some photographers still shoot film despite digital cameras' superiority, and some writers draft longhand despite word processors' efficiency, we should recognise value in sometimes navigating the web without AI intermediaries, practising cognitive skills that direct engagement develops.
The Browser as Battleground
The struggle over AI-powered browsers isn't just about technology; it's about who controls information access and how that access shapes human cognition and culture. Microsoft, Google, OpenAI, Perplexity, and The Browser Company aren't just building better tools; they're competing to position themselves as the primary interface between humans and the internet, the mandatory checkpoint through which information flows.
This positioning has enormous implications. When a handful of companies control both AI systems mediating information access and vast datasets generated by that mediation, they wield extraordinary power over what knowledge circulates, how it's framed, and who benefits from distribution.
The Browser Company's trajectory illustrates both opportunities and challenges. After building Arc, a browser beloved by power users but too complex for mainstream adoption, the company pivoted to Dia, an AI-first approach designed for accessibility. In May 2025, it placed Arc into maintenance mode, receiving only security updates whilst focusing entirely on Dia. Then, in September 2025, Atlassian announced it would acquire The Browser Company for approximately $610 million, bringing the project under a major enterprise software company's umbrella.
This acquisition reflects broader industry dynamics. AI-powered browsers require enormous resources: computational infrastructure for running AI models, data for training and improvement, ongoing development to stay competitive. Only large technology companies or well-funded start-ups can sustain these investments, creating natural centralisation pressures.
Centralisation in the browser market has consequences for information diversity, privacy, and user agency. Traditional browsers, for all their flaws, were relatively neutral interfaces displaying whatever the web served, leaving credibility and relevance judgements to users. AI-powered browsers make these judgements automatically, based on algorithmic criteria reflecting creators' values, priorities, and commercial interests.
This doesn't make AI browsers inherently malicious or manipulative, but does make them inherently political, embodying choices about how information should be organised, accessed, and presented. Digital literacy in this environment requires not just individual skills but collective vigilance about technological power concentration and its implications for information ecosystems.
Living in the Hybrid Future
Despite concerns about cognitive offloading, privacy violations, and centralised control, AI-powered browsers aren't going away. Efficiency gains are too substantial, user experience too compelling, competitive pressures too intense. Within a few years, AI capabilities will be standard browser features, like tabs and bookmarks.
The question isn't whether we'll use AI-mediated browsing but how we'll use it, what safeguards we'll demand, what skills we'll preserve. Data suggests we're already developing hybrid behaviours, using AI for certain tasks whilst returning to traditional search for others. This flexibility represents our best hope for maintaining agency in an AI-mediated information landscape.
Educational institutions face the critical task of preparing students for this hybrid reality. This means teaching both how to use AI tools effectively and how to recognise limitations, how to verify AI-generated information and when to bypass AI mediation entirely, how to protect privacy whilst benefiting from personalisation, how to think critically about information ecosystems these tools create.
Policymakers and regulators have crucial roles. Privacy violations uncovered in AI browser research demand regulatory attention. Cognitive impacts deserve ongoing study and public awareness. Competitive dynamics need scrutiny to prevent excessive market concentration. Digital literacy cannot be left entirely to individual responsibility; it requires institutional support and regulatory guardrails.
Technology companies building these tools bear perhaps the greatest responsibility. They must prioritise transparency about data collection and use, design interfaces encouraging verification rather than passive acceptance, invest in reducing hallucinations and improving accuracy, support independent research into cognitive and social impacts.
The emerging hybrid model suggests a path forward. Rather than choosing between traditional browsers and AI-powered alternatives, users might develop sophisticated practices deploying each approach strategically. Quick factual lookups might go to AI; deep research requiring source evaluation might use traditional search; sensitive queries involving private information might avoid AI entirely.
The Long View
Looking forward, we can expect AI-powered browsers to become increasingly sophisticated. The Browser Company's roadmap for Dia includes voice-driven actions, local AI agents, predictive task planning, and context memory across sessions. Other browsers will develop similar capabilities. Soon, browsers won't just remember what you were researching; they'll anticipate what you need next.
This trajectory intensifies both opportunities and risks. More capable AI agents could genuinely transform productivity, making complex tasks accessible to users currently lacking skills or resources. But more capable agents also mean more extensive data collection, more opaque decision-making, more potential for manipulation and control.
The key to navigating this transformation lies in maintaining what researchers call “human agency,” the capacity to make informed choices about how we engage with technology. This requires digital literacy going beyond technical skills to encompass critical consciousness about systems mediating our information environments.
We need to ask not just “How does this work?” but “Who built this and why?” Not just “Is this accurate?” but “What perspective does this reflect?” Not just “Is this efficient?” but “What am I losing by taking this shortcut?”
These questions won't stop the evolution of AI-powered browsers, but they might shape that evolution in directions preserving rather than eroding human agency, that distribute rather than concentrate power, that enhance rather than replace human cognitive capabilities.
The browser wars are back, but the stakes are higher than market share or technical specifications. This battle will determine how the next generation learns, researches, and thinks, how they relate to information and knowledge. Digital literacy in the AI era isn't about mastering specific tools; it's about preserving the capacity for critical engagement in an environment designed to make such engagement unnecessary.
Within a decade, today's AI browsers will seem as quaint as Netscape Navigator does now. The question isn't whether technology will advance, but whether our collective digital literacy will advance alongside it, whether we'll maintain the critical faculties to interrogate systems that increasingly mediate our relationship with knowledge itself.
That's a challenge we can't afford to fail.
Sources and References
Academic Research
Cazzamatta, R., & Sarısakaloğlu, A. (2025). “AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices Across Brazil, Germany, and the United Kingdom.” SAGE Journals. https://journals.sagepub.com/doi/10.1177/27523543251344971
Gonsalves, C. (2024). “Generative AI's Impact on Critical Thinking: Revisiting Bloom's Taxonomy.” SAGE Journals. https://journals.sagepub.com/doi/10.1177/02734753241305980
Gerlich, M. (2024). “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” MDPI Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
University College London. (2025, August). “AI web browser assistants raise serious privacy concerns.” UCL News. https://www.ucl.ac.uk/news/2025/aug/ai-web-browser-assistants-raise-serious-privacy-concerns
“Impact of digital media literacy on attitude toward generative AI acceptance in higher education.” (2025). Frontiers in Education. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1563148/full
“The cognitive paradox of AI in education: between enhancement and erosion.” (2025). Frontiers in Psychology. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1550621/full
Yim, et al. (2024). “Teachers' perceptions, attitudes, and acceptance of artificial intelligence (AI) educational learning tools: An exploratory study on AI literacy for young students.” Future in Educational Research. https://onlinelibrary.wiley.com/doi/full/10.1002/fer3.65
Yeter, et al. (2024). “Global initiatives and challenges in integrating artificial intelligence literacy in elementary education: Mapping policies and empirical literature.” Future in Educational Research. https://onlinelibrary.wiley.com/doi/full/10.1002/fer3.59
Industry Reports and Analysis
TechCrunch. (2025, June 11). “The Browser Company launches its AI-first browser, Dia, in beta.” https://techcrunch.com/2025/06/11/the-browser-company-launches-its-ai-first-browser-dia-in-beta/
TechCrunch. (2025, October 21). “OpenAI launches an AI-powered browser: ChatGPT Atlas.” https://techcrunch.com/2025/10/21/openai-launches-an-ai-powered-browser-chatgpt-atlas/
TechCrunch. (2025, October 21). “As the browser wars heat up, here are the hottest alternatives to Chrome and Safari in 2025.” https://techcrunch.com/2025/10/21/as-the-browser-wars-heat-up-here-are-the-hottest-alternatives-to-chrome-and-safari-in-2025/
TechCrunch. (2025, May 27). “The Browser Company mulls selling or open sourcing Arc Browser amid AI-focused pivot.” https://techcrunch.com/2025/05/27/the-browser-company-mulls-selling-or-open-sourcing-arc-browser-amid-ai-focused-pivot/
Menlo Security. (2025). “2025 Enterprise Shadow AI Report.” Referenced in multiple sources.
Xponent21. (2024). “Google's AI Overviews Surpass 50% of Queries, Doubling Since August 2024.” https://xponent21.com/insights/googles-ai-overviews-surpass-50-of-queries-doubling-since-august-2024/
Orbit Media Studios. (2024). “Are AI Chatbots Replacing Search Engines? AI vs Google [New Research].” https://www.orbitmedia.com/blog/ai-vs-google/
Higher Visibility. (2024). “How People Search Today: Evolving Search Behaviors (Study).” https://www.highervisibility.com/seo/learn/how-people-search/
Seer Interactive. Research on the impact of AI Overviews on click-through rates. (Referenced in multiple sources.)
GPTZero. “Second-Hand Hallucinations: Investigating Perplexity's AI-Generated Sources.” https://gptzero.me/news/gptzero-perplexity-investigation/
G2 Learning Hub. “How Strong Is AI When Hallucinations Haunt?” https://learn.g2.com/tech-signals-ai-hallucinations-and-research
International Organisation Frameworks
UNESCO. (2024, September). “What you need to know about UNESCO's new AI competency frameworks for students and teachers.” https://www.unesco.org/en/articles/what-you-need-know-about-unescos-new-ai-competency-frameworks-students-and-teachers
UNESCO. (2024). “AI Competency Framework for Students.” https://www.unesco.org/en/articles/ai-competency-framework-students
UNESCO. (2024). “AI Competency Framework for Teachers.” https://www.unesco.org/en/articles/ai-competency-framework-teachers
UNESCO IITE. “New UNESCO policy brief on Media and Information Literacy Responses to Generative AI.” https://iite.unesco.org/news/new-unesco-policy-brief-on-media-and-information-literacy-responses-to-generative-ai/
Digital Promise. (2024, June 18). “AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology.” https://digitalpromise.org/2024/06/18/ai-literacy-a-framework-to-understand-evaluate-and-use-emerging-technology/
News and Technology Media
gHacks Tech News. (2025, June 12). “Dia browser beta launched with AI features.” https://www.ghacks.net/2025/06/12/dia-browser-beta-launched-with-ai-features/
gHacks Tech News. (2025, May 27). “Arc Browser has been discontinued, but the company's building a new browser: Dia.” https://www.ghacks.net/2025/05/27/arc-browser-has-been-discontinued-but-the-companys-building-a-new-browser-dia/
9to5Mac. (2025, June 11). “Dia, The Browser Company's AI-first browser, launches Mac beta.” https://9to5mac.com/2025/06/11/dia-the-browser-companys-ai-first-browser-launches-mac-beta/
The Register. (2025, May 27). “Arc frozen as The Browser Company pivots to AI-powered Dia.” https://www.theregister.com/2025/05/27/arc_browser_development_ends/
Euronews. (2025, August 13). “AI browsers share sensitive personal data, new study finds.” https://www.euronews.com/next/2025/08/13/ai-browsers-share-sensitive-personal-data-new-study-finds
Axios. (2024, June 24). “ChatGPT and generative AI can't tell the truth.” https://www.axios.com/2024/06/24/chat-gpt-generative-ai-perplexity-hallucinations
eCampus News. (2024, December 17). “Information literacy is critical in the digital AI age.” https://www.ecampusnews.com/teaching-learning/2024/12/17/information-literacy-is-critical-in-the-digital-ai-age/
Malwarebytes. (2025, September). “AI browsers or agentic browsers: a look at the future of web surfing.” https://www.malwarebytes.com/blog/ai/2025/09/ai-browsers-or-agentic-browsers-a-look-at-the-future-of-web-surfing
Research Methodology Resources
University of South Florida Libraries. “Generative AI Reliability and Validity – AI Tools and Resources.” https://guides.lib.usf.edu/c.php?g=1315087&p=9678779
Northwestern University Research Guides. “Evaluating AI Generated Content – Using AI Tools in Your Research.” https://libguides.northwestern.edu/ai-tools-research/evaluatingaigeneratedcontent
TechTarget. “GenAI search vs. traditional search engines: How they differ.” https://www.techtarget.com/whatis/feature/GenAI-search-vs-traditional-search-engines-How-they-differ
Nielsen Norman Group. (2024). “How AI Is Changing Search Behaviors.” https://www.nngroup.com/articles/ai-changing-search-behaviors/
Nielsen Norman Group. “AI Hallucinations: What Designers Need to Know.” https://www.nngroup.com/articles/ai-hallucinations/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk