Seeing Isn't Believing: The Right to Know

The line between reality and simulation has never been more precarious. In 2024, an 82-year-old retiree lost 690,000 euros to a deepfake video of Elon Musk promoting a cryptocurrency scheme. That same year, a finance employee at Arup, a global engineering firm, transferred $25.6 million to fraudsters after a video conference in which every participant except the victim was an AI-generated deepfake. Voters in New Hampshire received robocalls featuring President Joe Biden's voice urging them not to vote, a synthetic fabrication designed to suppress turnout.

These incidents signal a fundamental shift in how information is created, distributed, and consumed. With deepfakes online having increased tenfold from 2022 to 2023, society faces an urgent question: how do we balance AI's innovative potential and free expression with the public's right to know what's real?

The answer involves complex negotiation between technology companies, regulators, media organisations, and civil society, each grappling with preserving authenticity when the concept itself is under siege. At stake is the foundation of informed democratic participation and the integrity of the information ecosystem underpinning it.

The Synthetic Media Explosion

Creating convincing synthetic media now takes minutes with consumer-grade applications. Deloitte's 2024 survey found 25.9% of executives reported deepfake incidents targeting their organisations' financial data in the preceding year. The first quarter of 2025 alone saw 179 recorded deepfake incidents, surpassing all of 2024 by 19%.

The advertising industry has embraced generative AI enthusiastically. Research in the Journal of Advertising identifies deepfakes as “controversial and emerging AI-facilitated advertising tools,” with studies showing high-quality deepfake advertisements appraised similarly to originals. When properly disclosed, these synthetic creations trigger an “emotion-value appraisal process” that doesn't necessarily diminish effectiveness.

Yet the same technology erodes media trust. Getty Images' 2024 report, covering over 30,000 adults across 25 countries, found almost 90% want to know whether images are AI-created. More troublingly, whilst 98% agree authentic images and videos are pivotal for trust, 72% believe AI makes determining authenticity difficult.

For journalism, synthetic content poses existential challenges. Agence France-Presse and other major news organisations have deployed AI-supported verification tools, including Vera.ai and WeVerify, to detect manipulated content. But these solutions are locked in an escalating arms race with the AI systems creating the synthetic media they're designed to detect.

The Blurring Boundaries

AI-generated content scrambles the distinction between journalism and advertising in novel ways. Native advertising, already controversial for mimicking editorial content whilst serving commercial interests, becomes more problematic when content itself may be synthetically generated without clear disclosure.

Consider “pink slime” websites, AI-generated news sites that exploded across the digital landscape in 2024. Identified by Virginia Tech researchers and others, these platforms deploy AI to mass-produce articles mimicking legitimate journalism whilst serving partisan or commercial agendas. Unlike traditional news organisations with editorial standards and transparency about ownership, these synthetic newsrooms operate in the shadows, obscured by layers of automation.

The European Union's AI Act, which entered into force on 1 August 2024 with full enforcement beginning on 2 August 2026, addresses this through comprehensive transparency requirements. Article 50 mandates that providers of AI systems generating synthetic audio, image, video, or text ensure outputs are marked in a machine-readable format and detectable as artificially generated. Deployers creating deepfakes must clearly disclose the artificial creation, with limited exemptions for artistic works and law enforcement.

Yet implementation remains fraught. The AI Act requires that technical solutions be “effective, interoperable, robust and reliable as far as technically feasible,” whilst acknowledging “specificities and limitations of various content types, implementation costs and generally acknowledged state of the art.” This reveals a fundamental tension: the law demands technical safeguards that don't yet exist at scale or may prove economically prohibitive.

The Paris Charter on AI and Journalism, unveiled by Reporters Without Borders and 16 partner organisations, represents journalism's attempt to establish ethical guardrails. The charter, drafted by a 32-person commission chaired by Nobel laureate Maria Ressa, comprises 10 principles emphasising transparency, human agency, and accountability. As Ressa observed, “Artificial intelligence could provide remarkable services to humanity but clearly has potential to amplify manipulation of minds to proportions unprecedented in history.”

Free Speech in the Algorithmic Age

AI content regulation collides with fundamental free expression principles. In the United States, First Amendment jurisprudence generally extends speech protections to AI-generated content on the grounds that it is created or adopted by human speakers. As legal scholars at the Foundation for Individual Rights and Expression note, “AI-generated content is generally treated similarly to human-generated content under First Amendment law.”

This raises complex questions about agency and attribution. Yale Law School professor Jack Balkin, a leading authority on AI and constitutional law, observes that courts must determine “where responsibility lies, because the AI program itself lacks human intentions.” In 2024 research, Balkin and economist Ian Ayres characterise AI as creating “risky agents without intentions,” challenging traditional legal frameworks built around human agency.

The tension becomes acute in political advertising. In 2024, the Federal Communications Commission proposed rules requiring disclosure of AI-generated content in political advertisements, arguing that transparency furthers rather than abridges First Amendment goals. Yet at least 25 states have enacted laws restricting AI in political advertisements since 2019, with courts blocking some on First Amendment grounds, including a California statute targeting election deepfakes.

Commercial speech receives less robust First Amendment protection, creating greater regulatory latitude. The Federal Trade Commission has moved aggressively, announcing on 14 August 2024 its final rule prohibiting fake AI-generated consumer reviews, testimonials, and celebrity endorsements. The rule, effective 21 October 2024, subjects violators to civil penalties of up to $51,744 per violation. Through “Operation AI Comply,” launched in September 2024, the FTC has pursued enforcement against companies making unsubstantiated AI claims, targeting DoNotPay, Rytr, and Evolv Technologies.

The FTC's approach treats disclosure requirements as permissible commercial speech regulation rather than unconstitutional content restrictions, framing transparency as necessary consumer protection context. Yet the American Legislative Exchange Council warns overly broad AI regulations may “chill protected speech and innovation,” particularly when disclosure requirements are vague.

Platform Responsibilities and Technical Realities

Technology platforms find themselves central to the authenticity crisis: simultaneously AI tool creators, user-generated content hosts, and intermediaries responsible for labelling synthetic media. Their response has been halting and incomplete.

In February 2024, Meta announced plans to label AI-generated images on Facebook, Instagram, and Threads by detecting invisible markers based on Coalition for Content Provenance and Authenticity (C2PA) and IPTC standards. The company rolled out “Made with AI” labels in May 2024, applying them to content carrying industry-standard AI indicators or identified as AI-generated by its creators. From July, Meta shifted towards “more labels, less takedowns,” ceasing to remove AI-generated content solely under its manipulated-video policy unless it violated other standards.
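For readers curious what an “invisible marker” actually looks like, the sketch below is a rough heuristic rather than Meta's pipeline: it simply scans a file's raw bytes for the IPTC “trained algorithmic media” digital source type URI that some generators embed in XMP metadata. The file name is hypothetical, and real systems parse metadata properly and also check C2PA manifests and invisible watermarks.

```python
# Rough heuristic, not Meta's detection pipeline: scan a file's raw bytes for
# the IPTC "trained algorithmic media" digital source type URI. Absence of the
# marker proves nothing, because metadata is easily stripped in transit.
AI_SOURCE_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"


def has_iptc_ai_marker(path: str) -> bool:
    """Return True if the raw bytes contain the IPTC AI source-type URI."""
    with open(path, "rb") as f:
        return AI_SOURCE_MARKER in f.read()


print(has_iptc_ai_marker("downloaded_image.jpg"))  # hypothetical local file
```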

Meta's scale is staggering. Between 1 and 29 October 2024, Facebook recorded over 380 billion user views of labels on AI-labelled organic content; Instagram tallied over 1 trillion. Yet critics note significant limitations: the policies focus primarily on images and video, largely overlooking AI-generated text, whilst Meta places the disclosure burden on users and AI tool creators.

YouTube implemented similar requirements on 18 March 2024, mandating that creators disclose when realistic content uses altered or synthetic media. The platform applies “Altered or synthetic content” labels to flagged material, as seen on the October 2024 GOP advertisement featuring AI-generated footage of Chuck Schumer. Yet YouTube's system, like Meta's, relies heavily on creator self-reporting.

OpenAI announced in February 2024 that it would label DALL-E 3 images using the C2PA standard, with metadata embedded to verify origins. However, OpenAI acknowledged that metadata “is not a silver bullet” and can easily be removed accidentally or intentionally, a candid admission that undermines confidence in technical labelling solutions.
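OpenAI's caveat is easy to demonstrate in a few lines. The sketch below is a minimal illustration, assuming Pillow is installed and the (hypothetical) files exist: simply re-saving an image, as platforms routinely do when re-encoding uploads, discards embedded provenance metadata unless it is explicitly copied across.

```python
# Minimal sketch of why metadata-based provenance "is not a silver bullet":
# re-saving an image without explicitly copying its metadata silently drops it.
from PIL import Image  # pip install pillow


def metadata_keys(path: str) -> set[str]:
    """Return the metadata keys Pillow exposes for an image file."""
    with Image.open(path) as img:
        keys = set(img.info.keys())  # XMP, EXIF bytes, PNG text chunks, etc.
        if img.getexif():
            keys.add("exif")
        return keys


original, resaved = "labelled_ai_image.png", "resaved_copy.png"  # hypothetical files

print("Before re-save:", metadata_keys(original))

with Image.open(original) as img:
    img.save(resaved)  # no exif/pnginfo argument, so embedded metadata is not copied

print("After re-save:", metadata_keys(resaved))  # provenance markers typically gone
```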

C2PA represents the industry's most ambitious technical standard for content provenance. Formed in 2021, the coalition brings together major technology companies, media organisations, and camera manufacturers to develop “a nutrition label for digital content,” using cryptographic hashing and signing to create tamper-evident records of content creation and editing history.
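The tamper-evidence idea is easier to grasp with a toy example. The sketch below is not the C2PA wire format, and it substitutes a shared-secret HMAC for C2PA's certificate-based signatures, but it shows the principle: hash the asset, sign a small claim about it, and any later edit to the content breaks verification.

```python
# Toy illustration of the tamper-evidence idea behind content provenance:
# hash the asset, sign a small claim, and any later edit breaks verification.
# This is NOT the C2PA wire format; a shared-secret HMAC stands in for
# C2PA's certificate-based signatures purely for illustration.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-secret-key"  # hypothetical key; C2PA uses X.509 certificates


def make_manifest(content: bytes, tool: str) -> dict:
    """Create a signed claim recording the content hash and the generating tool."""
    claim = {"content_sha256": hashlib.sha256(content).hexdigest(), "generator": tool}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()}


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both that the claim is untampered and that it matches the content."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    claim_ok = hmac.compare_digest(expected, manifest["signature"])
    content_ok = manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()
    return claim_ok and content_ok


asset = b"synthetic image bytes"
manifest = make_manifest(asset, tool="example-image-generator")
print(verify_manifest(asset, manifest))               # True: intact
print(verify_manifest(asset + b" edited", manifest))  # False: edit detected
```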

Through early 2024, Google and other C2PA members collaborated on version 2.1 of the specification, which includes stricter technical requirements to resist tampering. Google announced plans to integrate Content Credentials into Search, Google Images, Lens, Circle to Search, and its advertising systems. The specification is expected to achieve ISO international standard status by 2025, with the W3C examining browser-level adoption.

Yet C2PA faces significant challenges. Critics note the standard can compromise privacy through extensive metadata collection. Security researchers have documented methods for bypassing C2PA safeguards by altering provenance metadata, removing or forging watermarks, and mimicking digital fingerprints. Most fundamentally, adoption remains minimal: very little internet content carries C2PA markers, limiting the standard's practical utility.

Research published in early 2025 examining fact-checking practices across Brazil, Germany, and the United Kingdom found that whilst AI shows promise in detecting manipulated media, its “inability to grasp context and nuance can lead to false negatives or positives.” The study concluded that journalists must remain vigilant, ensuring AI complements rather than replaces human expertise.

The Public's Right to Know

Against these technical and commercial realities stands a fundamental democratic governance question: do citizens have a right to know when content is synthetically generated? This transcends individual privacy or consumer protection, touching conditions necessary for informed public discourse.

Survey data reveals overwhelming support for transparency. Getty Images' research found 77% want to know whether content is AI-created, with only 12% indifferent. Trusting News found 94% want journalists to disclose their use of AI.

Yet surveys also reveal a troubling trust deficit. YouGov's UK survey of over 2,000 adults found nearly half (48%) distrust the accuracy of AI-generated content labelling, compared to just a fifth (19%) who trust such labels. This scepticism appears well-founded given the limitations of current labelling systems and the ease with which metadata can be manipulated.

The consequences of this trust erosion extend beyond individual deception. Deloitte's 2024 Connected Consumer Study found half of respondents were more sceptical of online information than a year prior, with 68% concerned that synthetic content could deceive or scam them. A 2024 Gallup survey found only 31% of Americans had a “fair amount” or “great deal” of confidence in the media, a historic low partially attributable to concerns about AI-generated misinformation.

Experts warn of the “liar's dividend,” where deepfake prevalence allows bad actors to dismiss authentic evidence as fabricated. As AI-generated content becomes more convincing, the public will doubt genuine audio and video evidence, particularly when politically inconvenient. This threatens not just media credibility but evidentiary foundations of democratic accountability.

The challenge is acute during electoral periods. 2024 saw a record number of national elections globally, with approximately 1.5 billion people voting amidst a flood of AI-generated political content. The Biden robocall in New Hampshire represented just one example of synthetic media weaponised for voter suppression. Research on generative AI's impact on disinformation documents how AI tools lower the barriers to creating and distributing political misinformation at scale.

Some jurisdictions have responded with specific electoral safeguards. Texas and California enacted laws prohibiting malicious election deepfakes, whilst Arizona requires “clear and conspicuous” disclosures alongside synthetic media within 90 days of an election. Yet these state-level interventions create a patchwork regulatory landscape potentially inadequate for digital content that crosses jurisdictional boundaries instantly.

Ethical Frameworks and Professional Standards

In the absence of comprehensive legal frameworks, professional and ethical standards offer provisional guidance. Major news organisations have developed internal AI policies attempting to preserve journalistic integrity whilst leveraging AI capabilities. The BBC, RTVE, and The Guardian have published guidelines emphasising transparency, human oversight, and editorial accountability.

Research in journalism studies examining AI ethics across newsrooms identified transparency as a core principle, involving disclosure of “how algorithms operate, data sources, criteria used for information gathering, news curation and personalisation, and labelling AI-generated content.” The study found that whilst AI offers efficiency benefits, “maintaining journalistic standards of accuracy, transparency, and human oversight remains critical for preserving trust.”

The International Center for Journalists, through its JournalismAI initiative, facilitated collaborative tool development. Team CheckMate, a partnership involving journalists and technologists from News UK, DPA, Data Crítica, and the BBC, developed a web application for real-time fact-checking of live or recorded broadcasts. Similarly, Full Fact AI offers tools transcribing audio and video with real-time misinformation detection, flagging potentially false claims.

These initiatives reflect “defensive AI,” deploying algorithmic tools to detect and counter AI-generated misinformation. Yet this creates an escalating technological arms race where detection and generation capabilities advance in tandem, with no guarantee detection will keep pace.

The advertising industry faces its own reckoning. New York became the first state to pass a Synthetic Performer Disclosure Bill, requiring clear disclosures when advertisements include AI-generated talent, in response to concerns that AI could enable unauthorised use of likenesses whilst displacing human workers. The Screen Actors Guild negotiated contract provisions addressing AI-generated performances, establishing precedents for consent and compensation.

Case Studies in Deception and Detection

The Arup deepfake fraud represents perhaps the most sophisticated AI-enabled deception to date. The finance employee joined what appeared to be a routine video conference with the company's CFO and colleagues. Every participant except the victim was an AI-generated simulacrum, convincing enough to survive the scrutiny of a live video call. The employee authorised 15 transfers totalling $25.6 million before discovering the fraud.

The incident reveals the inadequacy of traditional verification methods in the deepfake age. Video conferencing had been promoted as superior to email or phone for identity verification, yet the Arup case demonstrates that even real-time video interaction can be compromised. The fraudsters likely combined publicly available footage with voice-cloning technology to generate convincing deepfakes of multiple executives simultaneously.

Similar techniques targeted WPP, where scammers attempted to deceive an executive using a voice clone of CEO Mark Read during a Microsoft Teams meeting. Unlike at Arup, the targeted executive grew suspicious and avoided the scam, but the incident underscores that even sophisticated professionals struggle to distinguish synthetic from authentic media under pressure.

The Taylor Swift deepfake case highlights different dynamics. In 2024, AI-generated explicit images of the singer appeared on X, Reddit, and other platforms, completely fabricated without her consent. Some posts received millions of views before removal, sparking renewed debate about platform moderation responsibilities and prompting calls for stronger protections against non-consensual synthetic intimate imagery.

The robocall featuring Biden's voice urging New Hampshire voters to skip the primary demonstrated how easily voice-cloning technology can be weaponised for electoral manipulation. Detection efforts have shown mixed results: in 2024, experts were fooled by some AI-generated videos despite sophisticated analysis tools. Research examining deepfake detection has found that whilst machine learning models can identify many examples of synthetic media, they struggle with high-quality deepfakes and can be evaded through adversarial techniques.

The case of “pink slime” websites illustrates how AI enables misinformation at industrial scale. These platforms deploy AI to generate thousands of articles mimicking legitimate journalism whilst serving partisan or commercial interests. Unlike individual deepfakes, which can sometimes be identified through technical analysis, AI-generated text often lacks clear markers of synthetic origin, making detection substantially more difficult.

The Regulatory Landscape

The European Union has emerged as the global leader in AI regulation through the AI Act, a comprehensive framework addressing transparency, safety, and fundamental rights. The Act categorises AI systems by risk level, with synthetic media generation falling into the “limited risk” category, subject to specific transparency obligations.

Under Article 50, providers of AI systems generating synthetic content must implement technical solutions ensuring outputs are machine-readable and detectable as artificially generated. The requirement acknowledges technical limitations, mandating effectiveness “as far as technically feasible,” but establishes a clear legal expectation of provenance marking. Non-compliance can result in administrative fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher.
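In concrete terms, the penalty ceiling scales with company size. A toy worked example of the “whichever is higher” rule described above (the turnover figures are invented for illustration):

```python
# Toy illustration of the transparency-related penalty ceiling: the higher of
# EUR 15 million or 3% of worldwide annual turnover.
def max_transparency_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)


print(max_transparency_fine_eur(400_000_000))    # EUR 15,000,000 (the floor applies)
print(max_transparency_fine_eur(2_000_000_000))  # EUR 60,000,000 (3% of turnover)
```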

The AI Act includes carve-outs for artistic and creative works, where transparency obligations are limited to disclosure “in an appropriate manner that does not hamper display or enjoyment.” This attempts to balance authenticity concerns against expressive freedom, though the boundary between “artistic” and “commercial” content remains contested.

In the United States, regulatory authority is fragmented across agencies and levels of government. The FCC's proposed political advertising disclosure rules represent one strand; the FTC's prohibition on fake AI-generated reviews constitutes another. State legislatures have enacted diverse requirements, from political deepfake restrictions to synthetic performer disclosures, creating a complex patchwork that digital platforms must navigate.

The AI Labeling Act of 2023, introduced in the Senate, would establish comprehensive federal disclosure requirements for AI-generated content. The bill would mandate that generative AI systems producing image, video, audio, or multimedia content include clear and conspicuous disclosures, with text-based AI content requiring permanent or difficult-to-remove disclosures. As of early 2025, the legislation remains under consideration, reflecting ongoing congressional debate about the appropriate scope and stringency of AI regulation.

The COPIED Act directs the National Institute of Standards and Technology to develop standards for watermarking, provenance, and synthetic content detection, effectively tasking a federal agency with solving technical challenges that have vexed the technology industry.

California has positioned itself as a regulatory innovator through multiple AI-related statutes. The state's AI Transparency Act requires covered providers with over one million monthly users to make AI detection tools available at no cost, effectively mandating that platforms creating AI content also give users the means to identify it.

Internationally, other jurisdictions are developing frameworks. The United Kingdom published AI governance guidance emphasising transparency and accountability, whilst China implemented synthetic media labelling requirements in certain contexts. This emerging global regulatory landscape creates compliance challenges for platforms operating across borders.

Future Implications and Emerging Challenges

The trajectory of AI capabilities suggests synthetic content will become simultaneously more sophisticated and accessible. Deloitte's 2025 predictions note “videos will be produced quickly and cheaply, with more people having access to high-definition deepfakes.” This democratisation of synthetic media creation, whilst enabling creative expression, also multiplies vectors for deception.

Several technological developments merit attention. Multimodal AI systems generating coordinated synthetic video, audio, and text create more convincing fabrications than single-modality deepfakes. Real-time generation capabilities enable live deepfakes rather than pre-recorded content, complicating detection and response. Adversarial techniques designed to evade detection algorithms ensure synthetic media creation and detection remain locked in perpetual competition.

Economic incentives driving AI development largely favour generation over detection. Companies profit from selling generative AI tools and advertising on platforms hosting synthetic content, creating structural disincentives for robust authenticity verification. Detection tools generate limited revenue, making sustained investment challenging absent regulatory mandates or public sector support.

The implications for journalism appear particularly stark. As AI-generated “news” content proliferates, legitimate journalism faces heightened scepticism alongside rising verification and fact-checking costs. Media organisations with shrinking resources must invest in expensive authentication tools whilst competing against synthetic content created at minimal cost. This threatens to accelerate the crisis in sustainable journalism precisely when accurate information is most critical.

Employment and creative industries face their own disruptions. If advertising agencies can generate synthetic models and performers at negligible cost, what becomes of human talent? New York's Synthetic Performer Disclosure Bill represents an early attempt to address this tension, but comprehensive frameworks balancing innovation against worker protection remain undeveloped.

Democratic governance itself may be undermined if citizens lose confidence distinguishing authentic from synthetic content. The “liar's dividend” allows political actors to dismiss inconvenient evidence as deepfakes whilst deploying actual deepfakes to manipulate opinion. During electoral periods, synthetic content can spread faster than debunking efforts, particularly given social media viral dynamics.

International security dimensions add complexity. Nation-states have deployed synthetic media in information warfare and influence operations. Attribution challenges posed by AI-generated content create deniability for state actors whilst complicating diplomatic and military responses. As synthesis technology advances, the line between peacetime information operations and acts of war becomes harder to discern.

Towards Workable Solutions

Addressing the authenticity crisis requires coordinated action across technical, legal, and institutional domains. No single intervention will suffice; instead, a layered approach offering multiple verification methods and accountability mechanisms offers the most promising path.

On the technical front, continuing investment in detection capabilities remains essential despite inherent limitations. Ensemble approaches combining multiple detection methods, regular updates to counter adversarial evasion, and human-in-the-loop verification can improve reliability. Provenance standards like C2PA require broader adoption and integration into content creation tools, distribution platforms, and end-user interfaces, potentially demanding regulatory incentives or mandates.
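As a sketch of that ensemble-plus-human-review pattern, with every detector, weight, and threshold below a hypothetical placeholder rather than a real model:

```python
# Sketch of an ensemble detector with a human-in-the-loop band. All detectors,
# weights, and thresholds are hypothetical placeholders, not real models.
from typing import Callable

Detector = Callable[[bytes], float]  # returns an estimated P(synthetic) in [0, 1]


def ensemble_score(content: bytes, detectors: list[tuple[Detector, float]]) -> float:
    """Weighted average of detector outputs; weights are assumed to sum to 1."""
    return sum(weight * detect(content) for detect, weight in detectors)


def triage(score: float, low: float = 0.35, high: float = 0.75) -> str:
    """Auto-label only when confident; route uncertain cases to a human reviewer."""
    if score >= high:
        return "label as AI-generated"
    if score <= low:
        return "no label"
    return "route to human reviewer"


# Stand-ins for, say, artefact analysis, frequency-domain cues, and metadata checks.
detectors = [
    (lambda content: 0.82, 0.5),
    (lambda content: 0.64, 0.3),
    (lambda content: 0.40, 0.2),
]

score = ensemble_score(b"suspect video frame", detectors)
print(round(score, 2), "->", triage(score))  # 0.68 -> route to human reviewer
```

Routing the uncertain middle band to people rather than auto-labelling is one way to manage the false positives and negatives that purely automated detection produces.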

Platforms must move beyond user self-reporting towards proactive detection and labelling. Meta's “more labels, less takedowns” philosophy offers a model, though implementation must extend beyond images and video to encompass text and audio. Transparency about labelling accuracy, including false positive and negative rates, would enable users to calibrate trust appropriately.

Legal frameworks should establish baseline transparency requirements whilst preserving space for innovation and expression. Mandatory disclosure for political and commercial AI content, modelled on the EU AI Act, creates accountability without prohibiting synthetic media outright. Penalties for non-compliance must incentivise good-faith efforts whilst avoiding a severity that chills legitimate speech.

Educational initiatives deserve greater emphasis and resources. Media literacy programmes teaching citizens to critically evaluate digital content, recognise manipulation techniques, and verify sources can build societal resilience against synthetic deception. These efforts must extend beyond schools to reach all age groups, with particular attention to populations most vulnerable to misinformation.

Journalism organisations need support for their verification capabilities. Public funding for fact-checking infrastructure, collaborative verification networks, and investigative reporting can help sustain quality journalism amidst economic pressures. The Paris Charter's emphasis on transparency and human oversight offers a professional framework, but resources must follow principles to enable implementation.

Professional liability frameworks may help align incentives. If platforms, AI tool creators, and synthetic content deployers face legal consequences for harms caused by undisclosed deepfakes, market mechanisms may drive more robust authentication practices. This parallels product liability law, treating deceptive synthetic content as defective products with allocable supply chain responsibility.

International cooperation on standards and enforcement will prove critical given digital content's borderless nature. Whilst comprehensive global agreement appears unlikely given divergent national interests and values, narrow accords on technical standards, attribution methodologies, and cross-border enforcement mechanisms could provide partial solutions.

The Authenticity Imperative

The challenge posed by AI-generated content reflects deeper questions about technology, truth, and trust in democratic societies. Creating convincing synthetic media isn't inherently destructive; the same tools enabling deception also facilitate creativity, education, and entertainment. What matters is whether society can develop norms, institutions, and technologies preserving the possibility of distinguishing real from simulated when distinctions carry consequence.

The stakes extend beyond individual fraud victims to the epistemic foundations of collective self-governance. Democracy presupposes that citizens can access reliable information, evaluate competing claims, and hold power accountable. If synthetic content erodes confidence in perception itself, these democratic prerequisites crumble.

Yet solutions cannot be outright prohibition or heavy-handed censorship. The same First Amendment principles protecting journalism and artistic expression shield much AI-generated content. Overly restrictive regulations risk chilling innovation whilst proving unenforceable given AI development's global and decentralised nature.

The path forward requires embracing transparency as fundamental value, implemented through technical standards, legal requirements, platform policies, and professional ethics. Labels indicating AI generation or manipulation must become ubiquitous, reliable, and actionable. When content is synthetic, users deserve to know. When authenticity matters, provenance must be verifiable.

This transparency imperative places obligations on all information ecosystem participants. AI tool creators must embed provenance markers in outputs. Platforms must detect and label synthetic content. Advertisers and publishers must disclose AI usage. Regulators must establish clear requirements and enforce compliance. Journalists must maintain rigorous verification standards. Citizens must cultivate critical media literacy.

The alternative is a world where scepticism corrodes all information. Where seeing is no longer believing, and evidence loses its power to convince. Where bad actors exploit uncertainty to escape accountability whilst honest actors struggle to establish credibility. Where synthetic content volume drowns out authentic voices, and verification cost becomes prohibitive.

Technology has destabilised markers we once used to distinguish real from fake, genuine from fabricated, true from false. Yet the same technological capacities creating this crisis might, if properly governed and deployed, help resolve it. Provenance standards, detection algorithms, and verification tools offer at least partial technical solutions. Legal frameworks establishing transparency obligations and accountability mechanisms provide structural incentives. Professional standards and ethical commitments offer normative guidance. Educational initiatives build societal capacity for critical evaluation.

None of these interventions alone will suffice. The challenge is too complex, too dynamic, and too fundamental for any single solution. But together, these overlapping and mutually reinforcing approaches might preserve the possibility of authentic shared reality in an age of synthetic abundance.

The question is whether society can summon collective will to implement these measures before trust erodes beyond recovery. The answer will determine not just advertising and journalism's future, but truth-based discourse's viability in democratic governance. In an era where anyone can generate convincing synthetic media depicting anyone saying anything, the right to know what's real isn't a luxury. It's a prerequisite for freedom itself.


Sources and References

European Union. (2024). “Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act).” Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

Federal Trade Commission. (2024). “Rule on Fake Reviews and Testimonials.” 16 CFR Part 465. Final rule announced August 14, 2024, effective October 21, 2024. https://www.ftc.gov/news-events/news/press-releases/2024/08/ftc-announces-final-rule-banning-fake-reviews-testimonials

Federal Communications Commission. (2024). “FCC Makes AI-Generated Voices in Robocalls Illegal.” Declaratory Ruling, February 8, 2024. https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal

U.S. Congress. “Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act).” Introduced by Senators Maria Cantwell, Marsha Blackburn, and Martin Heinrich. https://www.commerce.senate.gov/2024/7/cantwell-blackburn-heinrich-introduce-legislation-to-combat-ai-deepfakes-put-journalists-artists-songwriters-back-in-control-of-their-content

New York State Legislature. “Synthetic Performer Disclosure Bill” (A.8887-B/S.8420-A). Passed 2024. https://www.nysenate.gov/legislation/bills/2023/S6859/amendment/A

Primary Research Studies

Ayres, I., & Balkin, J. M. (2024). “The Law of AI is the Law of Risky Agents without Intentions.” Yale Law School. Forthcoming in University of Chicago Law Review Online. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4862025

Cazzamatta, R., & Sarısakaloğlu, A. (2025). “AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices Across Brazil, Germany, and the United Kingdom.” Emerging Media, Vol. 2, No. 3. https://journals.sagepub.com/doi/10.1177/27523543251344971

Porlezza, C., & Schapals, A. K. (2024). “AI Ethics in Journalism (Studies): An Evolving Field Between Research and Practice.” Emerging Media, Vol. 2, No. 3, September 2024, pp. 356-370. https://journals.sagepub.com/doi/full/10.1177/27523543241288818

Journal of Advertising. “Examining Consumer Appraisals of Deepfake Advertising and Disclosure” (2025). https://www.tandfonline.com/doi/full/10.1080/00218499.2025.2498830

Aljebreen, A., Meng, W., & Dragut, E. C. (2024). “Analysis and Detection of 'Pink Slime' Websites in Social Media Posts.” Proceedings of the ACM Web Conference 2024. https://dl.acm.org/doi/10.1145/3589334.3645588

Industry Reports and Consumer Research

Getty Images. (2024). “Nearly 90% of Consumers Want Transparency on AI Images finds Getty Images Report.” Building Trust in the Age of AI. Survey of over 30,000 adults across 25 countries. https://newsroom.gettyimages.com/en/getty-images/nearly-90-of-consumers-want-transparency-on-ai-images-finds-getty-images-report

Deloitte. (2024). “Half of Executives Expect More Deepfake Attacks on Financial and Accounting Data in Year Ahead.” Survey of 1,100+ C-suite executives, May 21, 2024. https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/deepfake-attacks-on-financial-and-accounting-data-rising.html

Deloitte. (2025). “Technology, Media and Telecom Predictions 2025: Deepfake Disruption.” https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/gen-ai-trust-standards.html

YouGov. (2024). “Can you trust your social media feed? UK public concerned about AI content and misinformation.” Survey of 2,128 UK adults, May 1-2, 2024. https://business.yougov.com/content/49550-labelling-ai-generated-digitally-altered-content-misinformation-2024-research

Gallup. (2024). “Americans' Trust in Media Remains at Trend Low.” Poll conducted September 3-15, 2024. https://news.gallup.com/poll/651977/americans-trust-media-remains-trend-low.aspx

Trusting News. (2024). “New research: Journalists should disclose their use of AI. Here's how.” Survey of 6,000+ news audience members, July-August 2024. https://trustingnews.org/trusting-news-artificial-intelligence-ai-research-newsroom-cohort/

Technical Standards and Platform Policies

Coalition for Content Provenance and Authenticity (C2PA). (2024). “C2PA Technical Specification Version 2.1.” https://c2pa.org/

Meta. (2024). “Labeling AI-Generated Images on Facebook, Instagram and Threads.” Announced February 6, 2024. https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/

OpenAI. (2024). “C2PA in ChatGPT Images.” Announced February 2024 for DALL-E 3 generated images. https://help.openai.com/en/articles/8912793-c2pa-in-dall-e-3

Journalism and Professional Standards

Reporters Without Borders. (2023). “Paris Charter on AI and Journalism.” Unveiled November 10, 2023. Commission chaired by Nobel laureate Maria Ressa. https://rsf.org/en/rsf-and-16-partners-unveil-paris-charter-ai-and-journalism

International Center for Journalists – JournalismAI. https://www.journalismai.info/

Case Studies (Primary Documentation)

Arup Deepfake Fraud (£25.6 million, Hong Kong, 2024): CNN: “Arup revealed as victim of $25 million deepfake scam involving Hong Kong employee” (May 16, 2024) https://edition.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk

Biden Robocall New Hampshire Primary (January 2024): NPR: “A political consultant faces charges and fines for Biden deepfake robocalls” (May 23, 2024) https://www.npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative

Taylor Swift Deepfake Images (January 2024): CBS News: “X blocks searches for 'Taylor Swift' after explicit deepfakes go viral” (January 27, 2024) https://www.cbsnews.com/news/taylor-swift-deepfakes-x-search-block-twitter/

Elon Musk Deepfake Crypto Scam (2024): CBS Texas: “Deepfakes of Elon Musk are contributing to billions of dollars in fraud losses in the U.S.” https://www.cbsnews.com/texas/news/deepfakes-ai-fraud-elon-musk/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
