Vibe Coding Threatens Journalism: Why Newsrooms Need Governance Now

In February 2025, artificial intelligence researcher Andrej Karpathy, co-founder of OpenAI and former AI leader at Tesla, posted a provocative observation on social media. “There's a new kind of coding I call 'vibe coding',” he wrote, “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” By November of that year, Collins Dictionary had named “vibe coding” its Word of the Year, recognising how the term had come to encapsulate a fundamental shift in humanity's relationship with technology. As Alex Beecroft, managing director of Collins, explained: “The selection of 'vibe coding' as Collins' Word of the Year perfectly captures how language is evolving alongside technology.”

The concept is beguilingly simple. Rather than writing code line by line, users describe what they want in plain English, and large language models generate the software. Karpathy himself described the workflow with disarming candour: “I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like 'decrease the padding on the sidebar by half' because I'm too lazy to find it. I 'Accept All' always, I don't read the diffs anymore.” Or, as he put it more succinctly: “The hottest new programming language is English.”

For newsrooms, this represents both an extraordinary opportunity and a profound challenge. The Generative AI in the Newsroom project, a collaborative effort examining when and how to use generative AI in news production, has been tracking these developments closely. Their assessment suggests that 2026's most significant newsroom innovation will not emerge from development teams but from journalists who can now create their own tools. The democratisation of software development promises to unlock creativity and efficiency at unprecedented scale. But it also threatens to expose news organisations to security vulnerabilities, regulatory violations, and ethical failures that could undermine public trust in an industry already battling credibility challenges.

The stakes could hardly be higher. Journalism occupies a unique position in the information ecosystem, serving as a watchdog on power while simultaneously handling some of society's most sensitive information. From whistleblower communications to investigative documents, from source identities to personal data about vulnerable individuals, newsrooms are custodians of material that demands the highest standards of protection. When the barriers to building software tools collapse, the question becomes urgent: how do organisations ensure that the enthusiasm of newly empowered creators does not inadvertently compromise the very foundations of trustworthy journalism?

The Democratisation Revolution

Kerry Oslund, vice president of AI strategy at The E.W. Scripps Company, captured the zeitgeist at a recent industry panel when he declared: “This is the revenge of the English major.” His observation points to a fundamental inversion of traditional power structures in newsrooms. For decades, journalists with story ideas requiring custom tools had to queue for limited development resources, often watching their visions wither in backlogs or emerge months later in compromised form. Vibe coding tools like Lovable, Claude, Bubble AI, and Base44 have shattered that dependency.

The practical implications are already visible. At Scripps, the organisation has deployed over 300 AI “agents” handling complex tasks that once required significant human oversight. Oslund described “agent swarms” where multiple AI agents pass tasks to one another, compiling weekly reports, summarising deltas, and building executive dashboards without human intervention until the final review. The cost savings are tangible: “We eliminated all third-party voice actors and now use synthetic voice with our own talent,” Oslund revealed at a TV News Check panel.

During the same industry gathering, leaders from Gray Media, Reuters, and Stringr discussed similar developments. Gray Media is using AI to increase human efficiency in newsrooms, allowing staff to focus on higher-value journalism while automated systems handle routine tasks.

For community journalism, the potential is even more transformative. The Nieman Journalism Lab's predictions for 2026 emphasise how vibe coding tools have lowered the cost and technical expertise required to build prototypes, creating space for community journalists to experiment with new roles and collaborate with AI specialists. By translating their understanding of audience needs into tangible prototypes, journalists can instruct large language models on the appearance, features, and data sources they require for new tools.

One prominent data journalist, quoted in coverage of the vibe coding phenomenon, expressed the reaction of many practitioners: “Oh my God, this vibe coding thing is insane. If I had this during our early interactive news days, it would have been a godsend. Once you get the hang of it, it's like magic.”

But magic, as any journalist knows, demands scrutiny. As programmer Simon Willison clarified in his analysis: “If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book. That's using an LLM as a typing assistant.” The distinction matters enormously. True vibe coding, where users accept AI-generated code without fully comprehending its functionality, introduces risks that newsrooms must confront directly.

The Security Imperative and Shadow AI

IBM's 2025 Cost of a Data Breach Report revealed statistics that should alarm every news organisation considering rapid AI tool adoption. Thirteen percent of organisations reported breaches of AI models or applications, and of those compromised, a staggering 97% reported lacking AI access controls. Perhaps most troubling: one in five organisations reported breaches due to shadow AI, the unsanctioned use of AI tools by employees outside approved governance frameworks.

The concept of shadow AI represents an evolution of the “shadow IT” problem that has plagued organisations for decades. As researchers documented in Strategic Change journal, the progression from shadow IT to shadow AI introduces new threat vectors. AI systems possess intrinsic security vulnerabilities, from the potential compromising of training data to the exploitation of AI models and networks. When employees use AI tools without organisational oversight, these vulnerabilities multiply.

For newsrooms, the stakes are uniquely high. Journalists routinely handle information that could endanger lives if exposed: confidential sources, whistleblower identities, leaked documents revealing government or corporate malfeasance. The 2014 Sony Pictures hack demonstrated how devastating breaches can be, with hackers releasing salaries of employees and Hollywood executives alongside sensitive email traffic. Data breaches in media organisations are particularly attractive to malicious actors because they often contain not just personal information but intelligence with political or financial value.

Research firm Gartner predicts that by 2027, more than 40% of AI-related data breaches will be caused by improper use of generative AI across borders. The swift adoption of generative AI technologies by end users has outpaced the development of data governance and security measures. According to the Cloud Security Alliance, only 57% of organisations have acceptable use policies for AI tools, and fewer still have implemented access controls for AI agents and models, activity logging and auditing, or identity governance for AI entities.

The media industry's particular vulnerability compounds these concerns. As authentication provider Auth0 documented in an analysis of major data breaches affecting media companies: “Data breaches have become commonplace, and the media industry is notorious for being a magnet for cyberthieves.” With billions of users consuming news online, the attack surface for criminals continues to expand. Media companies frequently rely on external vendors, making it difficult to track third-party security practices even when internal processes are robust.

Liability in the Age of AI-Generated Code

When software fails, who bears responsibility? This question becomes extraordinarily complex when the code was generated by an AI and deployed by someone with no formal engineering training. The legal landscape remains unsettled, but concerning patterns are emerging.

Traditional negligence and product liability principles still apply, but courts have yet to clarify how responsibility should be apportioned between AI tool developers and the organisations utilising these tools. Most AI providers prominently warn that their systems can make mistakes and that outputs should be verified, while including warranty disclaimers that push due diligence burdens back onto the businesses integrating AI-generated code. The RAND Corporation's analysis of liability for AI system harms notes that “AI developers might also be held liable for malpractice should courts find there to be a recognised professional standard of care that a developer then violated.”

Copyright and intellectual property considerations add further complexity. In the United States, copyright protection hinges on human authorship. Both case law and the U.S. Copyright Office agree that copyright protection is available only for works created through human creativity. When code is produced solely by an AI without meaningful human authorship, it is not eligible for copyright protection.

Analysis by the Software Freedom Conservancy found that approximately 35% of AI-generated code samples contained licensing irregularities, potentially exposing organisations to significant legal liabilities. This “licence contamination” problem has already forced several high-profile product delays and at least two complete codebase rewrites at major corporations. In the United States, a lawsuit against GitHub Copilot (Doe v. GitHub, Inc.) argues that the tool suggests code without including necessary licence attributions. As of spring 2025, litigation continued.

For news organisations, the implications extend beyond licensing. In journalism, tools frequently interact with personal data protected under frameworks like the General Data Protection Regulation. Article 85 of the GDPR requires Member States to adopt exemptions balancing data protection with freedom of expression, but these exemptions are not blanket protections. The Austrian Constitutional Court declared the Austrian journalistic exemption unconstitutional, ruling that it was illegitimate to entirely exclude media data processing from data protection provisions. When Romanian journalists published videos and documents as part of an investigation, the national data protection authority demanded information that could have revealed their sources, under threat of penalties of up to 20 million euros.

A tool built through vibe coding that inadvertently logs source communications or retains metadata could expose a news organisation to regulatory action and, more critically, endanger the individuals who trusted journalists with sensitive information.

Protecting Vulnerable Populations and Investigative Workflows

Investigative journalism depends on systems of trust that have been carefully constructed over decades. Sources risk their careers, freedom, and sometimes lives to expose wrongdoing. The Global Investigative Journalism Network's guidance emphasises that “most of the time, sources or whistleblowers do not understand the risks they might be taking. Journalists should help them understand this, so they are fully aware of how publication of the information they have given could impact them.”

Digital security has become integral to this protective framework. SecureDrop, an open-source whistleblower submission system, has become standard in newsrooms committed to source protection. Encrypted messaging applications like Signal offer end-to-end encryption. These tools emerged from years of security research and have been vetted by experts who understand both the technical vulnerabilities and the human factors that can compromise even robust systems.

When a journalist vibe codes a tool for an investigation, they may inadvertently undermine these protections without recognising the risk. As journalist James Risen of The Intercept observed: “We're being forced to act like spies, having to learn tradecraft and encryption and all the new ways to protect sources. So, there's going to be a time when you might make a mistake or do something that might not perfectly protect a source. This is really hard work.”

The Perugia Principles for Journalists, developed in partnership with 20 international journalists and experts, establish twelve principles for working with whistleblowers in the digital age. First among them: “First, protect your sources. Defend anonymity when it is requested. Provide safe ways for sources to make 'first contact' with you, where possible.” A vibe-coded tool, built without understanding of metadata, logging, or network traffic patterns, could create exactly the kind of traceable communication channel that puts sources at risk.

Research from the Center for News, Technology and Innovation documents how digital security threats have become more important than ever for global news media. Journalists and publishers have become high-profile targets for malware, spyware, and digital surveillance. These threats risk physical safety, privacy, and mental health while undermining whistleblower protection and source confidentiality.

The resource disparity across the industry compounds these challenges. News organisations in wealthier settings are generally better resourced and more able to adopt protective technologies. Smaller, independent, and freelance journalists often lack the means to defend against threats. Vibe coding might seem to level this playing field by enabling under-resourced journalists to build their own tools, but without security expertise, it may instead expose them to greater risk.

Governance Frameworks for Editorial and Technical Leadership

The challenge for news organisations is constructing governance frameworks that capture the benefits of democratised development while mitigating its risks. Research on AI guidelines and policies from 52 media organisations worldwide, analysed by journalism researchers and published through Journalist's Resource, offers insights into emerging best practices.

The findings emphasise the need for human oversight throughout AI-assisted processes. As peer-reviewed analysis notes: “The maintenance of a 'human-in-the-loop' principle, where human judgment, creativity, and editorial oversight remain central to the journalistic process, is vital.” The Guardian requires senior editor approval for significant AI-generated content. The CBC has committed not to use AI-powered identification tools for investigative journalism without proper permissions.

The NIST AI Risk Management Framework provides a structured approach applicable to newsroom contexts. It guides organisations through four core functions: identifying how AI systems are used and where risks may appear (Map), evaluating risks using defined metrics (Measure), applying controls to mitigate risks (Manage), and establishing oversight structures to ensure accountability (Govern). The accompanying AI RMF Playbook offers practical guidance that organisations can adapt to their specific needs.
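The framework is organisational rather than technical, but even a lightweight checklist makes it usable at desk level. The following Python sketch is purely illustrative, with prompts of our own devising rather than anything drawn from the official Playbook, showing how a newsroom might turn the four functions into questions asked of every vibe-coded tool.

```python
# Hypothetical desk-level checklist loosely mapped to the NIST AI RMF functions.
# The prompts are illustrative and newsroom-specific, not taken from the NIST Playbook.
RMF_CHECKLIST = {
    "Map": [
        "What task does this tool perform, and who will use it?",
        "What data does it touch, and is any of it non-public?",
    ],
    "Measure": [
        "How was the tool tested, and against what acceptance criteria?",
        "What error rate or failure mode would be unacceptable?",
    ],
    "Manage": [
        "What controls (access limits, review steps, logging) are in place?",
        "What is the rollback plan if the tool misbehaves?",
    ],
    "Govern": [
        "Who owns this tool, and who approved its deployment?",
        "When is the next scheduled review?",
    ],
}

def print_checklist(tool_name: str) -> None:
    """Print the checklist for a named tool so answers can be recorded alongside it."""
    print(f"AI RMF checklist for: {tool_name}")
    for function, prompts in RMF_CHECKLIST.items():
        print(f"\n{function}:")
        for prompt in prompts:
            print(f"  - {prompt}")

print_checklist("vibe-coded FOI request tracker")
```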

MIT Sloan researchers have proposed a “traffic light” framework for categorising AI use cases by risk level. Red-light use cases are prohibited entirely. Green-light use cases, such as chatbots for general customer service, present low risk and can proceed with minimal oversight. Yellow-light use cases, which comprise most AI applications, require enhanced review and human judgment at critical decision points.

For newsrooms, this framework might translate as follows (a sketch of how the tiers could be encoded appears after the examples):

Green-light applications might include internal productivity tools, calendar management systems, or draft headline generators where errors create inconvenience rather than harm.

Yellow-light applications would encompass data visualisations for publication, interactive features using public datasets, and transcription tools for interviews with non-sensitive subjects. These require review by someone with technical competence before deployment.

Red-light applications would include anything touching source communications, whistleblower data, investigative documents, or personal information about vulnerable individuals. These should require professional engineering oversight and security review regardless of how they were initially prototyped.
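A minimal sketch of how such a tiering might be encoded appears below. The use cases and their classifications are illustrative assumptions only; in practice the register would be maintained by the organisation's AI governance leadership rather than hard-coded by an individual journalist.

```python
from enum import Enum

class RiskTier(Enum):
    GREEN = "proceed with minimal oversight"
    YELLOW = "requires technical review before deployment"
    RED = "requires professional engineering and security review"

# Illustrative mappings only; each newsroom would maintain its own register.
USE_CASE_TIERS = {
    "draft headline generator": RiskTier.GREEN,
    "internal calendar assistant": RiskTier.GREEN,
    "public-dataset interactive": RiskTier.YELLOW,
    "interview transcription (non-sensitive)": RiskTier.YELLOW,
    "source communication handling": RiskTier.RED,
    "whistleblower document processing": RiskTier.RED,
}

def required_oversight(use_case: str) -> RiskTier:
    """Look up a use case; anything not yet classified defaults to the most restrictive tier."""
    return USE_CASE_TIERS.get(use_case, RiskTier.RED)

print(required_oversight("draft headline generator").value)
print(required_oversight("scraper for leaked documents").value)  # unknown, so defaults to RED
```

Defaulting unknown use cases to red reflects the principle that a tool should earn a lighter tier through formal classification rather than assume one.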

Building Decision Trees for Non-Technical Staff

Operationalising these distinctions requires clear decision frameworks that non-technical staff can apply independently. The Poynter Institute's guidance on newsroom AI ethics policies emphasises the need for organisations to create AI committees and designate senior staff to lead ongoing governance efforts. As the guidance notes: “This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised.”

A practical decision tree for vibe-coded projects might begin with a series of questions (a sketch of how they could be encoded follows the list):

First, does this tool handle any data that is not already public? If so, escalate to technical review.

Second, could a malfunction in this tool result in publication of incorrect information, exposure of source identity, or violation of individual privacy? If yes, professional engineering oversight is required.

Third, will this tool be used by anyone other than its creator, or persist beyond a single use? Shared tools and long-term deployments require enhanced scrutiny.

Fourth, does this tool connect to external services, databases, or APIs? External connections introduce security considerations that require expert evaluation.

Fifth, would failure of this tool create legal liability, regulatory exposure, or reputational damage? Legal and compliance review should accompany technical review for such applications.
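The five questions translate naturally into a small piece of code. The sketch below assumes a hypothetical ToolProfile schema and review names of our own devising; it illustrates the escalation logic rather than prescribing any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class ToolProfile:
    """Answers a creator supplies before deploying a vibe-coded tool (hypothetical schema)."""
    handles_nonpublic_data: bool
    failure_could_harm: bool        # wrong publication, source exposure, privacy violation
    shared_or_persistent: bool      # used by others, or kept beyond a single use
    connects_externally: bool       # external services, databases, APIs
    legal_or_reputational_risk: bool

def required_reviews(profile: ToolProfile) -> list[str]:
    """Translate the five escalation questions into the reviews a tool must pass."""
    reviews = ["creator self-checklist"]
    if profile.handles_nonpublic_data or profile.connects_externally:
        reviews.append("technical review")
    if profile.failure_could_harm:
        reviews.append("professional engineering and security review")
    if profile.shared_or_persistent:
        reviews.append("enhanced scrutiny: documentation and named owner")
    if profile.legal_or_reputational_risk:
        reviews.append("legal and compliance review")
    return reviews

print(required_reviews(ToolProfile(True, True, False, True, True)))
```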

The Cloud Security Alliance's Capabilities-Based Risk Assessment framework offers additional granularity, suggesting that organisations apply proportional safeguards based on risk classification. Low-risk AI applications receive lightweight controls, medium-risk applications get enhanced monitoring, and high-risk applications require full-scale governance including regular audits.

Bridging the Skills Gap Without Sacrificing Speed

The tension at the heart of vibe coding governance is balancing accessibility against accountability. The speed and democratisation that make vibe coding attractive would be undermined by bureaucratic review processes that reimpose the old bottlenecks. Yet the alternative, allowing untrained staff to deploy tools handling sensitive information, creates unacceptable risks.

Several approaches can help navigate this tension.

Tiered review processes can match the intensity of oversight to the risk level of the application. Simple internal tools might require only a checklist review by the creator themselves. Published tools or those handling non-public data might need peer review by a designated “AI champion” with intermediate technical knowledge. Tools touching sensitive information would require full security review by qualified professionals.

Pre-approved templates and components can provide guardrails that reduce the scope for dangerous errors. News organisations can work with their development teams to create vetted building blocks: secure form handlers, properly configured database connections, privacy-compliant analytics modules. Journalists can be directed to incorporate these components rather than generating equivalent functionality from scratch.
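As an illustration of what one vetted building block might look like, the sketch below shows a hypothetical safe_fetch helper that a development team could publish for reuse: it refuses plain HTTP, applies a timeout, and logs only the host rather than the full URL, which might carry tokens or identifiers. It is a sketch of the idea under those assumptions, not a substitute for a proper security review.

```python
import logging
from urllib.parse import urlsplit
from urllib.request import urlopen

logger = logging.getLogger("newsroom.components")

def safe_fetch(url: str, timeout: float = 10.0) -> bytes:
    """Fetch a URL with basic guardrails, as a pre-approved component might.

    - Refuses non-HTTPS URLs so traffic is encrypted in transit.
    - Applies a timeout so a hung request cannot stall a tool indefinitely.
    - Logs only the scheme and host, never the path or query string,
      which might contain identifiers or tokens.
    """
    parts = urlsplit(url)
    if parts.scheme != "https":
        raise ValueError("safe_fetch only accepts https:// URLs")
    logger.info("Fetching from %s://%s", parts.scheme, parts.netloc)
    with urlopen(url, timeout=timeout) as response:
        return response.read()
```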

Sandboxed development environments can allow experimentation without production risk. Vibe-coded prototypes can be tested and evaluated in isolated environments before any decision about broader deployment. This preserves the creative freedom that makes vibe coding valuable while creating a checkpoint before tools reach users or sensitive data.

Mandatory training programmes should ensure that all staff using vibe coding tools understand basic security concepts, data handling requirements, and the limitations of AI-generated code. This training need not make everyone a programmer, but it should cultivate healthy scepticism about what AI tools produce and awareness of the questions to ask before deployment.

The Emerging Regulatory Landscape

News organisations cannot develop governance frameworks in isolation from the broader regulatory environment. The European Union's AI Act, adopted in 2024, establishes requirements that will affect media organisations using AI tools. While journalism itself is not classified as high-risk under the Act, AI systems used in media that could manipulate public opinion or spread disinformation face stricter oversight. AI-generated content, including synthetic media, must be clearly labelled.

The Dynamic Coalition on the Sustainability of Journalism and News Media released its 2024-2025 Annual Report on AI and Journalism, calling for shared strategies to safeguard journalism's integrity in an AI-driven world. The report urges decision-makers to “move beyond reactive policy-making and invest in forward-looking frameworks that place human rights, media freedom, and digital inclusion at the centre of AI governance.”

In the United States, the regulatory landscape is more fragmented. More than 1,000 AI-related bills have been introduced across state legislatures in 2024-2025. California, Colorado, New York, and Illinois have adopted or proposed comprehensive AI and algorithmic accountability laws addressing transparency, bias mitigation, and sector-specific safeguards. News organisations operating across multiple jurisdictions must navigate a patchwork of requirements.

The Center for News, Technology and Innovation's review of 188 national and regional AI strategies found that regulatory attempts rarely directly address journalism and vary dramatically in their frameworks, enforcement capacity, and international coordination. This uncertainty places additional burden on news organisations to develop robust internal governance rather than relying on external regulatory guidance.

Cultural Transformation and Organisational Learning

Technical governance alone cannot address the challenges of democratised development. Organisations must cultivate cultures that balance innovation with responsibility.

IBM's research on shadow AI governance emphasises that employees should be “encouraged to disclose how they use AI, confident that transparency will be met with guidance, not punishment. Leadership, in turn, should celebrate responsible experimentation as part of organisational learning.” Punitive approaches to unsanctioned AI use tend to drive it underground, where it becomes invisible to governance processes.

News organisations have particular cultural advantages in addressing these challenges. Journalism is built on verification, scepticism, and accountability. The same instincts that lead journalists to question official sources and demand evidence should be directed at AI-generated outputs. Newsroom cultures that emphasise “trust but verify” can extend this principle to tools and code as readily as to sources and documents.

The Scripps approach, which Oslund described as starting with “guardrails and guidelines to prevent missteps,” offers a model. “It all starts with public trust,” Oslund emphasised, noting Scripps' commitment to accuracy and human oversight of AI outputs. Embedding AI governance within broader commitments to editorial integrity may prove more effective than treating it as a separate technical concern.

The Accountability Question

When something goes wrong with a vibe-coded tool, who is responsible? This question resists easy answers but demands organisational clarity.

The journalist who created the tool bears some responsibility, but their liability should be proportional to what they could reasonably have been expected to understand. An editor who approved deployment shares accountability, as does any technical reviewer who cleared the tool. The organisation itself, having enabled vibe coding without adequate governance, may bear ultimate responsibility.

Clear documentation of decision-making processes becomes essential. When a tool is deployed, records should capture: who created it, what review it received, who approved it, what data it handles, and what risk assessment was performed. This documentation serves both as a protection against liability and as a learning resource when problems occur.
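A minimal sketch of what such a record might look like, assuming a hypothetical schema; many organisations would capture the same fields in a registry or form rather than in code, but the fields mirror those listed above.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class DeploymentRecord:
    """Hypothetical audit record for a vibe-coded tool; fields mirror the text above."""
    tool_name: str
    created_by: str
    review_performed: str           # e.g. "peer review by AI champion", "full security audit"
    approved_by: str
    data_handled: list[str] = field(default_factory=list)
    risk_assessment: str = ""
    deployed_on: str = field(default_factory=lambda: date.today().isoformat())

record = DeploymentRecord(
    tool_name="council-agenda summariser",
    created_by="local government reporter",
    review_performed="peer review by designated AI champion",
    approved_by="news editor",
    data_handled=["public council agendas"],
    risk_assessment="yellow: published output, public data only",
)

# Serialise to JSON so the record can be stored alongside other audit logs.
print(json.dumps(asdict(record), indent=2))
```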

As professional standards for AI governance in journalism emerge, organisations that ignore them may face enhanced liability exposure. The development of industry norms creates benchmarks against which organisational practices will be measured.

Recommendations for News Organisations

Based on the analysis above, several concrete recommendations emerge for news organisations navigating the vibe coding revolution.

Establish clear acceptable use policies for AI development tools, distinguishing between permitted, restricted, and prohibited use cases. Make these policies accessible and understandable to non-technical staff.

Create tiered review processes that match oversight intensity to risk level. Not every vibe-coded tool needs a security audit, but those handling sensitive data or reaching public audiences require appropriate scrutiny.

Designate AI governance leadership within the organisation, whether through an AI committee, a senior editor with oversight responsibility, or a dedicated role. This leadership should have authority to pause or prohibit deployments that present unacceptable risk.

Invest in training that builds basic security awareness and AI literacy across editorial staff. Training should emphasise the limitations of AI-generated code and the questions to ask before deployment.

Develop pre-approved components for common functionality, allowing vibe coders to build on vetted foundations rather than generating security-sensitive code from scratch.

Implement sandbox environments for development and testing, creating separation between experimentation and production systems handling real data.

Maintain documentation of all AI tool deployments, including creation, review, approval, and risk assessment records.

Conduct regular audits of deployed tools, recognising that AI-generated code may contain latent vulnerabilities that only become apparent over time.

Engage with regulatory developments at national and international levels, ensuring that internal governance anticipates rather than merely reacts to legal requirements.

Foster cultural change that treats AI governance as an extension of editorial integrity rather than a constraint on innovation.

Vibe coding represents neither utopia nor dystopia for newsrooms. It is a powerful capability that, like any technology, will be shaped by the choices organisations make about its use. The democratisation of software development can expand what journalism is capable of achieving, empowering practitioners to create tools tailored to their specific needs and audiences. But this empowerment carries responsibility.

The distinction between appropriate prototyping and situations requiring professional engineering oversight is not always obvious. Decision frameworks and governance structures can operationalise this distinction, but they require ongoing refinement as technology evolves and organisational learning accumulates. Liability, compliance, and ethical accountability gaps are real, particularly where published tools interface with sensitive data, vulnerable populations, or investigative workflows.

Editorial and technical leadership must work together to ensure that speed and accessibility gains do not inadvertently expose organisations to data breaches, regulatory violations, or reputational damage. The journalists building tools through vibe coding are not the enemy; they are practitioners seeking to serve their audiences and advance their craft. But good intentions are insufficient protection against technical vulnerabilities or regulatory requirements.

As the Generative AI in the Newsroom project observes, the goal is “collaboratively figuring out how and when (or when not) to use generative AI in news production.” That collaborative spirit, extending across editorial and technical domains, offers the best path forward. Newsrooms that get this balance right will harness vibe coding's transformative potential while maintaining the trust that makes journalism possible. Those that do not may find that the magic of democratised development comes with costs their organisations, their sources, and their audiences cannot afford.


References and Sources

  1. Karpathy, A. (2025). “Vibe Coding.” X (formerly Twitter). https://x.com/karpathy/status/1886192184808149383

  2. Collins Dictionary. (2025). “Word of the Year 2025: Vibe Coding.” https://www.collinsdictionary.com/us/woty

  3. CNN. (2025). “'Vibe coding' named Collins Dictionary's Word of the Year.” https://www.cnn.com/2025/11/06/tech/vibe-coding-collins-word-year-scli-intl

  4. Generative AI in the Newsroom. (2025). “Vibe Coding for Newsrooms.” https://generative-ai-newsroom.com/vibe-coding-for-newsrooms-6848b17dac99

  5. Nieman Journalism Lab. (2025). “Rise of the vibecoding journalists.” https://www.niemanlab.org/2025/12/rise-of-the-vibecoding-journalists/

  6. TV News Check. (2025). “Agent Swarms And Vibe Coding: Inside The New Operational Reality Of The Newsroom.” https://tvnewscheck.com/ai/article/agent-swarms-and-vibe-coding-inside-the-new-operational-reality-of-the-newsroom/

  7. The E.W. Scripps Company. (2024). “Scripps creates AI team to lead strategy, business development and operations across company.” https://scripps.com/press-releases/scripps-creates-ai-team-to-lead-strategy-business-development-and-operations-across-company/

  8. IBM Newsroom. (2025). “IBM Report: 13% Of Organizations Reported Breaches Of AI Models Or Applications.” https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications

  9. Gartner. (2025). “Gartner Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027.” https://www.gartner.com/en/newsroom/press-releases/2025-02-17-gartner-predicts-forty-percent-of-ai-data-breaches-will-arise-from-cross-border-genai-misuse-by-2027

  10. Auth0. (2024). “11 of the Worst Data Breaches in Media.” https://auth0.com/blog/11-of-the-worst-data-breaches-in-media/

  11. Threatrix. (2025). “Software Liability in 2025: AI-Generated Code Compliance & Regulatory Risks.” https://threatrix.io/blog/threatrix/software-liability-in-2025-ai-generated-code-compliance-regulatory-risks/

  12. MBHB. (2025). “Navigating the Legal Landscape of AI-Generated Code: Ownership and Liability Challenges.” https://www.mbhb.com/intelligence/snippets/navigating-the-legal-landscape-of-ai-generated-code-ownership-and-liability-challenges/

  13. European Data Journalism Network. (2024). “Data protection in journalism: a practical handbook.” https://datavis.europeandatajournalism.eu/obct/data-protection-handbook/gdpr-applied-to-journalism.html

  14. Global Investigative Journalism Network. (2025). “Expert Advice to Keep Your Sources and Whistleblowers Safe.” https://gijn.org/stories/gijc25-tips-keep-sources-whistleblowers-safe/

  15. Journalist's Resource. (2024). “Researchers compare AI policies and guidelines at 52 news organizations.” https://journalistsresource.org/home/generative-ai-policies-newsrooms/

  16. SAGE Journals. (2024). “AI Ethics in Journalism (Studies): An Evolving Field Between Research and Practice.” https://journals.sagepub.com/doi/10.1177/27523543241288818

  17. Poynter Institute. (2024). “Your newsroom needs an AI ethics policy. Start here.” https://www.poynter.org/ethics-trust/2024/how-to-create-newsroom-artificial-intelligence-ethics-policy/

  18. Center for News, Technology and Innovation. (2024). “Journalism's New Frontier: An Analysis of Global AI Policy Proposals and Their Impacts on Journalism.” https://cnti.org/reports/journalisms-new-frontier-an-analysis-of-global-ai-policy-proposals-and-their-impacts-on-journalism/

  19. Media Rights Agenda. (2025). “DC-Journalism Launches 2024/2025 Annual Report on Artificial Intelligence, Journalism.” https://mediarightsagenda.org/dc-journalism-launches-2024-2025-annual-report-on-artificial-intelligence-journalism/

  20. NIST. (2024). “AI Risk Management Framework.” https://www.nist.gov/itl/ai-risk-management-framework

  21. Cloud Security Alliance. (2025). “Capabilities-Based AI Risk Assessment (CBRA) for AI Systems.” https://cloudsecurityalliance.org/artifacts/capabilities-based-risk-assessment-cbra-for-ai-systems

  22. Palo Alto Networks. (2025). “What Is Shadow AI? How It Happens and What to Do About It.” https://www.paloaltonetworks.com/cyberpedia/what-is-shadow-ai

  23. IBM. (2025). “What Is Shadow AI?” https://www.ibm.com/think/topics/shadow-ai

  24. Help Net Security. (2025). “Shadow AI risk: Navigating the growing threat of ungoverned AI adoption.” https://www.helpnetsecurity.com/2025/11/12/delinea-shadow-ai-governance/

  25. Wikipedia. (2025). “Vibe coding.” https://en.wikipedia.org/wiki/Vibe_coding

  26. Simon Willison. (2025). “Not all AI-assisted programming is vibe coding (but vibe coding rocks).” https://simonwillison.net/2025/Mar/19/vibe-coding/

  27. RAND Corporation. (2024). “Liability for Harms from AI Systems: The Application of U.S. Tort Law.” https://www.rand.org/pubs/research_reports/RRA3243-4.html

  28. Center for News, Technology and Innovation. (2024). “Journalists & Cyber Threats.” https://innovating.news/article/journalists-cyber-threats/

  29. USC Center for Health Journalism. (2025). “An early AI pioneer shares how the 'vibe coding' revolution could reshape data journalism.” https://centerforhealthjournalism.org/our-work/insights/early-ai-pioneer-shares-how-vibe-coding-revolution-could-reshape-data-journalism

  30. Wiley Online Library. (2024). “From Shadow IT to Shadow AI: Threats, Risks and Opportunities for Organizations.” Strategic Change. https://onlinelibrary.wiley.com/doi/10.1002/jsc.2682

  31. U.S. Copyright Office. (2024). “Copyright and Artificial Intelligence.” https://www.copyright.gov/ai/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
