Journalism Built on Borrowed Code: What Happens When the Vibe Coders Leave

In February 2025, Andrej Karpathy, former director of AI at Tesla and co-founder of OpenAI, introduced a term that would reshape how millions think about software development. “There's a new kind of coding I call 'vibe coding,'” he wrote on social media, “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” By November 2025, Collins Dictionary had named “vibe coding” its Word of the Year, defining it as “using natural-language prompts to have AI assist in writing computer code.”
The concept struck a nerve across industries far beyond Silicon Valley. By March 2025, Y Combinator reported that 25 percent of startup companies in its Winter 2025 batch had codebases that were 95 percent AI-generated. “It's not like we funded a bunch of non-technical founders,” emphasised Jared Friedman, YC's managing partner. “Every one of these people is highly technical, completely capable of building their own products from scratch. A year ago, they would have built their product from scratch, but now 95% of it is built by an AI.”
Y Combinator's CEO Garry Tan confirmed the trend's significance: “What that means for founders is that you don't need a team of 50 or 100 engineers. You don't have to raise as much. The capital goes much longer.” The Winter 2025 batch grew 10 percent per week in aggregate, making it the fastest-growing cohort in YC history.
For resource-constrained industries like journalism, this sounds transformative. Newsrooms that could never afford dedicated development teams can now build custom tools, automate workflows, and create reader-facing applications through natural language prompts. Domain experts, those who understand investigative methodology, editorial ethics, and audience needs, can translate their knowledge directly into functioning software without learning Python or JavaScript.
But beneath this promising surface lies a troubling question that few organisations are asking: what happens when the people who orchestrated these AI-built systems leave? What happens when AI capabilities plateau, as some researchers suggest is already happening? And who is governing the security vulnerabilities and technical debt accumulating in organisations that have traded coding expertise for prompt engineering prowess?
The New Skill Composition: From Coders to Orchestrators
The shift from coding expertise to project management competency represents more than a tactical adjustment. It fundamentally alters the skill composition and knowledge distribution within non-technical creative teams, creating new hierarchies of capability that look nothing like traditional software development.
According to Gartner's 2025 AI Skills Report, over 40 percent of new AI-related roles involve prompt design, evaluation, or orchestration rather than traditional programming. The Project Management Institute now offers certification in prompt engineering, recognising it as an essential skill for project professionals. As one industry analysis noted, “2025 is seeing a shift from model-building to model-using. Many companies now need prompt engineers more than machine-learning engineers.”
This represents a profound reordering of how technical work gets done. The PMI describes this transformation directly: “Artificial Intelligence has swiftly become a game-changer in the world of project management. Yet, to fully harness its potential, project managers need more than just awareness, they need a new skill: prompt engineering.” Writing effective prompts for generative AI is now considered a skill that project managers can learn and refine to drive better, faster results.
For journalism and other domain-expert-driven fields, this initially appears liberating. Reporters who understand the rhythm of breaking news can design alert systems. Investigators who know which databases matter can build cross-referencing tools. Audience specialists can create personalised content delivery mechanisms. The people who understand the problems are now the people solving them.
The Nieman Journalism Lab described this evolution in its 2025 predictions: “In 2026, more newsrooms will break from their print-era architecture and rebuild around how information now moves through AI systems. News organisations will shift from production-heavy workflows to dynamic, always-on knowledge environments.” Reuters Institute for the Study of Journalism convened 17 experts to forecast how AI would reshape news in 2026, with many predicting that newsroom reporters and developers would collaborate on end-to-end automation with human review, using flexible tools and custom code.
But this democratisation comes with a hidden cost. When vibe coding enables anyone to build software, it distributes the power to create whilst concentrating the capacity to maintain. The person who prompted an AI to build a data visualisation tool may not understand why that tool breaks when the underlying API changes. The editor who orchestrated a comment moderation system may not recognise the security vulnerabilities embedded in its architecture.
Stack Overflow's annual developer survey reveals the scope of this challenge. Whilst 63 percent of professional developers were using AI in their development process by 2024, with another 14 percent planning to start soon, the nature of that usage varied dramatically. For experienced developers, AI served as an accelerant, handling boilerplate whilst they focused on architecture and security. For non-technical users embracing vibe coding, the AI was not an assistant but a replacement for understanding itself.
The distinction matters enormously. As Karpathy himself described his approach: he uses voice input to talk to the AI, barely touching the keyboard. He asks for things like “decrease the padding on the sidebar by half” and always clicks “Accept All” without reading the code changes. When he encounters error messages, he just copy-pastes them in with no comment, and usually that fixes it. “The code grows beyond my usual comprehension,” he acknowledged. “I'd have to really read through it for a while.”
The Complexity Ceiling: Where Vibe Coding Breaks Down
The promise that vibe coding will empower anyone to create functional applications has a fundamental limitation that becomes apparent only after months of enthusiastic adoption. Fast Company reported in September 2025 that the “vibe coding hangover” had arrived, with senior software engineers describing “development hell” when working with AI-generated code.
“Code created by AI coding agents can become development hell,” explained Jack Zante Hays, a senior software engineer at PayPal who works on AI software development tools. According to Hays, vibe coding tools hit a “complexity ceiling” once a codebase grows beyond a certain size. “Small code bases might be fine up until they get to a certain size, and that's typically when AI tools start to break more than they solve.”
The problems compound in ways that non-technical users cannot anticipate. “Vibe coding, especially from nonexperienced users who can only give the AI feature demands, can involve changing like 60 things at once, without testing, so 10 things can be broken at once,” Hays continued. This cascading failure mode is invisible to someone who cannot read the code and understand its dependencies.
A recent survey of 793 builders who tested vibe coding alongside other development approaches found that only 32.5 percent trust vibe coding for business-critical work, and just 9 percent deploy these tools for that work. Most vibe coding tools excel at getting users 70 to 80 percent of the way, then effectively say, “Now hire a developer,” which erodes user trust and creates stranded projects.
For newsrooms, this complexity ceiling arrives precisely when stakes are highest. A simple article-tagging tool might work beautifully for months. But when traffic spikes during breaking news, when the content management system updates, or when a new data source requires integration, the tool that “just worked” suddenly fails in ways nobody on staff can diagnose.
This is not theoretical. In July 2025, a vibe-coded AI agent deleted a live production database during a code freeze, ignoring repeated instructions to stop. Whilst this incident occurred in a technology company rather than a newsroom, the implications for journalism are clear: AI-generated systems can fail catastrophically, and when they do, they require exactly the kind of deep technical expertise that vibe coding was meant to replace.
Even Karpathy acknowledged the limitations, noting that vibe coding works well for “throwaway weekend projects.” The challenge for 2025 and beyond was figuring out where that line falls. Tan, Y Combinator's CEO, also warned that AI-generated code may face challenges at scale and that developers need classical coding skills to sustain products.
The Institutional Memory Problem: Knowledge That Walks Out the Door
Every organisation grapples with knowledge loss when employees depart. Research by Sinequa found that 67 percent of IT leaders are concerned by the loss of knowledge and expertise when people leave, with 64 percent reporting that their organisation has already experienced such losses. An organisation with 30,000 employees can expect to lose $72 million annually in productivity due to inefficiencies caused by knowledge gaps, according to industry analyses.
The financial impact of knowledge loss extends far beyond productivity. Losing a single employee means losing crucial knowledge, and can cost companies up to 213 percent of that individual's salary, because it takes up to two years for a new hire to reach the same level of efficiency as their predecessor. For highly skilled positions, such as those in technology fields, the greater threat is the difficulty of quantifying and replacing that expertise at all.
But vibe coding creates a particularly insidious form of institutional amnesia. Traditional software development produces documentation, code comments, version histories, and test suites that preserve knowledge even after developers leave. The code itself serves as a form of institutional memory, readable by any competent engineer. Vibe-coded systems produce none of this.
When a project manager who orchestrated an AI-built newsroom tool leaves, they take with them not just understanding of how the system works, but the conversational history with the AI that created it, the iterative refinements that addressed edge cases, and the tacit knowledge of which prompts produce which outcomes. The organisation is left with functioning code that nobody understands and no documentation that explains it.
Tacit knowledge, developed through a person's experiences, observations, and insights, is particularly at risk. It is hard to pass along through writing or verbalisation and typically requires shared activities to transfer. If an employee holding this kind of knowledge leaves unexpectedly, the result can be a crisis for the organisation.
The problem extends beyond individual departures. As CIO Dive reported, the greater business threat from technology turnover “is a cumulative decline of institutional knowledge.” Nearly half of survey respondents believe that loss of knowledge and expertise within their organisations undermines hiring efforts. Another 56 percent agree that loss of organisational knowledge has made onboarding more difficult and less effective.
For journalism, where institutional memory encompasses not just technical knowledge but editorial standards, source relationships, and investigation methodologies, this represents an existential risk. A newsroom that builds its technical infrastructure on vibe-coded foundations is one departure away from systems it cannot maintain, modify, or even understand.
When AI Capabilities Plateau: The Coming Infrastructure Crisis
The assumption underlying vibe coding's appeal is that AI capabilities will continue improving indefinitely. Each limitation encountered today will be solved by tomorrow's model. But what if that assumption proves wrong?
There is growing evidence that frontier AI models may be approaching a ceiling. One analysis called it “a well-kept secret in the AI industry: for over a year now, frontier models appear to have reached their ceiling.” The scaling laws that powered the exponential progress of large language models like GPT-4, and fuelled bold predictions of artificial general intelligence by 2026, have started to show diminishing returns.
Inside leading AI labs, consensus is growing that simply adding more data and compute will not create the breakthroughs once promised. As machine learning pioneer Ilya Sutskever observed: “The 2010s were the age of scaling, now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever.”
Many respected voices in the field, from Yann LeCun to Michael Jordan, have long argued that large language models will not achieve artificial general intelligence. Instead, progress will require new breakthroughs, as the curve of innovation flattens. The path forward is no longer a matter of simply adding more computational power.
The practical constraints are equally significant. GPU supply chain disruptions, driven by geopolitical tensions and soaring demand, have hindered AI scaling efforts. According to Bain and Company, future demand and potential pricing spikes may disrupt scaling by 2026. Foundry capacity for advanced chips has already been fully booked by leading technology companies until 2026.
For organisations that have built their infrastructure on the assumption of ever-improving AI assistance, a plateau scenario creates immediate problems. Systems that could be fixed by “asking the AI” will require human intervention that nobody on staff can provide. Workflows that depended on AI capabilities improving to handle new requirements will stagnate. The technical debt that accumulated whilst AI appeared to manage complexity will suddenly demand repayment.
IBM's 2026 predictions acknowledged this reality: “2026 will be the year of frontier versus efficient model classes.” Experts share a common belief that efficiency will be the new frontier, suggesting that organisations can no longer assume raw capability improvements will solve their problems.
Technical Debt: The Hidden Tax on AI-Generated Code
Technical debt, the accumulated cost of shortcuts and suboptimal decisions in software development, has always challenged organisations. But AI-generated code creates technical debt at unprecedented scale and velocity.
Research from Ox Security analysing 300 open-source projects, including 50 that were AI-generated, found that AI-generated code is “highly functional but systematically lacking in architectural judgment.” Anti-patterns occurred at high frequency in the vast majority of AI-generated code. As one researcher wrote, “Traditional technical debt accumulates linearly, but AI technical debt is different. It compounds.” The researcher identified three main vectors: model versioning chaos, code generation bloat, and organisation fragmentation.
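The difference between linear and compounding accumulation is easy to gloss over, so a toy calculation helps. The numbers below are purely illustrative, not drawn from the research; the point is only the shape of the two curves:

```python
# Toy model of the claim that AI technical debt compounds rather than
# accumulates linearly. All numbers are illustrative, not empirical.

def linear_debt(releases, cost_per_release=10):
    """Traditional debt: a fixed amount of new debt per release."""
    return releases * cost_per_release

def compounding_debt(releases, initial=10, rate=0.25):
    """Compounding debt: each release adds 25% of the debt already
    present, as fixes are layered onto code nobody fully understands."""
    debt = initial
    for _ in range(releases):
        debt *= 1 + rate
    return debt

# Compounding debt starts smaller, overtakes at release 11, then explodes.
for releases in (5, 10, 20):
    print(releases, linear_debt(releases), round(compounding_debt(releases)))
```

With these illustrative parameters, twenty releases leave the compounding codebase carrying more than four times the debt of the linear one, which is why the report's distinction matters for budgeting.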
Gartner estimated that over 40 percent of IT budgets are consumed by dealing with technical debt, whilst a Deloitte survey showed 70 percent of technology leaders believe technical debt is slowing down digital transformation initiatives. Gartner predicts that by 2030, 50 percent of enterprises will face delayed AI upgrades and rising maintenance costs due to unmanaged generative AI technical debt.
The velocity gap compounds the problem. AI has significantly increased the real cost of carrying technical debt. As one analysis noted, “Generative AI dramatically widens the gap in velocity between 'low-debt' and 'high-debt' coding. Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with legacy codebases will struggle to adopt them, making the penalty for having a 'high-debt' codebase larger than ever.”
AI-generated snippets often encourage copy-paste practices instead of thoughtful refactoring, creating bloated, fragile systems that are harder to maintain and scale. As one expert at UST noted, this creates “the paradoxical challenge” of AI development: “The capacity to generate code at unprecedented velocity can compound architectural inconsistencies without proper governance frameworks.”
For newsrooms operating on constrained budgets, technical debt creates a particularly vicious cycle. Without resources for dedicated engineering staff, organisations turn to vibe coding to build needed tools. Those tools accumulate technical debt that eventually requires engineering expertise to address. But the organisation still lacks that expertise, so it either abandons the tool or attempts more vibe coding to fix it, creating additional debt.
Companies that are well-positioned for change typically set aside around 15 percent of their IT budgets for technical debt remediation. Few newsrooms can afford such an allocation, making the accumulation of debt particularly dangerous.
“If people blindly use code generated by AI because it worked, then they will quickly learn everything they ever wanted to know about technical debt,” warned one expert. “You still need an engineer with judgment to determine what is appropriate.”
Security Vulnerabilities: The Invisible Threat
The security implications of vibe-coded systems deserve particular attention in journalism, where protecting sources, maintaining reader trust, and safeguarding sensitive data are professional obligations. The evidence suggests that AI-generated code is systematically insecure.
Veracode's 2025 GenAI Code Security Report, which analysed code produced by over 100 large language models across 80 real-world coding tasks, found that generative AI introduced security vulnerabilities in 45 percent of test cases, with those flaws falling within the OWASP Top 10, the list of the most critical web application security risks.
The failure rates varied by programming language, but none was safe. Java had the highest failure rate, with AI-generated code introducing security flaws more than 70 percent of the time. Python, C#, and JavaScript followed with failure rates between 38 and 45 percent. Large language models failed to secure code against cross-site scripting and log injection in 86 and 88 percent of cases respectively.
“The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built,” explained Jens Wessling, chief technology officer at Veracode. “The main concern with this trend is that they do not need to specify security constraints to get the code they want, effectively leaving secure coding decisions to LLMs.”
Most troublingly, the research shows that models are getting better at coding accurately but are not improving at security. Larger models do not perform significantly better than smaller models, suggesting this is a systemic issue rather than a problem that scale will solve.
For newsrooms, the implications extend beyond data breaches. AI-generated code can leak proprietary source code to unauthorised external tools. Agents can invent non-existent library names, which attackers register as malicious packages in a technique called “slopsquatting.” Commercial models hallucinate non-existent packages 5.2 percent of the time, whilst open-source models do so 21.7 percent of the time. Common risks include injection vulnerabilities, insecure data handling, and broken access control, precisely the vulnerabilities that could expose confidential sources or compromise editorial systems.
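One low-cost defence against slopsquatting is to stop trusting dependency lists that come out of an AI tool. The sketch below, a minimal illustration rather than a production control, checks each package name in a generated requirements list against a human-vetted allowlist; the package names shown are examples:

```python
# Minimal guard against "slopsquatting": AI tools sometimes hallucinate
# package names, which attackers can register as malicious lookalikes.
# Instead of installing blindly, check names against a vetted allowlist.

VETTED_PACKAGES = {"requests", "pandas", "beautifulsoup4", "feedparser"}

def audit_requirements(lines):
    """Return package names that are not on the vetted allowlist."""
    flagged = []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if name and not name.startswith("#") and name not in VETTED_PACKAGES:
            flagged.append(name)
    return flagged

# An AI-generated requirements file containing one hallucinated package:
requirements = ["requests==2.32.0", "pandas==2.2.0", "newsroom-scraper-utils==1.0"]
print(audit_requirements(requirements))  # → ['newsroom-scraper-utils']
```

Anything flagged goes to a human, who verifies the package actually exists and is the intended one before it is ever installed.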
The threat landscape is not static. AI is enabling attackers to identify and exploit security vulnerabilities more quickly and effectively. Tools powered by AI can scan systems at scale, identify weaknesses, and even generate exploit code with minimal human input. In 2025, researchers unveiled PromptLocker, the first AI-powered ransomware proof of concept, demonstrating that theft and encryption could be automated at remarkably low cost, about $0.70 per full attack using commercial APIs, and essentially free with open-source models.
Governance Frameworks: What News Organisations Need
The combination of institutional knowledge risk, technical debt accumulation, and security vulnerabilities demands governance frameworks that most news organisations lack. Budget constraints mean limited capacity for security review or infrastructure oversight, yet the consequences of ungoverned vibe coding could undermine editorial credibility and reader trust.
The good news is that models exist. The Freedom of the Press Foundation provides digital security support specifically designed for journalists, offering bespoke solutions rooted in deep technical expertise and a clear understanding of the challenges faced by journalists. They are committed to ensuring accessible, relevant, and right-sized digital security support for all journalists, from security novices to reporters working in the most high-risk environments.
The Global Cyber Alliance has developed a Cybersecurity Toolkit for Journalists intended to empower independent journalists, watchdogs, and small newsrooms to protect their data, sources, and reputation with free and effective tools.
The Global Investigative Journalism Network offers the Journalist Security Assessment Tool, a free, comprehensive self-test that identifies security weaknesses in newsroom operations. As the Reuters Institute has argued, key strategies must include clearer and narrowly drawn legal protections, promoting information security culture in newsrooms, providing training and tools for digital security, establishing secure communication methods, and better data and empirical research to track threats and responses.
But these resources focus primarily on protecting journalists from external threats rather than governing the internal risks of AI-generated code. A comprehensive governance framework for vibe coding in journalism would need to address several distinct challenges.
First, organisations need centralised oversight of what is being built. Shadow IT, where employees deploy systems without explicit organisational approval, has always created risks, and shadow AI amplifies them dramatically. A 2025 survey by Komprise found that 90 percent of respondents were concerned about shadow AI from a privacy and security standpoint; nearly 80 percent had already experienced negative AI-related data incidents, and 13 percent reported that those incidents caused financial, customer, or reputational harm. According to IBM's 2025 Cost of a Data Breach Report, AI-associated breaches cost organisations more than $650,000 each.
Second, governance must establish clear boundaries for what vibe coding can and cannot touch. As one security expert advised, “Don't use AI to generate a whole app. Avoid letting it write anything critical like auth, crypto or system-level code.” For newsrooms, this means authentication systems, source protection mechanisms, data handling for sensitive documents, and anything touching reader privacy must remain outside vibe coding's scope.
Third, organisations need documentation requirements that survive individual departures. When a project manager builds a tool through AI prompts, they must record not just what the tool does but how it was built, what prompts were used, what iterations occurred, and what limitations were discovered. This documentation becomes institutional memory that can inform future maintenance or replacement.
Fourth, news organisations must implement minimum security standards for any AI-generated code before deployment. This includes automated scanning for known vulnerabilities, review of data handling practices, verification that the tool does not introduce dependencies on external services, and testing under failure conditions.
Fifth, governance should require human expertise checkpoints. As Gartner's Arun Chandrasekaran recommended, organisations must establish “clear standards for reviewing and documenting AI-generated assets and tracking technical debt metrics in IT dashboards to prevent costly disruptions.” This requires budget allocation for periodic expert review even when organisations cannot afford full-time technical staff.
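The fourth checkpoint, automated scanning before deployment, can start very small. Dedicated tools such as Bandit or Semgrep go much further, but even a few lines using Python's standard-library `ast` module can flag constructs that should always trigger human review; this is a sketch, not a complete scanner:

```python
# A minimal pre-deployment check for AI-generated Python code, using the
# standard-library ast module. It flags calls to eval/exec, which should
# always be reviewed by a human before the tool ships.
import ast

RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source):
    """Return (line number, call name) pairs for risky calls in source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

generated = "user_input = input()\nresult = eval(user_input)\n"
print(flag_risky_calls(generated))  # → [(2, 'eval')]
```

Even a check this crude establishes the principle: AI-generated code does not go live until something, human or automated, has looked at it.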
Building Security Cultures in Resource-Constrained Newsrooms
Implementing governance frameworks requires more than policies. It requires cultural change. Research from the Tow Center for Digital Journalism found that journalists and management tended to view security reactively, being more likely to engage in relevant practices after a breach had already happened. This reactive posture is precisely what newsrooms cannot afford with vibe-coded systems.
Several factors contribute to developing information security cultures in newsrooms. Investment in information security specialists who liaise with journalists about their specific needs proves valuable, as does providing both informal and formal security training. Newsroom leaders and educators have a particular responsibility to make digital security awareness a fixture in their newsrooms. Information security can no longer be an afterthought and must be recognised as a crucial element of modern journalism.
The digital security of publishers, journalists, and their sources is under threat in many parts of the world. Google experts discovered in 2014 that 21 of the world's 25 most popular media outlets were targets of state-sponsored hacking attempts. Journalists have experienced a wide range of threats, from phishing and distributed denial of service attacks to software and hardware exploits. The risks from internal vibe-coded vulnerabilities compound these external threats.
The practical challenge is that this expertise costs money that many newsrooms do not have. But alternatives exist. Industry associations can provide shared resources, as the Public Media Journalists Association has done by partnering with verification tool providers. Collaborative security initiatives can pool expertise across multiple small newsrooms. Foundation funding can support security infrastructure that no individual organisation could afford.
The Local Independent Online News Publishers organisation offers free access to verification tools, highlighting how industry coordination can address gaps that individual organisations cannot fill. Similar models could provide security review services, technical debt assessment, and governance framework templates specifically designed for journalism's needs.
Practical Recommendations for Managing These Risks
For news organisations navigating this landscape, several practical recommendations emerge from the evidence.
Start with documentation. Before any vibe-coded tool goes into production, require written documentation of its purpose, the prompts used to create it, known limitations, data it accesses, external services it depends upon, and the person responsible for its maintenance. Store this documentation in a shared location accessible to the entire organisation, not just the person who built the tool.
Establish scope boundaries. Create explicit policies about what vibe coding can and cannot touch. Authentication, encryption, source protection, and reader data should remain outside the scope of AI-generated code until the organisation has capacity for expert security review.
Invest in periodic review. Even organisations without full-time technical staff can budget for quarterly or annual expert review of critical AI-generated systems. This review should assess security vulnerabilities, architectural problems, and technical debt accumulation before they become crises.
Build redundancy into roles. If one person understands a critical vibe-coded system, train a second person. If only one person knows the prompts that maintain a workflow, document those prompts for others. Single points of failure in technical knowledge are as dangerous as single points of failure in hardware.
Plan for AI plateau scenarios. Assume that AI capabilities may not continue improving indefinitely. For any system that depends on AI assistance for maintenance, develop contingency plans for how that system would be maintained if the AI could not help.
Participate in industry coordination. Join industry groups developing shared resources for security, governance, and technical review. The costs of expertise can be shared across organisations in ways that make governance feasible even for constrained budgets.
Start small with pilots that solve clear, repeatable problems. Assign a business owner, keep oversight light but consistent, and review sample outputs. Train a few power users to share best practices across teams. Focus on small wins and gradual scaling rather than ambitious projects that create unmanageable complexity.
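The documentation recommendation above becomes enforceable when the record is structured data rather than a free-form wiki page. A minimal sketch, with illustrative field names, that refuses deployment while any required field is empty:

```python
# A lightweight way to make tool documentation enforceable: represent
# each tool's record as structured data and block deployment while any
# required field is missing. Field names here are illustrative.
from dataclasses import dataclass, fields

@dataclass
class ToolRecord:
    purpose: str
    prompts_used: str          # the prompts that produced the tool
    known_limitations: str
    data_accessed: str
    external_dependencies: str
    maintainer: str

def ready_to_deploy(record):
    """A tool may ship only when every documentation field is filled in."""
    return all(getattr(record, f.name).strip() for f in fields(record))

record = ToolRecord(
    purpose="Tags incoming wire stories by topic",
    prompts_used="See shared doc: tagging-tool-prompts.md",
    known_limitations="Breaks when the wire feed schema changes",
    data_accessed="Wire feed only; no reader data",
    external_dependencies="Wire API",
    maintainer="",  # unfilled, so deployment should be blocked
)
print(ready_to_deploy(record))  # → False
```

Stored in a shared repository, records like this survive the departure of the person who wrote the prompts.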
The Stakes for Editorial Credibility
The risks described here are not merely technical. They directly threaten the editorial credibility and reader trust that journalism depends upon.
A data breach exposing source identities would devastate an investigative unit's ability to function. A tool failure during breaking news would undermine audience confidence. An accumulation of technical debt that eventually cripples newsroom operations would reduce the organisation's capacity for journalism itself.
The promise of vibe coding is real. Domain experts building tools tailored to their actual needs represents genuine progress over waiting months for IT departments to prioritise newsroom requests. AI-powered automation can reduce the time journalists spend on administrative tasks and increase the time available for actual journalism.
But realising this promise requires acknowledging its risks. The shift from coding expertise to project management competency changes what knowledge organisations possess and what happens when that knowledge leaves. The accumulation of technical debt in systems nobody fully understands creates fragility that compounds over time. The security vulnerabilities embedded in AI-generated code represent ongoing exposure to threats that most newsrooms are not equipped to detect.
Governance is not the enemy of innovation. It is the framework that makes innovation sustainable. News organisations that embrace vibe coding without governance are building on foundations that may crumble precisely when they are needed most.
The transformation happening in journalism as AI enables non-programmers to build software tools is genuinely significant. But transformation without preparation creates risk. And in journalism, where institutional credibility is the essential asset, risk management is not optional.
The vibe coders will eventually leave. The AI capabilities may plateau. The technical debt will come due. The only question is whether news organisations will be prepared for that reckoning, or whether they will discover, too late, that they never built foundational understanding of the systems they depend on.
References and Sources
- Karpathy, A. (2025). Original “vibe coding” social media post, February 2025. https://x.com/karpathy/status/1886192184808149383
- Collins Dictionary (2025). Word of the Year 2025: Vibe Coding. https://www.collinsdictionary.com/us/woty
- CNN (2025). “'Vibe coding' named Collins Dictionary's Word of the Year.” November 2025. https://www.cnn.com/2025/11/06/tech/vibe-coding-collins-word-year-scli-intl
- TechCrunch (2025). “A quarter of startups in YC's current cohort have codebases that are almost entirely AI-generated.” March 2025. https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/
- CNBC (2025). “Y Combinator startups are fastest growing, most profitable in fund history because of AI.” March 2025. https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html
- Nieman Journalism Lab (2025). “AI will rewrite the architecture of the newsroom.” December 2025. https://www.niemanlab.org/2025/12/ai-will-rewrite-the-architecture-of-the-newsroom/
- Reuters Institute for the Study of Journalism (2026). “How will AI reshape the news in 2026? Forecasts by 17 experts from around the world.” https://reutersinstitute.politics.ox.ac.uk/news/how-will-ai-reshape-news-2026-forecasts-17-experts-around-world
- Project Management Institute (2025). Prompt Engineering for Project Managers. https://www.pmi.org/shop/p-/elearning/talking-to-ai-prompt-engineering-for-project-managers/el128
- Fast Company (2025). “The vibe coding hangover is upon us.” September 2025. https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us
- Veracode (2025). GenAI Code Security Report 2025. https://www.veracode.com/blog/genai-code-security-report/
- Help Net Security (2025). “AI can write your code, but nearly half of it may be insecure.” August 2025. https://www.helpnetsecurity.com/2025/08/07/create-ai-code-security-risks/
- Sinequa (2022). Survey on organisational knowledge loss from employee turnover. https://www.businesswire.com/news/home/20220802006132/en/Sinequa-Finds-Over-Two-Thirds-of-IT-Leaders-Are-Concerned-by-Organizational-Knowledge-Loss-From-Employee-Turnover
- CIO Dive (2022). “The other problem with too much tech talent turnover: knowledge loss.” https://www.ciodive.com/news/IT-knowledge-gap-retention/629832/
- InfoQ (2025). “AI-Generated Code Creates New Wave of Technical Debt, Report Finds.” November 2025. https://www.infoq.com/news/2025/11/ai-code-technical-debt/
- MIT Sloan Management Review (2025). “How to Manage Tech Debt in the AI Era.” https://sloanreview.mit.edu/article/how-to-manage-tech-debt-in-the-ai-era/
- Gartner (2025). AI Skills Report and technical debt predictions.
- OWASP (2025). Top 10 for LLM Applications 2025. https://genai.owasp.org/
- HEC Paris (2025). “AI Beyond the Scaling Laws.” https://www.hec.edu/en/dare/tech-ai/ai-beyond-scaling-laws
- Council on Foreign Relations (2026). “How 2026 Could Decide the Future of Artificial Intelligence.” https://www.cfr.org/article/how-2026-could-decide-future-artificial-intelligence
- IBM (2026). “The trends that will shape AI and tech in 2026.” https://www.ibm.com/think/news/ai-tech-trends-predictions-2026
- Freedom of the Press Foundation (2026). Digital Security Resources for Journalists. https://freedom.press/digisec/
- Global Cyber Alliance (2025). Cybersecurity Toolkit for Journalists. https://globalcyberalliance.org/work/gca-cybersecurity-toolkit/gca-cybersecurity-toolkit-for-journalists/
- Global Investigative Journalism Network (2025). Journalist Security Assessment Tool. https://gijn.org/resource/digital-security/
- Columbia Journalism Review (2020). “The Rise of the Security Champion: Beta-testing Newsroom Security Cultures.” https://www.cjr.org/tow_center_reports/security-cultures-champions.php
- Komprise (2025). IT Survey on Shadow AI Concerns.
- IBM (2025). Cost of Data Breach Report 2025.
- ISACA (2025). “The Rise of Shadow AI: Auditing Unauthorized AI Tools in the Enterprise.” https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-rise-of-shadow-ai-auditing-unauthorized-ai-tools-in-the-enterprise

Tim Green
UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk