The Vibe Coding Reckoning: When Speed Becomes Technical Debt at Scale

In February 2025, Andrej Karpathy, the former AI director at Tesla and founding engineer at OpenAI, posted something to X that would reshape how we talk about software development. “There's a new kind of coding I call 'vibe coding',” he wrote, “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He described using voice transcription to talk to AI assistants, clicking “Accept All” without reading the diffs, and copy-pasting error messages with no comment. When bugs proved stubborn, he would “just work around it or ask for random changes until it goes away.”
Within months, this approach had transformed from a personal workflow confession into a movement. By November 2025, Collins Dictionary had named “vibe coding” its Word of the Year, defining it as “using natural language prompts to have AI assist with the writing of computer code.” The lexicographers at Collins noted a large uptick in usage since the term first appeared, with managing director Alex Beecroft declaring it “perfectly captures how language is evolving alongside technology.”
The numbers behind this shift are staggering. According to Y Combinator managing partner Jared Friedman, a quarter of startups in the Winter 2025 batch had codebases that were 95% AI-generated. Google CEO Sundar Pichai revealed that more than 25% of all new code at Google was being generated by AI, then reviewed and accepted by engineers. Industry estimates suggest that 41% of all code written in 2025 was AI-generated, with data from Jellyfish indicating that almost half of companies now have at least 50% AI-generated code, compared to just 20% at the start of the year.
But beneath these impressive statistics lies a growing unease. What happens when the developers who built these systems cannot explain how they work, because they never truly understood them in the first place? What becomes of software maintainability when the dominant development methodology actively discourages understanding? And as AI-assisted developers increasingly outnumber traditionally trained engineers, who will possess the architectural discipline to recognise when something has gone terribly wrong?
The Maintainability Crisis Takes Shape
The first concrete evidence that vibe coding carries hidden costs arrived in May 2025, when security researcher Matt Palmer discovered a critical vulnerability in Lovable, one of the most prominent vibe coding platforms. The vulnerability, catalogued as CVE-2025-48757 with a CVSS score of 8.26 (High severity), stemmed from misconfigured Row Level Security policies in applications created through the platform.
Palmer's scan of 1,645 Lovable-created web applications revealed that 170 of them allowed anyone to access information about users, including names, email addresses, financial records (even personal debt amounts), and secret API keys for AI services. The vulnerability touched 303 endpoints, allowing unauthenticated attackers to read from and write to the databases of Lovable apps. In practice, all of this sensitive data was exposed to anyone who knew where to look.
The disclosure timeline proved equally troubling. Palmer emailed Lovable CEO Anton Osika with detailed vulnerability reports on 21 March 2025. Lovable confirmed receipt on 24 March but provided no substantive response. On 24 April, Lovable released “Lovable 2.0” with a new “security scan” feature. The scanner only flagged the presence of Row Level Security policies, not whether they actually worked. It failed to detect misconfigured policies, creating a false sense of security.
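The difference between a policy existing and a policy actually restricting access is easy to demonstrate. The sketch below is a hypothetical illustration (the policy texts and scanner logic are invented, not Lovable's): a presence-only scan approves a world-readable policy that even a slightly stricter check on the policy's USING clause would flag.

```python
# Hypothetical illustration: "an RLS policy exists" is not the same as
# "the RLS policy restricts access". Policy texts are invented examples.

PERMISSIVE_POLICY = """
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
CREATE POLICY "open_read" ON profiles
  FOR SELECT USING (true);  -- any caller, authenticated or not
"""

RESTRICTIVE_POLICY = """
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
CREATE POLICY "own_rows" ON profiles
  FOR SELECT USING (auth.uid() = user_id);  -- caller sees only their rows
"""

def naive_scan(policy_sql: str) -> bool:
    """Presence-only check: passes as long as any policy exists."""
    return "CREATE POLICY" in policy_sql

def stricter_scan(policy_sql: str) -> bool:
    """Also rejects policies whose USING clause is unconditionally true."""
    return naive_scan(policy_sql) and "USING (true)" not in policy_sql
```

The naive scan approves both policies, including the world-readable one; only the stricter check distinguishes them. Real security scanning requires evaluating what a policy permits, not whether one is present.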
The Lovable incident illuminates a fundamental problem: AI models generating code cannot yet see the big picture and scrutinise how that code will ultimately be used. Users of vibe coding platforms might not even know the right security questions to ask. The democratisation of software development had created a new class of developer who could build applications without understanding security fundamentals.
The Productivity Paradox Revealed
The promise of vibe coding rests on a seductive premise: by offloading the mechanical work of writing code to AI, developers can move faster and accomplish more. But a rigorous study published by METR (Model Evaluation and Threat Research) in July 2025 challenged this assumption in unexpected ways.
The study examined how AI tools at the February to June 2025 frontier affected productivity. Sixteen developers with moderate AI experience completed 246 tasks in mature projects where they had an average of five years of prior experience and 1,500 commits. The developers primarily used Cursor Pro with Claude 3.5/3.7 Sonnet, which were frontier models at the time of the study.
The results confounded expectations. Before starting tasks, developers forecast that allowing AI would reduce completion time by 24%. After completing the study, developers estimated that AI had reduced completion time by 20%. The actual measured result: allowing AI increased completion time by 19%. AI tooling had slowed developers down.
This gap between perception and reality is striking. Developers expected AI to speed them up, and even after experiencing the slowdown, they still believed AI had sped them up. The METR researchers identified several factors contributing to the slowdown: developers accepted less than 44% of AI generations, spending considerable time reviewing, testing, and modifying code only to reject it in the end. AI tools introduced “extra cognitive load and context-switching” that disrupted productivity. The researchers also noted that developers worked on mature codebases averaging 10 years old with over 1 million lines of code, environments where AI tools may be less effective than in greenfield projects.
The METR findings sit uneasily alongside data from DX's Q4 2025 report, which found that developers saved 3.6 hours weekly across a sample of more than 135,000 developers. But these savings came with significant caveats: the report revealed that context-related pain increases with experience, rising from 41% among junior developers to 52% among seniors. While some developers report productivity gains, the hard evidence remains mixed.
Trust Erodes Even as Adoption Accelerates
The productivity paradox reflects a broader pattern emerging across the industry: developers are adopting AI tools at accelerating rates while trusting them less. The Stack Overflow 2025 Developer Survey, which received over 49,000 responses from 177 countries, reveals this contradiction in stark terms.
While 84% of developers now use or plan to use AI tools in their development process (up from 76% in 2024), trust has declined sharply. Only 33% of developers trust the accuracy of AI tools, down from 43% in 2024, while 46% actively distrust it. A mere 3% report “highly trusting” the output. Positive sentiment for AI tools dropped from over 70% in 2023 and 2024 to just 60% in 2025.
Experienced developers are the most cautious, with the lowest “highly trust” rate (2.6%) and the highest “highly distrust” rate (20%), reflecting a widespread insistence on human verification among those in roles with accountability.
The biggest frustration, cited by 66% of developers, is dealing with “AI solutions that are almost right, but not quite.” This leads directly to the second-biggest frustration: “Debugging AI-generated code is more time-consuming,” reported by 45% of respondents. An overwhelming 75% said they would still ask another person for help when they do not trust AI's answers. About 35% of developers report that their visits to Stack Overflow are a result of AI-related issues at least some of the time.
Perhaps most telling for the enterprise adoption question: developers show the strongest resistance to using AI for high-responsibility, systemic tasks like deployment and monitoring (76% do not plan to use AI for this) and project planning (69% do not plan to). AI agents are not yet mainstream, with 52% of developers either not using agents or sticking to simpler AI tools, and 38% having no plans to adopt them.
Google's 2024 DORA (DevOps Research and Assessment) report found a troubling trade-off: while a 25% increase in AI usage quickened code reviews and benefited documentation, it resulted in a 7.2% decrease in delivery stability. The 2025 DORA report confirmed that AI adoption continues to have a negative relationship with software delivery stability, noting that “AI acts as an amplifier, increasing the strength of high-performing organisations but worsening the dysfunction of those that struggle.”
Technical Debt Accumulates at Unprecedented Scale
These trust issues and productivity paradoxes might be dismissed as growing pains if the code being produced were fundamentally sound. But the consequences of rapid AI-generated code deployment are becoming measurable, and the data points toward a structural problem.
GitClear's 2025 research, which analysed 211 million changed lines of code from repositories owned by Google, Microsoft, Meta, and other enterprise corporations, found four times more code cloning, with “copy/paste” exceeding “moved” code for the first time in the metric's history.
During 2024, GitClear tracked an eightfold increase in the frequency of code blocks with five or more lines that duplicate adjacent code, showing a prevalence of code duplication ten times higher than two years ago. Lines classified as “copy/pasted” (cloned) rose from 8.3% to 12.3% between 2021 and 2024. The percentage of changed code lines associated with refactoring sank from 25% of changed lines in 2021 to less than 10% in 2024, with predictions for 2025 suggesting refactoring will represent little more than 3% of code changes.
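To make concrete what such measurements track, here is a naive sketch of a duplicated-block counter (this is an illustration, not GitClear's methodology): slide a five-line window over a file and count windows that occur more than once.

```python
from collections import Counter

def cloned_block_count(source: str, window: int = 5) -> int:
    """Counts extra occurrences of runs of `window` consecutive
    non-blank lines that appear more than once (a crude clone detector)."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    blocks = Counter(
        tuple(lines[i:i + window]) for i in range(len(lines) - window + 1)
    )
    # Each occurrence beyond the first counts as one clone.
    return sum(count - 1 for count in blocks.values() if count > 1)
```

Production analyses are far more sophisticated (normalising identifiers, distinguishing moved code from copied code across commits), but even this crude measure shows how duplication becomes a quantifiable trend rather than an anecdote.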
“What we're seeing is that AI code assistants excel at adding code quickly, but they can cause 'AI-induced tech debt,'” explained GitClear founder Bill Harding. “This presents a significant challenge for DevOps teams that prioritise maintainability and long-term code health.”
A report from Ox Security found that AI-generated code is “highly functional but systematically lacking in architectural judgment.” This aligns with observations that code assistants make it easy to insert new blocks of code simply by pressing the tab key, but they are less likely to propose reusing a similar function elsewhere in the code, partly because of limited context size.
The financial implications are substantial. McKinsey research indicates that technical debt accounts for about 40% of IT balance sheets, with organisations carrying heavy technical debt losing up to 20% to 40% of their IT budgets to maintenance, leaving far less for genuine innovation. Companies pay an additional 10 to 20% to address tech debt on top of the costs of any project.
Armando Solar-Lezama, a professor at MIT specialising in program synthesis, offered a colourful assessment in remarks widely cited across the industry: AI represents a “brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before.”
When the Bill Comes Due
In September 2025, Fast Company reported that the “vibe coding hangover” was upon us. “Code created by AI coding agents can become development hell,” said Jack Zante Hays, a senior software engineer at PayPal who works on AI software development tools. He noted that while the tools can quickly spin up new features, they often generate technical debt, introducing bugs and maintenance burdens that must eventually be addressed by human developers.
The article documented a growing phenomenon, developers struggling to maintain systems that had been easy to create but proved difficult to extend: “Vibe coding (especially from non-experienced users who can only give the AI feature demands) can involve changing like 60 things at once, without testing, so 10 things can be broken at once.” Unlike a human engineer who methodically tests each addition, vibe-coded software often struggles to adapt once it is live, particularly when confronted with real-world edge cases.
By the fourth quarter of 2025, the industry began experiencing what experts call a structural reckoning. LinkedIn searches for “Vibe Coding Cleanup Specialist” reveal dozens of programmers advertising their services as digital janitors for the AI coding revolution. As one consultancy describes it: “Companies increasingly turn to such specialists to rescue projects where AI code is raw, without proper architecture and security. Those who made demos now call in seniors to make the code stable and secure.”
Y Combinator CEO Garry Tan raised this question directly: “Suppose a startup with 95% AI-generated code successfully goes public and has 100 million users a year or two later. Will it crash? Current reasoning models aren't strong enough for debugging. So founders must have a deep understanding of the product.”
The Disappearing Pipeline for Engineering Talent
The impact of vibe coding extends beyond code quality into workforce dynamics, threatening the very mechanisms by which engineering expertise has traditionally been developed. A Stanford University study titled “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” authored by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, examined anonymised monthly payroll data from ADP covering millions of workers across tens of thousands of US firms through July 2025.
The findings are stark: employment for software developers aged 22 to 25 declined by nearly 20% from its peak in late 2022. Workers aged 22 to 25 are the most exposed to artificial intelligence overall, suffering a 13% decline in employment, and early career workers in the most AI-exposed occupations (such as software engineering, marketing, and customer service) have experienced a 16% relative decline in employment even after controlling for firm-level impacts.
Meanwhile, the employment rates of older workers in high AI-exposure fields are holding strong. For workers aged 30 and over, employment in the highest AI-exposure categories grew between 6% and 12% from late 2022 to May 2025. One interpretation offered by the researchers is that while younger employees contribute primarily “codified knowledge” from their education (something AI can replicate), more experienced workers lean on tacit knowledge developed through years on the job, which remains less vulnerable to automation.
A 2025 Harvard study on seniority-biased technological change, in which two Harvard economists analysed 62 million LinkedIn profiles and 200 million job postings, found that in firms using generative AI, junior employment “declines sharply” relative to non-adopters. The loss was concentrated in occupations highly exposed to AI and was driven by slower hiring, not increased firing. The researchers interpret this as evidence that AI-adopting companies largely skipped hiring new graduates for the tasks the AI handled.
The traditional pathway of “learn to code, get junior job, grow into senior” is wobbling. Year-over-year, internships across all industries have decreased 11%, according to Indeed. Handshake, an internship recruitment platform, reported a 30% decline in tech-specific internship postings since 2023. Per the Federal Reserve report on labour market outcomes, computer engineering graduates now have one of the highest rates of unemployment across majors, at 7.5% (higher even than fine arts degree holders).
The Expertise Atrophy Loop
The junior employment crisis connects directly to a deeper concern: fundamental skill atrophy. If developers stop writing code manually, will they lose the ability to understand and debug complex systems? And if the pipeline for developing new senior engineers dries up, who will maintain the increasingly complex systems that vibe coding creates?
Luciano Nooijen, an engineer at the video-game infrastructure developer Companion Group, used AI tools heavily in his day job. But when he began a side project without access to those tools, he found himself struggling with tasks that previously came naturally. “I was feeling so stupid because things that used to be instinct became manual, sometimes even cumbersome,” he told MIT Technology Review. Just as athletes still perform basic drills, he thinks the only way to maintain an instinct for coding is to regularly practice the grunt work.
Developer discourse in 2025 was split. Some admitted they hardly ever write code “by hand” and think coding interviews should evolve. Others argued that skipping fundamentals leads to more firefighting when AI's output breaks. The industry is starting to expect engineers to bring both: AI speed and foundational wisdom for quality.
Y Combinator partner Diana Hu pointed out that even with heavy AI reliance, developers still need a crucial skill: reading code and identifying errors. “You have to have taste, enough training to judge whether the LLM output is good or bad.”
This creates a troubling paradox. The pathway to developing “taste” (the intuition that distinguishes quality code from problematic code) has traditionally come through years of hands-on coding experience. If vibe coding removes that pathway, how will the next generation of developers develop the judgement necessary to evaluate AI-generated output?
Building Guardrails That Preserve the Learning Journey
The question of whether organisations should establish guardrails preserving the learning journey and architectural discipline of traditional coding is no longer theoretical. By 2025, 87% of enterprises lacked comprehensive AI security frameworks, according to Gartner research. Governance frameworks matter more for AI code generation than for traditional development tools because the technology introduces new categories of risk.
Several intervention strategies have emerged from organisations grappling with vibe coding's consequences.
Layered verification architectures represent one approach. Critical core components receive full human review, while peripheral functionality uses lighter-weight validation. AI can generate code in outer layers, subject to interface contracts defined by verified inner layers. Input access layers ensure only authorised users interact with the system and validate their prompts for malicious injection attempts. Output layers scan generated code for security vulnerabilities and non-compliance with organisational style through static analysis tools.
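A minimal sketch of the input and output layers described above might look like the following. The patterns are illustrative placeholders, and a real deployment would use curated rule sets and proper static analysis rather than regexes:

```python
import re

# Illustrative patterns only; real systems maintain curated, tested rule sets.
PROMPT_INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]
RISKY_CODE_PATTERNS = [
    r"\beval\s*\(",                        # arbitrary code execution
    r"verify\s*=\s*False",                 # disabled TLS verification
    r"(api|secret)[_-]?key\s*=\s*['\"]",   # hard-coded credentials
]

def validate_prompt(prompt: str) -> bool:
    """Input layer: reject prompts matching known injection patterns."""
    return not any(
        re.search(p, prompt, re.IGNORECASE) for p in PROMPT_INJECTION_PATTERNS
    )

def scan_generated_code(code: str) -> list[str]:
    """Output layer: return the risky patterns found in generated code."""
    return [p for p in RISKY_CODE_PATTERNS if re.search(p, code)]
```

The point is the layering, not the specific checks: generated code never reaches the repository without passing through the output layer, and prompts never reach the model without passing through the input layer.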
Contract-first development offers another model. Rather than generating code directly from natural language, developers first specify formal contracts (preconditions, postconditions, invariants) that capture intent. AI then generates implementation code that is automatically checked against these contracts. This approach draws on Bertrand Meyer's Design by Contract methodology from the 1980s, which prescribes that software designers should define formal, precise, and verifiable interface specifications for software components.
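As a sketch of how contract checking might look in practice (the `contract` decorator below is a hypothetical minimal helper, not Meyer's Eiffel mechanism or any particular library):

```python
import functools

def contract(pre=None, post=None):
    """Attach precondition/postcondition checks to a function.

    `pre` receives the call arguments; `post` receives the return value.
    """
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ValueError(f"precondition violated in {fn.__name__}")
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise ValueError(f"postcondition violated in {fn.__name__}")
            return result
        return wrapper
    return decorate

# The contract states intent: non-empty input, non-negative result.
# An AI-generated implementation is then checked on every call.
@contract(pre=lambda prices: len(prices) > 0,
          post=lambda result: result >= 0)
def price_range(prices):
    return max(prices) - min(prices)
```

Here `price_range([3, 9, 5])` returns 6, while `price_range([])` fails the precondition before the implementation runs, surfacing the violation as a contract error rather than an obscure downstream bug. The contract, not the generated code, becomes the artefact the developer is responsible for understanding.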
Operational safety boundaries prevent AI-generated code from reaching production without human review. All AI-generated changes go through established merge request and review processes. Admin controls block forbidden commands, and configurable human touchpoints exist within workflows based on customer impact.
The code review bottleneck presents its own challenges. As engineering teams discover, the sheer volume of code now being churned out is quickly saturating the ability of midlevel staff to review changes. Senior engineers, who have deeper mental models of their codebase, see the largest quality gains from AI (60%) but also report the lowest confidence in shipping AI-generated code (22%).
Economic Pressure Versus Architectural Discipline
The economic pressure toward speed is undeniable, and it creates structural incentives that directly conflict with maintainability. Y Combinator CEO Garry Tan told CNBC that the Winter 2025 batch of YC companies in aggregate grew 10% per week, and it was not just the top one or two companies but the whole batch. “That's never happened before in early-stage venture.”
“What that means for founders is that you don't need a team of 50 or 100 engineers. You don't have to raise as much. The capital goes much longer,” Tan explained. About 80% of the YC companies that presented at Demo Day were AI-focused, with this group able to prove earlier commercial validation compared to previous generations.
But this very efficiency creates structural incentives that work against long-term sustainability. Forrester predicts that by 2025, more than 50% of technology decision-makers will face moderate to severe technical debt, with that number expected to hit 75% by 2026. Industry analysts predict that by 2027, 75% of organisations will face systemic failures due to unmanaged technical debt.
The State of Software Delivery 2025 report by software vendor Harness found that, contrary to perceived productivity benefits, the majority of developers spend more time debugging AI-generated code and more time resolving security vulnerabilities. If the current trend in code churn continues (now at 7.9% of all newly added code revised within two weeks, compared to just 5.5% in 2020), GitClear predicts defect remediation may become the leading day-to-day developer responsibility.
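The churn metric itself is straightforward to state: the share of newly added lines that are revised again within two weeks of landing. A toy calculation (the line records and dates below are invented for illustration):

```python
from datetime import date, timedelta

def churn_rate(line_records, horizon_days=14):
    """Fraction of added lines revised within `horizon_days` of being added.

    `line_records` is a list of (added_on, revised_on) date pairs,
    with revised_on set to None for lines never touched again.
    """
    churned = sum(
        1 for added_on, revised_on in line_records
        if revised_on is not None
        and (revised_on - added_on) <= timedelta(days=horizon_days)
    )
    return churned / len(line_records)

# Four lines added: one revised after 3 days, one after 30, two untouched.
records = [
    (date(2025, 6, 1), date(2025, 6, 4)),
    (date(2025, 6, 1), date(2025, 7, 1)),
    (date(2025, 6, 2), None),
    (date(2025, 6, 3), None),
]
```

On this invented sample the rate is 0.25; GitClear's figure of 7.9% applies the same idea at the scale of entire repositories, where each revised-within-two-weeks line represents review, debugging, and rework that the initial velocity numbers never show.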
The software craftsmanship manifesto, established in 2008 by developers meeting in Libertyville, Illinois, articulated values that seem increasingly relevant: not only working software, but also well-crafted software; not only responding to change, but also steadily adding value; not only individuals and interactions, but also a community of professionals.
As Tabnine's analysis observed: “Vibe coding is what happens when AI is applied indiscriminately, without structure, standards, or alignment to engineering principles. Developers lean on generative tools to create code that 'just works.' It might compile. It might even pass a test. But in enterprise environments, where quality and compliance are non-negotiable, this kind of code is a liability, not a lift.”
Structural Interventions That Could Realign Development Practice
What structural or cultural interventions could realign development practices toward meaningful problem-solving over rapid code generation? Several approaches warrant consideration.
First, educational reform must address the skills mismatch. The five core skills shaping engineering in 2026 are context engineering, retrieval-augmented generation, AI agents, AI evaluation, and AI deployment and scaling. By 2026, the most valuable engineers are no longer those who write the best prompts but those who understand how to build systems around models. Junior developers are advised to use AI as a learning tool, not a crutch: review why suggested code works and identify its weaknesses, occasionally disable AI helpers and write key algorithms from scratch, prioritise computer science fundamentals, implement projects twice (once with AI, once without), and practise rigorous testing.
Second, organisations need governance frameworks that treat AI-generated code differently from human-written code. Rather than accepting it as a black box, organisations should require that AI-generated code be accompanied by formal specifications, proofs of key properties, and comprehensive documentation that explains not just what the code does but why it does it. The DORA AI Capabilities Model identifies seven technical and cultural best practices for AI adoption: clear communication of AI usage policies, high-quality internal data, AI access to that data, strong version control, small batches of work, user-centric focus, and a high-quality internal platform.
Third, the code review process must evolve. AI reviewers are emerging as a solution to bridge the gap between code generation speed and review capacity. Instead of waiting hours or days for a busy senior developer to give feedback, an AI reviewer can respond within minutes. The answer emerging from practice involves treating AI reviewers as a first-pass filter that catches obvious issues while preserving human review for architectural decisions and security considerations.
Fourth, organisations must invest in maintaining architectural expertise. Successful companies allocate 15% to 20% of budget and sprint capacity systematically to debt reduction, treating it as a “lifestyle change” rather than a one-time project. McKinsey noted that “some companies find that actively managing their tech debt frees up engineers to spend up to 50 percent more of their time on work that supports business goals.”
The Cultural Dimension of Software Quality
Beyond structural interventions, the question is fundamentally cultural. Will the industry value the craftsmanship that comes from understanding systems deeply, or will economic pressure normalise technical debt accumulation at scale?
The signals are mixed. On one hand, the vibe coding hangover suggests market correction is already occurring. Companies that moved fast and broke things are now paying for expertise to fix what they broke. The emergence of “vibe coding cleanup specialists” represents market recognition that speed without sustainability is ultimately expensive.
On the other hand, the competitive dynamics favour speed. When Y Combinator startups grow 10% per week using 95% AI-generated code, the pressure on competitors to match that velocity is intense. The short-term rewards for vibe coding are visible and immediate; the long-term costs are diffuse and deferred.
The craftsmanship movement offers a counternarrative. Zed's blog captured this perspective: “Most people are talking about how AI can help us make software faster and help us make more software. As craftspeople, we should look at AI and ask, 'How can this help me build better software?'” A gnarly codebase hinders not only human ability to work in it but also the ability of AI tools to be effective in it.
Perhaps the most significant intervention would be changing how we measure success. Currently, the industry celebrates velocity: lines of code generated, features shipped, time to market. What if we equally celebrated sustainability: code that remains maintainable over time, systems that adapt gracefully to changing requirements, architectures that future developers can understand and extend?
Where the Reckoning Leads
The proliferation of vibe coding as a dominant development methodology threatens long-term software maintainability in ways that are now empirically documented. Code duplication is up fourfold. Refactoring has collapsed from 25% to potentially 3% of changes. Delivery stability decreases as AI adoption increases. Junior developer employment has fallen by 20% while the pathway to developing senior expertise narrows.
The question of whether organisations should establish guardrails is no longer open. The evidence indicates they must, or face the consequences documented in security incidents, technical debt accumulation, and the structural erosion of engineering expertise.
Whether economic pressure toward speed will inevitably normalise technical debt at scale depends on choices yet to be made. Markets can correct when costs become visible, and the vibe coding hangover suggests that correction has begun. But markets also systematically underweight future costs relative to present benefits, and the current incentive structures favour speed over sustainability.
The interventions that could realign development practices toward meaningful problem-solving are known: layered verification architectures, contract-first development, operational safety boundaries, educational reform emphasising fundamentals alongside AI fluency, governance frameworks that require documentation and review of AI-generated code, investment in architectural expertise, and cultural shifts that value sustainability alongside velocity.
The path forward requires preserving what traditional coding cultivates (the learning journey, the architectural discipline, the deep understanding of systems) while embracing the productivity gains that AI assistance offers. This is not a binary choice between vibe coding and craftsmanship. It is the harder work of integration: using AI to augment human expertise rather than replace it, maintaining the feedback loops that develop judgement, and building organisations that value both speed and sustainability.
The stakes extend beyond any individual codebase. As software mediates an ever-larger share of human activity, the quality of that software matters profoundly. Systems that cannot be maintained will eventually fail. Systems that no one understands will fail in ways no one can predict. The reckoning that began in 2025 is just the beginning of a longer conversation about what we want from the software that shapes our world.
References and Sources
Karpathy, A. (2025, February 2). Twitter/X post introducing vibe coding. https://x.com/karpathy/status/1886192184808149383
Collins Dictionary. (2025). Collins Word of the Year 2025: Vibe Coding. https://www.collinsdictionary.com/us/woty
CNN. (2025, November 6). 'Vibe coding' named Collins Dictionary's Word of the Year. https://www.cnn.com/2025/11/06/tech/vibe-coding-collins-word-year-scli-intl
TechCrunch. (2025, March 6). A quarter of startups in YC's current cohort have codebases that are almost entirely AI-generated. https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/
CNBC. (2025, March 15). Y Combinator startups are fastest growing, most profitable in fund history because of AI. https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html
METR. (2025, July 10). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Stack Overflow. (2025). 2025 Stack Overflow Developer Survey. https://survey.stackoverflow.co/2025/
Stack Overflow Blog. (2025, December 29). Developers remain willing but reluctant to use AI: The 2025 Developer Survey results are here. https://stackoverflow.blog/2025/12/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here
Palmer, M. (2025). Statement on CVE-2025-48757. https://mattpalmer.io/posts/statement-on-CVE-2025-48757/
Security Online. (2025). CVE-2025-48757: Lovable's Row-Level Security Breakdown Exposes Sensitive Data Across Hundreds of Projects. https://securityonline.info/cve-2025-48757-lovables-row-level-security-breakdown-exposes-sensitive-data-across-hundreds-of-projects/
GitClear. (2025). AI Copilot Code Quality: 2025 Data Suggests 4x Growth in Code Clones. https://www.gitclear.com/ai_assistant_code_quality_2025_research
Google Cloud Blog. (2024). Announcing the 2024 DORA report. https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report
Google Cloud Blog. (2025). Announcing the 2025 DORA Report. https://cloud.google.com/blog/products/ai-machine-learning/announcing-the-2025-dora-report
McKinsey. (2024). Tech debt: Reclaiming tech equity. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/tech-debt-reclaiming-tech-equity
Fast Company. (2025, September). The vibe coding hangover is upon us. https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us
Final Round AI. (2025). Young Software Developers Losing Jobs to AI, Stanford Study Confirms. https://www.finalroundai.com/blog/stanford-study-shows-young-software-developers-losing-jobs-to-ai
Stack Overflow Blog. (2025, December 26). AI vs Gen Z: How AI has changed the career pathway for junior developers. https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/
MIT Technology Review. (2025, December 15). AI coding is now everywhere. But not everyone is convinced. https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/
InfoQ. (2025, November). AI-Generated Code Creates New Wave of Technical Debt, Report Finds. https://www.infoq.com/news/2025/11/ai-code-technical-debt/
The New Stack. (2025). Is AI Creating a New Code Review Bottleneck for Senior Engineers? https://thenewstack.io/is-ai-creating-a-new-code-review-bottleneck-for-senior-engineers/
Tabnine Blog. (2025). A Return to Craftsmanship in Software Engineering. https://www.tabnine.com/blog/a-return-to-craftsmanship-in-the-age-of-ai-for-software-engineering/
Zed Blog. (2025). The Case for Software Craftsmanship in the Era of Vibes. https://zed.dev/blog/software-craftsmanship-in-the-era-of-vibes
Manifesto for Software Craftsmanship. (2009). https://manifesto.softwarecraftsmanship.org/
DX. (2025). AI-assisted engineering: Q4 impact report. https://getdx.com/blog/ai-assisted-engineering-q4-impact-report-2025/
Jellyfish. (2025). 2025 AI Metrics in Review: What 12 Months of Data Tell Us About Adoption and Impact. https://jellyfish.co/blog/2025-ai-metrics-in-review/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk