AI Code Needs Oversight: What Actually Keeps Architecture Standing

The promise was seductive: AI that writes code faster than any human, accelerating development cycles and liberating engineers from tedious boilerplate. The reality, as thousands of development teams have discovered, is considerably more complicated. According to the JetBrains State of Developer Ecosystem 2025 survey of nearly 25,000 developers, 85% now regularly use AI tools for coding and development. Yet Stack Overflow's 2025 Developer Survey reveals that only 33% of developers trust the accuracy of AI output, down from 43% in 2024. More developers actively distrust AI tools (46%) than trust them.
This trust deficit tells a story that productivity metrics alone cannot capture. While GitHub reports developers code 55% faster with Copilot and McKinsey studies suggest tasks can be completed twice as quickly with generative AI assistance, GitClear's analysis of 211 million changed lines of code reveals a troubling counter-narrative. The percentage of code associated with refactoring has plummeted from 25% in 2021 to less than 10% in 2024. Duplicated code blocks increased eightfold. For the first time in GitClear's measurement history, copy-pasted lines exceeded refactored lines.
The acceleration is real. So is the architectural degradation it enables.
What emerges from this data is not a simple story of AI success or failure. It is a more nuanced picture of tools that genuinely enhance productivity when deployed with discipline but create compounding problems when adopted without appropriate constraints. The developers and organisations navigating this landscape successfully share a common understanding: AI coding assistants require guardrails, architectural oversight, and deliberate workflow design to deliver sustainable value.
The Feature Creep Accelerator
Feature creep has plagued software development since the industry's earliest days. Wikipedia defines it as the excessive ongoing expansion or addition of new features beyond the original scope, often resulting in software bloat and over-complication rather than simple design. It is considered the most common source of cost and schedule overruns and can endanger or even kill products and projects. What AI coding assistants have done is not create this problem, but radically accelerate its manifestation.
Consider the mechanics. A developer prompts an AI assistant to add a user authentication feature. The AI generates functional code within seconds. The developer, impressed by the speed and apparent correctness, accepts the suggestion. Then another prompt, another feature, another quick acceptance. The velocity feels exhilarating. The Stack Overflow survey confirms this pattern: 84% of developers now use or plan to use AI tools in their development process. The JetBrains survey reports that 74% cite increased productivity as AI's primary benefit, with 73% valuing faster completion of repetitive tasks.
But velocity without direction creates chaos. Google's 2024 DORA report found that while AI adoption increased individual output, with 21% more tasks completed and 98% more pull requests merged, organisational delivery metrics remained flat. More alarmingly, AI adoption correlated with a 7.2% reduction in delivery stability. The 2025 DORA report confirms this pattern persists: AI adoption continues to have a negative relationship with software delivery stability. As the DORA researchers concluded, speed without stability is accelerated chaos.
The mechanism driving this instability is straightforward. AI assistants optimise for immediate task completion. They generate code that works in isolation but lacks awareness of broader architectural context. Each generated component may function correctly yet contradict established patterns elsewhere in the codebase. One function uses promises, another async/await, a third callbacks. Database queries are parameterised in some places and built from concatenated strings in others. Error handling varies wildly between endpoints.
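A contrived sketch makes the drift concrete. The two handlers below could plausibly appear in the same AI-assisted codebase: the first follows the parameterised convention, the second quietly abandons it. The table and function names are purely illustrative.

```python
import sqlite3

# Hypothetical example: two generated handlers answering the same question
# with contradictory conventions in the same codebase.

def get_user_safe(conn: sqlite3.Connection, email: str):
    # Parameterised query: the driver handles escaping.
    cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()

def get_user_unsafe(conn: sqlite3.Connection, email: str):
    # Concatenated query: works on happy-path input, but invites SQL injection
    # and diverges from the pattern used elsewhere in the codebase.
    cur = conn.execute("SELECT id, email FROM users WHERE email = '" + email + "'")
    return cur.fetchone()
```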
This is not a failing of AI intelligence. It reflects a fundamental mismatch between how AI assistants operate and how sustainable software architecture develops. The Qodo State of AI Code Quality report identifies missing context as the top issue developers face, reported by 65% during refactoring and approximately 60% during test generation and code review. Only 3.8% of developers report experiencing both low hallucination rates and high confidence in shipping AI-generated code without human review.
Establishing Effective Guardrails
The solution is not to abandon AI assistance but to contain it within structures that preserve architectural integrity. CodeScene's research demonstrates that unhealthy code exhibits 15 times more defects, requires twice the development time, and creates 10 times more delivery uncertainty compared to healthy code. Their approach involves implementing guardrails across three dimensions: code quality, code familiarity, and test coverage.
The first guardrail dimension addresses code quality directly. Every line of code, whether AI-generated or handwritten, undergoes automated review against defined quality standards. CodeScene's CodeHealth Monitor detects over 25 code smells including complex methods and God functions. When AI or a human introduces issues, the monitor flags them instantly before the code reaches the main branch. This creates a quality gate that treats AI-generated code with the same scrutiny applied to human contributions.
The quality dimension requires teams to define their code quality standards explicitly and automate enforcement via pull request reviews. A 2023 study found that popular AI assistants generate correct code in only 31.1% to 65.2% of cases. Similarly, CodeScene's Refactoring vs. Refuctoring study found that AI breaks code in two out of three refactoring attempts. These statistics make quality gates not optional but essential.
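What such a gate looks like varies by team. The sketch below, built only on Python's standard library, illustrates the idea: it parses the files touched by a pull request and fails the check when any function exceeds length or branching thresholds. The thresholds are assumptions for illustration, not CodeScene's actual rules.

```python
import ast
import sys
from pathlib import Path

# Assumed thresholds for illustration; tune to the team's own standards.
MAX_FUNCTION_LINES = 60
MAX_BRANCHES = 10

def check_file(path: Path) -> list[str]:
    violations = []
    tree = ast.parse(path.read_text(), filename=str(path))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                           for n in ast.walk(node))
            if length > MAX_FUNCTION_LINES:
                violations.append(f"{path}:{node.lineno} {node.name} is {length} lines long")
            if branches > MAX_BRANCHES:
                violations.append(f"{path}:{node.lineno} {node.name} has {branches} branch points")
    return violations

if __name__ == "__main__":
    problems = [v for f in sys.argv[1:] for v in check_file(Path(f))]
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the pull request check
```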
The second dimension concerns code familiarity. Research from the 2024 DORA report reveals that 39% of respondents reported little to no trust in AI-generated code. This distrust correlates with experience level: senior developers show the lowest “highly trust” rate at 2.6% and the highest “highly distrust” rate at 20%. These experienced developers have learned through hard experience that AI suggestions require verification. Guardrails should institutionalise this scepticism by requiring review from developers familiar with affected areas before AI-generated changes merge.
The familiarity dimension serves another purpose: knowledge preservation. When AI generates code that bypasses human understanding, organisations lose institutional knowledge about how their systems work. When something breaks at 3 a.m. and the code was generated by an AI six months ago, can the on-call engineer actually understand what is failing? Can they trace through the logic and implement a meaningful fix without resorting to trial and error?
The third dimension emphasises test coverage. The Ox Security report titled “Army of Juniors: The AI Code Security Crisis” identified 10 architecture and security anti-patterns commonly found in AI-generated code. Comprehensive test suites serve as executable documentation of expected behaviour. When AI-generated code breaks tests, the violation becomes immediately visible. When tests pass, developers gain confidence that at least basic correctness has been verified.
Enterprise adoption requires additional structural controls. The 2026 regulatory landscape, with the EU AI Act's high-risk provisions taking effect in August and penalties reaching 35 million euros or 7% of global revenue, demands documented governance. AI governance committees have become standard in mid-to-large enterprises, with structured intake processes covering security, privacy, legal compliance, and model risk.
Preventing Architectural Drift
Architectural coherence presents a distinct challenge from code quality. A codebase can pass all quality metrics while still representing a patchwork of inconsistent design decisions. The term “vibe coding” has emerged to describe an approach where developers accept AI-generated code without fully understanding it, relying solely on whether the code appears to work.
The consequences of architectural drift compound over time. A September 2025 Fast Company report quoted senior software engineers describing “development hell” when working with AI-generated code. One developer's experience became emblematic: “Random things are happening, maxed out usage on API keys, people bypassing the subscription.” Eventually: “Cursor keeps breaking other parts of the code,” and the application was permanently shut down.
Research examining ChatGPT-generated code found that only five out of 21 programs were initially secure when tested across five programming languages. Missing input sanitisation emerged as the most common flaw, while Cross-Site Scripting failures occurred 86% of the time and Log Injection vulnerabilities appeared 88% of the time. These are not obscure edge cases but fundamental security flaws that any competent developer should catch during code review.
Preventing this drift requires explicit architectural documentation that AI assistants can reference. A recommended approach involves creating a context directory containing specialised documents: a Project Brief for core goals and scope, Product Context for user experience workflows and business logic, System Patterns for architecture decisions and component relationships, Tech Context for the technology stack and dependencies, and Progress Tracking for working features and known issues.
This Memory Bank approach addresses AI's fundamental limitation: forgetting implementation choices made earlier when working on large projects. AI assistants lose track of architectural decisions, coding patterns, and overall project structure, creating inconsistency as project complexity increases. By maintaining explicit documentation that gets fed into every AI interaction, teams can maintain consistency even as AI generates new code.
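One minimal way to operationalise this, assuming a context directory of markdown files named along the lines described above, is to assemble those documents into every prompt sent to the assistant. The file names and wording below are illustrative, not a prescribed standard.

```python
from pathlib import Path

# Hypothetical Memory Bank file names mirroring the documents described above;
# adjust to whatever the team actually maintains.
CONTEXT_FILES = [
    "project_brief.md",
    "product_context.md",
    "system_patterns.md",
    "tech_context.md",
    "progress.md",
]

def build_prompt(task: str, context_dir: str = "context") -> str:
    sections = []
    for name in CONTEXT_FILES:
        path = Path(context_dir) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text().strip()}")
    context = "\n\n".join(sections)
    return (
        "Follow the architectural decisions and patterns documented below. "
        "Flag any conflict instead of silently deviating.\n\n"
        f"{context}\n\n## Task\n{task}"
    )

# Usage: pass build_prompt("Add rate limiting to the login endpoint")
# to whichever assistant the team uses.
```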
The human role in this workflow resembles a navigator in pair programming. The navigator directs overall development strategy, makes architectural decisions, and reviews AI-generated code. The AI functions as the driver, generating code implementations and suggesting refactoring opportunities. The critical insight is treating AI as a junior developer beside you: capable of producing drafts, boilerplate, and solid algorithms, but lacking the deep context of your project.
Breaking Through Repetitive Problem-Solving Patterns
Every developer who has used AI coding assistants extensively has encountered the phenomenon: the AI gets stuck in a loop, generating the same incorrect solution repeatedly, each attempt more confidently wrong than the last. The 2025 Stack Overflow survey captures this frustration, with 66% of developers citing “AI solutions that are almost right, but not quite” as their top frustration. Meanwhile, 45% report that debugging AI-generated code takes more time than expected. These frustrations have driven 35% of developers to turn to Stack Overflow specifically after AI-generated code fails.
The causes of these loops are well documented. VentureBeat's analysis of why AI coding agents are not production-ready identifies brittle context windows, broken refactors, and missing operational awareness as primary culprits. When AI exceeds its context limit, it loses track of previous attempts and constraints. It regenerates similar solutions because the underlying prompt and available context have not meaningfully changed.
Several strategies prove effective for breaking these loops. The first involves starting fresh with new context. Opening a new chat session can help the AI think more clearly without the baggage of previous failed attempts in the prompt history. This simple reset often proves more effective than continued iteration within a corrupted context.
The second strategy involves switching to analysis mode. Rather than asking the AI to fix the problem immediately, developers describe the situation and request diagnosis and explanation. Framed this way, the AI produces analysis or a plan rather than directly modifying code. This shift in mode often reveals the underlying issue that prevented the AI from generating a correct solution.
Version control provides the third strategy. Committing a working state before adding new features or accepting AI fixes creates reversion points. When a loop begins, developers can quickly return to the last known good version rather than attempting to untangle AI-generated complexity. Frequent checkpointing makes the decision between fixing forward and reverting backward much easier.
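Because the checkpoint habit only sticks if it is cheap, some teams wrap it in a small helper. The sketch below shells out to ordinary git commands; the commit-message prefix and function names are assumptions for illustration, and it presumes there are staged changes to commit.

```python
import subprocess

def checkpoint(message: str) -> None:
    """Commit the current working state so an AI-induced mess can be reverted."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", f"checkpoint: {message}"], check=True)

def revert_to_last_checkpoint() -> None:
    """Discard everything since the last commit, including untracked files."""
    subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
    subprocess.run(["git", "clean", "-fd"], check=True)

# Typical flow: checkpoint("before AI refactor of payment service"), let the
# assistant work, then revert_to_last_checkpoint() if a loop begins.
```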
The fourth strategy acknowledges when manual intervention becomes necessary. One successful workaround involves instructing the agent not to read the file and instead requesting it to provide the desired configuration, with the developer manually adding it. This bypasses whatever confusion the AI has developed about the file's current state.
The fifth strategy involves providing better context upfront. Developers should always copy-paste the exact error text or describe the wrong behaviour precisely. Giving all relevant errors and output to the AI leads to more direct fixes, whereas leaving it to infer the issue can lead to loops.
These strategies share a common principle: recognising when AI assistance has become counterproductive and knowing when to take manual control. The 90/10 rule offers useful guidance. AI currently excels at planning architectures and writing code blocks but struggles with debugging real systems and handling edge cases. When projects reach 90% completion, switching from building mode to debugging mode leverages human strengths rather than fighting AI limitations.
Leveraging Complementary AI Models
The 2025 AI landscape has matured beyond questions of whether to use AI assistance toward more nuanced questions of which AI model best serves specific tasks. Research published on ResearchGate comparing Gemini 2.5, Claude 4, LLaMA 4, GPT-4.5, and DeepSeek V3.1 concludes that no single model excels at everything. Each has distinct strengths and weaknesses. Rather than a single winner, the 2025 landscape shows specialised excellence.
Professional developers increasingly adopt multi-model workflows that leverage each AI's advantages while avoiding their pitfalls. The recommended approach matches tasks to model strengths: Gemini for deep reasoning and multimodal analysis, GPT series for balanced performance and developer tooling, Claude for long coding sessions requiring memory of previous context, and specialised models for domain-specific requirements.
Orchestration platforms have emerged to manage these multi-model workflows. They provide the integration layer that routes requests to appropriate models, retrieves relevant knowledge, and monitors performance across providers. Rather than committing to a single AI vendor, organisations deploy multiple models strategically, routing queries to the optimal model per task type.
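The core routing logic can be very small. The sketch below uses placeholder model identifiers and task categories purely for illustration; real orchestration platforms layer retrieval, monitoring, and fallback on top of the same idea.

```python
from typing import Callable

# Hypothetical task-to-model routing table: the identifiers and categories are
# placeholders, not recommendations drawn from the cited research.
ROUTING_TABLE = {
    "deep_reasoning": "gemini-model",
    "general_coding": "gpt-model",
    "long_refactor": "claude-model",
}
DEFAULT_MODEL = "gpt-model"

def route(task_type: str, prompt: str, call_model: Callable[[str, str], str]) -> str:
    """Send the prompt to whichever model the table assigns to this task type."""
    model = ROUTING_TABLE.get(task_type, DEFAULT_MODEL)
    return call_model(model, prompt)

# call_model is whatever client wrapper the team already has, e.g.
# route("long_refactor", "Refactor the billing module...", my_client.complete)
```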
This multi-model approach proves particularly valuable for breaking through architectural deadlocks. When one model gets stuck in a repetitive pattern, switching to a different model often produces fresh perspectives. The models have different training data, different architectural biases, and different failure modes. What confuses one model may be straightforward for another.
The competitive advantage belongs to developers who master multi-model workflows rather than committing to a single platform. This represents a significant shift in developer skills. Beyond learning specific AI tools, developers must develop meta-skills for evaluating which AI model suits which task and when to switch between them.
Mandatory Architectural Review Before AI Implementation
Enterprise teams have discovered that AI output velocity can exceed review capacity. Qodo's analysis observes that AI coding agents increased output by 25-35%, but most review tools do not address the widening quality gap. The consequences include larger pull requests, architectural drift, inconsistent standards across multi-repository environments, and senior engineers buried in validation work instead of system design. Leaders frequently report that review capacity, not developer output, is the limiting factor in delivery.
The solution emerging across successful engineering organisations involves mandatory architectural review before AI implements major changes. The most effective teams have shifted routine review load off senior engineers by automatically approving small, low-risk, well-scoped changes while routing schema updates, cross-service changes, authentication logic, and contract modifications to human reviewers.
AI review systems must therefore categorise pull requests by risk and flag unrelated changes bundled in the same pull request. Selective automation of approvals under clearly defined conditions maintains velocity for routine changes while ensuring human judgment for consequential decisions. AI-assisted development now accounts for nearly 40% of all committed code, making these review processes critical to organisational health.
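A first approximation of that triage can be made from the changed file paths and diff size alone. The path markers and thresholds in the sketch below are assumed conventions, not rules drawn from the cited reports.

```python
HIGH_RISK_MARKERS = ("migrations/", "auth/", "contracts/", "schema")  # assumed conventions

def classify_pull_request(changed_files: list[str], lines_changed: int) -> str:
    """Return 'auto-approve', 'standard-review', or 'senior-review' for a pull request."""
    touches_high_risk = any(
        marker in path for path in changed_files for marker in HIGH_RISK_MARKERS
    )
    if touches_high_risk:
        return "senior-review"       # schema, auth, and contract changes go to humans
    if lines_changed <= 50 and len(changed_files) <= 3:
        return "auto-approve"        # small, well-scoped, low-risk change
    return "standard-review"

# Example: classify_pull_request(["src/auth/login.py"], 12) -> "senior-review"
```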
The EU AI Act's requirements make this approach not merely advisable but legally necessary for certain applications. Enterprises must demonstrate full data lineage tracking, knowing exactly which datasets contributed to each model's output; human-in-the-loop checkpoints for workflows affecting safety, rights, or financial outcomes; and risk classification tags labelling each model with its risk level, usage context, and compliance status.
The path toward sustainable AI-assisted development runs through consolidation and discipline. Organisations that succeed will be those that stop treating AI as a magic solution for software development and start treating it as a rigorous engineering discipline requiring the same attention to process and quality as any other critical capability.
Safeguarding Against Hidden Technical Debt
The productivity paradox of AI-assisted development becomes clearest when examining technical debt accumulation. An HFS Research and Unqork study found that while 84% of organisations expect AI to reduce costs and 80% expect productivity gains, 43% report that AI will create new technical debt. Top concerns include security vulnerabilities at 59%, legacy integration complexity at 50%, and loss of visibility at 42%.
The mechanisms driving this debt accumulation differ from traditional technical debt. AI technical debt compounds through three primary vectors. Model versioning chaos results from the rapid evolution of code assistant products. Code generation bloat emerges as AI produces more code than necessary. Organisation fragmentation develops as different teams adopt different AI tools and workflows. These vectors, coupled with the speed of AI code generation, interact to cause exponential growth.
SonarSource's August 2025 analysis of thousands of programming tasks completed by leading language models uncovered what researchers describe as a systemic lack of security awareness. The Ox Security report found AI-generated code introduced 322% more privilege escalation paths and 153% more design flaws compared to human-written code. AI-generated code is highly functional but systematically lacking in architectural judgment.
The financial implications are substantial. By 2025, CISQ estimates nearly 40% of IT budgets will be spent maintaining technical debt. A Stripe report found developers spend, on average, 42% of their work week dealing with technical debt and bad code. AI assistance that accelerates code production without corresponding attention to code quality simply accelerates technical debt accumulation.
The State of Software Delivery 2025 report by Harness found that contrary to perceived productivity benefits, the majority of developers spend more time debugging AI-generated code and more time resolving security vulnerabilities than before AI adoption. This finding aligns with GitClear's observation that code churn, defined as the percentage of code discarded less than two weeks after being written, has nearly doubled from 3.1% in 2020 to 5.7% in 2024.
Safeguarding against this hidden debt requires continuous measurement and explicit debt budgeting. Teams should track not just velocity metrics but also code health indicators. The refactoring rate, clone detection, code churn within two weeks of commit, and similar metrics reveal whether AI assistance is building sustainable codebases or accelerating decay. If the current trend continues, GitClear believes it could soon bring about a phase change in how developer energy is spent, with defect remediation becoming the leading day-to-day developer responsibility rather than developing new features.
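None of these indicators requires a vendor to compute in rough form. The sketch below assumes a simplified per-line record built from commit history and follows GitClear's definition of churn as lines discarded within two weeks of being written; the record format itself is an assumption, and extracting it from git is the hard part.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed record format: one entry per line of code added, noting when (if ever)
# it was later deleted and how it was classified.
@dataclass
class LineRecord:
    written_at: datetime
    deleted_at: datetime | None = None
    was_refactor: bool = False   # line moved or changed as part of a refactor
    was_clone: bool = False      # line detected as a copy of existing code

def churn_rate(lines: list[LineRecord], window_days: int = 14) -> float:
    """Share of lines discarded within window_days of being written."""
    churned = sum(
        1 for l in lines
        if l.deleted_at and l.deleted_at - l.written_at <= timedelta(days=window_days)
    )
    return churned / len(lines) if lines else 0.0

def refactor_rate(lines: list[LineRecord]) -> float:
    return sum(l.was_refactor for l in lines) / len(lines) if lines else 0.0

def clone_rate(lines: list[LineRecord]) -> float:
    return sum(l.was_clone for l in lines) / len(lines) if lines else 0.0
```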
Structuring Developer Workflows for Multi-Model Effectiveness
Effective AI-assisted development requires restructuring workflows around AI capabilities and limitations rather than treating AI as a drop-in replacement for human effort. The Three Developer Loops framework published by IT Revolution provides useful structure: a tight inner loop of coding and testing, a middle loop of integration and review, and an outer loop of planning and architecture.
AI excels in the inner loop. Code generation, test creation, documentation, and similar tasks benefit from AI acceleration without significant risk. Development teams spend nearly 70% of their time on repetitive tasks instead of creative problem-solving, and AI can absorb roughly 40% of the time developers previously spent on boilerplate code.
The middle loop requires more careful orchestration. AI can assist with code review and integration testing, but human judgment must verify that generated code aligns with architectural intentions.
The outer loop remains primarily human territory. Planning, architecture, and strategic decisions require an understanding of business context, user needs, and long-term maintainability that AI cannot provide.
The workflow implications are significant. Rather than using AI continuously throughout development, effective developers invoke AI assistance at specific phases while maintaining manual control at others. During initial planning and architecture, AI might generate options for human evaluation but should not make binding decisions. During implementation, AI can accelerate code production within established patterns. During integration and deployment, AI assistance should be constrained by automated quality gates that verify generated code meets established standards.
Context management becomes a critical developer skill. The 2025 METR study, which found that developers actually take 19% longer when using AI tools, attributed the slowdown primarily to context management overhead. The study examined 16 experienced open-source developers with an average of five years of prior experience on the mature projects they worked on. Before completing tasks, developers predicted AI would speed them up by 24%. After experiencing the slowdown firsthand, they still reported believing AI had improved their performance by 20%. The objective measurement showed the opposite.
The context directory approach described earlier provides one structural solution. Alternative approaches include using version-controlled markdown files to track AI interactions and decisions, employing prompt templates that automatically include relevant context, and establishing team conventions for what context AI should receive for different task types. The specific approach matters less than having a systematic approach that the team follows consistently.
Real-World Implementation Patterns
The theoretical frameworks for AI guardrails translate into specific implementation patterns that teams can adopt immediately. The first pattern involves pre-commit hooks that validate AI-generated code against quality standards before allowing commits. These hooks can verify formatting consistency, run static analysis, check for known security vulnerabilities, and enforce architectural constraints. When violations occur, the commit is rejected with specific guidance for resolution.
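A minimal hook along these lines, assuming git and a couple of illustrative regex checks, might look like the sketch below; a production hook would delegate to the team's linters, static analysers, and security scanners rather than ad hoc patterns.

```python
#!/usr/bin/env python3
import re
import subprocess
import sys

# Illustrative heuristics only; real setups should call proper tooling instead.
CHECKS = [
    (re.compile(r"execute\(\s*[\"'].*[\"']\s*\+"), "possible string-concatenated SQL"),
    (re.compile(r"(api_key|secret|password)\s*=\s*[\"'][^\"']+[\"']", re.I),
     "possible hard-coded credential"),
]

def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    failures = []
    for path in staged_python_files():
        try:
            text = open(path, encoding="utf-8").read()
        except OSError:
            continue
        for pattern, message in CHECKS:
            if pattern.search(text):
                failures.append(f"{path}: {message}")
    for failure in failures:
        print(f"pre-commit: {failure}", file=sys.stderr)
    return 1 if failures else 0   # non-zero exit rejects the commit

if __name__ == "__main__":
    sys.exit(main())
```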
The second pattern involves staged code review with AI assistance. Initial review uses AI tools to identify obvious issues like formatting violations, potential bugs, or security vulnerabilities. Human reviewers then focus on architectural alignment, business logic correctness, and long-term maintainability. This two-stage approach captures AI efficiency gains while preserving human judgment for decisions requiring context that AI lacks.
The third pattern involves explicit architectural decision records that AI must reference. When developers prompt AI for implementation, they include references to relevant decision records. The AI then generates code that respects documented constraints. This requires discipline in maintaining decision records but provides concrete guardrails against architectural drift.
The fourth pattern involves regular architectural retrospectives that specifically examine AI-generated code. Teams review samples of AI-generated commits to identify patterns of architectural violation, code quality degradation, or security vulnerability. These retrospectives inform adjustments to guardrails, prompt templates, and review processes.
The fifth pattern involves model rotation for complex problems. When one AI model gets stuck, teams switch to a different model rather than continuing to iterate with the stuck model. This requires access to multiple AI providers and skills in prompt translation between models.
Measuring Success Beyond Velocity
Traditional development metrics emphasise velocity: lines of code, commits, pull requests merged, features shipped. AI assistance amplifies these metrics while potentially degrading unmeasured dimensions like code quality, architectural coherence, and long-term maintainability. Sustainable AI-assisted development requires expanding measurement to capture these dimensions.
The DORA framework has evolved to address this gap. The 2025 report introduced rework rate as a fifth core metric precisely because AI shifts where development time gets spent. Teams produce initial code faster but spend more time reviewing, validating, and correcting it. Monitoring cycle time, code review patterns, and rework rates reveals the true productivity picture that perception surveys miss.
Code health metrics provide another essential measurement dimension. GitClear's analysis tracks refactoring rate, code clone frequency, and code churn. These indicators reveal whether codebases are becoming more or less maintainable over time. When refactoring declines and clones increase, as GitClear's data shows has happened industry-wide, the codebase is accumulating debt regardless of how quickly features appear to ship. The percentage of moved or refactored lines decreased dramatically from 24.1% in 2020 to just 9.5% in 2024, while lines classified as copy-pasted or cloned rose from 8.3% to 12.3% in the same period.
Security metrics deserve explicit attention given AI's documented tendency to generate vulnerable code. Georgetown University's Center for Security and Emerging Technology identified three broad risk categories: models generating insecure code, models themselves being vulnerable to attack and manipulation, and downstream cybersecurity impacts including feedback loops where insecure AI-generated code gets incorporated into training data for future models.
Developer experience metrics capture dimensions that productivity metrics miss. The Stack Overflow survey finding that 45% of developers report debugging AI-generated code takes more time than expected suggests that velocity gains may come at the cost of developer satisfaction and cognitive load. Sustainable AI adoption requires monitoring not just what teams produce but how developers experience the production process.
The Discipline That Enables Speed
The paradox of AI-assisted development is that achieving genuine productivity gains requires slowing down in specific ways. Establishing guardrails, maintaining context documentation, implementing architectural review, and measuring beyond velocity all represent investments that reduce immediate output. Yet without these investments, the apparent gains from AI acceleration prove illusory as technical debt accumulates, architectural coherence degrades, and debugging time compounds.
The organisations succeeding with AI coding assistance share common characteristics. They maintain rigorous code review regardless of code origin. They invest in automated testing proportional to development velocity. They track quality metrics alongside throughput metrics. They train developers to evaluate AI suggestions critically rather than accepting them reflexively.
These organisations have learned that AI coding assistants are powerful tools requiring skilled operators. In the hands of experienced developers who understand both AI capabilities and limitations, they genuinely accelerate delivery. Applied without appropriate scaffolding, they create technical debt faster than any previous development approach. Companies implementing comprehensive AI governance frameworks report 60% fewer hallucination-related incidents compared to those using AI tools without oversight controls.
The 19% slowdown documented by the METR study represents one possible outcome, not an inevitable one. But achieving better outcomes requires abandoning the comfortable perception that AI automatically makes development faster. It requires embracing the more complex reality that speed and quality require continuous, deliberate balancing.
The future belongs to developers and organisations that treat AI assistance not as magic but as another engineering discipline requiring its own skills, processes, and guardrails. The best developers of 2025 will not be the ones who generate the most lines of code with AI, but the ones who know when to trust it, when to question it, and how to integrate it responsibly. The tools are powerful. The question is whether we have the discipline to wield them sustainably.
References and Sources
- JetBrains (2025). “The State of Developer Ecosystem 2025: Coding in the Age of AI.” https://blog.jetbrains.com/research/2025/10/state-of-developer-ecosystem-2025/
- Stack Overflow (2025). “2025 Stack Overflow Developer Survey: AI Section.” https://survey.stackoverflow.co/2025/ai
- GitClear (2025). “AI Copilot Code Quality: 2025 Data Suggests 4x Growth in Code Clones.” https://www.gitclear.com/ai_assistant_code_quality_2025_research
- Google DORA (2024). “DORA Report 2024.” https://dora.dev/research/2024/dora-report/
- Google DORA (2025). “State of AI-assisted Software Development 2025.” https://dora.dev/research/2025/dora-report/
- Qodo (2025). “State of AI Code Quality Report.” https://www.qodo.ai/reports/state-of-ai-code-quality/
- CodeScene (2025). “AI Code Guardrails: Validate and Quality-Gate GenAI Code.” https://codescene.com/resources/use-cases/prevent-ai-generated-technical-debt
- Ox Security (2025). “Army of Juniors: The AI Code Security Crisis.” Referenced via InfoQ.
- Georgetown University CSET (2024). “Cybersecurity Risks of AI-Generated Code.” https://cset.georgetown.edu/publication/cybersecurity-risks-of-ai-generated-code/
- McKinsey (2024). “Unleashing Developer Productivity with Generative AI.” https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/unleashing-developer-productivity-with-generative-ai
- IT Revolution (2025). “The Three Developer Loops: A New Framework for AI-Assisted Coding.” https://itrevolution.com/articles/the-three-developer-loops-a-new-framework-for-ai-assisted-coding/
- METR (2025). “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
- HFS Research (2025). “AI Won't Save Enterprises from Tech Debt Unless They Change the Architecture First.” https://www.hfsresearch.com/press-release/ai-wont-save-enterprises-from-tech-debt-unless-they-change-the-architecture-first/
- VentureBeat (2025). “Why AI Coding Agents Aren't Production-Ready.” https://venturebeat.com/ai/why-ai-coding-agents-arent-production-ready-brittle-context-windows-broken
- SonarSource (2025). Research on AI-generated code security. Referenced via DevOps.com.
- ResearchGate (2025). “The Most Advanced AI Models of 2025: Comparative Analysis.” https://www.researchgate.net/publication/392160200
- EU AI Act (2024). High-risk provisions effective August 2026. https://natlawreview.com/article/2026-outlook-artificial-intelligence
- Faros AI (2025). “DORA Report 2025 Key Takeaways.” https://www.faros.ai/blog/key-takeaways-from-the-dora-report-2025
- LeadDev (2025). “How AI Generated Code Compounds Technical Debt.” https://leaddev.com/software-quality/how-ai-generated-code-accelerates-technical-debt
- MIT Sloan Management Review (2025). “The Hidden Costs of Coding With Generative AI.” https://sloanreview.mit.edu/article/the-hidden-costs-of-coding-with-generative-ai/
- Harness (2025). “State of Software Delivery 2025.” Referenced via DevOps.com.
- Fast Company (2025). “Development Hell: Senior Engineers and AI-Generated Code.” September 2025.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk