When Code Lives in Chat: The Dangers of Democratised Development

Somewhere in a Fortune 500 company's engineering Slack, a product manager types a casual message: “@CodingBot can you add a quick feature to disable rate limiting for our VIP customers?” Within minutes, the AI agent has pushed a commit to the main branch, bypassing the security team entirely. Nobody reviewed the code. Nobody questioned whether this created a vulnerability. The change simply happened because someone with a blue “PM” badge next to their name asked politely in a chat window.

This scenario is no longer hypothetical. As organisations race to embed AI coding agents directly into collaboration platforms like Slack and Microsoft Teams, they are fundamentally redrawing the boundaries of who controls software development. According to the JetBrains State of Developer Ecosystem 2025 survey, which gathered responses from 24,534 developers between April and June 2025, 85 per cent of developers now regularly use AI tools for coding and development work. More striking still, 41 per cent of all code written in 2025 was AI-generated. The shift from isolated integrated development environments (IDEs) to shared conversational spaces represents perhaps the most significant transformation in how software gets built since the advent of version control.

The convenience is undeniable. GitHub Copilot's November 2025 update introduced Model Context Protocol (MCP) integration with OAuth support, enabling AI agents to authenticate securely with tools like Slack and Jira without hardcoded tokens. Developers can now issue commands to create pull requests, search repositories, and manage issues directly from chat interfaces. The friction between “I have an idea” and “the code exists” has collapsed to nearly zero.

But this collapse carries profound implications for power, security, and the intentionality that once protected software systems from hasty decisions. When anyone with access to a Slack channel can summon code into existence through natural language, the long-standing gatekeeping function of technical expertise begins to erode. The question facing every technology organisation today is not whether to adopt these tools, but how to prevent convenience from becoming catastrophe.

The Shifting Tectonics of Software Power

For decades, the software development process enforced a natural hierarchy. Product managers could request features. Designers could propose interfaces. Executives could demand timelines. But ultimately, developers held the keys to the kingdom. Only they could translate abstract requirements into functioning code. This bottleneck, frustrating as it often proved, served as a crucial check on impulse and impatience.

That structural constraint is dissolving. As McKinsey's research indicates, AI tools are now automating time-consuming routine tasks such as project management, market analysis, performance testing, and documentation, freeing product managers, engineers, and designers to focus on higher-value work. The consultancy notes that teams are not looking to replace human judgment and decision-making with AI; instead, the goal is to use AI for what it does best, whilst relying on human insight for understanding complex human needs.

Yet the practical reality is messier. When a non-technical stakeholder can type a request into Slack and watch code materialise within seconds, the power dynamic shifts in subtle but significant ways. Research from MIT published in July 2025 found that developers feel they “don't really have much control over what the model writes.” Without a channel for AI to expose its own confidence, the researchers warn, “developers risk blindly trusting hallucinated logic that compiles, but collapses in production.”

This confidence gap becomes particularly dangerous when AI agents operate in shared spaces. In an IDE, a developer maintains clear responsibility for what they commit. In a chat environment, multiple stakeholders may issue requests, and the resulting code reflects a confused amalgamation of intentions. The MIT researchers call for “transparent tooling that lets models expose uncertainty and invite human steering rather than passive acceptance.”

The democratisation of code generation also threatens to flatten organisational learning curves in problematic ways. Bain and Company's 2025 technology report found that three in four companies report that the hardest part of AI adoption is getting people to change how they work. Under pressure, developers often fall back on old habits, whilst some engineers distrust AI or worry that it will undermine their role. This tension creates an unstable environment where traditional expertise is simultaneously devalued and desperately needed.

The implications extend beyond individual teams. As AI tools become the primary interface for requesting software changes, the vocabulary of software development shifts from technical precision to conversational approximation. Product managers who once needed to craft detailed specifications can now describe what they want in plain English. The question of whether this represents democratisation or degradation depends entirely on the governance structures surrounding these new capabilities.

Who Gets to Summon the Machine?

The question of who can invoke AI coding agents has become one of the most contentious governance challenges facing technology organisations. In traditional development workflows, access to production systems required specific credentials, code reviews, and approval chains. The move to chat-based development threatens to bypass all of these safeguards with a simple “@mention.”

Slack's own documentation for its agent-ready APIs, released in October 2025, emphasises that permission inheritance ensures AI applications respect the same access controls as human users. IT leaders have specific concerns, the company acknowledges, as many organisations only discover extensive over-permissioning when they are ready to deploy AI systems. This revelation typically comes too late, after permissions have already propagated through interconnected systems.

The architectural challenge is that traditional role-based access control (RBAC) was designed for human users operating at human speeds. As WorkOS explains in its documentation on AI agent access control, AI agents powered by large language models “generate actions dynamically based on natural language inputs and infer intent from ambiguous context, which makes their behaviour more flexible, and unpredictable.” Without a robust authorisation model to enforce permissions, the consequences can be severe.

Cerbos, a provider of access control solutions, notes that many current AI agent frameworks still assume broad system access. By default, an AI support agent might see the entire ticketing database instead of only the subset relevant to the current user. When that agent can also write code, the exposure multiplies exponentially.

The most sophisticated organisations are implementing what the Cloud Security Alliance describes as “Zero Trust 2.0”, designed specifically for AI systems. This framework uses AI and machine learning to establish trust in real time by observing behaviour and network activity. A Policy Decision Point sits at the centre of the architecture, watching everything as it happens, evaluating context, permissions, and behaviour, and deciding whether a given agent can execute a given action on a given system under the current conditions.
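
To make the idea concrete, here is a minimal sketch in Python of what such a policy decision point might look like: every agent request is evaluated against the invoking human's permissions, a behavioural risk signal, and an approvals store. All of the names, thresholds, and rules are illustrative assumptions rather than any vendor's actual API.

```python
# Minimal sketch of a policy decision point (PDP) for AI agent actions.
# All names and rules here are illustrative assumptions, not a real product API.
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str        # which AI agent is asking
    requested_by: str    # the human whose chat message triggered it
    action: str          # e.g. "push_commit", "open_pull_request"
    resource: str        # e.g. "repo:payments-service"
    risk_score: float    # behavioural anomaly score from monitoring, 0.0-1.0

SENSITIVE_ACTIONS = {"push_commit", "merge_pull_request", "rotate_secret"}

def decide(req: AgentRequest,
           user_permissions: set[tuple[str, str]],
           approved_requests: set[str]) -> bool:
    """Evaluate context, permissions, and behaviour for every single call."""
    # 1. The agent never gets more access than the human who invoked it.
    if (req.action, req.resource) not in user_permissions:
        return False
    # 2. Behavioural signal: block agents drifting from their normal patterns.
    if req.risk_score > 0.7:
        return False
    # 3. Sensitive actions additionally require a recorded human approval.
    if req.action in SENSITIVE_ACTIONS and req.agent_id not in approved_requests:
        return False
    return True

req = AgentRequest("codingbot", "pm.alice", "push_commit", "repo:payments-service", 0.2)
print(decide(req, {("push_commit", "repo:payments-service")}, approved_requests=set()))
# False: permitted in principle, but the sensitive action lacks a human approval
```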

This represents a fundamental shift from the traditional model of granting permissions once and trusting them indefinitely. As the Cloud Security Alliance warns, traditional zero trust relied heavily on perimeter controls and static policies because the entities it governed (human users) operated within predictable patterns and at human speed. AI agents shatter these assumptions entirely.

Beyond RBAC, organisations are exploring attribute-based access control (ABAC) and relationship-based access control (ReBAC) for managing AI agent permissions. ABAC adds context such as user tier, branch, time of day, and tenant ID. However, as security researchers note, modern LLM stacks often rely on ephemeral containers or serverless functions where ambient context vanishes with each invocation. Persisting trustworthy attributes across the chain demands extra engineering that many proof-of-concept projects skip. ReBAC models complex resource graphs elegantly, but when agents make dozens of micro-tool calls per prompt, those lookups must complete in tens of milliseconds or users will notice lag.
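
A simple sketch of the ABAC idea follows, assuming hypothetical attribute names such as tenant, requester tier, and business hours. As the researchers quoted above note, the hard part in practice is not writing the check but rebuilding these attributes trustworthily for every ephemeral invocation.

```python
# Illustrative ABAC-style check for an agent tool call. The attribute names
# and thresholds are assumptions made for this sketch, not a product schema.
def abac_allow(action: str, attrs: dict, hour_of_day: int) -> bool:
    """Allow an agent action only when the contextual attributes line up."""
    # Tenant isolation: the agent may only touch its own tenant's resources.
    if attrs["resource_tenant"] != attrs["request_tenant"]:
        return False
    # Time-of-day constraint: risky write actions only during business hours.
    if action.startswith("write:") and not 9 <= hour_of_day < 18:
        return False
    # Tier constraint: only senior-tier requesters may touch protected resources.
    if attrs.get("resource_protected") and attrs.get("requester_tier") != "senior":
        return False
    return True

attrs = {"resource_tenant": "acme", "request_tenant": "acme",
         "resource_protected": True, "requester_tier": "senior"}
print(abac_allow("write:repo", attrs, hour_of_day=14))  # True: in hours, same tenant
print(abac_allow("write:repo", attrs, hour_of_day=23))  # False: outside business hours
```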

The Security Surface Expands

Moving coding workflows from isolated IDEs into shared chat environments multiplies the surface area for security exposure in ways that many organisations have failed to anticipate. The attack vectors include token leakage, unaudited repository access, prompt injection, and the fundamental loss of control over when and how code is generated.

Dark Reading's January 2026 analysis of security pitfalls in AI coding adoption highlights the severity of this shift. Even as developers start to use AI agents to build applications and integrate AI services into the development and production pipeline, the quality of that code, and especially its security, varies significantly. Research from CodeRabbit found that whilst developers may be moving quicker and improving productivity with AI, these gains are offset by the time spent fixing flawed code or tackling security issues.

The statistics are sobering. According to Checkmarx's 2025 global survey, nearly 70 per cent of respondents estimated that more than 40 per cent of their organisation's code was AI-generated in 2024, with 44.4 per cent of respondents estimating 41 to 60 per cent of their code is AI-generated. IBM's 2025 Cost of a Data Breach Report reveals that 13 per cent of organisations reported breaches of AI models or applications, with 97 per cent lacking proper AI access controls. Shadow AI breaches cost an average of $670,000 more than traditional incidents and affected one in five organisations in 2025. With average breach costs exceeding $5.2 million and regulatory penalties reaching eight figures, the business case for robust security controls is compelling.

The specific risks of chat-based development deserve careful enumeration. First, prompt injection attacks have emerged as perhaps the most insidious threat. As Dark Reading explains, data passed to a large language model from a third-party source could contain text that the LLM will execute as a prompt. This indirect prompt injection is a major problem in the age of AI agents where LLMs are linked with third-party tools to access data or perform tasks. Researchers have demonstrated prompt injection attacks in AI coding assistants including GitLab Duo, GitHub Copilot Chat, and AI agent platforms like ChatGPT. Prompt injection now ranks as LLM01 in the OWASP Top 10 for LLM Applications, underscoring its severity.

Second, token and credential exposure creates systemic vulnerabilities. TechTarget's analysis of AI code security risks notes that to get useful suggestions, developers might prompt these tools with proprietary code or confidential logic. That input could be stored or later used in model training, potentially leaking secrets. Developers increasingly paste sensitive code or data into public tools, which may use that input for future model training. This phenomenon, referred to as IP leakage and shadow AI, represents a category of risk that barely existed five years ago. Security concerns include API keys, passwords, and tokens appearing in AI-suggested code, along with insecure code patterns like SQL injection, command injection, and path traversal.
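
One common mitigation is to scan AI-suggested diffs for credential-like strings before they are posted to a channel or committed. The sketch below uses a handful of illustrative regular expressions; a production deployment would rely on a dedicated secret scanner with far broader coverage.

```python
# Minimal sketch of a pre-commit style scan that blocks AI-suggested diffs
# containing likely credentials. The patterns are illustrative, not exhaustive.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*['\"][^'\"]{8,}"),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def find_suspected_secrets(diff_text: str) -> list[str]:
    """Return the lines of an AI-generated diff that look like leaked secrets."""
    hits = []
    for line in diff_text.splitlines():
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits

suggested = 'db_password = "hunter2hunter2"  # TODO move to vault'
if find_suspected_secrets(suggested):
    print("Blocked: suspected credential in AI-suggested code")
```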

Third, the speed of chat-based code generation outpaces human review capacity. Qodo's 2026 analysis of enterprise code review tools observes that AI-assisted development now accounts for nearly 40 per cent of all committed code, and global pull request activity has surged. Leaders frequently report that review capacity, not developer output, is the limiting factor in delivery. When code can be generated faster than it can be reviewed, the natural safeguard of careful human inspection begins to fail.

Chris Wysopal of Veracode, quoted in Dark Reading's analysis, offers stark guidance: “Developers need to treat AI-generated code as potentially vulnerable and follow a security testing and review process as they would for any human-generated code.” The problem is that chat-based development makes this discipline harder to maintain, not easier.

Building Governance for the Conversational Era

The governance frameworks required for AI coding agents in chat environments must operate at multiple levels simultaneously. They must define who can invoke agents, what those agents can access, how their outputs are reviewed, and what audit trails must be maintained. According to Deloitte's 2025 analysis, only 9 per cent of enterprises have reached what they call a “Ready” level of AI governance maturity. That is not because 91 per cent of companies are lazy, but because they are trying to govern something that moves faster than their governance processes.

The Augment Code framework for enterprise AI code governance identifies several essential components. Usage policies must clearly define which AI tools are permitted and in what capacity, specify acceptable use cases (distinguishing between prototyping and production code), ensure that AI-generated code is clearly identifiable, and limit use of AI-generated code in sensitive or critical components such as authentication modules or financial systems.

A clear policy should translate these principles into concrete, approved use cases. For example, organisations might allow AI assistants to generate boilerplate code, documentation, or test scaffolding, but disallow their use in implementing core cryptography, authentication flows, or credential handling.
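
Such a policy becomes most useful when it is machine-readable and enforceable. The following sketch shows one possible encoding, with path globs and use-case lists that are purely illustrative assumptions.

```python
# Sketch of a machine-readable usage policy of the kind described above.
# The structure, labels, and path globs are assumptions for illustration only.
from fnmatch import fnmatch

AI_USAGE_POLICY = {
    "allowed_use_cases": ["boilerplate", "documentation", "test_scaffolding"],
    "disallowed_use_cases": ["cryptography", "authentication", "credential_handling"],
    "restricted_paths": ["src/auth/*", "src/payments/*", "**/secrets/*"],
    "require_ai_label": True,  # AI-generated code must be clearly identifiable
}

def path_requires_human_author(path: str) -> bool:
    """AI-generated changes to restricted paths are rejected by the policy."""
    return any(fnmatch(path, pattern) for pattern in AI_USAGE_POLICY["restricted_paths"])

print(path_requires_human_author("src/auth/login.py"))    # True
print(path_requires_human_author("tests/test_utils.py"))  # False
```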

Automated enforcement becomes crucial when human review cannot keep pace. DX's enterprise adoption guidelines recommend configurable rulesets that allow organisations to encode rules for style, patterns, frameworks, security, and compliance. Review agents check each diff in the IDE and pull request against these rules, flagging or blocking non-compliant changes. Standards can be managed centrally and applied across teams and repositories.

The most successful engineering organisations in 2025, according to Qodo's analysis, shifted routine review load off senior engineers by automatically approving small, low-risk, well-scoped changes, whilst routing schema updates, cross-service changes, authentication logic, and contract modifications to humans. AI review must categorise pull requests by risk, flag unrelated changes bundled in the same request, and selectively automate approvals under clearly defined conditions.
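
A rough sketch of that routing logic might look like the following, where the risk markers, size thresholds, and outcome labels are assumptions chosen for illustration rather than values drawn from any particular tool.

```python
# Sketch of risk-tiered pull request routing: small, low-risk diffs can be
# auto-approved after AI review, while schema, auth, and contract changes
# always go to a human. Markers and thresholds are assumed values.
HIGH_RISK_MARKERS = ("migrations/", "auth/", "proto/", "openapi", "schema")

def route_pull_request(changed_files: list[str], lines_changed: int) -> str:
    touches_high_risk = any(
        marker in path for path in changed_files for marker in HIGH_RISK_MARKERS
    )
    if touches_high_risk:
        return "human_review_required"
    if lines_changed <= 50 and len(changed_files) <= 3:
        return "auto_approve_with_ai_review"
    return "standard_human_review"

print(route_pull_request(["src/auth/session.py"], 12))  # human_review_required
print(route_pull_request(["docs/README.md"], 8))        # auto_approve_with_ai_review
```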

This tiered approach preserves human ownership of critical decisions whilst enabling AI acceleration of routine work. As the Qodo analysis notes, a well-governed AI code review system preserves human ownership of the merge button whilst raising the baseline quality of every pull request, reduces back-and-forth, and ensures reviewers only engage with work that genuinely requires their experience.

Regulatory pressure is accelerating the formalisation of these practices. The European Data Protection Board's 2025 guidance provides criteria for identifying privacy risks, classifying data, and evaluating consequences. It emphasises controlling inputs to LLM systems to avoid exposing personal information, trade secrets, or intellectual property. The NIST framework, SOC2 certifications, and ISO/IEC 42001 compliance all have their place in enterprise AI governance. Regulations like HIPAA, PCI DSS, and GDPR are forcing organisations to take AI security seriously, with logging, audit trails, and principle of least privilege becoming not just best practices but legal requirements.

Architectural Patterns for Auditability

The technical architecture of AI coding agents in chat environments must be designed from the ground up with auditability in mind. This is not merely a compliance requirement; it is a precondition for maintaining engineering integrity in an era of automated code generation.

The concept of provenance bills of materials (PBOMs) is gaining traction as a way to track AI-generated code from commit to deployment. As Substack's Software Analyst newsletter explains, standards for AI-BOM tracking are forming under NIST and OWASP influence. Regulatory pressure from the EU Cyber Resilience Act and similar US initiatives will push organisations to document the provenance of AI code.

Qodo's enterprise review framework emphasises that automated tools must produce artifacts that reviewers and compliance teams can rely on, including referenced code snippets, security breakdowns, call-site lists, suggested patches, and an audit trail for each workflow action. In large engineering organisations, these artifacts become the verifiable evidence needed for governance, incident review, and policy enforcement. Effective monitoring and logging ensure accountability by linking AI-generated code to developers, inputs, and decisions for audit and traceability.
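
In code, such an audit artifact could be as simple as an append-only record that ties each AI-generated change to its requester, originating channel, prompt, and review outcome. The field names below are illustrative assumptions, not a standard schema.

```python
# Sketch of an audit record linking an AI-generated change to its requester,
# prompt, and review outcome. Field names are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class AiChangeAuditRecord:
    commit_sha: str
    requested_by: str        # e.g. the chat user who issued the @mention
    channel: str             # where the request originated
    prompt_text: str         # the natural-language request, verbatim
    agent_model: str         # which model or agent produced the diff
    reviewed_by: str | None  # human reviewer, if any
    review_outcome: str      # "approved", "rejected", "auto_approved"
    created_at: str

record = AiChangeAuditRecord(
    commit_sha="abc123", requested_by="U024BE7LH", channel="#feature-requests",
    prompt_text="add a quick feature to disable rate limiting for VIPs",
    agent_model="coding-agent-v1", reviewed_by=None, review_outcome="rejected",
    created_at=datetime.now(timezone.utc).isoformat(),
)
# Append-only JSON lines are one simple way to keep the trail tamper-evident.
print(json.dumps(asdict(record)))
```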

The OWASP Top 10 for Large Language Model Applications, updated for 2025, provides specific guidance for securing AI-generated code. The project notes that prompt injection remains the number one concern in securing LLMs, underscoring its critical importance in generative AI security. The framework identifies insecure output handling as a key vulnerability: neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data. Attack scenarios include cross-site scripting, SQL injection, or code execution via unsafe LLM output, as well as LLM-generated Markdown or HTML enabling malicious script injection.

Mitigation strategies recommended by OWASP include treating the model as a user, adopting a zero-trust approach, and ensuring proper input validation for any responses from the model to backend functions. Organisations should encode the model's output before delivering it to users to prevent unintended code execution and implement content filters to eliminate vulnerabilities like cross-site scripting and SQL injection in LLM-generated outputs. Following the OWASP Application Security Verification Standard guidelines with a focus on input sanitisation is essential. Incorporating Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) into the development process helps identify vulnerabilities early.
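
A minimal sketch of the “treat the model as a user” principle might validate and encode LLM output before it reaches a browser or a shell. The blocked patterns here are deliberately simplistic examples; real deployments would layer SAST, DAST, and content filtering on top.

```python
# Sketch of output handling for LLM responses: reject obviously dangerous
# output, then HTML-encode before rendering. Checks are illustrative only.
import html
import re

BLOCKED_OUTPUT_PATTERNS = [
    re.compile(r"<script", re.IGNORECASE),               # script injection in HTML
    re.compile(r"(?i)\b(curl|wget)\b[^|]*\|\s*(ba)?sh"),  # pipe-to-shell one-liners
]

def sanitise_llm_output(text: str) -> str:
    """Validate LLM output, then encode it before delivering it to users."""
    for pattern in BLOCKED_OUTPUT_PATTERNS:
        if pattern.search(text):
            raise ValueError("LLM output failed validation and was discarded")
    return html.escape(text)

print(sanitise_llm_output("Use `requests.get(url, timeout=5)` for the call"))
```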

The principle of least privilege takes on new dimensions when applied to AI agents. Slack's security documentation for AI features emphasises that AI interactions are protected by enterprise-grade safety and security frameworks, providing layered protection across every prompt and response. These protections include content thresholds to avoid hallucinations, prompt instructions that reinforce safe behaviour, provider-level mitigations, context engineering to mitigate prompt injection vulnerabilities, URL filtering to reduce phishing risk, and output validation.

Slack's Real-Time Search API, coming in early 2026, will allow organisations to build custom AI applications that maintain enterprise security standards, providing real-time search access that allows users to interact with data directly. Crucially, when access to a sensitive document is revoked, that change is reflected in the user's next query across all AI systems without waiting for overnight sync jobs.

Preserving Intentionality in the Age of Automation

Perhaps the most subtle but significant challenge of chat-based AI development is the erosion of intentionality. When code could only be written through deliberate effort in an IDE, every line represented a considered decision. When code can be summoned through casual conversation, the distinction between intention and impulse begins to blur.

The JetBrains 2025 survey reveals telling statistics about developer attitudes. Among concerns about AI coding tools, 23 per cent cite inconsistent code quality, 18 per cent point to limited understanding of complex logic, 13 per cent worry about privacy and security, 11 per cent fear negative effects on their skills, and 10 per cent note lack of context awareness. Developers want to delegate mundane tasks to AI but prefer to stay in control of more creative and complex ones. Meanwhile, 68 per cent of developers anticipate that AI proficiency will become a job requirement, and 90 per cent report saving at least an hour weekly using AI tools.

This preference for maintained control reflects a deeper understanding of what makes software development valuable: not the typing, but the thinking. The Pragmatic Engineer newsletter's analysis of how AI-assisted coding will change software engineering observes that the best developers are not the ones who reject AI or blindly trust it. They are the ones who know when to lean on AI and when to think deeply themselves.

The shift to chat-based development creates particular challenges for this discernment. In an IDE, the boundary between human thought and AI suggestion remains relatively clear. In a chat environment, where multiple participants may contribute to a thread, the provenance of each requirement becomes harder to trace. The Capgemini analysis of AI agents in software development emphasises that autonomy in this context refers to systems that self-organise, adapt, and collaborate to achieve a shared goal. The goal is not to automate the whole software development lifecycle, but specific tasks where developers benefit from automation.

This targeted approach requires organisational discipline that many companies have not yet developed. IBM's documentation on the benefits of ChatOps notes that it offers automated workflows, centralised communication, real-time monitoring, and security and compliance features. But it also warns that ChatOps carries its own dangers: organisations need protocols and orchestrators to govern how LLM infrastructure is used, and the critical security implications, data exposure above all, push teams towards internal models or strict usage rules.

The risk is that replacing traditional development with chat-based AI could lead to unmanaged infrastructure if companies do not have proper protocols and guardrails in place for LLM usage. DevOps.com's analysis of AI-powered DevSecOps warns that automated compliance checks may miss context-specific security gaps, leading to non-compliance in highly regulated industries. Organisations should integrate AI-driven governance tools with human validation to maintain accountability and regulatory alignment.

The Human-in-the-Loop Imperative

The emerging consensus among security researchers and enterprise architects is that AI coding agents in chat environments require what is termed a “human-in-the-loop” approach for any sensitive operations. This is not a rejection of automation, but a recognition of its proper boundaries.

Slack's security documentation for its Agentforce product, available since early 2025, describes AI interactions protected by enterprise-grade guardrails. These include content thresholds to avoid hallucinations, prompt instructions that reinforce safe behaviour, and output validation. However, the documentation acknowledges that these technical controls are necessary but not sufficient. The company uses third-party large language models hosted within secure AWS infrastructure; the models do not retain information from requests, and customer data is never used to train them.

The Obsidian Security analysis of AI agent security risks identifies identity-based attacks, especially involving stolen API keys and OAuth tokens, as a rapidly growing threat vector for enterprises using AI agents. In one notable incident, attackers exploited Salesloft-Drift OAuth tokens, which granted them access to hundreds of downstream environments. The blast radius of this supply chain attack was ten times greater than previous incidents.

Best practices for mitigating these risks include using dynamic, context-aware authentication such as certificate-based authentication, implementing short-lived tokens with automatic rotation, and most importantly, requiring human approval for sensitive operations. As the analysis notes, security mitigations should include forcing context separation by splitting different tasks to different LLM instances, employing the principle of least privilege for agents, taking a human-in-the-loop approach for approving sensitive operations, and filtering input for text strings commonly used in prompt injections.
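
The human-in-the-loop gate itself can be expressed very simply: the agent may prepare a sensitive operation, but execution is blocked until a human approval has been recorded, for instance via an approval button in the chat thread. The operation names and approval mechanism below are assumptions made for this sketch.

```python
# Sketch of a human-in-the-loop gate for sensitive agent operations.
# Operation names and the approvals store are illustrative assumptions.
SENSITIVE_OPERATIONS = {"merge_to_main", "modify_auth_flow", "rotate_credentials"}

class ApprovalRequired(Exception):
    """Raised when an operation must wait for an explicit human sign-off."""

def execute_agent_operation(operation: str, payload: dict, approvals: set[str]) -> str:
    """Run the operation only if it is low-risk or has a recorded approval."""
    if operation in SENSITIVE_OPERATIONS and operation not in approvals:
        raise ApprovalRequired(f"'{operation}' queued pending human approval")
    return f"executed {operation}"

try:
    execute_agent_operation("merge_to_main", {"pr": 4821}, approvals=set())
except ApprovalRequired as exc:
    print(exc)  # the agent posts back to the channel and waits for sign-off
```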

The Unit 42 research team at Palo Alto Networks has documented how context attachment features can be vulnerable to indirect prompt injection. To set up this injection, threat actors first contaminate a public or third-party data source by inserting carefully crafted prompts into the source. When a user inadvertently supplies this contaminated data to an assistant, the malicious prompts hijack the assistant. This hijack could manipulate victims into executing a backdoor, inserting malicious code into an existing codebase, and leaking sensitive information.

This threat model makes clear that human oversight cannot be optional. Even the most sophisticated AI guardrails can be circumvented by adversaries who understand how to manipulate the inputs that AI systems consume.

Redefining Roles for the Agentic Era

As AI coding agents become embedded in everyday workflows, the roles of developers, product managers, and technical leaders are being fundamentally redefined. The DevOps community discussion on the evolution from Copilot to autonomous AI suggests that developers' roles may shift to guiding these agents as “intent engineers” or “AI orchestrators.”

This transition requires new skills and new organisational structures. The AWS DevOps blog's analysis of the AI-driven development lifecycle identifies levels of AI autonomy similar to autonomous driving: Level 0 involves no AI-assisted automation; Level 1 provides AI-assisted options where the developer is in full control and receives recommendations; Level 2 involves AI-assisted selection where AI selects pre-defined options; Level 3 provides AI-based partial automation where AI selects options in simple standard cases; and Level 4 involves AI-based full automation where AI operates without the developer. Currently, Levels 1 and 2 are the most common, Level 3 is on the rise, and Level 4 is considered rather unrealistic for complex, industrial-scale software.

The key insight, as articulated in the Capgemini analysis, is that the future is not about AI replacing developers. It is about AI becoming an increasingly capable collaborator that can take initiative whilst still respecting human guidance and expertise. The most effective teams are those that learn to set clear boundaries and guidelines for their AI agents, establish strong architectural patterns, create effective feedback loops, and maintain human oversight whilst leveraging AI autonomy.

This balance requires governance structures that did not exist in the pre-AI era. The Legit Security analysis of DevOps governance emphasises that hybrid governance combines centralised standards with decentralised execution. You standardise core practices like identity management, secure deployment, and compliance monitoring, whilst letting teams adjust the rest to fit their workflows. This balances consistency with agility to support collaboration across diverse environments.

For product managers and non-technical stakeholders, the new environment demands greater technical literacy without the pretence of technical expertise. Whilst AI tools can generate features and predict patterns, the critical decisions about how to implement these capabilities to serve real human needs still rest firmly in human hands. The danger is that casual @mentions become a way of avoiding this responsibility, outsourcing judgment to systems that cannot truly judge.

Towards a Disciplined Future

The integration of AI coding agents into collaboration platforms like Slack represents an inflection point in the history of software development. The potential benefits are enormous: faster iteration, broader participation in the development process, and reduced friction between conception and implementation. But these benefits come with risks that are only beginning to be understood.

The statistics point to a trajectory that cannot be reversed. The global AI agents market reached $7.63 billion in 2025 and is projected to hit $50.31 billion by 2030, according to industry analyses cited by the Cloud Security Alliance. McKinsey's research shows that 88 per cent of organisations now use AI in at least one function, up from 55 per cent in 2023. The question is not whether AI coding agents will become ubiquitous in collaborative environments, but whether organisations will develop the governance maturity to deploy them safely.

The path forward requires action on multiple fronts. First, organisations must implement tiered permission systems that treat AI agents with the same rigour applied to human access, or greater. The principle of least privilege must be extended to every bot that can touch code. Second, audit trails must be comprehensive and immutable, documenting every AI-generated change, who requested it, and what review it received. Third, human approval must remain mandatory for any changes to critical systems, regardless of how convenient chat-based automation might be.

Perhaps most importantly, organisations must resist the cultural pressure to treat chat-based code generation as equivalent to traditional development. The discipline of code review, the intentionality of careful architecture, and the accountability of clear ownership were never bureaucratic obstacles to progress. They were the foundations of engineering integrity.

IT Pro's analysis of AI software development in 2026 warns that developer teams still face significant challenges with adoption, security, and quality control. The Knostic analysis of AI coding assistant governance notes that governance frameworks matter more for AI code generation than traditional development tools because the technology introduces new categories of risk. Without clear policies, teams make inconsistent decisions about when to use AI, how to validate outputs, and what constitutes acceptable generated code.

The convenience of asking an AI to write code in a Slack channel is seductive. But convenience has never been the highest virtue in software engineering. Reliability, security, and maintainability are what distinguish systems that endure from those that collapse. As AI coding agents proliferate through our collaboration platforms, the organisations that thrive will be those that remember this truth, even as they embrace the power of automation.

The next time a product manager types “@CodingBot” into a Slack channel, the response should not be automatic code generation. It should be a series of questions: What is the business justification? Has this been reviewed by security? What is the rollback plan? Is human approval required? Only with these safeguards in place can chat-driven development realise its potential without becoming a vector for chaos.


References and Sources

  1. JetBrains. “The State of Developer Ecosystem 2025.” https://devecosystem-2025.jetbrains.com/
  2. Dark Reading. “As Coders Adopt AI Agents, Security Pitfalls Lurk in 2026.” https://www.darkreading.com/application-security/coders-adopt-ai-agents-security-pitfalls-lurk-2026
  3. Slack. “Securing the Agentic Enterprise.” https://slack.com/blog/transformation/securing-the-agentic-enterprise
  4. GitHub. “November 2025 Copilot Roundup.” https://github.com/orgs/community/discussions/180828
  5. MIT News. “Can AI Really Code? Study Maps the Roadblocks to Autonomous Software Engineering.” July 2025. https://news.mit.edu/2025/can-ai-really-code-study-maps-roadblocks-to-autonomous-software-engineering-0716
  6. Bain and Company. “From Pilots to Payoff: Generative AI in Software Development.” 2025. https://www.bain.com/insights/from-pilots-to-payoff-generative-ai-in-software-development-technology-report-2025/
  7. McKinsey. “How an AI-Enabled Software Product Development Life Cycle Will Fuel Innovation.” https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/how-an-ai-enabled-software-product-development-life-cycle-will-fuel-innovation
  8. Cloud Security Alliance. “Fortifying the Agentic Web: A Unified Zero-Trust Architecture for AI.” September 2025. https://cloudsecurityalliance.org/blog/2025/09/12/fortifying-the-agentic-web-a-unified-zero-trust-architecture-against-logic-layer-threats
  9. Cloud Security Alliance. “Agentic AI and Zero Trust.” August 2025. https://cloudsecurityalliance.org/blog/2025/08/07/agentic-ai-and-zero-trust
  10. Checkmarx. “2025 CISO Guide to Securing AI-Generated Code.” https://checkmarx.com/blog/ai-is-writing-your-code-whos-keeping-it-secure/
  11. IBM. “2025 Cost of a Data Breach Report.” https://www.ibm.com/reports/data-breach
  12. OWASP. “Top 10 for Large Language Model Applications.” https://owasp.org/www-project-top-10-for-large-language-model-applications/
  13. TechTarget. “Security Risks of AI-Generated Code and How to Manage Them.” https://www.techtarget.com/searchsecurity/tip/Security-risks-of-AI-generated-code-and-how-to-manage-them
  14. Qodo. “AI Code Review Tools Compared: Context, Automation, and Enterprise Scale.” 2026. https://www.qodo.ai/blog/best-ai-code-review-tools-2026/
  15. Augment Code. “AI Code Governance Framework for Enterprise Dev Teams.” https://www.augmentcode.com/guides/ai-code-governance-framework-for-enterprise-dev-teams
  16. WorkOS. “AI Agent Access Control: How to Manage Permissions Safely.” https://workos.com/blog/ai-agent-access-control
  17. Cerbos. “Access Control and Permission Management for AI Agents.” https://www.cerbos.dev/blog/permission-management-for-ai-agents
  18. Obsidian Security. “Top AI Agent Security Risks and How to Mitigate Them.” https://www.obsidiansecurity.com/blog/ai-agent-security-risks
  19. Palo Alto Networks Unit 42. “The Risks of Code Assistant LLMs: Harmful Content, Misuse and Deception.” https://unit42.paloaltonetworks.com/code-assistant-llms/
  20. Slack Engineering. “Streamlining Security Investigations with Agents.” https://slack.engineering/streamlining-security-investigations-with-agents/
  21. DX (GetDX). “AI Code Generation: Best Practices for Enterprise Adoption in 2025.” https://getdx.com/blog/ai-code-enterprise-adoption/
  22. Capgemini. “How AI Agents in Software Development Empowers Teams to Do More.” https://www.capgemini.com/insights/expert-perspectives/how-ai-agents-in-software-development-empowers-teams-to-do-more/
  23. DevOps.com. “AI-Powered DevSecOps: Navigating Automation, Risk and Compliance in a Zero-Trust World.” https://devops.com/ai-powered-devsecops-navigating-automation-risk-and-compliance-in-a-zero-trust-world/
  24. Legit Security. “DevOps Governance: Importance and Best Practices.” https://www.legitsecurity.com/aspm-knowledge-base/devops-governance
  25. IT Pro. “AI Could Truly Transform Software Development in 2026.” https://www.itpro.com/software/development/ai-software-development-2026-vibe-coding-security
  26. Knostic. “Governance for Your AI Coding Assistant.” https://www.knostic.ai/blog/ai-coding-assistant-governance
  27. Slack. “Security for AI Features in Slack.” https://slack.com/help/articles/28310650165907-Security-for-AI-features-in-Slack
  28. InfoWorld. “85% of Developers Use AI Regularly.” https://www.infoworld.com/article/4077352/85-of-developers-use-ai-regularly-jetbrains-survey.html

Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
