The Elevation Thesis: Why Democratising Code Will Not Kill Craft

The smartphone in your pocket contains more computing power than the Apollo 11 mission computers. Yet whilst NASA needed rooms full of specialised engineers writing machine code to land humans on the moon, today's citizen developers can build functional applications using drag-and-drop interfaces whilst queuing for coffee. This transformation represents one of the most profound democratisations in human technological history, but it also raises an uncomfortable question: if everyone can code, what becomes of the craft itself?

The numbers tell a story of radical change. Gartner forecasts that by 2025, 70% of new applications developed by enterprises will utilise low-code or no-code technologies, a dramatic leap from less than 25% in 2020. The global low-code development platform market, valued at $10.46 billion in 2024, is projected to explode to $82.37 billion by 2034, growing at a compound annual growth rate of 22.92%. Meanwhile, AI-augmented development tools like GitHub Copilot have been activated by more than one million developers and adopted by over 20,000 organisations, generating over three billion accepted lines of code.

Yet beneath this seductive narrative of universal empowerment lies a deeper tension. The same forces democratising software creation may simultaneously erode the very qualities that distinguish meaningful software from merely functional code. When technical constraints vanish, when implementation becomes trivial, when everyone possesses the tools of creation, does intentionality disappear? Does architectural rigour become an anachronism? Or does the craft of software engineering simply migrate to a different, more rarefied plane where the essential skill becomes knowing what not to build rather than how to build it?

This isn't merely an academic debate. It strikes at the heart of what software means in modern civilisation. We're grappling with questions of expertise, quality, sustainability, and ultimately, meaning itself in an age of abundant creation.

The Abstraction Cascade

The evolution of programming languages is fundamentally a story about abstraction, each generation trading granular control for broader accessibility. This progression has been a defining characteristic of software development history, a relentless march towards making the complex simple.

In the beginning, there was machine code: raw binary instructions understood directly by processors. First-generation programming languages (1GL) required programmers to think in the computer's native tongue, manipulating individual bits and bytes. Second-generation languages (2GL) brought assembly language, where mnemonic codes replaced binary strings. Yet assembly still required deep hardware knowledge and remained tied to specific processor architectures.

The true revolution came with third-generation high-level languages: FORTRAN, COBOL, C, and their descendants. These pioneering languages transformed software development, making applications easier to create, maintain, and debug. By abstracting away hardware-specific details, they let developers focus on solving problems rather than wrestling with machine-code nuances: each relatively abstract, “higher” level builds on a relatively concrete, “lower” one.
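The trade can be seen in miniature even within one modern language. In this illustrative Python sketch (both versions are contrived for the comparison), the first function mimics the explicit state and index arithmetic of low-level code, whilst the second expresses only intent:

```python
# The same computation at two abstraction levels, sketched in Python purely
# as an illustration; real 1GL/2GL code would be binary or assembly.

# "Low level": manual index arithmetic and explicit state, close in spirit
# to how an assembly routine walks a buffer.
def total_low(values):
    total = 0
    i = 0
    while i < len(values):
        total = total + values[i]
        i = i + 1
    return total

# "High level": the loop is expressed as intent, not mechanics.
def total_high(values):
    return sum(values)

assert total_low([1, 2, 3]) == total_high([1, 2, 3]) == 6
```

The lower version exposes every decision the higher version hides, which is exactly the control-for-accessibility trade each generation of languages has made.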

But the abstraction didn't stop there. Object-oriented programming encapsulated data and behaviour into reusable components. Web frameworks abstracted network protocols and browser differences. Cloud platforms abstracted infrastructure management. Each step made software development more accessible whilst raising new questions about what developers needed to understand versus what they could safely ignore.

Programming has always been about building layers of abstraction that make complex systems more accessible. Each major evolution has followed a similar pattern: trade some control for productivity, and open the field to more people. Low-code, no-code, and AI-augmented platforms represent the latest iteration of this ancient pattern.

Yet something feels different this time. Previous abstraction layers still required understanding underlying concepts. A Java programmer might not write assembly code, but they needed to understand memory management, data structures, and algorithmic complexity. The new platforms promise to abstract away not just implementation details but conceptual understanding itself. You don't need to understand databases to build a database-backed application. You simply describe what you want, and the platform materialises it.

This represents a qualitative shift, not merely a quantitative one. We're moving from “making programming easier” to “making programming unnecessary”. The implications cascade through every assumption about technical expertise, professional identity, and software quality.

The Citizen Developer Revolution

Walk into any modern enterprise, and you'll encounter a new species of creator: the citizen developer. These aren't trained software engineers but operations managers, HR professionals, marketing analysts, and finance controllers who build the tools they need using platforms requiring no traditional coding knowledge. The statistics reveal their proliferation: 83% of enterprises with over 5,000 employees report active citizen development programmes. Gartner predicts that by 2026, developers outside formal IT departments will account for at least 80% of low-code development tool users, up from 60% in 2021.

The business case appears compelling. 62% of respondents in a 2024 survey believe citizen development significantly accelerates digital transformation. Tangible benefits include faster response times (76% of tech leaders expect this), increased solution customisation (75% anticipate this), and productivity gains (58% predict over 10% increases). Companies spend an average of 40% of their IT budget maintaining existing software; citizen development promises to redirect that spending towards innovation.

The platforms enabling this revolution have become remarkably sophisticated. Tools like Microsoft Power Apps, Mendix, OutSystems, and Bubble offer visual development environments where complex applications emerge from dragging components onto canvases. AI has accelerated this further; platforms now generate code from natural language descriptions and automatically suggest optimal database schemas.

Yet the citizen developer movement harbours profound tensions. The same speed and accessibility that make these platforms attractive also create new categories of risk. Without proper governance, low-code's speed can lead to technical debt, a hidden yet costly issue undermining long-term scalability, security, and performance, along with proliferating unmanaged apps, inconsistent practices, security gaps, and integration problems.

Consider the fundamental paradox: citizen developers succeed precisely because they don't think like traditional engineers. They focus on immediate business problems rather than architectural elegance. They prioritise working solutions over scalable systems. They solve today's challenges without necessarily considering tomorrow's maintenance burden. This pragmatism is both their strength and their weakness.

Data security remains the top concern for 44% of CIOs when asked about citizen development. Citizen developers, lacking formal security training, may inadvertently create applications with SQL injection vulnerabilities, exposed API keys, or inadequate access controls. They might store sensitive data in ways violating regulatory compliance.
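The injection risk is concrete enough to demonstrate in a few lines. The sketch below uses Python's built-in sqlite3 module and a throwaway in-memory table (the schema and payload are illustrative): string concatenation lets attacker input rewrite the query, whilst a parameterised query treats the same input as inert data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: concatenation lets the payload become part of the SQL itself.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterised query binds the payload as plain data.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe))  # 1: the injected OR clause matched every row
print(len(safe))    # 0: no user is literally named "' OR '1'='1"
```

Visual platforms generate this plumbing invisibly, which is precisely why a citizen developer may never learn the difference between the two queries.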

Beyond security, there's the spectre of technical debt. Jacob Goerz, a technology analyst, contends that “if business users are empowered to build their own tools and can build them rapidly, we are trading one form of technical debt for another.” The debt manifests differently: proliferating applications that nobody maintains, undocumented business logic trapped in visual workflows, and integration spaghetti connecting systems in ways that confound anyone attempting to understand them later.

Gartner analyst Jason Wong notes that “anytime you add customisations via scripting and programming, you introduce technical debt into a low-code or no-code platform.” Low-code platforms immediately turn each drag-and-drop specification into code, often using proprietary languages developers may not understand. The abstraction becomes a black box, working perfectly until it doesn't.

The citizen developer revolution thus presents a Faustian bargain: immediate empowerment purchased with deferred costs in governance, security, and long-term maintainability.

The AI Multiplier

If low-code platforms democratise software creation by removing manual coding, AI-augmented development tools represent something more radical: they transform coding from a human activity into a collaborative dialogue between human intent and machine capability. GitHub Copilot, the most prominent exemplar, has been activated by over one million developers. Controlled experiments showed that developers using GitHub Copilot completed tasks 55.8% faster, and engineering teams at companies like Duolingo achieved a 25% increase in developer velocity.

These aren't marginal improvements; they're transformative shifts. Developers using GitHub Copilot report being up to 55% more productive at writing code and experiencing up to 75% higher job satisfaction. Research suggests the increase in developer productivity from AI could boost global GDP by over $1.5 trillion.

Copilot integrates seamlessly with development environments, providing real-time code suggestions by understanding context and intention. It actively predicts patterns, making repetitive coding tasks significantly faster. By automating parts of quality engineering and testing processes, AI helps developers maintain high-quality code with less manual effort.

The impact varies by experience level. Studies found that less experienced developers gain greater advantages from tools like GitHub Copilot, showing promise for AI pair programmers to help people transition into software development careers. This democratising effect could fundamentally reshape who becomes a developer and how they learn the craft.

Yet AI-augmented development introduces its own paradoxes. Copilot's suggestions can be less than optimal or incorrect, especially for complex logic or edge cases, requiring developers to review and validate generated code. This creates an interesting inversion: instead of writing code and checking if it works, developers now read AI-generated code and determine if it's correct. The cognitive skill shifts from creation to evaluation, from synthesis to analysis.
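The inversion from writing to evaluating can be made concrete. In this hypothetical Python sketch (the task and the flawed suggestion are invented for illustration), an AI-style suggestion for a median function looks plausible but fails on unsorted and even-length input; the reviewer's skill lies in probing exactly those edge cases:

```python
# A hypothetical AI suggestion for "return the median of a list": it reads
# plausibly but forgets to sort and mishandles even-length input.
def median_suggested(xs):
    return xs[len(xs) // 2]

# The reviewer's corrected version, arrived at by evaluating edge cases
# rather than by writing the routine from scratch.
def median_reviewed(xs):
    s = sorted(xs)
    mid = len(s) // 2
    if len(s) % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

assert median_suggested([3, 1, 2]) != 2   # the plausible version is wrong
assert median_reviewed([3, 1, 2]) == 2
assert median_reviewed([4, 1, 3, 2]) == 2.5
```

The generated version passes a casual glance; only deliberate analysis, the skill the paragraph above describes, exposes it.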

This shift has profound implications for expertise. Traditional programming education emphasises understanding why code works and building mental models of program execution. But when AI generates code, what knowledge becomes essential? Do developers still need to understand algorithmic complexity if AI handles optimisation?

Some argue AI elevates developers from implementation mechanics to higher-order design thinking. Instead of sweating syntax and boilerplate code, they focus on architecture, user experience, and business logic. In this view, AI doesn't diminish expertise; it refocuses it on aspects machines handle poorly: ambiguity resolution, stakeholder communication, and holistic system design.

Others worry that relying on AI-generated code without deep understanding creates brittle expertise. When AI suggests suboptimal solutions, will developers recognise the deficiencies? There's a risk of creating developers who can prompt AI effectively but struggle to understand the systems they nominally control.

The consensus emerging from early adoption suggests AI works best as an amplifier of existing expertise rather than a replacement. Experienced developers use AI to accelerate work whilst maintaining critical oversight. They leverage AI for boilerplate generation and routine implementation whilst retaining responsibility for architectural decisions, security considerations, and quality assurance.

GitHub Copilot has generated over three billion accepted lines of code, representing an unprecedented transfer of implementation work from humans to machines. The question isn't whether AI can write code (it demonstrably can), but whether AI-written code possesses the same intentionality, coherence, and maintainability as code written by thoughtful humans. The answer appears to be: it depends on the human wielding the AI.

The Craftsmanship Counterpoint

Amidst the democratisation narrative, a quieter but persistent voice advocates for something seemingly contradictory: software craftsmanship. In December 2008, aspiring software craftsmen met in Libertyville, Illinois to establish principles for software craftsmanship, eventually presenting their conclusions as the Manifesto for Software Craftsmanship. The manifesto articulates four core values extending beyond Agile's focus on working software:

  1. Not only working software, but also well-crafted software
  2. Not only responding to change, but also steadily adding value
  3. Not only individuals and interactions, but also a community of professionals
  4. Not only customer collaboration, but also productive partnerships

Software craftsmanship emphasises the coding skills of software developers, drawing a metaphor between modern software development and the apprenticeship model of medieval Europe. It represents a fundamental assertion: that how software is built matters as much as whether it works. The internal quality of code, its elegance, its maintainability, its coherence, possesses intrinsic value beyond mere functionality.

This philosophy stands in stark tension with democratisation. Craftsmanship requires time, deliberate practice, mentorship, and deep expertise. It celebrates mastery that comes from years of experience, intuition distinguishing expert from novice, tacit knowledge that cannot be easily codified or automated.

The practical manifestations include test-driven development, rigorous code review, refactoring for clarity, and adherence to design principles. Craftsmen argue these practices create software that's not just functional but sustainable: systems that adapt gracefully to changing requirements, code that future developers can understand and modify, architectures that remain coherent as they evolve.
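Test-driven development, the most concrete of these practices, can be sketched in a few lines of Python (the slugify task and its rules are invented for illustration): the test is written first and pins down behaviour, the implementation follows, and refactoring then happens under the test's protection.

```python
# Step 1 (red): the test is written first and fails, because slugify
# does not yet exist. It documents intended behaviour.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("Craft, elevated!") == "craft-elevated"

# Step 2 (green): just enough implementation to make the test pass.
def slugify(title):
    # Replace punctuation with spaces, split into words, join with hyphens.
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

# Step 3 (refactor): the implementation can now be restructured freely;
# the test guards against regression.
test_slugify()
```

The value craftsmen claim for this loop is not the test itself but the discipline it imposes: behaviour is specified before it is built.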

Critics accuse craftsmanship of elitism, of valuing aesthetic preferences over business outcomes. They argue “well-crafted” is subjective, that perfect code shipped late is worthless, and that in an era where speed determines competitive advantage, craftsmanship is a luxury few can afford.

Yet proponents counter this misunderstands the time scale of value creation. Poorly structured code might deliver features faster initially but accumulates technical debt slowing all future development. Systems built without architectural rigour become increasingly difficult to modify, eventually reaching states where any change risks catastrophic failure.

Research on technical debt in agile contexts validates this concern. The most popular causes of incurring technical debt in agile software development are “focus on quick delivery” and “architectural and design issues”. Agile methodologies use continuous delivery and adaptability to develop software meeting user needs, but such methods are prone to accumulating technical debt. The paradox emerges clearly: agility values immediate functionality over long-term code quality, which inherently encourages technical debt accrual, yet agility's iterative nature offers an ideal setting for addressing technical debt.

The craftsmanship movement articulates a vital counterpoint to pure democratisation. It insists that expertise matters, that quality exists on dimensions invisible to end-users, and that long-term sustainability requires discipline and skill.

But here's where the tension becomes most acute: if craftsmanship requires years of dedicated practice, how does it coexist with platforms promising anyone can build software? Can craftsmanship principles apply to citizen-developed applications? Does AI-generated code possess craft?

Intentionality in the Age of Abundance

The democratisation of software creation produces an unexpected consequence: abundance without curation. When building software is difficult, scarcity naturally limits what gets built. Technical barriers act as filters, ensuring only ideas with sufficient backing overcome the implementation hurdle. But when those barriers dissolve, when creating software becomes as easy as creating a document, we face a new challenge: deciding what deserves to exist.

This is where intentionality becomes critical. Intentional architecture, as defined in software engineering literature, is “a purposeful set of statements, models, and decisions that represent some future architectural state”. The purpose of software architecture is to bring order and intentionality to the design of software systems. But intentionality operates on multiple levels: not just how we build, but why we build and what we choose to build.

The ongoing discussion in software architecture contrasts intentional architecture (planned, deliberate, involving upfront design) with emergent design (extending and improving architecture as needed). Neither extreme proves optimal; the consensus suggests balancing intentionality and emergence is essential. Yet this balance requires judgment, experience, and understanding of trade-offs, qualities that democratised development tools don't automatically confer.

Consider what happens when technical constraints vanish. A citizen developer identifies a business problem and, within hours, constructs an application addressing it. The application works. Users adopt it. Value is created. But was this the right solution? Might a different approach have addressed not just the immediate problem but a broader category of issues? Does this application duplicate functionality elsewhere in the organisation? Will anyone maintain it when the creator moves to a different role?

These questions concern intentionality at the system level, not just the code level. They require stepping back from immediate problem-solving to consider broader context, long-term implications, and architectural coherence. They demand expertise not in building things but in knowing whether things should be built, and if so, how they integrate with the larger ecosystem.

Democratised development tools excel at implementation but struggle with intentionality. They make building easy but provide little guidance on whether to build. They optimise for individual productivity but may undermine organisational coherence. They solve the “how” brilliantly whilst leaving the “why” and “what” largely unaddressed.

This creates a profound irony: the very accessibility that democratises creation also demands higher-order expertise to manage its consequences. When anyone can build software, someone must curate what gets built, ensure integration coherence, manage the proliferation of applications, and maintain architectural vision preventing organisational software from fragmenting into chaos.

The skill, then, migrates from writing code to making judgments: judgments about value, sustainability, integration, and alignment with organisational goals. It becomes less about technical implementation and more about systems thinking, less about algorithms and more about architecture, less about individual applications and more about the holistic digital ecosystem.

Intentionality also extends to the experiential dimension: not just what software does but how it feels to use, what values it embodies, and what second-order effects it creates. In an age where software mediates increasing amounts of human experience, these considerations matter profoundly.

Yet democratised development tools rarely engage with these questions. They optimise for functionality, not meaning. They measure success in working features, not in coherent experiences or embodied values.

This represents perhaps the deepest tension in democratisation: whether software reduced to pure functionality, stripped of craft and intentionality, can still serve human flourishing in the ways software created with care and purpose might. When everyone can code, the challenge becomes ensuring what gets coded actually matters.

The Elevation Thesis

Perhaps the dichotomy is false. Perhaps democratisation doesn't destroy expertise but transforms it, elevating craft to a different plane where different skills matter. Several threads of evidence support this more optimistic view.

First, consider historical precedent. When high-level programming languages emerged, assembly programmers worried abstraction would erode understanding and produce inferior software. They were partially correct: fewer modern developers understand processor architecture. But they were also profoundly wrong: high-level languages didn't eliminate expertise; they redirected it towards problems machines handle poorly (business logic, user experience, system design) and away from problems machines handle well (memory management, instruction scheduling).

The abstraction layers that democratised programming simultaneously created new domains for expertise. Performance optimisation moved from hand-tuned assembly to algorithm selection and data structure design. Security shifted from buffer overflow prevention to authentication architecture and threat modelling. Expertise didn't disappear; it migrated and transformed.

Current democratisation may follow a similar pattern. As implementation becomes automated, expertise concentrates on aspects machines can't easily automate: understanding stakeholder needs, navigating organisational politics, designing coherent system architectures, evaluating trade-offs, and maintaining long-term vision. These skills, often termed “soft skills” but more accurately described as high-level cognitive and social capabilities, become the differentiators.

Research on GitHub Copilot usage reveals this pattern emerging. Experienced developers leverage AI for routine implementation whilst maintaining critical oversight of architecture, security, and quality. They use AI to accelerate mechanical aspects of development, freeing cognitive capacity for conceptual challenges requiring human judgment. The AI handles boilerplate; the human handles the hard parts.

Second, consider the role of governance and platform engineering. The proliferation of citizen developers and AI-augmented tools creates demand for a new expertise category: those who design guardrails, create reusable components, establish standards, and build the platforms on which others build. This isn't traditional coding, but it requires deep technical knowledge combined with organisational understanding and system design capability.

83% of enterprises with active citizen development programmes also report implementing governance frameworks. These frameworks don't emerge spontaneously; they require expert design. Someone must create component libraries enabling consistent, secure development. Someone must architect integration patterns preventing chaos. This work demands expertise at a higher abstraction level than traditional development.
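What such a guardrail might look like can be sketched simply. In this hypothetical Python example (the component name, fields, and roles are all invented), the platform team wraps data access so that access control and schema hygiene are defaults rather than choices left to each citizen-built app:

```python
# Hypothetical sketch of a "governed component": names and rules are
# illustrative, not drawn from any real platform. Citizen-built apps call
# save_record; the platform team's wrapper enforces the guardrails.

ALLOWED_FIELDS = {"name", "email"}  # schema curated by the platform team

def save_record(record, user_roles):
    # Guardrail 1: access control is a platform default, not an app choice.
    if "editor" not in user_roles:
        raise PermissionError("editor role required to save records")
    # Guardrail 2: only whitelisted fields survive, whatever the app sends.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# A citizen-built app gets security and schema hygiene without asking for
# them: the stray "debug" field is silently dropped.
assert save_record({"name": "Ada", "debug": True}, {"editor"}) == {"name": "Ada"}
```

Designing such defaults well is exactly the higher-abstraction expertise the paragraph above describes.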

Third, craftsmanship principles adapt rather than obsolete. Well-crafted software remains superior to poorly crafted software even when created with low-code tools. The manifestation of craft changes: instead of elegant code, it might be well-designed workflows, thoughtful data models, or coherent component architectures. The underlying values (clarity, maintainability, sustainability) persist even as the medium transforms.

Evolutionary architecture, described as “the approach to building software that's designed to evolve over time as business priorities change, customer demands shift, and new technologies emerge”, is “forged by the perfect mix between intentional architecture and emergent design”. This philosophy applies equally whether implementation happens through hand-written code, low-code platforms, or AI-generated logic. The expertise lies in balancing intention and emergence, not in the mechanics of typing code.

Fourth, democratisation creates its own expertise hierarchies. Not all citizen developers are equally effective. Some produce coherent, maintainable applications whilst others create tangled messes. Expertise in wielding democratised tools effectively (knowing their affordances and limitations, producing quality outputs despite simplified interfaces) becomes a skill in itself.

The elevation thesis suggests that each wave of democratisation expands the pool of people who can perform routine tasks whilst simultaneously raising the bar for what constitutes expert-level work. More people can build basic applications, but architecting robust, scalable, secure systems becomes more valuable precisely because it requires navigating complexity that democratised tools can't fully abstract away.

This doesn't mean everyone benefits equally from the transition. Traditional coding skills may become less valuable relative to architectural thinking, domain expertise, and stakeholder management. The transition creates winners and losers, as all technological transformations do.

But the thesis challenges the narrative that democratisation inevitably degrades quality or eliminates expertise. Instead, it suggests expertise evolves, addressing new challenges at higher abstraction levels whilst delegating routine work to increasingly capable tools. The craft doesn't disappear; it ascends.

Designing for Meaning

If we accept that democratisation is inevitable and potentially beneficial, the critical question becomes: how do we ensure that abundant creation doesn't devolve into meaningless proliferation? How do we design for meaning rather than merely for function when everyone possesses the tools of creation?

This question connects to deeper philosophical debates about technology and human values. John Ousterhout's A Philosophy of Software Design defines complexity as “anything related to the structure of a software system that makes it hard to understand and modify the system”, shifting focus from what software does to how it's designed for understanding and maintainability. But we might extend this further: what would it mean to design software not just for maintainability but for meaningfulness?

Meaningfulness in software might encompass several dimensions. First, alignment with genuine human needs rather than superficial wants or artificial problems. The ease of creation tempts us to build solutions searching for problems. Designing for meaning requires disciplined inquiry into whether proposed software serves authentic purposes.

Second, coherence with existing systems and practices. Software participates in ecosystems of tools, workflows, and human activities. Meaningful software integrates thoughtfully, enhancing rather than fragmenting the systems it joins.

Third, sustainability across time. Meaningful software considers its lifecycle: who will maintain it, how it will evolve, what happens when original creators move on. Can future developers understand this system? Can it adapt to changing requirements?

Fourth, embodiment of values. Software encodes assumptions about users, workflows, and what matters. Meaningful software makes these assumptions explicit and aligns them with the values of the communities it serves.

Fifth, contribution to human capability rather than replacement. The most meaningful software augments human judgment and creativity rather than attempting to eliminate them.

Achieving these dimensions of meaning requires what we might call “meta-expertise”: not just skill in building software but wisdom in deciding what software should exist and how it should relate to human flourishing. This expertise cannot be fully codified into development platforms because it requires contextual judgment, ethical reasoning, and long-term thinking that resists algorithmic capture.

The challenge facing organisations embracing democratised development is cultivating this meta-expertise whilst empowering citizen developers. Several approaches show promise: establishing centres of excellence that mentor citizen developers in system thinking and design philosophy, creating review processes evaluating proposed applications on dimensions beyond functionality, and developing shared vocabularies for discussing software quality and sustainability.

Educational institutions face parallel challenges: if coding mechanics become increasingly automated, what should computer science education emphasise? Perhaps greater focus on computational thinking divorced from specific languages, on software architecture and system design, on ethics and values in technology, on communication and collaboration.

Ultimately, designing for meaning in abundant software requires cultural shifts as much as technical solutions. We need to cultivate appreciation for restraint, for the applications we choose not to build. We need to celebrate coherence and integration as achievements equal to novel creation. We need to recognise that in a world where everyone can code, the differentiating skill becomes knowing what deserves coding and having the judgment to execute on that knowledge with care.

Democratisation and Expertise as Complements

The anxiety underlying the question posed at the beginning (when everyone can code, does craft disappear?) rests on a false dichotomy: that democratisation and expertise exist in zero-sum competition. The evidence suggests otherwise.

Democratisation expands the base of people who can create functional software, solving the mismatch between demand for software solutions and supply of professional developers. Gartner's prediction that 70% of new applications will use low-code or no-code technologies by 2025 reflects this reality: given ever-increasing demand for software solutions, it will eventually become impossible to rely only on expert software engineers.

But democratisation also creates new demand for expertise. Someone must build the platforms that democratise creation. Someone must govern their use. Someone must maintain architectural coherence across proliferating applications. Someone must make high-level design decisions that platforms can't automate. The nature of expertise shifts, but its necessity persists.

Moreover, democratisation and craftsmanship can coexist in symbiosis. Professional developers can focus on complex, critical systems where quality and sustainability justify the investment in expertise. Citizen developers can address the long tail of niche needs that professional developers couldn't economically serve. The platforms can incorporate craftsmanship principles (security best practices, accessibility guidelines, performance optimisation) as defaults.

The consensus emerging from low-code adoption experiences suggests that low-code is not a silver bullet to solve technical debt, and maintaining the highest levels of quality and performance still requires expert involvement. Hybrid models work best: platforms for routine needs, professional development for complex systems, and experts providing governance and guidance across both.

Intentionality and architectural rigour don't erode in this model; they become more important precisely because they can't be fully automated. As implementation mechanics get abstracted away, the aspects requiring human judgment (what to build, how to design for evolution, how to balance competing concerns) gain prominence. The craft elevates from syntax and algorithms to strategy and system design.

The real risk isn't that democratisation destroys expertise but that we fail to adapt our institutions, education, and professional development to cultivate the new forms of expertise that democratisation demands. If we continue training developers primarily in coding mechanics whilst neglecting system design, stakeholder communication, and architectural thinking, we'll create a mismatch between skills and needs.

The evidence from GitHub Copilot adoption is instructive: productivity gains are largest when AI augments existing expertise rather than replacing it. The same pattern likely applies to low-code and no-code platforms. They amplify capability; they don't replace judgment.

Building Better, Not Just Building More

The democratisation of software creation represents one of the most consequential technological shifts of our era. The numbers are staggering: markets growing at 20% to 30% annually, 80% of development eventually occurring outside traditional IT departments, billions of lines of AI-generated code, productivity gains exceeding 50%. These changes are neither reversible nor ignorable.

But the question posed at the beginning reveals a deeper anxiety about meaning and value in an age of technological abundance. If building software becomes trivial, what distinguishes good software from bad? If technical barriers vanish, what prevents proliferation without purpose? If anyone can create, how do we ensure what gets created actually matters?

The answer emerging from experience, research, and philosophical reflection is nuanced. Craft doesn't disappear; it transforms. The skills that matter shift from implementation mechanics toward system design, from coding syntax toward architectural thinking, from building individual applications toward maintaining coherent ecosystems. Intentionality becomes more critical precisely because it can't be automated. The ability to decide what not to build, to design for meaning rather than mere function, and to balance immediate needs with long-term sustainability: these capabilities distinguish expertise in the democratised era.

This transformation requires rethinking professional identity, restructuring education, redesigning organisational processes, and cultivating new forms of meta-expertise. It demands that we resist the seduction of building simply because we can, that we develop cultural appreciation for restraint and coherence, that we design governance systems ensuring democratisation doesn't devolve into chaos.

The software craftsmanship manifesto's insistence on well-crafted software, steadily adding value, professional community, and productive partnerships becomes more relevant, not less, in the democratised era. But craftsmanship must adapt: from code elegance to system coherence, from individual mastery to collaborative governance, from artisanal creation to platform architecture.

The promise of democratisation isn't that everyone becomes an expert software engineer (they won't and needn't). The promise is that people can solve their own problems without waiting for scarce expert attention, that organisations can respond faster to opportunities, that the gap between idea and implementation narrows. But realising this promise without creating unsustainable messes requires expertise at higher abstraction levels: in platform design, governance frameworks, architectural vision, and the cultivation of intentionality even in simplified creation environments.

We're living through a grand experiment in what happens when software creation tools become abundant and accessible. Early results suggest both tremendous opportunity and real risks. The outcome depends on choices we make now about how to structure these tools, educate their users, govern their outputs, and define what software excellence means in contexts where everyone can code.

The craft of software engineering isn't disappearing. It's elevating to a plane where the essential skills are knowing what deserves building, designing systems that hang together coherently, embedding quality and sustainability into the platforms everyone uses, and ultimately, creating software that serves human flourishing rather than merely executing functions. When everyone can code, the real expertise lies in ensuring what gets coded actually matters.

That's a craft worth cultivating, and it's more necessary now than ever.


Sources and References

  1. Precedence Research. (2024). “Low-Code Development Platform Market Size to Surpass USD 82.37 Bn by 2034.” https://www.precedenceresearch.com/low-code-development-platform-market

  2. Gartner. (2021-2024). “Low-Code and No-Code Development Technologies Forecast.” Multiple reports and press releases. https://www.gartner.com/en/newsroom/press-releases/2021-11-10-gartner-says-cloud-will-be-the-centerpiece-of-new-digital-experiences

  3. Kissflow. (2024). “The 2024 Citizen Development Trends Report.” https://kissflow.com/citizen-development/citizen-development-trends-report/

  4. Kissflow. (2024). “Kissflow launches 2024 Citizen Development Trends Report.” PR Newswire, October 2024. https://www.prnewswire.com/news-releases/kissflow-launches-2024-citizen-development-trends-report-302258507.html

  5. Peng, S., et al. (2023). “The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.” arXiv:2302.06590. https://arxiv.org/abs/2302.06590

  6. GitHub. (2023). “Research: quantifying GitHub Copilot's impact on developer productivity and happiness.” The GitHub Blog. https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/

  7. GitHub. (2023). “The economic impact of the AI-powered developer lifecycle and lessons from GitHub Copilot.” The GitHub Blog. https://github.blog/news-insights/research/the-economic-impact-of-the-ai-powered-developer-lifecycle-and-lessons-from-github-copilot/

  8. GitHub Customer Stories. “How Duolingo uses GitHub.” https://github.com/customer-stories/duolingo

  9. TechCrunch. (2025). “GitHub Copilot crosses 20M all-time users.” 30 July 2025. https://techcrunch.com/2025/07/30/github-copilot-crosses-20-million-all-time-users/

  10. Communications of the ACM. (2024). “Measuring GitHub Copilot's Impact on Productivity.” https://cacm.acm.org/research/measuring-github-copilots-impact-on-productivity/

  11. Manifesto for Software Craftsmanship. (2009). Libertyville, Illinois meeting, December 2008; published March 2009. https://manifesto.softwarecraftsmanship.org/

  12. Wikipedia. “Software craftsmanship.” https://en.wikipedia.org/wiki/Software_craftsmanship

  13. InfoQ. (2009). “Software Craftsmanship Manifesto: A Call to Arms.” March 2009. https://www.infoq.com/news/2009/03/software_craftsmanship/

  14. Ramač, R., et al. (2024). “Analyzing the concept of technical debt in the context of agile software development: A systematic literature review.” Information and Software Technology, ScienceDirect. https://www.sciencedirect.com/science/article/abs/pii/S0950584916302890

  15. Alzaghoul, E., & Bahsoon, R. (2024). “Technical debt and agile software development practices and processes: An industry practitioner survey.” Information and Software Technology, ScienceDirect. https://www.sciencedirect.com/science/article/pii/S0950584917305098

  16. The Open Group. “Intentional Architecture.” The Open Group Agile Architecture Standard. https://pubs.opengroup.org/architecture/o-aa-standard/intentional-architecture.html

  17. Rossini, S. (2024). “Agile Architecture Part.2: Intentional, Emergent & Evolutionary Architectures.” Medium, September 2024. https://medium.com/@stefano.rossini.mail/agile-architecture-intentional-emergent-evolutionary-architectures-da77905098fc

  18. Muthukumarana, D. “Balancing Emergent Design and Intentional Architecture in Agile Software Development.” Medium. https://dilankam.medium.com/balancing-emergent-design-and-intentional-architecture-in-agile-software-development-889b07d5ccb9

  19. CIO Dive. “Low code offers a glimmer of hope for paying off technical debt.” https://www.ciodive.com/news/no-code-codeless-low-code-software-development-unqork/640798/

  20. Merak Systems. (2019). “Technical Debt – The promise and peril of low-code applications.” By Jacob Goerz, 14 October 2019. https://www.meraksystems.com/blog/2019/10/14/technical-debt-the-promise-and-peril-of-low-code-applications.html

  21. Wong, J. (Gartner). Referenced in multiple sources regarding low-code technical debt and customizations. Gartner analyst profile: https://www.gartner.com/en/experts/jason-wong

  22. Quixy. “Game-Changing Top 60 No-Code Low-Code Citizen Development Statistics.” https://quixy.com/blog/no-code-low-code-citizen-development-statistics-facts/

  23. IT Brief Asia. (2024). “83% of firms confirm active citizen development programs – report.” https://itbrief.asia/story/83-of-firms-confirm-active-citizen-development-programs-report

  24. Stack Overflow. (2024). “AI | 2024 Stack Overflow Developer Survey.” https://survey.stackoverflow.co/2024/ai

  25. Intelligent CIO North America. (2023). “AI developer productivity could boost global GDP by over $1.5 trillion by 2030.” 10 July 2023. https://www.intelligentcio.com/north-america/2023/07/10/ai-developer-productivity-could-boost-global-gdp-by-over-1-5-trillion-by-2030/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
