You Can Download AI for Free: But Running It Costs Thousands

The artificial intelligence industry stands at a crossroads. On one side, proprietary giants like OpenAI, Google, and Anthropic guard their model weights and training methodologies with the fervour of medieval warlords protecting castle secrets. On the other, a sprawling, chaotic, surprisingly powerful open-source movement is mounting an insurgency that threatens to democratise the most transformative technology since the internet itself.
The question isn't merely academic. It's existential for the future of AI: Can community-driven open-source infrastructure genuinely rival the proprietary stacks that currently dominate production-grade artificial intelligence? And perhaps more importantly, what governance structures and business models will ensure these open alternatives remain sustainable, safe, and equitably accessible to everyone, not just Silicon Valley elites?
The answer, as it turns out, is both more complex and more hopeful than you might expect.
Open Source Closes the Gap
For years, the conventional wisdom held that open-source AI would perpetually trail behind closed alternatives. Proprietary models like GPT-4 and Claude dominated benchmarks, whilst open alternatives struggled to keep pace. That narrative has fundamentally shifted.
Meta's release of LLaMA models has catalysed a transformation in the open-source AI landscape. The numbers tell a compelling story: Meta's LLaMA family has achieved more than 1.2 billion downloads as of late 2024, with models being downloaded an average of one million times per day since the first release in February 2023. The open-source community has published over 85,000 LLaMA derivatives on Hugging Face alone, an increase of more than five times since the start of 2024.
The performance gap has narrowed dramatically. Code LLaMA with additional fine-tuning managed to beat GPT-4 on the HumanEval programming benchmark. Both LLaMA 2-70B and GPT-4 achieved near human-level accuracy of around 84 per cent on fact-checking tasks. When comparing LLaMA 3.3 70B with GPT-4o, the open-source model remains highly competitive, especially when considering factors like cost, customisation, and deployment flexibility.
Mistral AI, a French startup that raised $645 million at a $6.2 billion valuation in June 2024, has demonstrated that open-source models can compete at the highest levels. Their Mixtral 8x7B model outperforms the 70 billion-parameter LLaMA 2 on most benchmarks with six times faster inference, and also outpaces OpenAI's GPT-3.5 on most metrics. Distributed under the Apache 2.0 licence, it can be used commercially for free.
The ecosystem has matured rapidly. Production-grade open-source frameworks now span every layer of the AI stack. LangChain supports both synchronous and asynchronous workflows suitable for production pipelines. SuperAGI is designed as a production-ready framework with extensibility at its core, featuring a graphical interface combined with support for multiple tools, memory systems, and APIs that enable developers to prototype and scale agents with ease.
Managed platforms are emerging to bridge the gap between open-source potential and enterprise readiness. Cake, which raised $10 million in seed funding from Google's Gradient Ventures in December 2024, integrates the various layers that constitute the AI stack into a more digestible, production-ready format suitable for business. The Finnish company Aiven offers managed open-source data infrastructure, making it easier to deploy production-grade AI systems.
When Free Isn't Actually Free
Here's where the open-source narrative gets complicated. Whilst LLaMA models are free to download, the infrastructure required to run them at production scale is anything but.
The economics are sobering. Training, fine-tuning, and running inference at scale consume expensive GPU resources that can easily exceed any licensing fees incurred for proprietary technology. Infrastructure costs look like simple compute and storage line items until unexpected scaling requirements hit, and proof-of-concept setups often fall apart under real traffic patterns.
Developers can run inference on LLaMA 3.1 at roughly 50 per cent the cost of using closed models like GPT-4o, according to industry analysis. However, gross margins for AI companies average 50 to 60 per cent compared to 80 to 90 per cent for traditional software-as-a-service businesses, with 67 per cent of AI startups reporting that infrastructure costs are their number one constraint to growth.
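The arithmetic behind such comparisons is worth making explicit. The sketch below uses assumed figures (not provider quotes) for GPU rental, sustained throughput, and API token pricing to show how a self-hosted deployment can land near half the per-token cost of a closed API:

```python
# Back-of-the-envelope inference cost comparison. All figures are
# illustrative assumptions, not quoted provider prices.

def cost_per_million_tokens(price_per_gpu_hour, tokens_per_second):
    """Cost to generate one million tokens on a single self-hosted GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return price_per_gpu_hour * 1_000_000 / tokens_per_hour

# Assumption: one GPU rented at $2.50/hour, sustaining ~200 tokens/second.
self_hosted = cost_per_million_tokens(2.50, 200)
closed_api = 7.00  # assumed closed-model price per million output tokens

print(f"Self-hosted: ${self_hosted:.2f} per million tokens")
print(f"Closed API:  ${closed_api:.2f} per million tokens")
print(f"Self-hosting runs at {self_hosted / closed_api:.0%} of the API cost")
```

Under these particular assumptions the ratio comes out near 50 per cent; in practice, utilisation, batching efficiency, and engineering overhead shift the numbers substantially in either direction.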
The infrastructure arms race is staggering. Nvidia's data centre revenue surged by 279 per cent year-over-year, reaching $14.5 billion in the third quarter of 2023, primarily driven by demand for large language model training. Some projections suggest infrastructure spending could reach $3 trillion to $4 trillion over the next ten years.
This creates a paradox: open-source models democratise access to AI capabilities, but the infrastructure required to utilise them remains concentrated in the hands of cloud giants. Spot instances can reduce costs by 60 to 80 per cent for interruptible training workloads, but navigating this landscape requires sophisticated technical expertise.
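The spot-instance trade-off can be sketched in the same spirit: the headline discount is large, but interruptions force restarts from the last checkpoint and waste some completed work, so the effective saving is smaller. The discount and waste figures below are illustrative assumptions:

```python
# Sketch: effective saving from spot instances on an interruptible
# training job. All figures are illustrative assumptions.

def effective_cost(on_demand_rate, hours, spot_discount=0.7, wasted_fraction=0.1):
    """Return (on-demand cost, spot cost) for the same amount of useful work."""
    on_demand = on_demand_rate * hours
    # Spot pays the discounted rate, but interrupted work must be redone,
    # inflating the billed hours by wasted_fraction.
    spot = on_demand_rate * (1 - spot_discount) * hours * (1 + wasted_fraction)
    return on_demand, spot

on_demand, spot = effective_cost(on_demand_rate=3.00, hours=100)
print(f"On-demand: ${on_demand:.2f}, spot: ${spot:.2f}")
print(f"Effective saving: {1 - spot / on_demand:.0%}")
```

With a 70 per cent discount and 10 per cent wasted work, the effective saving lands at 67 per cent, inside the 60 to 80 per cent band cited above.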
Monetisation Models
If open-source AI infrastructure is to rival proprietary alternatives, it needs sustainable business models. The community has experimented with various approaches, with mixed results.
The Hugging Face Model
Hugging Face, valued at $4.5 billion in a Series D funding round led by Salesforce in August 2023, has pioneered a hybrid approach. Whilst strongly championing open-source AI and encouraging collaboration amongst developers, it keeps its core infrastructure and some enterprise features proprietary, whilst its most valuable assets, the vast collection of user-contributed models and datasets, remain entirely open source.
Chief executive Clem Delangue has emphasised that “finding a profitable, sustainable business model that doesn't prevent us from doing open source and sharing most of the platform for free was important for us to be able to deliver to the community.” Investors include Google, Amazon, Nvidia, AMD, Intel, IBM, and Qualcomm, demonstrating confidence that the model can scale.
The Stability AI Rollercoaster
Stability AI's journey offers both warnings and hope. The creator of Stable Diffusion combines open-source model distribution with commercial API services and enterprise licensing, releasing model weights under permissive licences whilst generating revenue through hosted API access, premium features, and enterprise support.
Without a clear revenue model initially, the company's financial health deteriorated rapidly, with around $100 million in debt coupled with $300 million in future obligations. However, in December 2024, new CEO Prem Akkaraju reported that the company was growing at triple-digit rates and had eliminated its debt, with continued expansion expected into film, television, and large-scale enterprise integrations in 2025. The turnaround demonstrates that open-source AI companies can find sustainable revenue streams, but the path is treacherous.
The Red Hat Playbook
Red Hat's approach to open-source monetisation, refined over decades with Linux, offers a proven template. Red Hat OpenShift AI provides enterprise-grade support, lifecycle management, and intellectual property indemnification. This model works because enterprises value reliability, support, and indemnification over raw access to technology, paying substantial premiums for guaranteed uptime, professional services, and someone to call when things break.
Emerging Hybrid Models
The market is experimenting with increasingly sophisticated hybrid approaches. Consumption-based pricing has emerged as a natural fit for AI-plus-software-as-a-service products that perform work instead of merely supporting it. Hybrid models work especially well for enterprise AI APIs where customers want predictable base costs with the ability to scale token consumption based on business growth.
Some companies are experimenting with outcome-based pricing. Intercom abandoned traditional per-seat pricing for a per-resolution model, charging $0.99 per AI-resolved conversation instead of $39 per support agent, aligning revenue directly with value delivered.
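A quick calculation with the per-seat and per-resolution prices quoted above shows where the two pricing models cross over for a hypothetical team:

```python
# Break-even between per-seat and per-resolution pricing, using the
# figures quoted in the text. The 10-agent team is a hypothetical example.

SEAT_PRICE = 39.00        # per support agent per month
RESOLUTION_PRICE = 0.99   # per AI-resolved conversation

def monthly_cost_per_seat(agents):
    return agents * SEAT_PRICE

def monthly_cost_per_resolution(resolutions):
    return resolutions * RESOLUTION_PRICE

# A 10-agent team pays $390/month under per-seat pricing; the same spend
# buys roughly 394 AI-resolved conversations under per-resolution pricing.
break_even = monthly_cost_per_seat(10) / RESOLUTION_PRICE
print(f"Break-even: {break_even:.0f} AI-resolved conversations per month")
```

Below that volume the usage-based model is cheaper for the customer; above it, the vendor captures more revenue, which is exactly the alignment with delivered value that outcome-based pricing aims for.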
Avoiding the Tragedy of the Digital Commons
For open-source AI infrastructure to succeed long-term, it requires governance structures that balance innovation with safety, inclusivity with direction, and openness with sustainability.
The Apache Way
The Apache Software Foundation employs a meritocratic governance model that fosters balanced, democratic decision-making through community consensus. As a Delaware-based membership corporation and IRS-registered 501(c)(3) non-profit, the ASF is governed by corporate bylaws, with the membership electing a board of directors that sets corporate policy and appoints officers.
Apache projects span from the flagship Apache HTTP project to more recent initiatives encompassing AI and machine learning, big data, cloud computing, financial technology, geospatial, Internet of Things, and search. For machine learning governance specifically, Apache Atlas Type System can be used to define new types, capturing machine learning entities and processes as Atlas metadata objects, with relationships visualised in end-to-end lineage flow. This addresses key governance needs: visibility, model explainability, interpretability, and reproducibility.
EleutherAI's Grassroots Non-Profit Research
EleutherAI represents a different governance model entirely. The grassroots non-profit artificial intelligence research group was formed in a Discord server in July 2020 by Connor Leahy, Sid Black, and Leo Gao to organise a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Institute, a non-profit research institute.
Researchers from EleutherAI open-sourced GPT-NeoX-20B, a 20-billion-parameter natural language processing AI model similar to GPT-3, which was the largest open-source language model in the world at the time of its release in February 2022.
Part of EleutherAI's motivation is their belief that open access to such models is necessary for advancing research in the field. According to founder Connor Leahy, they believe “the benefits of having an open source model of this size and quality available for that research outweigh the risks.”
Gary Marcus, a cognitive scientist and noted critic of deep learning companies such as OpenAI and DeepMind, has repeatedly praised EleutherAI's dedication to open-source and transparent research. Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, applauded EleutherAI's efforts to give more researchers the ability to audit and assess AI technology.
Mozilla Common Voice
Mozilla's Common Voice project demonstrates how community governance can work for AI datasets. Common Voice is the most diverse open voice dataset in the world, a crowdsourcing project to create a free and open speech corpus. As part of their commitment to helping make voice technologies more accessible, they release a cost and copyright-free dataset of multilingual voice clips and associated text data under a CC0 licence.
The dataset has grown to a staggering 31,841 hours of speech data, 20,789 of them community-validated, across 129 languages. The project is supported by volunteers who record sample sentences with a microphone and review the recordings of other users.
The governance structure includes advisory committees consulted for decision-making, especially in cases of conflict. Whether or not a change is made to the dataset is decided based on a prioritisation matrix, where the cost-benefit ratio is weighed in relation to the public interest. Transparency is ensured through a community forum, a blog and the publication of decisions, creating a participatory and deliberative decision-making process overall.
Policy and Regulatory Developments
Governance doesn't exist in a vacuum. In December 2024, a report from the House Bipartisan Task Force called for federal investments in open-source AI research at the National Science Foundation, National Institute of Standards and Technology, and the Department of Energy to strengthen AI model security, governance, and privacy protections. The report emphasised taking a risk-based approach that would monitor potential harms over time whilst sustaining open development.
California introduced SB-1047 in early 2024, proposing liability measures requiring AI developers to certify their models posed no potential harm, but Governor Gavin Newsom vetoed the measure in September 2024, citing concerns that the bill's language was too imprecise and risked stifling innovation.
At the international level, the Centre for Data Innovation facilitated a dialogue on addressing risks in open-source AI with international experts at a workshop in Beijing on 10 to 11 December 2024, developing a statement on how to enhance international collaboration to improve open-source AI safety and security. At the AI Seoul Summit in May 2024, sixteen companies made a public commitment to release risk thresholds and mitigation frameworks by the next summit in France.
The Open Source Initiative and Open Future released a white paper titled “Data Governance in Open Source AI: Enabling Responsible and Systematic Access” following a global co-design process and a two-day workshop held in Paris in October 2024.
The Open Safety Question
Critics of open-source AI frequently raise safety concerns. If anyone can download and run powerful models, what prevents malicious actors from fine-tuning them for harmful purposes? The debate is fierce and far from settled.
The Safety-Through-Transparency Argument
EleutherAI and similar organisations argue that open access enables better safety research. As Connor Leahy noted, EleutherAI believes “AI safety is massively important for society to tackle today, and hope that open access to cutting edge models will allow more such research to be done on state of the art systems.”
The logic runs that closed systems create security through obscurity, which historically fails. Open systems allow the broader research community to identify vulnerabilities, test edge cases, and develop mitigation strategies. The diversity of perspectives examining open models may catch issues that homogeneous corporate teams miss.
Anthropic, which positions itself as safety-focused, takes a different approach. Incorporated as a Delaware public-benefit corporation, Anthropic brands itself as “a safety and research-focused company with the goal of building systems that people can rely on and generating research about the opportunities and risks of AI.”
Their Constitutional AI approach trains language models like Claude to be harmless and helpful without relying on extensive human feedback. Anthropic has published constitutional principles relating to avoiding harmful responses, including bias and profanity, avoiding responses that would reveal personal information, avoiding responses regarding illicit acts, avoiding manipulation, and encouraging honesty and helpfulness.
Notably, Anthropic generally doesn't publish capabilities work because they do not wish to advance the rate of AI capabilities progress, taking a cautious stance that contrasts sharply with the open-source philosophy. The company brings in over $2 billion in annualised revenue, with investors including Amazon at $8 billion, Google at $2 billion, and Menlo Ventures at $750 million.
Empirical Safety Records
The empirical evidence on safety is mixed. Open-source models have not, to date, caused catastrophic harms at a scale beyond what proprietary models have enabled. Both open and closed models can be misused for generating misinformation, creating deepfakes, or automating cyberattacks. The difference lies less in the models themselves and more in the surrounding ecosystem, moderation policies, and user education.
Safety researchers are developing open-source tools for responsible AI. Anthropic released Petri, an open-source auditing tool to accelerate AI safety research, demonstrating that even closed-model companies recognise the value of open tooling for safety evaluation.
The Global South Challenge
Perhaps the most compelling argument for open-source AI infrastructure is equitable access. Proprietary models concentrate power and capability in wealthy nations and well-funded organisations. Open-source models theoretically democratise access, but theory and practice diverge significantly. The safety debate connects directly to this challenge: if powerful AI remains locked behind proprietary walls, developing nations face not just technical barriers, but fundamental power asymmetries in shaping the technology's future.
The Promise of Democratisation
Open-source AI innovation enables collaboration across borders, allows emerging economies to avoid technological redundancy, and creates a platform for equitable participation in the AI era. Open-source approaches allow countries to avoid expensive licensing, making technology more accessible for resource-constrained environments.
Innovators across the Global South are applying AI solutions to local problems, with open-source models offering advantages in adapting to local cultures and languages whilst preventing vendor lock-in. According to industry analysis, 89 per cent of AI-using organisations incorporate open-source tools in some capacity, driven largely by cost considerations, with 75 per cent of small businesses turning to open-source AI for cost-effective solutions.
The Centre for Strategic and International Studies notes that open-source models create opportunities for AI innovation in the Global South amid geostrategic competition, potentially reducing dependence on technology from major powers.
The Infrastructure Reality
Despite these advantages, significant barriers remain. In the Global South, access to powerful GPUs and fast, stable internet is limited, leading some observers to call the trend “algorithmic colonialism.”
The statistics are stark. According to research on African contexts, only 1 per cent of Zindi Africa data scientists have on-premises access to GPUs, whilst just 4 per cent pay for cloud access costing around $1,000 per month. Despite apparent progress, the resources required to utilise open-access AI are still not within arm's reach in many African contexts.
The paradox is cruel: open-source models are freely available, but the computational infrastructure to use them remains concentrated in data centres controlled by American and Chinese tech giants. Downloading LLaMA costs nothing; spinning up enough GPU instances to fine-tune it for a local language costs thousands of dollars per hour.
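A rough budget sketch makes the asymmetry concrete. The figures are assumptions, a common cloud rate of $2.50 per GPU-hour and plausible run sizes, contrasting a full fine-tune of a large model with a cheaper parameter-efficient (LoRA-style) run:

```python
# Rough fine-tuning budget sketch. GPU counts, hours, and the $2.50
# per GPU-hour rate are illustrative assumptions, not quotes.

def finetune_cost(gpus, price_per_gpu_hour, hours):
    """Total cost of a run: GPUs x hourly rate x wall-clock hours."""
    return gpus * price_per_gpu_hour * hours

# Hypothetical full fine-tune of a large model: 64 GPUs for 72 hours.
full = finetune_cost(64, 2.50, 72)
# Hypothetical parameter-efficient run: 8 GPUs for 12 hours.
lora = finetune_cost(8, 2.50, 12)

print(f"Full fine-tune:  ${full:,.0f}")   # prints $11,520 under these assumptions
print(f"LoRA-style run: ${lora:,.0f}")    # prints $240 under these assumptions
```

The two-orders-of-magnitude gap between the runs is one reason parameter-efficient methods matter so much for resource-constrained teams, though even the cheaper run assumes reliable access to eight data-centre GPUs.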
Bridging the Gap
Some initiatives attempt to bridge this divide. Open-source tools for managing GPU infrastructure include DeepOps, an open-source toolkit designed for deploying and managing GPU clusters that automates the deployment of Kubernetes and Slurm clusters with GPU support. Kubeflow, an open-source machine learning toolkit for Kubernetes, streamlines end-to-end machine learning workflows with GPU acceleration.
Spot instances and per-second billing from some cloud providers make short training runs, inference jobs, and bursty workloads more cost-efficient, potentially lowering barriers. However, navigating these options requires technical sophistication that many organisations in developing countries lack.
International collaboration efforts are emerging. The December 2024 workshop in Beijing brought together international experts to develop frameworks for enhancing collaboration on open-source AI safety and security, potentially creating more equitable participation structures.
The Production-Grade Reality Check
For all the promise of open-source AI, the question remains whether it can truly match proprietary alternatives for production deployments at enterprise scale.
Where Open Source Excels
Open-source infrastructure demonstrably excels in several domains. Customisation and control allow organisations to fine-tune models for specific use cases, languages, or domains without being constrained by API limitations. Companies like Spotify use LLaMA to help deliver contextualised recommendations to boost artist discovery, combining LLaMA's broad world knowledge with Spotify's expertise in audio content. LinkedIn found that LLaMA achieved comparable or better quality compared to state-of-the-art commercial foundational models at significantly lower costs and latencies.
Cost optimisation at scale becomes possible when organisations have the expertise to manage infrastructure efficiently. Whilst upfront costs are higher, amortised over millions of API calls, self-hosted open-source models can be substantially cheaper than proprietary alternatives.
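That amortisation argument can be made explicit. With assumed figures for fixed monthly self-hosting costs and per-token prices on each side, the break-even volume is a one-line calculation:

```python
# Sketch: monthly token volume at which self-hosting beats a proprietary
# API. All prices are illustrative assumptions.

API_PRICE_PER_M_TOKENS = 10.00   # assumed closed-model price per million tokens
SELF_HOST_FIXED = 8_000.00       # assumed monthly fixed cost: GPUs, ops, staff
SELF_HOST_PER_M_TOKENS = 1.50    # assumed marginal compute cost once running

def break_even_tokens():
    """Monthly volume (millions of tokens) where self-hosting becomes cheaper."""
    return SELF_HOST_FIXED / (API_PRICE_PER_M_TOKENS - SELF_HOST_PER_M_TOKENS)

print(f"Break-even: ~{break_even_tokens():.0f}M tokens per month")
```

Below the break-even volume the API's pay-as-you-go pricing wins; above it, the fixed costs amortise away and self-hosting pulls ahead, which is why the calculus favours open-source deployment mainly at sustained scale.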
Data sovereignty and privacy concerns drive many organisations to prefer on-premises or private cloud deployments of open-source models, avoiding the need to send sensitive data to third-party APIs. This is particularly important for healthcare, finance, and government applications.
Where Proprietary Holds Edges
Proprietary platforms maintain advantages in specific areas. Frontier capabilities often appear first in closed models. GPT-4 generally outperforms LLaMA 3 70B across benchmarks, particularly in general knowledge, grade-school maths, logical reasoning, and code generation for certain tasks.
Ease of use and integration matter enormously for organisations without deep AI expertise. Proprietary APIs offer simple integration, comprehensive documentation, and managed services that reduce operational overhead. According to industry surveys, 72 per cent of enterprises use an API to access their models, with over half using models hosted by their cloud service provider.
Reliability and support carry weight in production environments. Enterprise contracts with proprietary vendors typically include service-level agreements, guaranteed uptime, professional support, and liability protection that open-source alternatives struggle to match without additional commercial layers.
The Hybrid Future
The emerging pattern suggests that the future isn't binary. Global enterprise spending on AI applications has increased eightfold over the last year to close to $5 billion, though it still represents less than 1 per cent of total software application spending. Organisations increasingly adopt hybrid strategies: proprietary APIs for tasks requiring frontier capabilities or rapid deployment, and open-source infrastructure for customised, cost-sensitive, or privacy-critical applications.
The Sustainability Question
Can open-source AI infrastructure sustain itself long-term? The track record of open-source software offers both encouragement and caution.
Learning from Linux
Linux transformed from a hobbyist project to the backbone of the internet, cloud computing, and Android. The success stemmed from robust governance through the Linux Foundation, sustainable funding through corporate sponsorships, and clear value propositions for both individual contributors and corporate backers.
The Linux model demonstrates that open-source infrastructure can not only survive but thrive, becoming more robust and ubiquitous than proprietary alternatives. However, Linux benefited from timing, network effects, and the relatively lower costs of software development compared to training frontier AI models.
The AI Sustainability Challenge
AI infrastructure faces unique sustainability challenges. The computational costs of training large models create barriers that software development doesn't face. A talented developer can contribute to Linux with a laptop and internet connection. Contributing to frontier AI model development requires access to GPU clusters costing millions of dollars.
This asymmetry concentrates power in organisations with substantial resources, whether academic institutions, well-funded non-profits like EleutherAI, or companies like Meta and Mistral AI that have raised hundreds of millions in venture funding.
Funding Models That Work
Several funding models show promise for sustaining open-source AI:
Corporate-backed open source, exemplified by Meta's LLaMA releases, allows companies to commoditise complementary goods whilst building ecosystems around their platforms. Mark Zuckerberg positioned LLaMA 3.1 as transformative, stating “I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source.”
Academic and research institution leadership, demonstrated by EleutherAI and university labs, sustains fundamental research that may not have immediate commercial applications but advances the field.
Foundation and non-profit models, like the Apache Software Foundation and Mozilla Foundation, provide neutral governance and long-term stewardship independent of any single company's interests.
Commercial open-source companies like Hugging Face, Mistral AI, and Stability AI develop sustainable businesses whilst contributing back to the commons, though balancing commercial imperatives with community values remains challenging.
Where We Stand
So can community-driven open-source infrastructure rival proprietary stacks for production-grade AI? The evidence suggests a nuanced answer: yes, but with important caveats.
Open-source AI has demonstrably closed the performance gap for many applications. Models like LLaMA 3.3 70B and Mixtral 8x7B compete with or exceed GPT-3.5 and approach GPT-4 in various benchmarks. For organisations with appropriate expertise and infrastructure, open-source solutions offer compelling advantages in cost, customisation, privacy, and strategic flexibility.
However, the infrastructure requirements create a two-tiered system. Well-resourced organisations with technical talent can leverage open-source AI effectively, potentially at lower long-term costs than proprietary alternatives. Smaller organisations, those in developing countries, or teams without deep machine learning expertise face steeper barriers.
Governance and business models are evolving rapidly. Hybrid approaches combining open-source model weights with commercial services, support, and hosting show promise for sustainability. Foundation-based governance like Apache and community-driven models like Mozilla Common Voice demonstrate paths toward accountability and inclusivity.
Safety remains an active debate rather than a settled question. Both open and closed approaches carry risks and benefits. The empirical record suggests that open-source models haven't created catastrophic harms beyond those possible with proprietary alternatives, whilst potentially enabling broader safety research.
Equitable access requires more than open model weights. It demands investments in computational infrastructure, education, and capacity building in underserved regions. Without addressing these bottlenecks, open-source AI risks being open in name only.
The future likely involves coexistence and hybridisation rather than the triumph of one paradigm over another. Different use cases, organisational contexts, and regulatory environments will favour different approaches. The vibrant competition between open and closed models benefits everyone, driving innovation, reducing costs, and expanding capabilities faster than either approach could alone.
Meta's strategic bet on open source, Mistral AI's rapid ascent, Hugging Face's ecosystem play, and the steady contribution of organisations like EleutherAI and Mozilla collectively demonstrate that open-source AI infrastructure can absolutely rival proprietary alternatives, provided the community solves the intertwined challenges of governance, sustainability, safety, and genuine equitable access.
The insurgency isn't just mounting a challenge. In many ways, it's already won specific battles, claiming significant territory in the AI landscape. Whether the ultimate victory favours openness, closure, or some hybrid configuration will depend on choices made by developers, companies, policymakers, and communities over the coming years.
One thing is certain: the community-driven open-source movement has irrevocably changed the game, ensuring that artificial intelligence won't be controlled exclusively by a handful of corporations. Whether that partial accessibility evolves into truly universal access remains the defining challenge of the next phase of the AI revolution.
References and Sources
GitHub Blog. (2024). “2024 GitHub Accelerator: Meet the 11 projects shaping open source AI.” Available at: https://github.blog/news-insights/company-news/2024-github-accelerator-meet-the-11-projects-shaping-open-source-ai/
TechCrunch. (2024). “Google's Gradient backs Cake, a managed open source AI infrastructure platform.” Available at: https://techcrunch.com/2024/12/04/googles-gradient-backs-cake-a-managed-open-source-ai-infrastructure-platform/
Acquired.fm. (2024). “Building the Open Source AI Revolution with Hugging Face CEO, Clem Delangue.” Available at: https://www.acquired.fm/episodes/building-the-open-source-ai-revolution-with-hugging-face-ceo-clem-delangue
Meta AI Blog. (2024). “The future of AI: Built with Llama.” Available at: https://ai.meta.com/blog/future-of-ai-built-with-llama/
Meta AI Blog. (2024). “Introducing Llama 3.1: Our most capable models to date.” Available at: https://ai.meta.com/blog/meta-llama-3-1/
Meta AI Blog. (2024). “With 10x growth since 2023, Llama is the leading engine of AI innovation.” Available at: https://ai.meta.com/blog/llama-usage-doubled-may-through-july-2024/
About Meta. (2024). “Open Source AI is the Path Forward.” Available at: https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/
McKinsey & Company. (2024). “Evolving models and monetization strategies in the new AI SaaS era.” Available at: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/upgrading-software-business-models-to-thrive-in-the-ai-era
Andreessen Horowitz. (2024). “16 Changes to the Way Enterprises Are Building and Buying Generative AI.” Available at: https://a16z.com/generative-ai-enterprise-2024/
FourWeekMBA. “How Does Stability AI Make Money? Stability AI Business Model Analysis.” Available at: https://fourweekmba.com/how-does-stability-ai-make-money/
VentureBeat. (2024). “Stable Diffusion creator Stability AI raises $101M funding to accelerate open-source AI.” Available at: https://venturebeat.com/ai/stable-diffusion-creator-stability-ai-raises-101m-funding-to-accelerate-open-source-ai
AI Media House. (2024). “Stability AI Fights Back from Collapse to Dominate Generative AI Again.” Available at: https://aimmediahouse.com/ai-startups/stability-ai-fights-back-from-collapse-to-dominate-generative-ai-again
Wikipedia. “Mistral AI.” Available at: https://en.wikipedia.org/wiki/Mistral_AI
IBM Newsroom. (2024). “IBM Announces Availability of Open-Source Mistral AI Model on watsonx.” Available at: https://newsroom.ibm.com/2024-02-29-IBM-Announces-Availability-of-Open-Source-Mistral-AI-Model-on-watsonx
EleutherAI Blog. (2022). “Announcing GPT-NeoX-20B.” Available at: https://blog.eleuther.ai/announcing-20b/
Wikipedia. “EleutherAI.” Available at: https://en.wikipedia.org/wiki/EleutherAI
InfoQ. (2022). “EleutherAI Open-Sources 20 Billion Parameter AI Language Model GPT-NeoX-20B.” Available at: https://www.infoq.com/news/2022/04/eleutherai-gpt-neox/
Red Hat. “Red Hat OpenShift AI.” Available at: https://www.redhat.com/en/products/ai/openshift-ai
CSIS. (2024). “An Open Door: AI Innovation in the Global South amid Geostrategic Competition.” Available at: https://www.csis.org/analysis/open-door-ai-innovation-global-south-amid-geostrategic-competition
AI for Developing Countries Forum. “AI Patents: Open Source vs. Closed Source – Strategic Choices for Developing Countries.” Available at: https://aifod.org/ai-patents-open-source-vs-closed-source-strategic-choices-for-developing-countries/
Linux Foundation. “Open Source AI Is Powering a More Inclusive Digital Economy across APEC Economies.” Available at: https://www.linuxfoundation.org/blog/open-source-ai-is-powering-a-more-inclusive-digital-economy-across-apec-economies
Stanford Social Innovation Review. “How to Make AI Equitable in the Global South.” Available at: https://ssir.org/articles/entry/equitable-ai-in-the-global-south
Brookings Institution. “Is open-access AI the great safety equalizer for African countries?” Available at: https://www.brookings.edu/articles/is-open-access-ai-the-great-safety-equalizer-for-african-countries/
Apache Software Foundation. “A Primer on ASF Governance.” Available at: https://www.apache.org/foundation/governance/
Mozilla Foundation. “Common Voice.” Available at: https://www.mozillafoundation.org/en/common-voice/
Mozilla Foundation. (2024). “Common Voice 18 Dataset Release.” Available at: https://www.mozillafoundation.org/en/blog/common-voice-18-dataset-release/
Wikipedia. “Common Voice.” Available at: https://en.wikipedia.org/wiki/Common_Voice
Linux Insider. (2024). “Open-Source Experts' 2024 Outlook for AI, Security, Sustainability.” Available at: https://www.linuxinsider.com/story/open-source-experts-2024-outlook-for-ai-security-sustainability-177250.html
Center for Data Innovation. (2024). “Statement on Enhancing International Collaboration on Open-Source AI Safety.” Available at: https://datainnovation.org/2024/12/statement-on-enhancing-international-collaboration-on-open-source-ai-safety/
Open Source Initiative. (2024). “Data Governance in Open Source AI.” Available at: https://opensource.org/data-governance-open-source-ai
Wikipedia. “Anthropic.” Available at: https://en.wikipedia.org/wiki/Anthropic
Anthropic. “Core Views on AI Safety: When, Why, What, and How.” Available at: https://www.anthropic.com/news/core-views-on-ai-safety
Anthropic. “Petri: An open-source auditing tool to accelerate AI safety research.” Available at: https://www.anthropic.com/research/petri-open-source-auditing
TechTarget. (2024). “Free isn't cheap: How open source AI drains compute budgets.” Available at: https://www.techtarget.com/searchcio/feature/How-open-source-AI-drains-compute-budgets
Neev Cloud. “Open Source Tools for Managing Cloud GPU Infrastructure.” Available at: https://blog.neevcloud.com/open-source-tools-for-managing-cloud-gpu-infrastructure
RunPod. (2025). “Top 12 Cloud GPU Providers for AI and Machine Learning in 2025.” Available at: https://www.runpod.io/articles/guides/top-cloud-gpu-providers
CIO Dive. “Nvidia CEO praises open-source AI as enterprises deploy GPU servers.” Available at: https://www.ciodive.com/news/nvidia-revenue-gpu-servers-open-source-ai/758897/
Netguru. “Llama vs GPT: Comparing Open-Source Versus Closed-Source AI Development.” Available at: https://www.netguru.com/blog/gpt-4-vs-llama-2
Codesmith. “Meta Llama 2 vs. OpenAI GPT-4: A Comparative Analysis of an Open Source vs Proprietary LLM.” Available at: https://www.codesmith.io/blog/meta-llama-2-vs-openai-gpt-4-a-comparative-analysis-of-an-open-source-vs-proprietary-llm
Prompt Engineering. “How Does Llama-2 Compare to GPT-4/3.5 and Other AI Language Models.” Available at: https://promptengineering.org/how-does-llama-2-compare-to-gpt-and-other-ai-language-models/

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk