When Tech Hubs Fail Ethics: How Shillong Built Accountable AI

When Santosh Sunar launched AEGIS AI at Sankardev College in Shillong on World Statistics Day 2025, he wasn't just unveiling another artificial intelligence framework. He was making a declaration: that the future of ethical AI wouldn't necessarily be written in Silicon Valley boardrooms or European regulatory chambers, but potentially in the hills of Meghalaya, where the air is clearer and, perhaps, the thinking more grounded.
“AI should not just predict or create; it should protect,” Sunar stated at the launch event, his words resonating with a philosophy that directly challenges the breakneck pace of AI development globally. “AEGIS AI is the shield humanity needs to defend truth, trust, and innovation.”
The timing couldn't be more critical. As artificial intelligence systems rapidly gain unprecedented capabilities and influence across governance, cybersecurity, and disaster response, a fundamental question haunts every deployment: how do we ensure that AI remains accountable to human values rather than operating as an autonomous decision-maker divorced from ethical oversight?
It's a question that has consumed technologists, ethicists, and policymakers worldwide. Yet the answer may be emerging not from traditional tech hubs, but from unexpected places where technology development is being reimagined from the ground up, with wisdom prioritised over raw computational power.
The Accountability Crisis in Modern AI
The challenge of AI accountability has become acute as systems evolve from narrow, task-specific tools into sophisticated decision-makers influencing critical aspects of society. According to a 2024 survey, whilst 87% of business leaders plan to implement AI ethics policies by 2025, only 35% of companies currently have an AI governance framework in place. This gap between intention and implementation reveals a troubling reality: we're deploying powerful systems faster than we're developing the mechanisms to control them.
The problem isn't merely technical. Traditional accountability methods, designed for human decision-makers, fundamentally fail when applied to AI systems. As research published in 2024 highlighted, artificial intelligence presents “unclear connections between decision-makers and operates through autonomous or probabilistic systems” that defy conventional oversight. When an algorithm denies a loan application, recommends a medical treatment, or flags content for removal, the chain of responsibility becomes dangerously opaque.
This opacity has real consequences. AI systems deployed in healthcare have perpetuated biases present in training data, leading to discriminatory outcomes. In criminal justice, risk assessment algorithms have exhibited racial bias, affecting parole decisions and sentencing. Financial services algorithms have denied credit based on proxy variables that correlate with protected characteristics.
The European Union's AI Act, implemented in 2024, attempts to address these issues through a risk-based classification system, with companies potentially facing fines of up to 7% of global annual turnover for the most serious violations. The United States Government Accountability Office developed an accountability framework organised around four complementary principles addressing governance, data, performance, and monitoring. Yet these regulatory approaches, whilst necessary, are fundamentally reactive; they attempt to constrain systems already in deployment rather than building accountability into their foundational architecture.
Enter the Guardian Framework
This is where Santosh Sunar's BTG AEGIS AI (Autonomous Ethical Guardian Intelligence System) presents a different paradigm. Built on what Sunar calls the LITT Principle, the framework positions itself not as an AI system that operates with oversight, but as a guardian intelligence that cannot function without human integration at its core.
The distinction is subtle but profound. Most “human in the loop” systems treat human oversight as a checkpoint, a verification step in an otherwise automated process. AEGIS AI, by contrast, is architecturally dependent on continuous human engagement, maintaining what Sunar describes as a “Human in the Loop” at all times. The technology cannot make decisions in isolation; it must reflect human wisdom in its operations.
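
The framework's internal architecture hasn't been published, but the principle can be sketched. In the illustrative Python below, all names and types are hypothetical inventions for this article, not AEGIS AI's actual interfaces: the model can only ever emit inert recommendations, and the sole path to action demands an explicit, matching human approval. Remove the human and the system doesn't run faster; it simply doesn't run.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    REJECT = "reject"


@dataclass(frozen=True)
class Recommendation:
    """Model output: advisory only, never directly executable."""
    recommendation_id: str
    action: str
    confidence: float


@dataclass(frozen=True)
class HumanApproval:
    """Proof that a named reviewer examined a specific recommendation."""
    reviewer_id: str
    recommendation_id: str
    verdict: Verdict
    rationale: str


def execute(rec: Recommendation, approval: HumanApproval) -> None:
    """The only route from recommendation to action. Without a matching
    HumanApproval, a Recommendation is inert by construction."""
    if approval.recommendation_id != rec.recommendation_id:
        raise ValueError("approval does not match this recommendation")
    if approval.verdict is not Verdict.APPROVE:
        raise PermissionError("reviewer withheld approval")
    print(f"executing '{rec.action}' (approved by {approval.reviewer_id})")
```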
The framework has gained recognition across 322 international media and institutional networks, including organisations linked to NASA, IAEA, NATO, IMF, APEC, WHO, and WTO, according to reports from The Shillong Times. It was officially featured in The National Law Review in the United States, suggesting that its approach resonates beyond regional boundaries.
AEGIS AI is designed to reinforce digital trust, data integrity, and decision reliability across diverse sectors, including governance, cybersecurity, and disaster response. Its applications extend to defending against deepfakes, cyber fraud, and misinformation; protecting employment from data manipulation; providing verified mentorship resources; safeguarding entrepreneurs from information exploitation; and strengthening data integrity across sectors.
The Architecture of Accountability
Human-in-the-loop AI systems have emerged as crucial approaches to ensuring AI operates in alignment with ethical norms and social expectations, according to research published in 2024. By embedding humans at key stages such as data curation, model training, outcome evaluation, and real-time operation, these systems foster transparency, accountability, and adaptability.
The European Union's AI Act mandates this approach for high-risk applications. Article 14 requires that “High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.”
Yet implementation varies dramatically. Research involving 40 AI developers worldwide found they are largely aware of ethical territories but face limited and inconsistent resources for ethical guidance or training. Significant barriers inhibit ethical wisdom development in the AI community, including industry fixation on innovation, narrow technical practice scope, and limited provisions for reflection and dialogue.
The “collaborative loop” architecture represents a more sophisticated approach, wherein humans and AI jointly solve tasks, with each party handling aspects where they excel. In content moderation, algorithms flag potential issues whilst human reviewers make nuanced judgements about context, satire, or cultural sensitivity.
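
A minimal sketch makes that division of labour concrete. In the hypothetical triage routine below, the model clears only what it scores as confidently benign; everything else joins a queue for human review, riskiest first. The scoring function and threshold are stand-ins, not any production system's values.

```python
from typing import Callable, Iterable, List, Tuple


def triage(
    items: Iterable[str],
    risk_score: Callable[[str], float],
    clear_threshold: float = 0.2,
) -> Tuple[List[str], List[Tuple[float, str]]]:
    """Split items into a machine-cleared list and a human-review queue.

    The model absorbs volume; anything it cannot confidently clear is
    handed to a reviewer, ordered so the riskiest items are seen first.
    """
    cleared: List[str] = []
    review_queue: List[Tuple[float, str]] = []
    for item in items:
        risk = risk_score(item)  # assumed model score in [0, 1]
        if risk < clear_threshold:
            cleared.append(item)  # machine handles the easy negatives
        else:
            review_queue.append((risk, item))  # humans judge context
    review_queue.sort(key=lambda pair: pair[0], reverse=True)
    return cleared, review_queue
```

The design choice worth noting is that the threshold, not the model, encodes the policy decision about how much judgement stays human.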
AEGIS AI pushes this concept further, positioning human oversight not as an adjunct to AI decision-making but as an integral component of the system's intelligence. This approach aligns with emerging scholarship on artificial wisdom (AW), which proposes that future AI technologies must be designed to emulate qualities of wise humans rather than merely intelligent ones.
The concept of artificial wisdom, whilst still theoretical, addresses a fundamental limitation in current AI development. Intelligence, in computational terms, refers to pattern recognition, prediction, and optimisation. Wisdom encompasses judgement, ethical reasoning, contextual understanding, and the capacity to weigh competing values. No amount of computational power can substitute for this qualitative dimension.
The Shillong Advantage
The emergence of AEGIS AI from Shillong raises provocative questions about where innovation happens and why geography might matter in ethical technology development. The narrative of technological progress has long centred on established hubs: Silicon Valley, Boston's biotechnology sector, Tel Aviv, where AI companies comprise more than 40% of startups, and Bengaluru, India's engine of digital transformation.
Yet this concentration creates blind spots. As a Fortune magazine analysis noted in 2025, Silicon Valley increasingly ignores Middle America, leading to an innovation blind spot where “the next wave of truly transformative companies won't just come from Silicon Valley's demo days or AI leaderboards but will emerge from factory floors, farms and freight hubs.”
India has recognised this dynamic. The IndiaAI Mission, approved in March 2024, aims to bolster the country's global leadership in AI whilst fostering technological self-reliance. The government announced plans to establish over 20 Data and AI Labs under the IndiaAI Mission across Tier 2 and Tier 3 cities, expanding to 200 by 2026 and eventually to 570 labs in emerging urban centres over the following two years.
Shillong features in this expansion. As part of the IndiaAI FutureSkills initiative, the government is setting up 27 new Data and AI Labs across Tier 2 and Tier 3 cities, including Shillong. The Software Technology Parks of India (STPI) has established 65 centres, with 57 located in Tier 2 and Tier 3 cities. STPI has created 24 domain-specific Centres for Entrepreneurship supporting over 1,000 tech startups. In 2022, 39% of tech startups originated from these emerging hubs, and approximately 33% of National Startup Awards winners came from Tier 2 and Tier 3 cities.
IIM Shillong hosted the International Conference on Leveraging Emerging Technologies and Analytics for Development (LEAD-2024) in December 2024, themed “Empowering Humanity,” signalling the region's growing engagement with AI, analytics, and sustainability principles.
This decentralisation isn't merely about distributing resources. It represents a fundamental rethinking of what environments foster responsible innovation. Smaller cities often maintain stronger community connections, clearer accountability structures, and less pressure to prioritise growth over governance. When Sunar emphasises that “AI should reflect human wisdom,” that philosophy may be easier to implement in contexts where community values remain visible and technology development hasn't outpaced ethical reflection.
Currently, 11-15% of tech talent resides in Tier 2 and Tier 3 cities, a percentage expected to rise as more individuals opt to work from non-metro areas. Yet challenges remain: fragmented access to high-quality datasets, infrastructure gaps, and the need for upskilling mid-career professionals. These constraints, however, might paradoxically advantage ethical AI development. When resources are limited, technology must be deployed more thoughtfully. When datasets are smaller, bias becomes more visible. When infrastructure requires deliberate investment, governance structures can be built from the foundation rather than retrofitted.
Global Applications
The practical test of any ethical AI framework lies in its real-world applications across sectors where stakes are highest: governance, cybersecurity, and disaster response. These domains share common characteristics: they involve critical decisions affecting human wellbeing, operate under time pressure, require balancing competing values, and have limited tolerance for error.
In governance, AI systems increasingly support policy-making, resource allocation, and service delivery. Benefits include more efficient identification of citizen needs, data-driven policy evaluation, and improved responsiveness. Yet risks are equally significant: algorithmic bias can systematically disadvantage marginalised populations, lack of transparency undermines democratic accountability, and over-reliance on predictive models can perpetuate historical patterns rather than enabling transformative change.
The United States Department of Homeland Security unveiled its first Artificial Intelligence Roadmap in March 2024, detailing plans to test AI technologies whilst partnering with privacy, cybersecurity, and civil rights experts. FEMA initiated a generative AI pilot for hazard mitigation planning, demonstrating how AI can support rather than supplant human decision-making in critical government functions.
In cybersecurity, AI improves risk assessment, fraud detection, compliance monitoring, and incident response. Within Security Operations Centres, AI enhances threat detection and automated triage. Yet adversaries also employ AI, creating an escalating technological arms race. DHS guidelines, developed in January 2024 by the Cybersecurity and Infrastructure Security Agency (CISA), address three types of AI risks: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.
A holistic approach merging AI with human expertise and robust governance, alongside continuous monitoring, is essential to combat evolving cyber threats. The challenge isn't deploying more sophisticated AI but ensuring that human judgement remains central to security decisions.
Disaster response represents perhaps the most compelling application for guardian AI frameworks. According to research published in 2024, AI enhances disaster governance along three dimensions: governance functions, information-based strategies such as real-time data and predictive analytics, and operational processes such as strengthened logistics and communication.
AI-powered predictive analytics allow emergency managers to anticipate disasters by analysing historical data, climate patterns, and population trends. During active disasters, AI can process real-time data from social media, sensors, and satellite imagery to provide situational awareness impossible through manual analysis.
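
How the detection layer can be built to surface rather than act is easy to illustrate. The rolling z-score monitor below is a generic sketch, not drawn from any system cited here: it flags sensor readings that deviate sharply from recent history, leaving confirmation and response to an emergency manager.

```python
import statistics
from collections import deque


class AnomalyMonitor:
    """Rolling z-score detector over one sensor feed. It never acts on
    its own: detections become alerts for a human operator to confirm."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.readings: deque = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it deviates sharply from
        recent history and deserves a human look."""
        flag = False
        if len(self.readings) >= 10:  # need some history before judging
            mean = statistics.fmean(self.readings)
            spread = statistics.pstdev(self.readings) or 1e-9
            flag = abs(value - mean) / spread > self.threshold
        self.readings.append(value)
        return flag
```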
The RAND Corporation's 2025 analysis highlighted a fundamental tension: “Using AI well long-term requires addressing classic governance questions about legitimate authority and the problem of alignment; aligning AI models with human values, goals, and intentions.” In crisis situations where every minute counts, the temptation to fully automate decisions is powerful. Yet disasters are precisely the contexts where human judgement, ethical reasoning, and community knowledge are most critical.
This is where frameworks like AEGIS AI could prove transformative. By architecturally requiring human integration, such systems could enable AI to augment human disaster response capabilities without displacing the wisdom, contextual knowledge, and ethical reasoning that effective emergency management requires.
The Implementation Challenge
If guardian frameworks like AEGIS AI offer a viable model for accountable AI, what systemic changes would be necessary to implement such approaches across diverse sectors globally? The challenge spans technical, regulatory, cultural, and economic dimensions.
From a technical perspective, implementing human-in-the-loop architecture at scale requires fundamental rethinking of AI system design. Current AI development prioritises autonomy and efficiency. Guardian frameworks invert this logic, treating human engagement as a feature rather than a constraint. This requires new interface designs, workflow patterns, and integration architectures that make human oversight seamless rather than burdensome.
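
What such an integration architecture might include can be sketched with a standard technique: a hash-chained, append-only ledger pairing each model recommendation with the human decision taken on it, making oversight cheap to perform and tampering easy to detect. The Python below is purely illustrative; nothing here describes AEGIS AI's, or any vendor's, actual implementation.

```python
import hashlib
import json
import time


class DecisionLedger:
    """Append-only, hash-chained record pairing each model
    recommendation with the human decision taken on it."""

    def __init__(self) -> None:
        self.entries = []

    def record(self, recommendation: str, reviewer: str, verdict: str) -> dict:
        body = {
            "timestamp": time.time(),
            "recommendation": recommendation,
            "reviewer": reviewer,
            "verdict": verdict,
            "prev_hash": self.entries[-1]["hash"] if self.entries else "genesis",
        }
        body["hash"] = self._digest(body)
        self.entries.append(body)
        return body

    @staticmethod
    def _digest(body: dict) -> str:
        # Hash everything except the hash field itself.
        unhashed = {k: v for k, v in body.items() if k != "hash"}
        return hashlib.sha256(
            json.dumps(unhashed, sort_keys=True).encode()
        ).hexdigest()

    def verify(self) -> bool:
        """Recompute the chain; any edit to a past entry breaks a link."""
        prev = "genesis"
        for entry in self.entries:
            if entry["prev_hash"] != prev or entry["hash"] != self._digest(entry):
                return False
            prev = entry["hash"]
        return True
```

An auditor replays `verify()` at any time; a ledger that still chains end to end is evidence the oversight trail hasn't been quietly rewritten.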
The regulatory landscape presents both opportunities and obstacles. Major frameworks established in 2024-2025 create foundations for accountability: the OECD AI Principles (updated 2024), the EU AI Act with its risk-based classification system, the NIST AI Risk Management Framework, and the G7 Code of Conduct.
Yet companies operating across multiple countries face conflicting AI regulations. The EU imposes strict risk-based classifications whilst the United States follows a voluntary framework under NIST. In many countries across Africa, Latin America, and Southeast Asia, AI governance is still emerging, with these regions facing the paradox of low regulatory capacity but high exposure to imported AI systems designed without local context.
Implementing ethical AI demands significant investment in technology, skilled personnel, and oversight mechanisms. Smaller organisations and emerging economies often lack necessary resources, creating a dangerous dynamic where ethical AI becomes a luxury good.
Cultural barriers may be most challenging. In fast-paced industries where innovation drives competition, ethical considerations can be overlooked in favour of quick launches. The industry fixation on innovation creates pressure to ship products rapidly rather than ensure they're responsibly designed.
Effective AI governance requires a holistic approach, from developing internal frameworks and policies to monitoring and managing risks from the conceptual design phase through deployment. This demands cultural shifts within organisations, moving from compliance-oriented approaches to genuine ethical integration.
UNESCO's Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021 and applicable to all 194 member states, provides a global standard. Yet without ethical guardrails, AI risks reproducing real-world biases and discrimination, fuelling divisions and threatening fundamental human rights and freedoms. Translating high-level principles into operational practices remains the persistent challenge.
Value alignment requires translation of abstract ethical principles into practical technical guidelines. Yet human values are not uniform across regions and cultures, so AI systems must be tailored to specific cultural, legal, and societal contexts. What constitutes fairness, privacy, or appropriate autonomy varies across societies. Guardian frameworks must somehow navigate this diversity whilst maintaining core ethical commitments.
The operationalisation challenge extends to measurement and verification. How do we assess whether an AI system is genuinely accountable? What metrics capture ethical reasoning? How do we audit for wisdom rather than merely accuracy? These questions lack clear answers, making implementation and oversight inherently difficult.
For guardian frameworks to succeed globally, we need not just ethical AI systems but ethical AI ecosystems, with supporting infrastructure, training programmes, oversight mechanisms, and stakeholder engagement.
Beyond Computational Intelligence
The distinction between intelligence and wisdom lies at the heart of debates about AI accountability. Current systems excel at intelligence in its narrow computational sense: pattern recognition, prediction, optimisation, and task completion. They process vast datasets, identify subtle correlations, and execute complex operations at speeds and scales impossible for humans.
Yet wisdom encompasses dimensions beyond computational intelligence. Research on artificial wisdom identifies qualities that wise humans possess but current AI systems lack: ethical reasoning that weighs competing values and considers consequences; contextual judgement that adapts principles to specific situations; humility that recognises limitations and uncertainty; compassion that centres human wellbeing; and integration of diverse perspectives rather than optimisation for single objectives.
Contemporary scholarship proposes frameworks for planetary ethics built upon symbiotic relationships between humans, technology, and nature, grounded in wisdom philosophies. The MIT Ethics of Computing course, offered for the first time in autumn 2024, brings philosophy and computer science together, recognising that technical expertise alone is insufficient for responsible AI development.
The future need in technology is for artificial wisdom which would ensure AI technologies are designed to emulate the qualities of wise humans and serve the greatest benefit to humanity, according to research published in 2024. Yet there's currently no consensus on artificial wisdom development given cultural subjectivity and lack of institutionalised scientific impetus.
This absence of consensus might actually create space for diverse approaches to emerge. Rather than a single definition imposed globally, different regions and cultures could develop frameworks reflecting their own wisdom traditions. Shillong's AEGIS AI, grounded in principles emphasising protection, trust, and human integration, represents one such approach.
The democratisation of AI development could thus enable pluralism in ethical approaches. Silicon Valley's values, emphasising innovation, disruption, and individual empowerment, have shaped AI development thus far. But those values aren't universal. Communities in Meghalaya, villages in Africa, towns in Latin America, and cities across Asia might prioritise different values: stability over disruption, collective welfare over individual advancement, harmony over competition, sustainability over growth.
Guardian frameworks emerging from diverse contexts could embody these alternative value systems, creating a richer ethical ecosystem than any single framework could provide. The true test of AI lies not in computation but in compassion, according to recent scholarship, requiring humanity to become stewards of inner wisdom in the age of intelligent machines.
Implementing the Vision
If wisdom-centred, guardian-oriented AI frameworks represent a viable path toward genuine accountability, how do we move from concept to widespread implementation? Several pathways emerge from current practice and emerging initiatives.
First, education and training must evolve. Computer science curricula remain heavily weighted toward technical skills. Ethical considerations, when included, are often relegated to single courses or brief modules. Developing AI systems that embody wisdom requires professionals trained at the intersection of technology, ethics, philosophy, and social sciences. IIM Shillong's LEAD conference, integrating AI with sustainability and development themes, suggests how educational institutions can foster this interdisciplinary approach.
India's AI skill penetration leads globally, with the 2024 Stanford AI Index ranking India first. Yet skill penetration differs from skill orientation. The government's initiative to establish hundreds of AI labs creates infrastructure, but the pedagogical approach will determine whether these labs produce guardian frameworks or merely replicate existing development paradigms.
Second, regulatory frameworks must evolve from risk management to capability building. Current regulations primarily impose constraints: prohibitions on certain applications, requirements for high-risk systems, penalties for violations. Regulations could instead incentivise ethical innovation through tax benefits for certified ethical AI systems, government procurement preferences for guardian frameworks, research funding prioritising accountable architectures, and international standards recognising ethical excellence.
Third, industry practices must shift from compliance to commitment. The gap between companies planning to implement AI ethics policies (87%) and those actually having governance frameworks (35%) reveals this implementation deficit. Guardian frameworks cannot be retrofitted as compliance layers; they must be foundational architectural choices.
This requires changes in development processes, with ethical review integrated from initial design through deployment; organisational structures, with ethicists embedded in technical teams; performance metrics, with ethical outcomes weighted alongside efficiency; and incentive systems rewarding responsible innovation.
Fourth, global cooperation must balance standardisation with pluralism. UNESCO's recommendation provides a foundation, but implementing guidance must accommodate diverse cultural contexts. International cooperation could focus on shared principles: transparency, accountability, human oversight, bias mitigation, and privacy protection. Implementation specifics would vary by region, allowing guardian frameworks to reflect local values whilst adhering to universal commitments.
The challenge resembles environmental protection. Core principles, such as reducing carbon emissions and protecting biodiversity, have global consensus. Implementation strategies vary dramatically by country based on development levels, economic structures, and cultural priorities. AI ethics might follow similar patterns.
Fifth, civil society engagement must expand. Guardian frameworks, by design, require ongoing human engagement. This creates opportunities for broader participation: community advisory boards reviewing local AI deployments, citizen assemblies deliberating on AI ethics questions, participatory design processes involving end users, and public audits of AI system impacts.
Such participation faces practical challenges: technical complexity, time requirements, resource constraints, and ensuring representation of marginalised voices. Yet successful models of participatory governance exist in environmental management, public health, and urban planning. Adapting these models to AI governance could democratise not just where technology is developed but how it's developed and for whose benefit.
The Meghalaya Model
Santosh Sunar's development of AEGIS AI in Shillong offers concrete lessons for global implementation of guardian frameworks. Several factors enabled this innovation outside traditional tech hubs, suggesting replicable conditions for ethical AI development elsewhere.
Geographic distance from established AI centres provided freedom from prevailing assumptions. Silicon Valley's “move fast and break things” ethos has driven remarkable innovation but also created ethical blind spots. Developing AI in contexts not immersed in that culture allows different priorities to emerge. Sunar's emphasis that “AI should not replace human wisdom; it should reflect it” might have faced more resistance in environments where autonomy and automation are presumed goods.
Access to diverse stakeholder perspectives informed the framework's development. Smaller cities often have more integrated communities where technologists, educators, government officials, and citizens interact regularly. This integration can facilitate the interdisciplinary dialogue essential for ethical AI. The launch of AEGIS AI at Sankardev College, a public event aligned with World Statistics Day, exemplifies this community integration.
Government support for regional innovation created enabling infrastructure. India's commitment to establishing AI labs in Tier 2 and Tier 3 cities signals recognition that innovation ecosystems can be deliberately cultivated. STPI's network of 57 centres in smaller cities, supporting over 1,000 tech startups, demonstrates how institutional support can catalyse regional innovation.
These conditions can be replicated elsewhere. Cities and regions worldwide could position themselves as ethical AI innovation centres by cultivating similar environments: creating distance from prevailing tech culture, fostering interdisciplinary collaboration, providing institutional support for ethical innovation, and drawing on local cultural values.
The competition among regions need not be for computational supremacy but for wisdom leadership. Which cities will produce AI systems that best serve human flourishing? Which frameworks will most effectively balance innovation with responsibility? Which approaches will prove most resilient and adaptable across contexts? These questions could drive a different kind of technological competition, one where Shillong's AEGIS AI represents an early entry rather than an outlier.
Questions and Imperatives
As AI systems continue their inexorable advance into every domain of human activity, the questions posed at this article's beginning become increasingly urgent. Can we ensure AI remains fundamentally accountable to human values? Can technology and morality evolve together? Can regions outside traditional tech hubs become crucibles for ethical innovation? Can wisdom be prioritised over computational power?
The emerging evidence suggests affirmative answers are possible, though far from inevitable. Guardian frameworks like AEGIS AI demonstrate architectural approaches that build accountability into AI systems' foundations. Human-in-the-loop designs, when implemented genuinely rather than performatively, can maintain the primacy of human judgement. The democratisation of AI development, supported by deliberate policy choices and infrastructure investments, can enable innovation from diverse contexts. And wisdom-centred approaches, grounded in philosophical traditions and community values, can guide AI development toward serving humanity's deepest needs rather than merely its surface preferences.
Yet possibility differs from probability. Realising these potentials requires confronting formidable obstacles: economic pressures prioritising efficiency over ethics, regulatory fragmentation creating compliance burdens without coherence, resource constraints limiting ethical AI to well-funded entities, cultural momentum in the tech industry resistant to slowing innovation for reflection, and the persistent challenge of operationalising abstract ethical principles into concrete technical practices.
The ultimate question may be not whether we can build accountable AI but whether we will choose to. The technical capabilities exist. The philosophical frameworks are available. The regulatory foundations are emerging. The implementation examples are demonstrating viability. What remains uncertain is whether the collective will exists to prioritise accountability over autonomy, wisdom over intelligence, and human flourishing over computational optimisation.
Santosh Sunar's declaration in Shillong, that “AEGIS AI is the shield humanity needs to defend truth, trust, and innovation,” captures this imperative. We don't need AI to make us more efficient, productive, or connected. We need AI that protects what makes us human: our capacity for ethical reasoning, our commitment to truth, our responsibility to one another, and our wisdom accumulated through millennia of lived experience.
Whether guardian frameworks like AEGIS AI will scale from Shillong to the world remains uncertain. But the question itself represents progress. We're moving beyond asking whether AI can be ethical to examining how ethical AI actually works, beyond debating abstract principles to implementing concrete architectures, and beyond assuming innovation must come from established centres to recognising that wisdom might emerge from unexpected places.
The hills of Meghalaya may seem an unlikely epicentre for the AI ethics revolution. But then again, the most profound transformations often begin not at the noisy centre but at the thoughtful periphery, where clarity of purpose hasn't been drowned out by the din of disruption. In an age of artificial intelligence, perhaps the ultimate innovation isn't technological at all. Perhaps it's the wisdom to remember that technology must serve humanity, not the other way round.
Sources and References
Primary Sources on BTG AEGIS AI Framework
“AEGIS AI Officially Launches on World Statistics Day 2025 – 'Intelligence That Defends' Empowers Data Integrity, Mentorship & Trust,” OpenPR, 20 October 2025. https://www.openpr.com/news/4233882/aegis-ai-officially-launches-on-world-statistics-day-2025
“Shillong innovator's ethical AI framework earns global acclaim,” The Shillong Times, 26 October 2025. https://theshillongtimes.com/2025/10/26/shillong-innovators-ethical-ai-framework-earns-global-acclaim/
“BeTheGuide® Launches AEGIS AI – A Global Initiative to Strengthen Digital Trust and Data Integrity,” India Arts Today, October 2025. https://www.indiaartstoday.com/article/860784565-betheguide-launches-aegis-ai-a-global-initiative-to-strengthen-digital-trust-and-data-integrity
AI Governance and Accountability Frameworks
“9 Key AI Governance Frameworks in 2025,” AI21, 2025. https://www.ai21.com/knowledge/ai-governance-frameworks/
“Top AI Governance Trends for 2025: Compliance, Ethics, and Innovation,” GDPR Local, 2025. https://gdprlocal.com/top-5-ai-governance-trends-for-2025-compliance-ethics-and-innovation-after-the-paris-ai-action-summit/
“Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities,” U.S. Government Accountability Office, GAO-21-519SP, June 2021. https://www.gao.gov/products/gao-21-519sp
“AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development,” Taylor & Francis Online, 2025. https://www.tandfonline.com/doi/full/10.1080/08839514.2025.2463722
“Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making,” Frontiers in Human Dynamics, 2024. https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1421273/full
Human-in-the-Loop AI Systems
“Human-in-the-Loop Systems for Ethical AI,” ResearchGate, 2024. https://www.researchgate.net/publication/393802734_HUMAN-IN-THE-LOOP_SYSTEMS_FOR_ETHICAL_AI
“Constructing Ethical AI Based on the 'Human-in-the-Loop' System,” MDPI, 2024. https://www.mdpi.com/2079-8954/11/11/548
“What Is Human In The Loop (HITL)?” IBM Think Topics. https://www.ibm.com/think/topics/human-in-the-loop
“Artificial Intelligence and Keeping Humans 'in the Loop',” Centre for International Governance Innovation. https://www.cigionline.org/articles/artificial-intelligence-and-keeping-humans-loop/
“Evolving Human-in-the-Loop: Building Trustworthy AI in an Autonomous Future,” Seekr Blog, 2024. https://www.seekr.com/blog/human-in-the-loop-in-an-autonomous-future/
India's AI Innovation Ecosystem
“IndiaAI Mission: How India is Emerging as a Global AI Superpower,” TICE News, 2024. https://www.tice.news/tice-trending/indias-ai-leap-how-india-is-emerging-as-a-global-ai-superpower-8871380
“India's interesting AI initiatives in 2024: AI landscape in India,” IndiaAI, 2024. https://indiaai.gov.in/article/india-s-interesting-ai-initiatives-in-2024-ai-landscape-in-india
“IIM Shillong Hosts LEAD-2024: A Global Convergence of Thought Leaders on Emerging Technologies and Development,” Yutip News, December 2024. https://yutipnews.com/news/iim-shillong-hosts-lead-2024-a-global-convergence-of-thought-leaders-on-emerging-technologies-and-development/
“Expanding IT sector to tier-2 and tier-3 cities our top priority: STPI DG Arvind Kumar,” Software Technology Park of India, Ministry of Electronics & Information Technology, Government of India. https://stpi.in/en/news/expanding-it-sector-tier-2-and-tier-3-cities-our-top-priority-stpi-dg-arvind-kumar
“Can Tier-2 India Be the Next Frontier for AI?” Analytics India Magazine, 2024. https://analyticsindiamag.com/ai-features/can-tier-2-india-be-the-next-frontier-for-ai/
“Indian Government to Establish Data and AI Labs Across Tier 2 and Tier 3 Cities,” TopNews, 2024. https://www.topnews.in/indian-government-establish-data-and-ai-labs-across-tier-2-and-tier-3-cities-2416199
AI in Disaster Response and Cybersecurity
“Department of Homeland Security Unveils Artificial Intelligence Roadmap, Announces Pilot Projects,” U.S. Department of Homeland Security, 18 March 2024. https://www.dhs.gov/archive/news/2024/03/18/department-homeland-security-unveils-artificial-intelligence-roadmap-announces
“AI applications in disaster governance with health approach: A scoping review,” PMC, National Center for Biotechnology Information, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC12379498/
“How AI Is Changing Our Approach to Disasters,” RAND Corporation, 2025. https://www.rand.org/pubs/commentary/2025/08/how-ai-is-changing-our-approach-to-disasters.html
“2024 Volume 4 The Pivotal Role of AI in Navigating the Cybersecurity Landscape,” ISACA Journal, 2024. https://www.isaca.org/resources/isaca-journal/issues/2024/volume-4/the-pivotal-role-of-ai-in-navigating-the-cybersecurity-landscape
“Leveraging AI in emergency management and crisis response,” Deloitte Insights, 2024. https://www2.deloitte.com/us/en/insights/industry/public-sector/automation-and-generative-ai-in-government/leveraging-ai-in-emergency-management-and-crisis-response.html
Global Tech Innovation Hubs
“Beyond Silicon Valley: the US's other innovation hubs,” Kepler Trust Intelligence, December 2024. https://www.trustintelligence.co.uk/investor/articles/features-investor-beyond-silicon-valley-the-us-s-other-innovation-hubs-retail-dec-2024
“The innovation blind spot: how Silicon Valley ignores Middle America,” Fortune, 5 November 2025. https://fortune.com/2025/11/05/the-innovation-blind-spot-how-silicon-valley-ignores-middle-america/
“Understanding the Surge of Tech Hubs Beyond Silicon Valley,” Observer Today, May 2024. https://www.observertoday.com/news/2024/05/understanding-the-surge-of-tech-hubs-beyond-silicon-valley/
“Netizen: Beyond Silicon Valley: 20 Global Tech Innovation Hubs Shaping the Future,” Netizen, May 2025. https://www.netizen.page/2025/05/beyond-silicon-valley-20-global-tech.html
Ethical AI Implementation Challenges
“Ethical and legal considerations in healthcare AI: innovation and policy for safe and fair use,” Royal Society Open Science, 2024. https://royalsocietypublishing.org/doi/10.1098/rsos.241873
“Ethical Integration of Artificial Intelligence in Healthcare: Narrative Review of Global Challenges and Strategic Solutions,” PMC, National Center for Biotechnology Information, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC12195640/
“Ethics of Artificial Intelligence,” UNESCO. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
“Shaping the future of AI in healthcare through ethics and governance,” Nature – Humanities and Social Sciences Communications, 2024. https://www.nature.com/articles/s41599-024-02894-w
“Challenges and Risks in Implementing AI Ethics,” AIGN (AI Governance Network). https://aign.global/ai-governance-insights/patrick-upmann/challenges-and-risks-in-implementing-ai-ethics/
Artificial Wisdom and Philosophy of AI
“Beyond Artificial Intelligence (AI): Exploring Artificial Wisdom (AW),” PMC, National Center for Biotechnology Information. https://pmc.ncbi.nlm.nih.gov/articles/PMC7942180/
“Wisdom in the Age of AI Education,” Postdigital Science and Education, Springer, 2024. https://link.springer.com/article/10.1007/s42438-024-00460-w
“The ethical wisdom of AI developers,” AI and Ethics, Springer, 2024. https://link.springer.com/article/10.1007/s43681-024-00458-x
“Bridging philosophy and AI to explore computing ethics,” MIT News, 11 February 2025. https://news.mit.edu/2025/bridging-philosophy-and-ai-to-explore-computing-ethics-0211

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk