AI Trades Your Money: What Retail Investors Need to Know Now

The robots are taking over Wall Street, but this time they're not just working for the big players. Retail investors, armed with smartphones and a healthy dose of optimism, are increasingly turning to artificial intelligence to guide their investment decisions. According to recent research from eToro, the use of AI-powered investment solutions amongst retail investors jumped by 46% in 2025, with nearly one in five now utilising these tools to manage their portfolios. It's a digital gold rush, powered by algorithms that promise to level the playing field between Main Street and Wall Street.
But here's the trillion-dollar question: Are these AI-generated market insights actually improving retail investor decision-making, or are they simply amplifying noise in an already chaotic marketplace? As these systems become more sophisticated and ubiquitous, the financial world faces a reckoning. The platforms serving these insights must grapple with thorny questions about transparency, accountability, and the very real risk of market manipulation.
The Rise of the Robot Advisors
The numbers tell a compelling story. Assets under management in the robo-advisors market reached $1.8 trillion in 2024, with the United States leading at $1.46 trillion. The global robo-advisory market was valued at $8.39 billion in 2024 and is projected to grow to $69.32 billion by 2032, exhibiting a compound annual growth rate of 30.3%. The broader AI trading platform market is expected to increase from $11.26 billion in 2024 to $69.95 billion by 2034.
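Those headline projections are easy to sanity-check. A few lines of Python, using only the endpoint figures quoted above, recover the stated growth rates:

```python
# Sanity check of the growth figures cited above, from endpoints alone.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Robo-advisory market: $8.39bn (2024) -> $69.32bn (2032)
print(f"Robo-advisory CAGR: {cagr(8.39, 69.32, 8):.1%}")          # ~30.2%, matching the cited 30.3%

# AI trading platform market: $11.26bn (2024) -> $69.95bn (2034)
print(f"AI trading platform CAGR: {cagr(11.26, 69.95, 10):.1%}")  # ~20.0%
```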
This isn't just institutional money quietly flowing into algorithmic strategies. Retail investors are leading the charge, with the retail segment expected to expand at the fastest rate. Why? Increased accessibility of AI-powered tools, user-friendly interfaces, and the democratising effect of these technologies. AI platforms offer automated investment tools and educational resources, making it easier for individuals with limited experience to participate in the market.
The platforms themselves have evolved considerably. Leading robo-advisors like Betterment and Wealthfront both use AI for investing, automatic portfolio rebalancing, and tax-loss harvesting. They reinvest dividends automatically and invest money in exchange-traded funds rather than individual stocks. Betterment charges 0.25% annually for its Basic plan, whilst Wealthfront employs Modern Portfolio Theory and provides advanced features including direct indexing for larger accounts.
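To make one of those mechanics concrete, here is a minimal, hypothetical sketch of the tax-loss harvesting idea: sell holdings trading below cost basis to realise a deductible loss, then rotate into a similar (but not "substantially identical") fund. The fund names and figures are invented, and a production system must also handle wash-sale windows, tax lots, and rebalancing constraints:

```python
# Toy illustration of tax-loss harvesting. All names and figures are
# hypothetical; real systems track wash-sale rules and individual tax lots.
holdings = [
    {"fund": "US_EQUITY_A", "cost_basis": 10_000, "value": 8_700, "substitute": "US_EQUITY_B"},
    {"fund": "BOND_A",      "cost_basis": 5_000,  "value": 5_150, "substitute": "BOND_B"},
]

for h in holdings:
    loss = h["cost_basis"] - h["value"]
    if loss > 0:  # only positions with an unrealised loss are harvested
        print(f"Harvest: sell {h['fund']} (realise ${loss:,} loss), buy {h['substitute']}")
```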
Generational shifts drive this adoption. According to the World Economic Forum's survey of 13,000 investors across 13 countries, uptake varies sharply by generation. Millennials are now the most likely to use AI tools at 72%, up from 61% a year ago and surpassing Gen Z at 69%. Even more telling: 40% of Gen Z investors are using AI chatbots for financial coaching or advice, compared with only 8% of baby boomers.
Overcoming Human Biases
The case for AI in retail investing rests on a compelling premise: humans are terrible at making rational investment decisions. We're emotional, impulsive, prone to recency bias, and easily swayed by fear and greed. Research from Deutsche Bank in 2025 highlights that whilst human traders remain susceptible to recent events and easily available information, AI systems maintain composure during market swings.
During the market volatility of April 2025, AI platforms like dbLumina read the turbulence as a signal to buy, even as many individual investors responded with fear and hesitation. This capacity to override emotional decision-making represents one of AI's most significant advantages.
Research focusing on AI-driven financial robo-advisors examined how these systems influence retail investors' loss aversion and overconfidence biases. Analysing data from 461 retail investors through structural equation modelling, the study found that robo-advisors' perceived personalisation, interactivity, autonomy, and algorithm transparency substantially mitigated investors' overconfidence and loss-aversion biases.
The Ontario Securities Commission released a comprehensive report on artificial intelligence in supporting retail investor decision-making. The experiment consisted of an online investment simulation testing how closely Canadians followed suggestions for investing a hypothetical $20,000. Participants were told suggestions came from a human financial services provider, an AI tool, or a blended approach.
Notably, there was no discernible difference in adherence to investment suggestions provided by a human or AI tool, indicating Canadian investors may be receptive to AI advice. More significantly, 29% of Canadians are already using AI to access financial information, with 90% of those using it to inform their financial decisions to at least a moderate extent.
The Deloitte Center for Financial Services predicts that generative AI-enabled applications could become the leading source of retail investment advice by 2027, growing from today's nascent stage to a projected 78% usage in 2028.
Black Boxes and Algorithmic Opacity
But here's where things get murky. Unlike rule-based bots, AI systems adapt their strategies based on market behaviour, meaning even developers may not fully predict each action. Regulators demand audit-ready procedures, yet this “black box” quality makes it difficult to explain why a particular trade was made, and the resulting lack of explainability risks undermining trust amongst regulators and clients.
Explainable artificial intelligence (XAI) represents an attempt to solve this problem. XAI allows human users to comprehend and trust results created by machine learning algorithms. Unlike traditional AI models that function as black boxes, explainable AI strives to make reasoning accessible and understandable.
In finance, where decisions affect millions of lives and billions of dollars, explainability isn't just desirable; it's often a regulatory and ethical requirement. Customers and regulators need to trust these decisions, which means understanding why and how they were made.
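What explainability looks like in practice varies, but the simplest version is per-feature attribution for a single recommendation. The sketch below uses a transparent linear model on synthetic data with invented feature names; real deployments typically apply post-hoc methods such as SHAP or LIME to more opaque models:

```python
# Minimal post-hoc explanation of a toy "buy signal" model: per-feature
# contribution = coefficient * feature value. Data and names are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["momentum_30d", "pe_ratio_z", "news_sentiment", "volume_spike"]
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.2, -0.8, 0.9, 0.3]) + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

x = X[0]                             # one recommendation to explain
contributions = model.coef_[0] * x   # signed contribution of each feature
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.2f}")
```

An investor-facing explanation would translate the top rows into plain language ("strong 30-day momentum drove this signal"), which is precisely the layer most platforms still lack.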
Some platforms are attempting to address this deficit. Tickeron assigns a “Confidence Level” to each prediction and allows users to review the AI's past accuracy on that specific pattern and stock. TrendSpider consolidates advanced charting, market scanning, strategy backtesting, and automated execution, providing retail traders with institutional-grade capabilities.
However, these represent exceptions rather than the rule. The lack of transparency in many AI trading systems makes it difficult for stakeholders to understand how decisions are being made, raising concerns about fairness.
The Flash Crash Warning
If you need a cautionary tale about what happens when algorithms run amok, look no further than May 6, 2010. The “Flash Crash” remains one of the most significant examples of how algorithmic trading can contribute to extreme market volatility. The Dow Jones Industrial Average plummeted nearly 1,000 points (about 9%) within minutes before rebounding almost as quickly. Even though indices partially recovered the same day, the crash briefly erased almost $1 trillion in market value.
What triggered it? At 2:32 pm EDT, against a backdrop of unusually high volatility and thinning liquidity, a large fundamental trader (Waddell & Reed Financial Inc.) initiated a sell programme for 75,000 E-Mini S&P contracts (valued at approximately $4.1 billion). The computer algorithm was set to target an execution rate of 9% of the trading volume calculated over the previous minute, but without regard to price or time.
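The mechanism is easy to caricature in code. The sketch below is a toy reconstruction of a volume-participation sell algorithm, not the actual 2010 programme, which is known only from the CFTC/SEC report's description; it shows why pegging orders to recent volume with no price or time limit is dangerous:

```python
# Toy volume-participation seller: each minute, sell a fixed share of the
# previous minute's volume, regardless of price. Hypothetical figures.
def participation_orders(minute_volumes, total_to_sell, rate=0.09):
    """Yield per-minute sell quantities at `rate` of the prior minute's volume."""
    remaining = total_to_sell
    for prev_volume in minute_volumes:
        if remaining <= 0:
            break
        qty = min(remaining, int(prev_volume * rate))
        remaining -= qty
        yield qty

# The feedback loop: the algorithm's own selling inflates volume, which
# raises the next minute's target, accelerating execution into a falling market.
volumes = [50_000, 80_000, 140_000, 260_000]   # hypothetical escalating volume
print(list(participation_orders(volumes, total_to_sell=75_000)))
```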
High-frequency traders quickly bought and then resold contracts to each other, generating a “hot potato” volume effect. In 14 seconds, high-frequency traders traded over 27,000 contracts, accounting for about 49% of total trading volume, whilst buying only about 200 additional contracts net.
One example that sums up the volatile afternoon: Accenture fell from nearly $40 to one cent and recovered all of its value within seconds. Over 20,000 trades representing 5.5 million shares were executed at prices more than 60% away from their 2:40 pm value, and these trades were subsequently cancelled.
The flash crash demonstrated how unrelated trading algorithms activated across different parts of the financial marketplace can cascade into a systemic event. By reacting to rapidly changing market signals immediately, multiple algorithms generate sharp price swings that lead to short-term volatility. The speed of the crash, largely driven by an algorithm, led agencies like the SEC to enact new “circuit breakers” and mechanisms to halt runaway market crashes. The Limit Up-Limit Down mechanism, implemented in 2012, now prevents trades in National Market System securities from occurring outside of specified price bands.
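In spirit, the LULD check is simple, as the hedged sketch below suggests; the real mechanism derives its bands from a rolling five-minute reference price with tier-specific percentages, so the flat 5% here is illustrative only:

```python
# Simplified Limit Up-Limit Down style band check. Real LULD bands use a
# rolling reference price and tier-dependent percentages; 5% is illustrative.
def within_luld_band(price: float, reference_price: float, band_pct: float = 0.05) -> bool:
    """Return True if a trade price falls inside the allowed band."""
    lower = reference_price * (1 - band_pct)
    upper = reference_price * (1 + band_pct)
    return lower <= price <= upper

print(within_luld_band(41.90, 40.00))  # True: within 5% of the reference
print(within_luld_band(0.01, 40.00))   # False: an Accenture-style print is rejected
```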
The Herding Problem
Here's an uncomfortable truth about AI-powered trading: if everyone's algorithm is reading the same data and using similar strategies, we risk creating a massive herding problem. Research examining algorithmic trading and herding behaviour investigates precisely this, and the findings carry critical implications: algorithmic trading induces herding in some market conditions and anti-herding in others.
Research has observed that the correlation between asset prices has risen, suggesting that AI systems might encourage herding behaviour amongst traders. As a result, market movements could be intensified, leading to greater volatility. Herd behaviour can emerge because different trading systems adopt similar investment strategies using the same raw data points.
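One crude symptom of this is rising average pairwise correlation between asset returns. The synthetic example below shows how a shared signal that many algorithms consume pushes correlations up; real herding studies use far richer measures, such as cross-sectional absolute deviation:

```python
# Synthetic illustration: assets driven by one shared "signal" plus noise
# become highly correlated, a crude proxy for algorithms herding on the
# same data. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
common = rng.normal(size=500)                      # shared signal all bots see
returns = 0.7 * common[:, None] + 0.3 * rng.normal(size=(500, 10))

def avg_pairwise_corr(r: np.ndarray) -> float:
    c = np.corrcoef(r.T)
    return float(c[np.triu_indices_from(c, k=1)].mean())

print(f"Average pairwise correlation: {avg_pairwise_corr(returns):.2f}")
# Raise the shared-signal weight and this approaches 1.0.
```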
The GameStop and AMC trading frenzy of 2021 offered a different kind of cautionary tale. In early 2021, GameStop experienced a “short squeeze”, with a price surge of almost 1,625% within a week. The squeeze was attributed to coordinated activity on Reddit's WallStreetBets subreddit. On January 28, 2021, GameStop stock reached an astonishing intraday high of $483, a meteoric rise from its price of under $20 at the beginning of the year.
Using Reddit, retail investors came together to act “collectively” on certain stocks. According to data firm S3 Partners, by 27 January short sellers had accumulated losses of more than $5 billion in 2021.
As Guy Warren, CEO of FinTech ITRS Group noted, “Until now, retail trading activity has never been able to move the market one way or another. However, following the successful coordination by a large group of traders, the power dynamic has shifted; exposing the vulnerability of the market as well as the weaknesses in firms' trading systems.”
Whilst GameStop represented social media-driven herding rather than algorithm-driven herding, it demonstrates the systemic risks when large numbers of retail investors coordinate their behaviour, whether through Reddit threads or similar AI recommendations. The risk models of certain hedge funds and institutional investors proved inadequate to the situation that unfolded in January; because no such event had ever happened before, those models were simply not equipped to handle it.
The Manipulation Question
Multiple major regulatory bodies have raised concerns about AI in financial markets, including the Bank of England, the European Central Bank, the U.S. Securities and Exchange Commission, the Dutch Authority for the Financial Markets, the International Organization of Securities Commissions, and the Financial Stability Board. Regulatory authorities are concerned about the potential for deep and reinforcement learning-based trading algorithms to engage in or facilitate market abuse. As the Dutch Authority for the Financial Markets has noted, naively programmed reinforcement learning algorithms could inadvertently learn to manipulate markets.
Research from Wharton professors confirms concerns about AI-driven market manipulation, emphasising the risk of AI collusion. Their work reveals the mechanisms behind AI collusion and demonstrates which mechanism dominates in different trading environments. Despite AI's perceived ability to enhance efficiency, the research shows an ever-present risk of AI-powered market manipulation through collusive trading, even when no individual algorithm intends to collude.
CFTC Commissioner Kristin Johnson expressed deep concern about the potential for abuse of AI technologies to facilitate fraud in markets, calling for heightened penalties for those who intentionally use AI technologies to engage in fraud, market manipulation, or the evasion of regulations.
The SEC's concerns are equally serious. Techniques such as deepfakes on social media to artificially inflate stock prices or disseminate false information pose substantial risks. The SEC has prioritised combating these activities, leveraging its in-house AI expertise to monitor the market for malicious conduct.
In March 2024, the SEC announced that San Francisco-based Global Predictions, along with Toronto-based Delphia, would pay a combined $400,000 in fines for falsely claiming to use artificial intelligence. SEC Chair Gensler has warned businesses against “AI washing”, the practice of making misleading AI-related claims, akin to greenwashing. Within the past year, the SEC has commenced four enforcement actions against registrants for misrepresenting AI's purported capability, scope, and usage.
Scholars argue that during market turmoil, AI accelerates volatility faster than traditional market forces. These systems operate as “black boxes”, leaving human programmers unable to understand why the AI makes particular trading decisions as the technology learns on its own. Traditional corporate and securities laws struggle to police AI because black-box algorithms make autonomous decisions without a culpable mental state.
The Bias Trap
AI ethics in finance is about ensuring that AI-driven decisions uphold fairness, transparency, and accountability. When AI models inherit biases from flawed data or poorly designed algorithms, they can unintentionally discriminate, restricting access to financial services and triggering compliance penalties.
AI models can learn and propagate biases if training data represents past discrimination, such as redlining, which systematically denied home loans to racial minorities. Machine learning models trained on historical mortgage data may deny loans at higher rates to applicants from historically marginalised neighbourhoods simply because their profile matches past biased decisions.
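A standard first screen for this kind of skew is the “four-fifths rule” used in US fair-lending and employment analysis: flag any group whose approval rate falls below 80% of the most-favoured group's. The numbers here are hypothetical:

```python
# Illustrative disparate-impact check (the "four-fifths rule").
# Approval counts are hypothetical.
approvals = {"group_a": (830, 1000), "group_b": (640, 1000)}  # (approved, applicants)

rates = {g: a / n for g, (a, n) in approvals.items()}
reference = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: approval {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```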
The proprietary nature of algorithms and their complexity allow discrimination to hide behind supposed objectivity. These “black box” algorithms can produce life-altering outputs with little knowledge of their inner workings. “Explainability” is a core tenet of fair lending systems. Lenders are required to tell consumers why they were denied, providing a paper trail for accountability.
This creates what AI ethics researchers call the “fairness paradox”: we can't directly measure bias against protected categories if we don't collect data about those categories, yet collecting such data raises concerns about potential misuse.
In December 2024, the Financial Conduct Authority announced an initiative to undertake research into AI bias to inform public discussion and published its first research note on bias in supervised machine learning. The FCA will regulate “critical third parties” (providers of critical technologies, including AI, to authorised financial services entities) under the Financial Services and Markets Act 2023.
The Consumer Financial Protection Bureau announced that it will expand the definition of “unfair” within the UDAAP regulatory framework to include conduct that is discriminatory, and plans to review “models, algorithms and decision-making processes used in connection with consumer financial products and services.”
The Guardrails Being Built
The regulatory landscape is evolving rapidly, though not always coherently. A challenge emerges from the divergence between regulatory approaches. The FCA largely sees its existing regulatory regime as fit for purpose, with enforcement action in AI-related matters likely to be taken under the Senior Managers and Certification Regime and the new Consumer Duty. Meanwhile, the SEC has proposed specific new rules targeting AI conflicts of interest. This regulatory fragmentation creates compliance challenges for firms operating across multiple jurisdictions.
On December 5, 2024, the CFTC released a nonbinding staff advisory addressing the use of AI by CFTC-regulated entities in derivatives markets, describing it as a “measured first step” to engage with the marketplace. The CFTC undertook a series of initiatives in 2024 to address registrants' and other industry participants' use of AI technologies. Whilst these actions do not constitute formal rulemaking or adoption of new regulations, they underscore the CFTC's continued attention to the potential benefits and risks of AI in financial markets.
The SEC has proposed Predictive Analytics Rules that would require broker-dealers and registered investment advisers to eliminate or neutralise conflicts of interest associated with their use of AI and other technologies. SEC Chair Gensler stated firms are “obligated to eliminate or otherwise address any conflicts of interest and not put their own interests ahead of their investors' interests.”
FINRA has identified several regulatory risks for member firms associated with AI use that warrant heightened attention, including recordkeeping, customer information protection, risk management, and compliance with Regulation Best Interest. On June 27, 2024, FINRA issued a regulatory notice reminding member firms of their obligations.
In the United Kingdom, the Financial Conduct Authority publicly recognises the potential benefits of AI in financial services, running an AI sandbox for firms to test innovations. In October 2024, the FCA launched its AI Lab, which includes initiatives such as the Supercharged Sandbox, AI Live Testing, AI Spotlight, AI Sprint, and the AI Input Zone.
In May 2024, the European Securities and Markets Authority issued guidance to firms using AI technologies when providing investment services to retail clients. ESMA expects firms to comply with relevant MiFID II requirements, particularly regarding organisational aspects, conduct of business, and acting in clients' best interests. ESMA notes that whilst AI diffusion is still in its initial phase, the potential impact on retail investor protection is likely to be significant. Firms' decisions remain the responsibility of management bodies, irrespective of whether those decisions are taken by people or AI-based tools.
The EU's Artificial Intelligence Act came into force on August 1, 2024, classifying AI systems into four risk tiers: unacceptable, high, limited, and minimal/no risk.
What Guardrails and Disclaimers Are Actually Needed?
So what does effective oversight actually look like? Based on regulatory guidance and industry best practices, several key elements emerge.
Disclosure requirements must be comprehensive. Investment firms using AI and machine learning models should make baseline disclosures to clients. The SEC's proposal addresses conflicts of interest arising from AI use, requiring firms to evaluate and mitigate conflicts associated with their use of AI and predictive data analytics.
SEC Chair Gary Gensler emphasised that “Investor protection requires that the humans who deploy a model put in place appropriate guardrails” and “If you deploy a model, you've got to make sure that it complies with the law.” This human accountability remains crucial, even as systems become more autonomous.
The SEC, the North American Securities Administrators Association, and FINRA jointly warned that bad actors are using the growing popularity and complexity of AI to lure victims into scams. Investors should remember that securities laws generally require securities firms, professionals, exchanges, and other investment platforms to be registered. Red flags include high-pressure sales tactics by unregistered individuals, promises of quick profits, or claims of guaranteed returns with little or no risk.
Beyond regulatory requirements, platforms need practical safeguards. Firms like Morgan Stanley are implementing guardrails by limiting GPT-4 tools to internal use with proprietary data only, keeping risk low and compliance high.
Specific guardrails and disclaimers that should be standard include the following (a sketch of how these disclosures might travel with each recommendation appears after the list):
Clear Performance Disclaimers: AI-generated insights should carry explicit warnings that past performance does not guarantee future results, and that AI models can fail during unprecedented market conditions.
Confidence Interval Disclosure: Platforms should disclose confidence levels or uncertainty ranges associated with AI predictions, as Tickeron does with its Confidence Level system.
Data Source Transparency: Investors should know what data sources feed the AI models and how recent that data is, particularly important given how quickly market conditions change.
Limitation Acknowledgements: Clear statements about what the AI cannot do, such as predict black swan events, account for geopolitical shocks, or guarantee returns.
Human Oversight Indicators: Disclosure of whether human experts review AI recommendations and under what circumstances human intervention occurs.
Conflict of Interest Statements: Explicit disclosure if the platform benefits from directing users toward certain investments or products.
Algorithmic Audit Trails: Platforms should maintain comprehensive logs of how recommendations were generated to satisfy regulatory demands.
Education Resources: Rather than simply providing AI-generated recommendations, platforms should offer educational content to help users understand the reasoning and evaluate recommendations critically.
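One way to operationalise several of these items at once is to make every recommendation carry its own disclosures as structured data. The schema below is entirely hypothetical, invented here for illustration rather than drawn from any platform:

```python
# Hypothetical record bundling an AI recommendation with its disclosures
# and audit trail. Field names are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIRecommendation:
    ticker: str
    action: str                      # e.g. "buy", "hold", "sell"
    confidence: float                # calibrated probability, 0-1
    data_sources: list[str]          # what fed the model
    data_as_of: datetime             # how fresh the inputs are
    limitations: list[str]           # what the model cannot do
    human_reviewed: bool             # was there human oversight?
    conflicts_of_interest: str       # platform incentives, if any
    disclaimer: str = ("Past performance does not guarantee future results; "
                       "AI models can fail in unprecedented market conditions.")
    audit_log: list[str] = field(default_factory=list)

rec = AIRecommendation(
    ticker="XYZ", action="buy", confidence=0.62,
    data_sources=["price history", "news sentiment"],
    data_as_of=datetime.now(timezone.utc),
    limitations=["cannot anticipate black swan events"],
    human_reviewed=False,
    conflicts_of_interest="none declared",
)
rec.audit_log.append("generated by model v2.1 with feature set F")
```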
AI Literacy as a Prerequisite
Here's a fundamental problem: retail investors are adopting AI tools faster than they're developing AI literacy. According to the World Economic Forum's findings, 42% of people “learn by doing” when it comes to investing, 28% don't invest because they don't know how or find it confusing, and 70% of investors surveyed said they would invest more if they had more opportunities to learn.
Research highlights the importance of generative AI literacy, alongside climate and financial literacy, in shaping investor outcomes. The findings reveal disparities in current adoption and anticipated future use of generative AI across age groups, suggesting opportunities for targeted education.
The financial literacy of individual investors has a significant impact on stock market investment decisions. A large-scale randomised controlled trial with over 28,000 investors at a major Chinese brokerage firm found that GenAI-powered robo-advisors significantly improve financial literacy and shift investor behaviour toward more diversified, cost-efficient, and risk-aware investment choices.
This suggests a virtuous cycle: properly designed AI tools can actually enhance financial literacy whilst simultaneously providing investment guidance. But this only works if the tools are designed with education as a primary goal, not just maximising assets under management or trading volume.
AI is the leading topic that retail investors plan to learn more about over the next year (23%), followed by cryptoassets and blockchain technology (22%), tax rules (18%), and ETFs (17%), according to eToro research. This demonstrates investor awareness of the knowledge gap, but platforms and regulators must ensure educational resources are readily available and comprehensible.
The Double-Edged Sword
For investors, AI-synthesised alternative data can offer an information edge, enabling them to analyse and predict consumer behaviour to gain insight ahead of company earnings announcements. According to Michael Finnegan, CEO of Eagle Alpha, there were just 100 alternative data providers in the 2010s; now there are 2,000. In 2023, Deloitte predicted that the global market for alternative data would reach $137 billion by 2030, increasing at a compound annual growth rate of 53%.
But alternative data introduces transparency challenges. How was the data collected? Is it representative? Has it been verified? When AI models train on alternative data sources like satellite imagery of parking lots, credit card transaction data, or social media sentiment, the quality and reliability of insights depend entirely on the underlying data quality.
Adobe observed that between November 1 and December 31, 2024, traffic from generative AI sources to U.S. retail sites increased by 1,300 percent compared to the same period in 2023. This demonstrates how quickly AI is being integrated into consumer behaviour, but it also means AI models analysing retail trends are increasingly analysing other AI-generated traffic, creating potential feedback loops.
Combining Human and Machine Intelligence
Perhaps the most promising path forward isn't choosing between human and artificial intelligence, but thoughtfully combining them. The Ontario Securities Commission research found no discernible difference in adherence to investment suggestions provided by a human or AI tool, but the “blended” approach showed promise.
The likely trajectory points toward configurable, focused AI modules, explainable systems designed to satisfy regulators, and new user interfaces where investors interact with AI advisors through voice, chat, or immersive environments. What will matter most is not raw technological horsepower, but the ability to integrate machine insights with human oversight in a way that builds durable trust.
The future of automated trading will be shaped by demands for greater transparency and user empowerment. As traders become more educated and tech-savvy, they will expect full control and visibility over the tools they use. We are likely to see more platforms offering open-source strategy libraries, real-time risk dashboards, and community-driven AI training models.
Research examining volatility shows that market volatility triggers opposing trading behaviours: as volatility increases, Buy-side Algorithmic Traders retreat whilst High-Frequency Traders intensify trading, possibly driven by opposing hedging and speculative motives, respectively. This suggests that different types of AI systems serve different purposes and should be matched to different investor needs and risk tolerances.
Making the Verdict
So are AI-generated market insights improving retail investor decision-making or merely amplifying noise? The honest answer is both, depending on the implementation, regulation, and education surrounding these tools.
The evidence suggests AI can genuinely help. Research shows that properly designed robo-advisors reduce behavioural biases, improve diversification, and enhance financial literacy. The Ontario Securities Commission found that 90% of Canadians using AI for financial information are using it to inform their decisions to at least a moderate extent. AI maintains composure during market volatility when human traders panic.
But the risks are equally real. Black-box algorithms lack transparency. Herding behaviour can amplify market movements. Market manipulation becomes more sophisticated. Bias in training data perpetuates discrimination. Flash crashes demonstrate how algorithmic cascades can spiral out of control. The widespread adoption of similar AI strategies could create systemic fragility.
The platforms serving these insights must ensure transparency and model accountability through several mechanisms:
Mandatory Explainability: Regulators should require AI platforms to provide explanations comprehensible to retail investors, not just data scientists. XAI techniques need to be deployed as standard features, not optional add-ons.
Independent Auditing: Third-party audits of AI models should become standard practice, examining both performance and bias, with results publicly available in summary form.
Stress Testing: AI models should be stress-tested against historical market crises to understand how they would have performed during the 2008 financial crisis, the 2010 Flash Crash, or the 2020 pandemic crash.
Confidence Calibration: AI predictions should include properly calibrated confidence intervals, and platforms should track whether their stated confidence levels match actual outcomes over time (see the calibration sketch after this list).
Human Oversight Requirements: For retail investors, particularly those with limited experience, AI recommendations above certain risk thresholds should trigger human review or additional warnings.
Education Integration: Platforms should be required to provide educational content explaining how their AI works, what it can and cannot do, and how investors should evaluate its recommendations.
Bias Testing and Reporting: Regular testing for bias across demographic groups, with public reporting of results and remediation efforts.
Incident Reporting: When AI systems make significant errors or contribute to losses, platforms should be required to report these incidents to regulators and communicate them to affected users.
Interoperability and Portability: To prevent lock-in effects and enable informed comparison shopping, standards should enable investors to compare AI platform performance and move their data between platforms.
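The calibration point in particular is testable. The synthetic sketch below compares stated confidence buckets with realised hit rates for a deliberately overconfident model; a well-calibrated system would show the two matching:

```python
# Synthetic calibration check: bucket predictions by stated confidence and
# compare with realised accuracy. The model here is built to run 10 points
# overconfident, which every bucket then reveals.
import numpy as np

rng = np.random.default_rng(2)
stated = rng.uniform(0.5, 0.95, size=2000)        # platform's stated confidence
outcomes = rng.random(2000) < (stated - 0.10)     # true hit rate lags by 0.10

for lo in np.arange(0.5, 0.95, 0.1):
    mask = (stated >= lo) & (stated < lo + 0.1)
    print(f"stated {lo:.1f}-{lo + 0.1:.1f}: realised {outcomes[mask].mean():.2f}")
```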
The fundamental challenge is that AI is neither inherently good nor inherently bad for retail investors. It's a powerful tool that can be used well or poorly, transparently or opaquely, in investors' interests or platforms' interests.
Today, the widespread use of AI often widens the gap between institutional investors and retail traders: whilst large firms have access to advanced algorithms and capital, individual investors often lack such resources, creating an uneven playing field. Yet AI also has the potential to narrow this gap by democratising access to sophisticated analysis, but only if platforms, regulators, and investors themselves commit to transparency and accountability.
As AI becomes the dominant force in retail investing, we need guardrails robust enough to prevent manipulation and protect investors, but flexible enough to allow innovation and genuine improvements in decision-making. We need disclaimers honest about both capabilities and limitations, not legal boilerplate designed to shield platforms from liability. We need education that empowers investors to use these tools critically, not marketing that encourages blind faith in algorithmic superiority.
The algorithm will see you now. The question is whether it's working for you or whether you're working for it. And the answer to that question depends on the choices we make today about transparency, accountability, and the kind of financial system we want to build.
References & Sources
eToro. (2025). Retail investors flock to AI tools, with usage up 46% in one year
Statista. (2024). Global: robo-advisors AUM 2019-2028
Fortune Business Insights. (2024). Robo Advisory Market Size, Share, Trends | Growth Report, 2032
Precedence Research. (2024). AI Trading Platform Market Size and Forecast 2025 to 2034
NerdWallet. (2024). Betterment vs. Wealthfront: 2024 Comparison
World Economic Forum. (2025). 2024 Global Retail Investor Outlook
Deutsche Bank. (2025). AI platforms and investor behaviour during market volatility
Taylor & Francis Online. (2025). The role of robo-advisors in behavioural finance, shaping investment decisions
Ontario Securities Commission. (2024). Artificial Intelligence and Retail Investing: Use Cases and Experimental Research
Deloitte. (2024). Retail investors may soon rely on generative AI tools for financial investment advice
uTrade Algos. (2024). Why Transparency Matters in Algorithmic Trading
Finance Magnates. (2024). Secret Agent: Deploying AI for Traders at Scale
CFA Institute. (2025). Explainable AI in Finance: Addressing the Needs of Diverse Stakeholders
IBM. (n.d.). What is Explainable AI (XAI)?
Springer. (2024). Explainable artificial intelligence (XAI) in finance: a systematic literature review
Wikipedia. (2024). 2010 flash crash
CFTC. (2010). The Flash Crash: The Impact of High Frequency Trading on an Electronic Market
Corporate Finance Institute. (n.d.). 2010 Flash Crash – Overview, Main Events, Investigation
Nature. (2025). The dynamics of the Reddit collective action leading to the GameStop short squeeze
Harvard Law School Forum on Corporate Governance. (2022). GameStop and the Reemergence of the Retail Investor
Roll Call. (2021). Social media offered lessons, rally point for GameStop trading
Nature. (2025). Research on the impact of algorithmic trading on market volatility
Wiley Online Library. (2024). Does Algorithmic Trading Induce Herding?
Sidley Austin. (2024). Artificial Intelligence in Financial Markets: Systemic Risk and Market Abuse Concerns
Wharton School. (2024). AI-Powered Collusion in Financial Markets
U.S. Securities and Exchange Commission. (2024). SEC enforcement actions regarding AI misrepresentation.
Brookings Institution. (2024). Reducing bias in AI-based financial services
EY. (2024). AI discrimination and bias in financial services
Proskauer Rose LLP. (2024). A Tale of Two Regulators: The SEC and FCA Address AI Regulation for Private Funds
Financial Conduct Authority. (2024). FCA AI lab launch and bias research initiative.
Sidley Austin. (2025). Artificial Intelligence: U.S. Securities and Commodities Guidelines for Responsible Use
FINRA. (2024). Artificial Intelligence (AI) and Investment Fraud
ESMA. (2024). ESMA provides guidance to firms using artificial intelligence in investment services
Deloitte. (2023). Alternative data market predictions.
Eagle Alpha. (2024). Growth of alternative data providers.
Adobe. (2024). Generative AI traffic to retail sites analysis.

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk