The future of shopping isn't happening on a screen. It's happening in the spaces between your words and a machine's understanding of what you want. When you ask an AI agent to find you the best noise-cancelling headphones under £300, you're not just outsourcing a Google search. You're delegating an entire decision-making process to an algorithmic intermediary that will reshape how billions of pounds flow through the digital economy.

This is agentic commerce: AI systems that browse, compare, negotiate, and purchase on behalf of humans. And it's already here. OpenAI's ChatGPT now offers instant checkout for purchases from over one million Shopify merchants. Perplexity launched its Comet browser with AI agents that can autonomously complete purchases from any retailer. Opera introduced Browser Operator, the first major browser with AI-based agentic capabilities built directly into its architecture. Google is expanding its AI Mode shopping interface across the United States, adding capabilities that let customers track prices and confirm purchases without ever visiting a retailer's website.

The numbers tell a story of exponential transformation. Traffic to US retail sites from generative AI browsers and chat services increased 4,700 per cent year-over-year in July 2025, according to industry tracking data. McKinsey projects that by 2030, the US business-to-consumer retail market alone could see up to one trillion dollars in orchestrated revenue from agentic commerce, with global projections reaching three trillion to five trillion dollars.

But these astronomical figures obscure a more fundamental question: When AI agents become the primary interface between consumers and commerce, who actually benefits? The answer is forcing a reckoning across the entire e-commerce ecosystem, from multinational retailers to affiliate marketers, from advertising platforms to regulatory bodies. Because agentic commerce doesn't just change how people shop. It fundamentally rewrites the rules about who gets paid, who gets seen, and who gets trusted in the digital marketplace.

The Funnel Collapses

The traditional e-commerce funnel has been the foundational model of online retail for two decades. Awareness leads to interest, interest leads to consideration, consideration leads to conversion. Each stage represented an opportunity for merchants to influence behaviour through advertising, product placement, personalised recommendations, and carefully optimised user experience. The funnel existed because friction existed: the cognitive load of comparing options, the time cost of browsing multiple sites, the effort required to complete a transaction.

AI agents eliminate that friction by compressing the entire funnel into a single conversational exchange. When a customer arriving via an AI agent reaches a retailer's site, they're already further down the sales funnel with stronger intent to purchase. Research shows these customers are ten per cent more engaged than traditional visitors. The agent has already filtered options, evaluated trade-offs, and narrowed the field. The customer isn't browsing. They're buying.

This compression creates a paradox for retailers. Higher conversion rates and more qualified traffic represent the holy grail of e-commerce optimisation. Yet if the AI agent can compress browsing, selection, and checkout into the same dialogue, retailers that sit outside the conversation risk ceding both visibility and sales entirely.

Boston Consulting Group's modelling suggests retailers could see earnings before interest and taxes erode by up to 500 basis points, stemming from price transparency, smaller order sizes, and agent platform fees. That five per cent margin compression might not sound catastrophic until you consider that many retailers operate on margins of ten to fifteen per cent. Agentic commerce could eliminate a third to half of their profitability.
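The arithmetic behind that claim is worth spelling out. A minimal sketch, using BCG's 500-basis-point estimate against the typical margins cited above (the retailers are hypothetical):

```python
# Illustrative arithmetic only: BCG's projected erosion of up to 500 basis
# points, applied to hypothetical retailers on 10% and 15% EBIT margins.
EROSION_BPS = 500

def remaining_margin(ebit_margin_pct: float, erosion_bps: int = EROSION_BPS) -> float:
    """EBIT margin left after the projected erosion, in percentage points."""
    return ebit_margin_pct - erosion_bps / 100

for margin in (10.0, 15.0):
    lost_share = (EROSION_BPS / 100) / margin  # fraction of profitability erased
    print(f"{margin:.0f}% margin -> {remaining_margin(margin):.0f}% left "
          f"({lost_share:.0%} of profit erased)")
```

On a ten per cent margin, five points of erosion wipes out half of profitability; on fifteen per cent, a third.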

The risks extend beyond margins. Retailers face diminished direct access to customers, weaker brand loyalty, and growing dependence on intermediary platforms. When customers interact primarily with an AI agent rather than a retailer's website or app, the retailer loses the ability to shape the shopping experience, collect first-party data, or build lasting relationships. The brand becomes commoditised: a product specification in an agent's database rather than a destination in its own right.

This isn't speculation. Walmart announced a partnership with OpenAI enabling seamless “chat to checkout” experiences. Shopify integrated with ChatGPT to allow instant purchases from its merchant base. Etsy followed suit. These aren't defensive moves. They're admissions that the platform layer is shifting, and retailers must establish presence where the conversations are happening, even if it means surrendering control over the customer relationship.

The Revenue Model Revolution

If agentic commerce destroys the traditional funnel, it also demolishes the advertising models built upon that funnel. Consider Google Shopping, which has operated for years on a cost-per-click model with effective commission rates around twelve per cent. Or Amazon, whose marketplace charges sellers approximately fifteen per cent in fees and generates billions more through advertising within search results and product pages. These models depend on human eyeballs viewing sponsored listings, clicking through to product pages, and making purchase decisions influenced by paid placement.

AI agents have no eyeballs. They don't see banner ads or sponsored listings. They process structured data, evaluate parameters, and optimise for the objectives their users specify. The entire edifice of digital retail advertising, which represents a 136 billion dollar industry in 2025, suddenly faces an existential question: How do you advertise to an algorithm?

The early answer appears to be: You don't advertise. You pay for performance. OpenAI has reportedly discussed a two per cent affiliate commission model for purchases made through its shopping features. That's one-sixth of Google Shopping's traditional rates and roughly one-seventh of Amazon's marketplace fees. The economics are straightforward. In a world where AI agents handle product discovery and comparison, platforms can charge lower fees because they're not operating expensive advertising infrastructure or maintaining complex seller marketplaces. They're simply connecting buyers and sellers, then taking a cut of completed transactions.
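To make the gap concrete, here is a toy comparison of the platform's take on a £100 basket under the three fee levels quoted above. The rates are the reported and approximate figures from the text, not confirmed pricing:

```python
# Hypothetical fee comparison on a £100 basket, using the rates quoted in
# the text: ~2% reported AI agent commission, ~12% effective Google Shopping
# rate, ~15% approximate Amazon marketplace fees.
basket = 100.0
rates = {
    "AI agent (reported)": 0.02,
    "Google Shopping (effective)": 0.12,
    "Amazon marketplace (approx.)": 0.15,
}

for channel, rate in rates.items():
    take = basket * rate
    print(f"{channel}: platform takes £{take:.2f}, merchant keeps £{basket - take:.2f}")
```

On identical revenue, the merchant keeps £98 under the reported agent model against £85 on a fifteen per cent marketplace fee, which is the whole competitive pitch in one line of arithmetic.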

This shift from advertising-based revenue to performance-based commissions has profound implications. Advertisers will spend 12.42 billion dollars on affiliate programmes in 2025, up 10.2 per cent year-over-year, driving thirteen per cent of US e-commerce sales. The affiliate marketing ecosystem has adapted quickly to the rise of AI shopping agents, with seventy per cent of citations for some retailers in large language models stemming from affiliate content.

But the transition hasn't been smooth. Retail affiliate marketing revenues fell more than fifteen per cent year-over-year in the second quarter of 2024, after Google's search algorithm updates deprioritised many affiliate sites. If ChatGPT or Perplexity become the primary shopping interfaces, and those platforms negotiate direct relationships with merchants rather than relying on affiliate intermediaries, the affiliate model could face an existential threat.

Yet the performance-based structure of affiliate marketing may also be its salvation. Cost-per-acquisition and revenue-share pricing align perfectly with agentic commerce, where marketing dollars are spent only when a purchase is made. Industry analysts predict retail media networks will reshape into affiliate-like ecosystems, complete with new metrics such as “cost per agent conversion.”

The retail media network model faces even more severe disruption. Retail media networks, which allow brands to advertise within retailer websites and apps, are projected to reach 136 billion dollars in value during 2025. But these networks depend on high human traffic volumes consuming brand messages, sponsored product listings, and targeted advertisements. When agentic AI threatens those traffic volumes by handling shopping outside retailer environments, the entire business model begins to crumble.

The industry response has been to pivot from business-to-consumer advertising to what executives are calling business-to-AI: competing for algorithmic attention rather than human attention. Traditional brand building, with its emphasis on emotional connections, lifestyle aspirations, and community, suddenly becomes the most valuable marketing strategy. Because whilst AI agents can evaluate specifications and compare prices, they still rely on the corpus of available information to make recommendations. A brand that has invested in thought leadership, earned media coverage, and authentic community engagement will appear more frequently in that corpus than a brand that exists only as a product listing in a database.

The new battleground isn't the moment of purchase. It's the moment of instruction, when a human tells an AI agent what they're looking for. Influence that initial framing and you influence the entire transaction.

The Merchant's Dilemma

For retailers, agentic commerce presents an agonising choice. Participate in these new platforms and surrender margin, control, and customer data. Refuse to participate and risk becoming invisible to a growing segment of high-intent shoppers.

The mathematics of merchant incentives in this environment grow complex quickly. If Target and Walmart stock the same product at the same price, how does an AI agent decide which retailer to recommend? In traditional e-commerce, the answer involves search engine optimisation, paid advertising, customer reviews, shipping speed, and loyalty programme benefits. In agentic commerce, the answer increasingly depends on which merchant is willing to pay the AI platform a performance incentive.
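None of the major platforms publishes its ranking logic, but a hypothetical scoring sketch shows how easily a commission term could tip a recommendation between two otherwise identical offers. Every field, weight, and figure below is invented for illustration; this is not any platform's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    retailer: str
    price: float       # identical here, as in the Target/Walmart example
    ship_days: int     # delivery speed
    rating: float      # 0-5 aggregate review score
    commission: float  # fee the retailer pays the platform, as a fraction

def score(o: Offer, commission_weight: float) -> float:
    # Toy utility: cheaper, faster, better-reviewed offers score higher.
    # A positive commission_weight models a pay-to-play tilt.
    return (-o.price / 100 - 0.5 * o.ship_days + o.rating
            + commission_weight * o.commission * 100)

offers = [Offer("Target", 49.99, ship_days=2, rating=4.6, commission=0.03),
          Offer("Walmart", 49.99, ship_days=1, rating=4.5, commission=0.01)]

for w in (0.0, 0.5):  # neutral agent vs commission-influenced agent
    best = max(offers, key=lambda o: score(o, w))
    print(f"commission_weight={w}: recommends {best.retailer}")
```

With the commission term switched off, faster shipping wins; give commissions even modest weight and the higher-paying retailer comes out on top, with the user none the wiser.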

Industry analysts worry this creates a “pay to play” dynamic reminiscent of Google's shift from organic search results to advertising-dominated listings. Anyone who has used Google knows how much the first page of search results is stuffed with sponsored listings. Could agentic commerce go the same way? Currently, executives at AI companies insist their algorithms pick the best possible choices without pay-to-play arrangements. But when significant money is involved, the concern is whether those principles can hold.

Perplexity has directly criticised Amazon for being “more interested in serving you ads, sponsored results, and influencing your purchasing decisions with upsells and confusing offers.” The criticism isn't just rhetorical posturing. It's a competitive claim: that AI agents provide a cleaner, more consumer-focused shopping experience precisely because they're not corrupted by advertising revenue. Whether that purity can survive as agentic commerce scales to trillions of pounds in transaction volume remains an open question.

Some merchants are exploring alternative incentive structures. Sales performance incentive funds, where retailers pay commissions to AI platforms only when purchases are completed, align merchant interests with platform performance. Dynamic pricing strategies, where retailers offer AI platforms exclusive pricing in exchange for preferential recommendations, create a more transparent marketplace for algorithmic attention. Subscription models, where merchants pay fixed fees for inclusion in AI agent recommendation databases, avoid the pay-per-click inflation that has plagued search advertising.
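A merchant weighing these structures is ultimately comparing payout curves. This sketch contrasts a pay-on-completion commission with a flat subscription fee; all figures are invented for illustration:

```python
# Invented figures: a merchant comparing a per-sale commission (a sales
# performance incentive fund) with a flat monthly subscription for inclusion
# in an agent's recommendation database.
def cpa_cost(orders: int, avg_order: float, commission: float) -> float:
    """Total fees under a pay-on-completion commission model."""
    return orders * avg_order * commission

def breakeven_orders(subscription: float, avg_order: float, commission: float) -> float:
    """Order volume at which the flat fee becomes cheaper than commissions."""
    return subscription / (avg_order * commission)

print(f"CPA cost at 500 orders: £{cpa_cost(500, 60.0, 0.02):,.0f}")
print(f"break-even vs £1,000/month subscription: "
      f"{breakeven_orders(1000.0, 60.0, 0.02):.0f} orders")
```

Below the break-even volume the commission model is cheaper and carries no risk of paying for nothing, which is why performance pricing suits smaller merchants while high-volume sellers may prefer flat fees.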

But each of these approaches raises questions about transparency, fairness, and consumer welfare. If an AI agent recommends Target over Walmart because Target pays a higher commission, is that a betrayal of the user's trust? Or is it simply the same economic reality that has always governed retail, now made more efficient through automation? The answer depends largely on disclosure: whether users understand the incentives shaping the recommendations they receive.

The Transparency Crisis

Trust is the currency of AI shopping agents. If users don't trust that an agent is acting in their best interests, they won't delegate purchasing decisions. And trust requires transparency: understanding how recommendations are generated, what incentives influence those recommendations, and whether the agent is optimising for the user's preferences or the platform's profit.

The current state of transparency in AI shopping is, charitably, opaque. Most AI platforms provide little visibility into their recommendation algorithms. Users don't know which merchants have paid for preferential placement, how commissions affect product rankings, or what data is being used to personalise suggestions. The Federal Trade Commission has made clear there is no AI exemption from existing consumer protection laws, and firms deploying AI systems have an obligation to ensure those systems are transparent, explainable, fair, and empirically sound.

But transparency in AI systems is technically challenging. The models underlying ChatGPT, Claude, or Perplexity are “black boxes” even to their creators: neural networks with billions of parameters that produce outputs through processes that defy simple explanation. Algorithmic accountability requires examination of how results are reached, including transparency and justification of the AI model design, setup, and operation. That level of scrutiny is difficult when the systems themselves are proprietary and commercially sensitive.

The FTC has responded by launching Operation AI Comply, taking action against companies that rely on artificial intelligence to supercharge deceptive or unfair conduct. Actions have targeted companies promoting AI tools that enable fake reviews, businesses making unsupported claims about AI capabilities, and platforms that mislead consumers about how AI systems operate. The message is clear: automation doesn't absolve responsibility. If an AI agent makes false claims, deceptive recommendations, or unfair comparisons, the platform operating that agent is liable.

Bias represents another dimension of the transparency challenge. Research on early AI shopping agents revealed troubling patterns. Agents failed to conduct exhaustive comparisons, instead settling for the first “good enough” option they encountered. This creates what researchers call a “first-proposal bias”, in which responding quickly confers a ten- to thirty-fold advantage over merely offering higher quality. If an agent evaluates the first few results more thoroughly than later results, merchants have an incentive to ensure their products appear early in whatever databases the agent queries.
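A small simulation illustrates the cost of that behaviour. The satisficing agent below takes the first option that clears a quality bar rather than searching exhaustively; the distribution and threshold are synthetic, not taken from the cited research:

```python
import random

random.seed(7)  # reproducible synthetic experiment

def satisficing_pick(options, threshold):
    """Return the first option whose quality clears the bar (first-proposal bias)."""
    for opt in options:
        if opt >= threshold:
            return opt
    return max(options)  # fall back to exhaustive search if nothing qualifies

trials = 10_000
gap = 0.0
for _ in range(trials):
    options = [random.random() for _ in range(20)]  # quality scores in [0, 1)
    gap += max(options) - satisficing_pick(options, threshold=0.7)

print(f"mean quality sacrificed by taking the first 'good enough' option: {gap / trials:.3f}")
```

On this synthetic distribution the satisficing agent gives up roughly a tenth of the quality scale on average against an exhaustive search, which is exactly the gap a merchant can exploit by simply responding first.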

Data bias, algorithmic bias, and user bias are the main types of bias in AI e-commerce systems. Data bias occurs when training data isn't representative of actual shopping patterns, leading to recommendations that favour certain demographics, price points, or product categories. Algorithmic bias emerges from how models weigh different factors, potentially overvaluing characteristics that correlate with protected categories. User bias happens when AI agents learn from and amplify existing consumer prejudices rather than challenging them.

The automation bias problem compounds these challenges. People may be unduly trusting of answers from machines which seem neutral or impartial. Many chatbots are effectively built to persuade, designed to answer queries in confident language even when those answers are fictional. The tendency to trust AI output creates vulnerability when that output is shaped by undisclosed commercial incentives or reflects biased training data.

Microsoft recently conducted an experiment in a simulated marketplace, giving AI agents virtual currency and instructing them to make online purchases. The agents were repeatedly manipulated into spending the money on scams. This wasn't a failure of the AI's reasoning capability. It was a failure of the AI's ability to assess trust and legitimacy in an environment designed to deceive. If sophisticated AI systems from a leading technology company can be systematically fooled by online fraud, what does that mean for consumer protection when millions of people delegate purchasing decisions to similar agents?

The Regulatory Response

Regulators worldwide are scrambling to develop frameworks for agentic commerce before it becomes too embedded to govern effectively. New AI-specific laws have emerged to mandate proactive transparency, bias prevention, and consumer disclosures not otherwise required under baseline consumer protection statutes.

The FTC's position emphasises that existing consumer protection laws apply to AI systems. Using artificial intelligence and algorithms doesn't provide exemption from legal obligations around truthfulness, fairness, and non-discrimination. The agency has published guidance stating that AI tools should be transparent, explainable, fair, and empirically sound, whilst fostering accountability.

European regulators are taking a more prescriptive approach through the AI Act, which classifies AI systems by risk level and imposes requirements accordingly. Shopping agents that significantly influence purchasing decisions would likely qualify as high-risk systems, triggering obligations around transparency, human oversight, and impact assessment. The regulation mandates clear disclosure of whether an entity is human or artificial, responding to the increasing sophistication of AI interactions. Under the AI Act's framework, providers of high-risk AI systems must maintain detailed documentation of their training data, conduct conformity assessments before deployment, and implement post-market monitoring to detect emerging risks. Violations can result in fines up to seven per cent of global annual turnover.

But enforcement remains challenging. The opacity of black box models means consumers have no transparency into how exactly decisions are being made. Regulators often lack the technical expertise to evaluate these systems, and by the time they develop that expertise, the technology has evolved. The European Union is establishing an AI Office with dedicated technical staff and budget to build regulatory capacity, whilst the UK is pursuing a sector-specific approach that empowers existing regulators like the Competition and Markets Authority to address AI-related harms in their domains.

The cross-border nature of AI platforms creates additional complications. An AI agent operated by a US company, trained on data from multiple countries, making purchases from international merchants, creates a jurisdictional nightmare. Which country's consumer protection laws apply? Whose privacy regulations govern the data collection? Who has enforcement authority when harm occurs? The fragmentation extends beyond Western democracies. China's Personal Information Protection Law and algorithmic recommendation regulations impose requirements on AI systems operating within its borders, creating a third major regulatory regime that global platforms must navigate.

Industry self-regulation has emerged to fill some gaps. OpenAI, working with Stripe, developed the Agentic Commerce Protocol, a technical standard for how AI agents should interact with merchant systems. The protocol includes provisions for identifying agent traffic, disclosing commercial relationships, and maintaining transaction records. Google and Amazon rely on separate, incompatible systems, forcing merchants to translate their product catalogues into multiple formats.
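What "identifying agent traffic and disclosing commercial relationships" might look like in practice can be sketched as a structured request. The field names below are invented for illustration; they are emphatically not the actual Agentic Commerce Protocol schema:

```python
import json

# Purely illustrative sketch of a self-identifying, disclosure-carrying
# checkout request of the kind an agent-to-merchant standard might mandate.
# All field names are hypothetical, not the real protocol's schema.
agent_checkout_request = {
    "agent": {
        "operator": "example-ai-platform",  # who runs the agent
        "is_automated": True,               # explicit agent identification
        "acting_for": "user-7f3a",          # opaque reference to the human principal
    },
    "disclosure": {
        "commission_bps": 200,              # 2% fee, disclosed up front
        "sponsored_placement": False,       # no paid ranking influence claimed
    },
    "order": {
        "sku": "HEADPHONE-300",
        "qty": 1,
        "max_price_gbp": 300.00,            # spending cap set by the human
    },
}

print(json.dumps(agent_checkout_request, indent=2))
```

The point of such a structure is auditability: a merchant can distinguish agent traffic from human traffic, and a regulator can check whether the commercial relationship was disclosed at transaction time rather than buried in a terms-of-service page.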

The question of liability looms large. When an AI agent makes a purchase that the user later regrets, who is responsible? The user who gave the instruction? The platform that operated the agent? The merchant that fulfilled the order? Traditional consumer protection frameworks assume human decision-makers at each step. Agentic commerce distributes decision-making across human-AI interactions in ways that blur lines of responsibility.

The intellectual property dimensions add further complexity. Amazon has sued Perplexity, accusing the startup of violating its terms of service by using AI agents to access the platform without disclosing their automated nature. Amazon argues that Perplexity's agents degrade the Amazon shopping experience by showing products that don't incorporate personalised recommendations and may not reflect the fastest delivery options available. Perplexity counters that since its agent acts on behalf of a human user's direction, the agent automatically has the same permissions as the human user.

This dispute encapsulates the broader regulatory challenge: existing legal frameworks weren't designed for a world where software agents act autonomously on behalf of humans, making decisions, negotiating terms, and executing transactions.

The Power Redistribution

Step back from the technical and regulatory complexities, and agentic commerce reveals itself as fundamentally about power. Power to control the shopping interface. Power to influence purchasing decisions. Power to capture transaction fees. Power to shape which businesses thrive and which wither.

For decades, that power has been distributed across an ecosystem of search engines, social media platforms, e-commerce marketplaces, payment processors, and retailers themselves. Google controlled discovery through search. Facebook controlled attention through social feeds. Amazon controlled transactions through its marketplace. Each entity extracted value from its position in the funnel, and merchants paid tribute at multiple stages to reach customers.

Agentic commerce threatens to consolidate that distributed power into whoever operates the AI agents that consumers trust. If ChatGPT becomes the primary shopping interface for hundreds of millions of users, OpenAI captures influence that currently belongs to Google, Amazon, and every retailer's individual website. The company that mediates between consumer intent and commercial transaction holds extraordinary leverage over the entire economy.

This consolidation is already visible in partnership announcements. When Walmart, Shopify, and Etsy all integrate with ChatGPT within weeks of each other, they're acknowledging that OpenAI has become a gatekeeper they cannot afford to ignore. The partnerships are defensive, ensuring presence on a platform that could otherwise bypass them entirely.

But consolidation isn't inevitable. The market could fragment across multiple AI platforms, each with different strengths, biases, and commercial relationships. Google's AI Mode might excel at product discovery for certain categories. Perplexity's approach might appeal to users who value transparency over convenience. Smaller, specialised agents could emerge for specific verticals like fashion, electronics, or groceries.

The trajectory will depend partly on technical factors: which platforms build the most capable agents, integrate with the most merchants, and create the smoothest user experiences. But it will also depend on trust and regulation. If early AI shopping agents generate high-profile failures, consumer confidence could stall adoption. If regulators impose strict requirements that only the largest platforms can meet, consolidation accelerates.

For consumers, the implications are ambiguous. Agentic commerce promises convenience, efficiency, and potentially better deals through automated comparison and negotiation. Customers arriving via AI agents already demonstrate higher engagement and purchase intent. More than half of consumers anticipate using AI assistants for shopping by the end of 2025. Companies deploying AI shopping agents are delivering thirty per cent more conversions and forty per cent faster order fulfilment.

But those benefits come with risks. Loss of serendipity and discovery as agents optimise narrowly for stated preferences rather than exposing users to unexpected products. Erosion of privacy as more shopping behaviour flows through platforms that profile and monetise user data. Concentration of market power in the hands of a few AI companies that control access to billions of customers. Vulnerability to manipulation if agents' recommendations are influenced by undisclosed commercial arrangements.

Consider a concrete scenario. A parent asks an AI agent to find educational toys for a six-year-old who loves science. The agent might efficiently identify age-appropriate chemistry sets and astronomy kits based on thousands of product reviews and educational research. But if the agent prioritises products from merchants paying higher commissions over genuinely superior options, or if it lacks awareness of recent safety recalls, convenience becomes a liability. The parent saves time but potentially compromises on quality or safety in ways they would have caught through traditional research.

Marketplace or Manipulation

Agentic commerce is not a future possibility. It is a present reality growing at exponential rates. The question is not whether AI shopping agents will reshape retail, but how that reshaping will unfold and who will benefit from the transformation.

The optimistic scenario involves healthy competition between multiple AI platforms, strong transparency requirements that help users understand recommendation incentives, effective regulation that prevents the worst abuses whilst allowing innovation, and merchants who adapt by focusing on brand building, product quality, and authentic relationships.

In this scenario, consumers enjoy unprecedented convenience and potentially lower prices through automated comparison shopping. Merchants reach highly qualified customers with strong purchase intent. AI platforms create genuine value by reducing friction and improving matching between needs and products. Regulators establish guardrails that prevent manipulation whilst allowing experimentation. Picture a marketplace where an AI agent negotiates bulk discounts on behalf of a neighbourhood buying group, secures better warranty terms through automated comparison of coverage options, and flags counterfeit products by cross-referencing manufacturer databases, all whilst maintaining transparent logs of its decision-making process that users can audit.

The pessimistic scenario involves consolidation around one or two dominant AI platforms that extract monopoly rents, opaque algorithms shaped by undisclosed commercial relationships that systematically favour paying merchants over best products, regulatory capture or inadequacy that allows harmful practices to persist, and a race to the bottom on merchant margins that destroys business viability for all but the largest players.

In this scenario, consumers face an illusion of choice backed by recommendations shaped more by who pays the AI platform than by genuine product quality. Merchants become commodity suppliers in a system they can't influence without paying increasing fees. AI platforms accumulate extraordinary power and profit through their gatekeeper position. Imagine a future where small businesses cannot afford the fees to appear in AI agent recommendations, where platforms subtly steer purchases toward their own private-label products, and where consumers have no practical way to verify whether they're receiving genuinely optimal recommendations or algorithmically optimised profit extraction.

Reality will likely fall somewhere between these extremes. Some markets will consolidate whilst others fragment. Some AI platforms will maintain rigorous standards whilst others cut corners. Some regulators will successfully enforce transparency whilst others lack resources or authority. The outcome will be determined by choices made over the next few years by technology companies, policymakers, merchants, and consumers themselves.

The Stakeholder Reckoning

For technology companies building AI shopping agents, the critical choice is whether to prioritise short-term revenue maximisation through opaque commercial relationships or long-term trust building through transparency and user alignment. The companies that choose trust will likely capture sustainable competitive advantage as consumers grow more sophisticated about evaluating AI recommendations.

For policymakers, the challenge is crafting regulation that protects consumers without stifling the genuine benefits that agentic commerce can provide. Disclosure requirements, bias auditing, interoperability standards, and clear liability frameworks can establish baseline guardrails without prescribing specific technological approaches. The most effective regulatory strategies will focus on outcomes rather than methods: requiring transparency in how recommendations are generated, mandating disclosure of commercial relationships that influence agent behaviour, establishing accountability when AI systems cause consumer harm, and creating mechanisms for independent auditing of algorithmic decision-making. Policymakers must act quickly enough to prevent entrenchment of harmful practices but thoughtfully enough to avoid crushing innovation that could genuinely benefit consumers.

For merchants, adaptation means shifting from optimising for human eyeballs to optimising for algorithmic evaluation and human trust simultaneously. The retailers that will thrive are those that maintain compelling brands, deliver genuine value, and build direct relationships with customers that no AI intermediary can fully replace. This requires investment in product quality, authentic customer service, and brand building that goes beyond algorithmic gaming. Merchants who compete solely on price or visibility in AI agent recommendations will find themselves in a race to the bottom. Those who create products worth recommending and brands worth trusting will discover that AI agents amplify quality rather than obscuring it.

For consumers, the imperative is developing critical literacy about how AI shopping agents work, what incentives shape their recommendations, and when to trust algorithmic suggestions versus conducting independent research. Blind delegation is dangerous. Thoughtful use of AI as a tool for information gathering and comparison, combined with final human judgment, represents the responsible approach. This means asking questions about how agents generate recommendations, understanding what commercial relationships might influence those recommendations, and maintaining the habit of spot-checking AI suggestions against independent sources. Consumer demand for transparency can shape how these systems develop, but only if consumers actively seek that transparency rather than passively accepting algorithmic guidance.

Who Controls the Algorithm Controls Commerce

The fundamental question agentic commerce poses is who gets to shape the marketplace of the future. Will it be the AI platforms that control the interface? The merchants with the deepest pockets to pay for visibility? The regulators writing the rules? Or the consumers whose aggregate choices ultimately determine what succeeds?

The answer is all of the above, in complex interaction. But that interaction will produce very different outcomes depending on the balance of power. If consumers remain informed and engaged, if regulators act decisively to require transparency, if merchants compete on quality rather than just algorithmic gaming, and if AI platforms choose sustainable trust over exploitative extraction, then agentic commerce could genuinely improve how billions of people meet their needs.

If those conditions don't hold, we're building a shopping future where the invisible hand of the market gets replaced by the invisible hand of the algorithm, and where that algorithm serves the highest bidder rather than the human asking for help. The stakes are not just commercial. They're about what kind of economy we want to inhabit: one where technology amplifies human agency or one where it substitutes algorithmic optimisation for human choice.

The reshaping is already underway. The revenue is already flowing through new channels. The questions about trust and transparency are already urgent. What happens next depends on decisions being made right now, in boardrooms and regulatory offices and user interfaces, about how to build the infrastructure of algorithmic commerce. Get those decisions right and we might create something genuinely better than what came before. Get them wrong and we'll spend decades untangling the consequences.

The invisible hand of AI is reaching for your wallet. The question is whether you'll notice before it's already spent your money.


Sources and References

  1. McKinsey & Company (2025). “The agentic commerce opportunity: How AI agents are ushering in a new era for consumers and merchants.” McKinsey QuantumBlack Insights.

  2. Boston Consulting Group (2025). “Agentic Commerce is Redefining Retail: How to Respond.” BCG Publications.

  3. Opera Software (March 2025). “Opera becomes the first major browser with AI-based agentic browsing.” Opera Newsroom Press Release.

  4. Opera Software (May 2025). “Meet Opera Neon, the new AI agentic browser.” Opera News Blog.

  5. Digital Commerce 360 (October 2025). “McKinsey forecasts up to $5 trillion in agentic commerce sales by 2030.”

  6. TechCrunch (September 2025). “OpenAI takes on Google, Amazon with new agentic shopping system.”

  7. TechCrunch (March 2025). “Opera announces a new agentic feature for its browser.”

  8. PYMNTS.com (2025). “Agentic AI Is Quietly Reshaping the eCommerce Funnel.”

  9. Retail Brew (October 2025). “AI agents are becoming a major e-commerce channel. Will retailers beat them or join them?”

  10. eMarketer (2025). “As consumers turn to AI for shopping, affiliate marketing is forging its own path.”

  11. Retail TouchPoints (2025). “Agentic Commerce Meets Retail ROI: How the Affiliate Model Powers the Future of AI-Led Shopping.”

  12. Federal Trade Commission (2023). “The Luring Test: AI and the engineering of consumer trust.”

  13. Federal Trade Commission (2025). “AI and the Risk of Consumer Harm.”

  14. Federal Trade Commission (2024). “FTC Announces Crackdown on Deceptive AI Claims and Schemes.”

  15. Bloomberg (November 2025). “Amazon Demands Perplexity Stop AI Tool's Purchasing Ability.”

  16. CNBC (November 2025). “Perplexity AI accuses Amazon of bullying with legal threat over Comet browser.”

  17. Retail Dive (November 2025). “Amazon sues Perplexity over AI shopping agents.”

  18. Criteo (2025). “Retail media in the agentic era.”

  19. Bizcommunity (2025). “Retail media: Agentic AI commerce arrives, estimated value of $136bn in 2025.”

  20. The Drum (June 2025). “How AI is already innovating retail media's next phase.”

  21. Brookings Institution (2024). “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms.”

  22. Lawfare Media (2024). “Are Existing Consumer Protections Enough for AI?”

  23. The Regulatory Review (2025). “A Modern Consumer Bill of Rights in the Age of AI.”

  24. Decrypt (November 2025). “Microsoft Gave AI Agents Fake Money to Buy Things Online. They Spent It All on Scams.”

  25. Mastercard (April 2025). “Mastercard unveils Agent Pay, pioneering agentic payments technology to power commerce in the age of AI.”

  26. Payments Dive (2025). “Visa, Mastercard race to agentic AI commerce.”

  27. Fortune (October 2025). “Walmart's deal with ChatGPT should worry every ecommerce small business.”

  28. Harvard Business Review (February 2025). “AI Agents Are Changing How People Shop. Here's What That Means for Brands.”

  29. Adweek (2025). “AI Shopping Is Here but Brands and Retailers Are Still on the Sidelines.”

  30. Klaviyo Blog (2025). “AI Shopping: 6 Ways Brands Can Adapt Their Online Presence.”


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

#HumanInTheLoop #AgenticCommerce #DigitalEconomy #AlgorithmicTrust

Picture this: you photograph your electricity bill, speak a casual instruction in Manglish (“Pay this lah”), and watch as an artificial intelligence system parses the image, extracts the payment details, and completes the transaction in seconds. No app navigation. No account numbers. No authentication dance with one-time passwords.

This isn't speculative technology. It's Ryt Bank, Malaysia's first fully AI-powered financial institution, which launched to the public on 25 August 2025. Built on ILMU, the country's first homegrown large language model developed by YTL AI Labs in collaboration with Universiti Malaya, Ryt Bank represents something far more consequential than another digital banking app. It's a fundamental rethinking of the relationship between humans and their money, powered by conversational AI that understands not just English and Bahasa Melayu, but the linguistic hybrid of Manglish and even regional dialects like Kelantanese.

The stakes extend far beyond Malaysia's borders. As the world's first AI-native bank (rather than a traditional bank retrofitted with AI features), Ryt Bank is a living experiment in whether ordinary people will trust algorithms with their financial lives. The answer could reshape banking across Southeast Asia and beyond, particularly in emerging markets where digital infrastructure has leapfrogged traditional banking channels.

But here's the uncomfortable question underlying all the breathless press releases and promotional interest rates: are we witnessing genuine financial democratisation, or simply building more sophisticated systems that will ultimately concentrate power in the hands of those who control the algorithms?

The Digital Banking Gold Rush

To understand Ryt Bank's significance, you need to grasp the broader transformation sweeping through Malaysia's financial landscape. In April 2022, Bank Negara Malaysia (BNM), the country's central bank, issued five digital banking licences, deliberately setting out to disrupt a sector that had grown comfortably oligopolistic. The licensed entities included GXBank (backed by Grab), Boost Bank, AEON Bank, KAF Digital Bank, and Ryt Bank, a joint venture between YTL Digital Capital and Singapore-based Sea Limited.

The timing was strategic. Malaysia already possessed the infrastructure foundations for digital financial transformation: 97% internet penetration, 95% smartphone ownership, and 96% of adults with active deposit accounts, according to Bank Negara Malaysia data from 2024. The country had surpassed its 2026 digital payment target of 400 transactions per capita ahead of schedule, reaching 405 transactions per capita by 2024. What was missing wasn't connectivity but innovation in how financial services were delivered and experienced.

The results have been dramatic. GXBank, first to market, accumulated 2.16 billion ringgit (approximately 489 million US dollars) in customer deposits within the first nine months of 2024, becoming the largest digital bank by asset size at 2.4 billion ringgit by September 2024. Boost Bank, launching later, had attracted 399 million ringgit in assets within its first three months of operations.

Yet awareness hasn't automatically translated to adoption. Of the 93% of Malaysians who reported awareness of digital banks in Q4 2024, only 50% had actually become users. This gap reveals something crucial: people remain uncertain about entrusting their money to app-based financial institutions, particularly those without physical branches or familiar brand legacies.

Ryt Bank entered this cautious market with a differentiator: AI so deeply integrated that the bank's entire interface could theoretically be conversational. No menus to navigate. No forms to fill. Just talk to your bank like you'd talk to a financially savvy friend.

The Intelligence Behind the Interface

ILMU, the large language model powering Ryt Bank's AI assistant, represents a significant technological achievement beyond its banking application. Developed by YTL AI Labs, ILMU is designed to rival global AI leaders like GPT-4 whilst being specifically optimised for Malaysian linguistic and cultural contexts. In Malay MMLU benchmarks (which test language model understanding), ILMU reportedly outperforms GPT-4, DeepSeek V3, and GPT-5, particularly in handling regional dialects.

This localisation matters profoundly. Global AI models trained predominantly on English-language internet content often stumble when encountering the linguistic complexity of multilingual societies. Malaysia operates in at least three major languages (Bahasa Melayu, English, and Mandarin), plus numerous regional variations and the unique creole of Manglish. A banking AI that understands “I want to pindah duit to my mak's account lah” (mixing Malay, English, and colloquial structure) is genuinely useful in ways that a generic chatbot translated into Malay would never be.

The technical architecture allows Ryt AI to handle transactions through natural conversation in text or voice, process images to extract financial information (bills, receipts, payment QR codes), and provide spending insights by analysing transaction patterns. During the early access period, users reported completing full account onboarding, including electronic Know Your Customer (eKYC) verification, in approximately two minutes.
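ILMU's internals are not public, so the following is only a toy sketch of the contract such a system has to honour: free-form, code-switched text in, a structured and validatable payment intent out. Everything here (the `PaymentIntent` class, the regular expressions, `parse_instruction`) is invented for illustration; the real system uses a large language model, not pattern matching.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaymentIntent:
    action: str
    amount: Optional[float]      # in ringgit
    recipient: Optional[str]

def parse_instruction(text: str) -> PaymentIntent:
    """Toy stand-in for an LLM intent extractor (illustration only)."""
    lowered = text.lower()
    # "pindah duit" (Malay: move money), "transfer" or "pay" all map to a transfer
    if any(k in lowered for k in ("pindah duit", "transfer", "pay")):
        action = "transfer"
    else:
        action = "unknown"
    amount_match = re.search(r"rm\s?(\d+(?:\.\d{1,2})?)", lowered)
    amount = float(amount_match.group(1)) if amount_match else None
    # anchor on the word "account" so "to pindah" is not mistaken for a recipient
    recipient_match = re.search(r"to (?:my )?(\w+)(?:'s)?\s+account", lowered)
    recipient = recipient_match.group(1) if recipient_match else None
    return PaymentIntent(action, amount, recipient)

intent = parse_instruction("I want to pindah duit RM150 to my mak's account lah")
# intent.action == "transfer", intent.amount == 150.0, intent.recipient == "mak"
```

The crucial design point is that the conversational layer only proposes a structured intent; the banking core still has to validate balances, limits, and recipients before any money moves, which is exactly where Bank Negara's insistence on human oversight bites.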

But technical sophistication creates new vulnerabilities. Every AI interaction involves sending potentially sensitive financial data to language model systems that process, interpret, and act on that information. Dr Adnan Zaylani Mohamad Zahid, Assistant Governor of Bank Negara Malaysia, has articulated these concerns explicitly. In a July 2024 speech on banking in the era of generative AI, he outlined risks including AI model bias, unstable performance in self-learning systems, third-party dependencies, data privacy vulnerabilities, and emerging cyber threats like AI-enabled phishing and deepfakes. His message was clear: “Human judgment must remain central to risk management oversight.”

The Trust Equation

Trust in financial institutions is a peculiar thing. It's simultaneously deeply rational (based on regulatory frameworks, deposit insurance, historical performance) and thoroughly emotional (shaped by brand familiarity, peer behaviour, and gut instinct). AI banking disrupts both dimensions.

On the rational side, Ryt Bank is licensed by Bank Negara Malaysia and protected by Perbadanan Insurans Deposit Malaysia (PIDM), which guarantees deposits up to 250,000 ringgit per depositor. Yet according to 2024 global banking surveys, 58% of banking customers across 39 countries worry about data security and hacking risks. Only 28% believe their bank effectively communicates data protection measures, and only 40% fully trust their bank's transparency about cybersecurity.

These trust deficits are amplified when AI enters the picture. Research on consumer trust in AI financial services reveals that despite technological sophistication, adoption “hinges significantly on human trust and confidence.” Malaysia isn't immune to these anxieties. A TikTok user named sherryatig captured the sentiment bluntly when commenting on Ryt Bank: “The current banking system is already susceptible to fraud. NOT in my wildest dream to allow transactions from prompt.”

The regional context intensifies these worries. Consumers across Southeast Asia hold banks and fintech firms primarily responsible for safeguarding against financial crimes, and surveys indicate that more than half of respondents across five Southeast Asian markets expressed growing fears about rising online fraud and hacking.

Yet early Ryt Bank user reviews suggest cautious optimism. Coach Alex Tan praised the “smooth user experience” and two-minute onboarding. Tech reviewers noted that “even in beta, Ryt AI is impressively intuitive, making banking feel less like a task and more like a conversation.” The AI's ability to process screenshots of bank account details shared via WhatsApp and automatically populate transfer fields has been highlighted as solving a genuine pain point.

These positive early signals, however, come from early adopters who tend to be more tech-savvy and risk-tolerant than the broader population. The real test will come when Ryt Bank attempts to expand beyond enthusiastic technophiles to the mass market, including older users, rural communities, and those with limited digital literacy.

The Personalisation Paradox

One of AI banking's most touted benefits is hyper-personalisation: financial services tailored precisely to individual circumstances, goals, and behaviour patterns. The global predictive analytics market in banking is forecast to grow at a compound annual growth rate of 19.42% through 2030. Bank of America's Erica virtual assistant, which uses predictive analytics, has over 19 million users and reportedly generated a 28% increase in product adoption compared to traditional marketing approaches.

This sounds wonderful until you examine the underlying dynamics. Personalisation requires extensive data collection and analysis. Every transaction, every app interaction, every moment of hesitation before clicking “confirm” becomes data that feeds the AI's understanding of you. The more personalised your banking experience, the more comprehensively you're surveilled.

Moreover, AI-driven personalisation in financial services has repeatedly demonstrated troubling patterns of bias and discrimination. An analysis of Home Mortgage Disclosure Act data from the Urban Institute in 2024 revealed that Black and Brown borrowers were more than twice as likely to be denied loans compared to white borrowers. Research on fintech algorithms found that whilst they discriminated 40% less than face-to-face lenders, Latinx and African-American groups still paid 5.3 basis points more for purchase mortgages and 2.0 basis points more for refinance mortgages compared to white counterparts.

These disparities emerge because AI models learn from historical data that encodes past discrimination. The technical challenge is compounded by what researchers call the “fairness paradox”: you cannot directly measure bias against protected categories without collecting data about those categories, yet collecting such data raises legitimate concerns about potential misuse.

Bank Negara Malaysia has acknowledged these challenges. The central bank's Chief Risk Officers' Forum developed an AI Governance Framework outlining responsible AI principles, including fairness, accountability, transparency, and reliability. In August 2025, BNM unveiled its AI financial regulation framework at MyFintech Week 2025 and initiated a ten-week public consultation period (running until 17 October 2025) seeking feedback on sector-specific AI definitions, regulatory clarity needs, and AI trends that could shape the sector over the next three to five years.

But regulatory frameworks often lag behind technological deployment. By the time comprehensive AI banking regulations are finalised and implemented, millions of Malaysians may already be using systems whose algorithmic decision-making remains opaque even to regulators.

The Inclusion Question

Digital banks, including AI-powered ones, have positioned themselves as champions of financial inclusion, promising to serve the underserved. The rhetoric is appealing, but does it match reality?

Malaysia's financial inclusion challenges are substantial. According to the 2023 RinggitPlus Malaysian Financial Literacy Survey, 71% of respondents could save 500 ringgit or less monthly, whilst 67% had emergency savings lasting three months or less. The Khazanah Research Institute reports that 55% of Malaysians spend equal to or more than their earnings, living paycheck to paycheck. Approximately 15% of the 23 million Malaysian adults remain unbanked, according to The Business Times. MSMEs face a particularly acute 90 billion ringgit funding gap.

Bank Negara Malaysia data indicates that close to 60% of customers at GXBank, AEON Bank, and Boost Bank come from traditionally underserved segments, including low-income households and rural communities. Boost Bank's surveys in Kuala Terengganu found that 97% of respondents did not have 2,000 ringgit readily available.

However, digital banks face inherent limitations in reaching the truly marginalised. One of the primary challenges is bridging the digital divide, particularly in underserved communities where many individuals and businesses, especially in rural areas, lack necessary devices and digital literacy. Immigrants and refugees often lack the documentation required for digital identity verification. Elderly populations may struggle with smartphone interfaces regardless of how “intuitive” they're designed to be.

There's also an economic tension in AI banking's inclusion promise. Building and maintaining sophisticated AI systems requires substantial ongoing investment. Those costs must eventually be recovered through fees, product cross-selling, or data monetisation. The business model that supports free or low-cost AI banking may ultimately depend on collecting and leveraging user data in ways that create new forms of exploitation, even as they expand access.

Ryt Bank launched with 4% annual interest on savings (on the first 20,000 ringgit, until 30 November 2025), unlimited 1.2% cashback on overseas transactions with no conversion fees, and a PayLater feature providing instant credit up to 1,499 ringgit with 0% interest if repaid within the first month. These are genuinely attractive terms. But as reviews have noted, “long-term value will depend on whether these benefits are extended after November 2025.” The pattern is familiar from countless fintech launches: aggressive promotional terms to build user base, followed by monetisation pivots.
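A rough calculation shows what those promotional terms are worth in practice. The rates are the published terms; holding the full capped balance for three months, and the overseas spend figure, are assumptions for illustration.

```python
# Assumption: a saver holds the full RM20,000 cap for three months.
balance = 20_000                  # ringgit, cap for the promotional rate
promo_rate = 0.04                 # 4% per annum, until 30 November 2025
months_held = 3

# simple-interest approximation, ignoring compounding
interest = balance * promo_rate * (months_held / 12)   # roughly RM200

overseas_spend = 1_000            # assumed overseas card spend, ringgit
cashback = overseas_spend * 0.012                      # 1.2% cashback, RM12
```

Worth having, but modest enough to suggest the real purpose is user acquisition, which is why reviewers keep asking whether the terms will survive past November 2025.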

The Human Cost of Efficiency

AI banking promises remarkable efficiency gains. Chatbots and virtual assistants can handle up to 50% of customer inquiries, according to industry estimates. Norway's DNB bank reported that within six months, its chatbot had automated over 50% of all incoming chat traffic and interacted with over one million customers.

But efficiency has casualties. Across Southeast Asia, approximately 11,000 bank branches are expected to close by 2030, representing roughly 18% of current physical banking presence. In Malaysia specifically, strategy consulting firm Roland Berger projects nearly 567 bank branch closures by 2030, a 23% decline from 2,467 branches in 2020 to approximately 1,900 branches.
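The Roland Berger projection for Malaysia is internally consistent, as a quick check confirms:

```python
branches_2020 = 2_467
branches_2030 = 1_900             # projected

closures = branches_2020 - branches_2030        # 567 branches closed
decline = closures / branches_2020              # about 0.23, i.e. a 23% fall
```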

These closures disproportionately affect communities that already face financial service gaps. Rural areas lose physical banking presence. Elderly customers who prefer face-to-face service, immigrants who need in-person assistance, and small business owners who require relationship banking all find themselves pushed toward digital channels they may neither trust nor feel competent to use.

The employment implications extend beyond branch closures. By the end of 2024, 71% of banking institutions and development financial institutions had implemented at least one AI application, up from 56% the previous year. Each of those AI applications represents tasks previously performed by humans. Customer service representatives, loan officers, fraud analysts, and financial advisers increasingly find their roles either eliminated or transformed into oversight positions managing AI systems.

Industry estimates suggest AI could generate between 200 billion and 340 billion US dollars annually for banking. Yet there's a troubling asymmetry: those efficiency gains and cost savings accrue primarily to financial institutions and shareholders, whilst job losses and service degradation are borne by workers and vulnerable customer segments.

The Algorithmic Black Box

Perhaps the most profound challenge AI banking introduces is opacity. Traditional banking, for all its faults, operates on rules that can theoretically be understood, questioned, and challenged. AI systems, particularly large language models like ILMU, operate fundamentally differently. They make decisions based on pattern recognition across vast training datasets, identifying correlations that may not correspond to any human-comprehensible logic. Even the engineers who build these systems often cannot fully explain why an AI reached a particular conclusion, a problem known in the field as the “black box” dilemma.

This opacity has serious implications for financial fairness. If an AI denies you credit, declines a transaction, or flags your account for fraud investigation, can you meaningfully challenge that decision? Consumer complaints about banking chatbots reveal experiences of “feeling stuck and frustrated, receiving inaccurate information, and paying more in junk fees” when systems malfunction or misunderstand user intent.

Explainability is considered a core tenet of fair lending systems, yet may work against AI adoption. America's legal and regulatory structure to protect against discrimination and enforce fair lending “is not well equipped to handle AI,” according to legal analyses. The Consumer Financial Protection Bureau has outlined that financial institutions are expected to hold themselves accountable for protecting consumers against algorithmic bias and discrimination, but how regulators can effectively audit systems they don't fully understand remains an open question.

Bank Negara Malaysia's approach has been to apply technology-agnostic regulatory frameworks. Rather than targeting AI specifically, existing policies like Risk Management in IT (RMiT) and Management of Customer Information and Permitted Disclosures (MCIPD) address associated risks comprehensively. The BNM Regulatory Sandbox facilitates testing of innovative AI use cases, allowing supervised experimentation.

Yet regulatory sandboxes, by definition, exist outside normal rules. The question is whether lessons learned in sandboxes translate to effective regulation of AI systems operating at population scale.

The Cyber Dimension

AI banking's expanded attack surface introduces new cybersecurity challenges. According to research on AI cybersecurity in banking, 80% of organisational leaders express concerns about data privacy and security, whilst only 10% feel prepared to meet regulatory requirements. The areas of greatest concern for financial organisations are adaptive cyberattacks (93% of respondents), AI-powered botnets (92%), and polymorphic malware (83%).

These aren't theoretical threats. Malware specifically targeting mobile banking apps has emerged across Southeast Asia. ToxicPanda and TgToxic, which emerged in mid-2022, target Android mobile users with bank and finance apps in Indonesia, Taiwan, and Thailand. These threats will inevitably evolve to target AI banking interfaces, potentially exploiting the conversational nature of systems like Ryt AI to conduct sophisticated social engineering attacks.

Consider the scenario: a user receives a message that appears to be from Ryt Bank's AI assistant, using familiar conversational style and regional dialect, requesting confirmation of a transaction. The user, accustomed to interacting with their bank via natural language, might not scrutinise the interaction as carefully as they would a traditional suspicious email. AI-enabled phishing could exploit the very user-friendliness that makes AI banking appealing.

Poor data quality poses another challenge, with 40% of respondents citing it as a reason AI initiatives fail, followed by privacy concerns (38%) and limited data access (36%). An AI banking system is only as reliable as its training data and ongoing inputs. Corrupted data, whether through malicious attack or simple error, could lead to widespread incorrect decisions.

What Happens When the Algorithm Fails?

Every technological system eventually fails. Servers crash. Software has bugs. Networks go offline. In traditional banking, these failures are inconvenient but manageable. But what happens when an AI-native bank experiences a critical failure?

If ILMU's language processing system misunderstands a transaction instruction and sends your rent money to the wrong account, what recourse do you have? If a software update introduces bugs that cause the AI to provide incorrect financial advice, who bears responsibility for decisions made based on that advice?

These questions aren't adequately addressed in current regulatory frameworks. Consumer complaints about banking chatbots show that whilst they're useful for basic inquiries, “their effectiveness wanes as problems become more complex.” Users report “wasted time, feeling stuck and frustrated” when chatbots cannot resolve issues and no clear path to human assistance exists.

Ryt Bank's complete dependence on AI amplifies these concerns. Traditional banks and even other digital banks maintain human customer service channels as fallbacks. If Ryt Bank's differentiator is comprehensive AI integration, building parallel human systems undermines that efficiency model. Yet without adequate human backup, users become entirely dependent on algorithmic systems that may not be equipped to handle edge cases, emergencies, or their own malfunctions.

The phrase “computer says no” has become cultural shorthand for the frustrating experience of being denied something by an inflexible automated system with no human override. AI banking risks creating “algorithm says no” scenarios where financial access is controlled by systems that cannot be reasoned with, appealed to, or overridden even when obviously wrong.

The Sovereignty Dimension

An underappreciated aspect of ILMU's significance is technological sovereignty. For decades, Southeast Asian nations have depended on Western or Chinese technology companies for critical digital infrastructure. Malaysia's development of a homegrown large language model capable of competing with global leaders like GPT-4 represents a strategic assertion of technological independence.

This matters because AI systems encode the values, priorities, and cultural assumptions of their creators. A language model trained predominantly on Western internet content will inevitably reflect Western cultural norms. ILMU's deliberate optimisation for Bahasa Melayu, Manglish, and regional dialects ensures that Malaysian linguistic and cultural contexts are centred rather than accommodated as afterthoughts.

The geopolitical implications extend further. As AI becomes infrastructure for financial services, healthcare, governance, and other critical sectors, nations that control AI development gain significant strategic advantages. Malaysia's ILMU project demonstrates regional ambition to participate in AI development rather than remaining passive consumers of foreign technology.

However, technological sovereignty has costs. Maintaining and advancing ILMU requires sustained investment in AI research, computing infrastructure, and talent development. Malaysia must compete globally for AI expertise whilst building domestic capacity.

Ryt Bank's use of ILMU creates a testbed for Malaysian AI at scale. If ILMU performs reliably in the demanding environment of real-time financial transactions involving millions of users, it validates Malaysia's AI capabilities and could attract international attention and investment. If ILMU encounters significant problems, it could damage credibility and confidence in Malaysian AI development more broadly.

The Question of Control

Ultimately, the transformation AI banking represents is about control: who controls financial data, who controls access to financial services, and who controls the algorithms that increasingly mediate between people and their money.

Traditional banking, for all its inequities and exclusions, distributed control across multiple points. Bank employees exercised discretion in lending decisions. Regulators audited and enforced rules. Customers could negotiate, complain, and exert pressure through collective action. The system was far from perfectly democratic, but power wasn't entirely concentrated.

AI banking centralises control in the hands of those who design, train, and operate the algorithms. Those entities (corporations, in Ryt Bank's case the YTL Group and Sea Limited partnership) gain unprecedented insight into user behaviour, financial circumstances, and potentially even personal lives, given how much can be inferred from transaction patterns. They decide what features to build, what data to collect, which users to serve, and how to monetise the platform.

Regulatory oversight provides some counterbalance, but regulators face profound information asymmetries. They lack the technical expertise, computational resources, and internal access necessary to fully understand or audit complex AI systems. Even when regulators identify problems, enforcement mechanisms designed for traditional banking may be inadequate for addressing algorithmic harms that manifest subtly across millions of automated decisions.

The power imbalance between individual users and AI banking platforms is even more stark. Terms of service that few users read grant broad rights to collect, analyse, and use personal data. Algorithmic decision-making operates opaquely, with limited user visibility into why particular decisions are made. When problems occur, users face AI systems that may not understand complaints and human support channels that are deliberately limited to reduce costs.

Financial exclusion can cascade into broader life exclusion: difficulty renting housing, accessing credit for emergencies, or even proving identity in an increasingly digital society. If AI systems make errors or biased decisions, the affected individuals often have limited recourse.

The Path Forward

So will Malaysia's first AI-powered bank fundamentally change how ordinary people manage their money and trust financial institutions? The answer is almost certainly yes, but the nature of that change remains contested and uncertain.

In the optimistic scenario, AI banking delivers on its promises. Financial services become more accessible, affordable, and personalised. Underserved communities gain banking access that traditional institutions never provided. AI systems prove trustworthy and secure, whilst regulatory frameworks evolve to effectively address algorithmic risks. Malaysia demonstrates that developing nations can be AI innovators rather than passive technology consumers.

This scenario isn't impossible. The technological foundations exist. Regulatory attention is focused. Public awareness of both benefits and risks is growing. If stakeholders act responsibly and prioritise long-term sustainability over short-term gains, AI banking could genuinely improve financial inclusion and service quality.

But the pessimistic scenario is equally plausible. AI banking amplifies existing inequalities and creates new forms of exclusion. Algorithmic bias reproduces and scales historical discrimination. Data privacy violations and security breaches erode trust. Job losses and branch closures harm vulnerable populations. The concentration of power in AI platforms creates new forms of corporate control over economic life. The promised benefits accrue primarily to young, urban, digitally literate users whilst others are left behind.

This scenario isn't dystopian speculation. It reflects documented patterns from fintech and platform economy deployments worldwide. The optimistic and pessimistic scenarios will likely coexist, with AI banking simultaneously creating winners and losers.

What's most important is recognising that technological change isn't inevitable or predetermined. The impact of AI banking will be shaped by choices: regulatory choices about what to permit and require, corporate choices about what to build and how to operate it, and individual choices about what to adopt and how to use it.

Those choices require informed public discourse that moves beyond both techno-optimism and techno-pessimism to engage seriously with the complexities and trade-offs involved. Malaysians shouldn't simply accept AI banking as progress or reject it as a threat, but rather interrogate it critically: Who benefits? Who is harmed? What alternatives exist? What safeguards are necessary?

The Conversation We Need

Ryt Bank's conversational AI interface is designed to make banking feel natural, like talking to a financially savvy friend. But perhaps what Malaysia most needs isn't a conversation with an algorithm, but a conversation amongst citizens, regulators, technologists, and financial institutions about what kind of financial system serves the public interest.

That conversation must address uncomfortable questions. How much privacy should people sacrifice for convenience? How much human judgment should be replaced by algorithmic efficiency? How do we ensure that AI systems serve the underserved rather than just serving themselves? Who bears responsibility when algorithms fail or discriminate?

The launch of Malaysia's first AI-powered bank is genuinely significant, not because it provides definitive answers to these questions, but because it makes them urgently tangible. Ryt Bank is no longer speculation about AI's potential impact on banking but a real system that real people will use to manage real money and real lives.

Early user reviews suggest that the technology works, that the interface is intuitive, that transactions happen smoothly. But technology working isn't the same as technology serving human flourishing. The question isn't whether AI can power a bank (clearly it can) but whether AI banking serves the public good or primarily serves corporate and technological interests.

Bank Negara Malaysia's public consultation on AI in financial services, running until 17 October 2025, represents an opportunity for Malaysians to shape regulatory approaches whilst they're still forming. But effective participation requires moving beyond the promotional narratives of frictionless, intelligent banking to examine the power structures and social implications underneath.

The 93% of Malaysians who are aware of digital banks but remain cautious about adoption aren't simply being backward or technophobic. They're exercising appropriate scepticism about entrusting their financial lives to systems they don't fully understand, controlled by entities whose interests may not align with their own.

That scepticism is valuable. It should inform regulatory design that insists on transparency, accountability, and human override mechanisms. It should shape corporate strategies that prioritise user control and data privacy over maximum data extraction. It should drive ongoing research into algorithmic bias, security vulnerabilities, and unintended consequences.

AI banking will change how Malaysians manage money and relate to financial institutions. But whether that change is fundamentally positive or negative, inclusive or exclusionary, empowering or exploitative remains to be determined. The algorithm will indeed see you now, but the crucial question is: are you being seen clearly, fairly, and on terms that serve your interests rather than merely its own?

The answer lies not in the technology itself but in the social, political, and ethical choices that surround its deployment. Malaysia's experiment with AI-powered banking is just beginning. How it unfolds will offer lessons far beyond the country's borders about whether artificial intelligence in finance can genuinely serve human needs or ultimately subordinates those needs to algorithmic logic.

That's the conversation worth having, and it's one that no AI, however sophisticated, can have for us.


Sources and References

  1. Bank Negara Malaysia. (2022). “Five successful applicants for the digital bank licences.” Retrieved from https://www.bnm.gov.my/-/digital-bank-5-licences

  2. Bank Negara Malaysia. (2020). “Policy Document on Licensing Framework for Digital Banks.” Retrieved from https://www.bnm.gov.my/-/policy-document-on-licensing-framework-for-digital-banks

  3. Zahid, Adnan Zaylani Mohamad. (2024, July 16). “Banking in the era of generative AI.” Speech by Assistant Governor of Bank Negara Malaysia. Bank for International Settlements. Retrieved from https://www.bis.org/review/r240716g.htm

  4. TechWire Asia. (2025, January). “Malaysia's first AI-powered bank revolutionises financial services.” Retrieved from https://techwireasia.com/2025/01/malaysia-first-ai-powered-bank-revolutionises-financial-services/

  5. SoyaCincau. (2025, August 12). “Ryt Bank First Look: Malaysia's first AI-powered Digital Bank.” Retrieved from https://soyacincau.com/2025/08/12/ryt-bank-ytl-digital-bank-first-look/

  6. Fintech News Malaysia. (2025). “Ryt Bank Debuts as Malaysia's First AI-Powered Digital Bank.” Retrieved from https://fintechnews.my/53734/digital-banking-news-malaysia/ryt-bank-launch/

  7. YTL AI Labs. (2025). “YTL Power Launches ILMU, Malaysia's First Homegrown Large Language Model.” Retrieved from https://www.ytlailabs.com/

  8. New Straits Times. (2025, August). “YTL launches ILMU – Malaysia's first multimodal AI, rivalling GPT-4.” Retrieved from https://www.nst.com.my/business/corporate/2025/08/1259122/ytl-launches-ilmu-malaysias-first-multimodal-ai-rivalling-gpt-4

  9. TechNode Global. (2025, March 21). “RAM: GXBank tops Malaysia's digital banking customer deposits with $489M for first nine months of 2024.” Retrieved from https://technode.global/2025/03/21/ram-gxbank-tops-malaysias-digital-banking-customer-deposits-with-489m-for-first-nine-months-of-2024/

  10. The Edge Malaysia. (2024). “GXBank tops digital banking sector deposits with RM2.16 bil as of September 2024 – RAM Ratings.” Retrieved from https://theedgemalaysia.com/node/748777

  11. The Edge Malaysia. (2024). “Banking for the underserved.” Retrieved from https://theedgemalaysia.com/node/727342

  12. RinggitPlus. (2023). “RinggitPlus Malaysian Financial Literacy Survey 2023.”

  13. Roland Berger. (2020). “Banking branch closure forecast for Southeast Asia.”

  14. Urban Institute. (2024). “Home Mortgage Disclosure Act data analysis.”

  15. MX. (2024). “Consumers Trust in AI Integration in Financial Services Is Shifting.” Retrieved from https://www.mx.com/blog/shifting-trust-in-ai/

  16. Brookings Institution. “Reducing bias in AI-based financial services.” Retrieved from https://www.brookings.org/articles/reducing-bias-in-ai-based-financial-services/

  17. ResearchGate. (2024). “AI-Powered Personalization In Digital Banking: A Review Of Customer Behavior Analytics And Engagement.” Retrieved from https://www.researchgate.net/publication/391810532

  18. Consumer Financial Protection Bureau. “Chatbots in consumer finance.” Retrieved from https://www.consumerfinance.gov/data-research/research-reports/chatbots-in-consumer-finance/

  19. Cyber Magazine. “How AI Adoption is Challenging Security in Banking.” Retrieved from https://cybermagazine.com/articles/how-ai-adoption-is-challenging-security-in-banking

  20. No Money Lah. (2025, August 27). “Ryt Bank Review: When AI meets banking for everyday Malaysians.” Retrieved from https://nomoneylah.com/2025/08/27/ryt-bank-review/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795

Email: tim@smarterarticles.co.uk


#HumanInTheLoop #AIFinancialServices #DigitalSovereignty #AlgorithmicTrust