She Heard Her Daughter Crying: Voice Cloning and the Elderly Fraud Crisis

On a humid July afternoon in Dover, Florida, an 82-year-old grandmother named Sharon Brightwell answered the telephone and heard her daughter crying. The voice was unmistakable. It sobbed, it choked on the words, it begged. There had been a car accident. A pregnant woman had been hurt. A lawyer would call shortly with instructions. Brightwell, who had raised this daughter, who had rocked her to sleep, who knew the exact timbre of her weeping because she had heard it a thousand times across half a century, did what any mother would do. She emptied an envelope of cash. She handed fifteen thousand dollars to a courier who appeared at her door. By the time she realised her daughter had never been in an accident at all, the money and the courier were gone.
The voice she heard was not her daughter. It was a synthetic reconstruction, stitched together by a generative model from audio scraped off the open internet, probably from a social media post, possibly from a voicemail greeting, certainly from nothing more than a few seconds of casual speech. The emotional content, the terror and the tears, was added by the same model as a matter of routine. Running the entire performance cost pennies. Producing a convincing clone of an unsuspecting person's voice, according to every major consumer research organisation that has tested the tools in the last eighteen months, now requires as little as thirty seconds of source material. Some academic demonstrations have done it with three.
That is the hinge on which this story turns, and it is the reason the United States has quietly slipped into one of the strangest fraud epidemics it has ever faced. In 2024, older Americans reported losing almost five billion dollars to scams and fraud, a figure that the Federal Trade Commission itself regards as a dramatic undercount. The agency's most recent report to Congress, issued in December 2025, estimates that the true cost of fraud against Americans aged 60 and over sat somewhere between ten billion and eighty-one and a half billion dollars last year, depending on the model used to correct for the torrent of cases that victims never report out of shame, confusion, or cognitive decline. Within that figure, the fastest-growing category, and by almost every measure the most psychologically ruinous, is the one that depends on cloned voices.
More than 75,000 consumers have now signed a petition urging the FTC to act. A bipartisan Senate bill introduced in December 2025 would criminalise the very act of using AI to impersonate someone with intent to defraud. And in April 2026, the Journal of Accountancy, a publication not generally given to panic, ran a feature instructing certified public accountants in the practical steps they now need to take to protect their elderly clients from being drained by phantom grandchildren. The professional class is catching up to what grandparents already know. Something has gone very wrong with the voice.
A weapon made of pennies
The mechanics are almost insultingly simple. A scammer needs three ingredients: a target, a sample of the target's loved one speaking, and a voice cloning tool. The first two are trivial. Older Americans are plentiful and reachable by phone. Voice samples have been uploaded by the billions to TikTok, Instagram, YouTube, Facebook, and countless wedding videos, podcast appearances, voicemail greetings, Zoom recordings, and church livestreams. The third ingredient used to be the hard part. It no longer is.
Commercial voice synthesis platforms such as ElevenLabs, Speechify, PlayHT, and LOVO have, over the last three years, made voice cloning available to anyone with a credit card and, in many cases, anyone willing to tick a box attesting that they have the legal right to reproduce the voice in question. A March 2025 assessment by Consumer Reports examined six leading voice cloning products and found that four of them, including three of the most widely used, relied exclusively on that self-attestation as their safeguard. The researchers who performed the test were able to clone real voices without providing any evidence of consent. The box was ticked, the clone was generated, the guardrail did nothing.
The FBI's Internet Crime Complaint Center reported that AI-enabled fraud accounted for roughly 893 million dollars in losses across more than 22,000 complaints in 2025, with around 352 million of that total attributed to elder fraud complaints in which an AI component was documented. Those are floors, not ceilings, because most victims do not file complaints and most families do not realise AI was involved when they do. Researchers inside Microsoft's AI for Good Lab, analysing 531,000 fraud reports drawn from AARP's Fraud Watch Network Helpline and the Better Business Bureau's Scam Tracker, reported that scams identified as AI-enabled, with realistic cloned voices or synthetic video, increased twentyfold between 2023 and 2025. Twentyfold. In two years.
The economics are what make it terrifying. A traditional phone scam requires a human operator, a convincing script, a plausible accent, and the nerve to hold a conversation with someone who may ask unexpected questions. Even the best of those operations run on thin margins. A generative voice scam requires none of that. The attacker can produce a bespoke audio deepfake in seconds, run the call through a VoIP provider, spoof the caller ID to match a known family number, and be off the line before a victim even registers that something is wrong. The cost of attacking one thousand targets is barely higher than the cost of attacking one. The cost of attacking ten thousand is not much higher than that.
Why the voice is special
There is a reason voice cloning is working as well as it is, and it has almost nothing to do with technology. It has to do with evolution.
The human auditory system is wired to treat voice as an exceptionally high-trust channel. Infants recognise their mother's voice within days of birth. By the time we reach adulthood, we can identify the voice of a close relative from a single syllable, across decades, through distortion, over a crackling phone line, even when we have not heard it for years. More importantly, voice is tightly coupled to emotion. The acoustic signatures of distress, fear, and pain trigger physiological responses in listeners that bypass conscious deliberation entirely. A mother hearing what she believes is her child crying on the telephone will release a cascade of stress hormones before any rational assessment has begun.
Jennifer DeStefano, a mother in Scottsdale, Arizona, described this experience in searing detail to the United States Senate Judiciary Committee in June 2023. She had received a call while her 15-year-old daughter Brianna was away skiing. She heard Brianna's voice sobbing “Mom!” and then a man's voice demanding a million dollars in ransom. She knew that voice. She had heard it, she testified, her entire life. She would never be able, she told the senators, to shake the sound of those desperate cries out of her mind. Only a coincidence saved her: a bystander handed her a separate phone, on which her actual daughter, safe and confused, was calling. Had that not happened, DeStefano would have wired whatever the callers demanded, because she was not making a financial decision. She was making a biological one.
This is the insight that voice cloning fraud weaponises with ruthless precision. The scams do not target rationality. They target the part of the brain that evolved to respond to a child in danger before the conscious mind has caught up. No amount of public awareness campaigning about “stop, verify, call back” survives first contact with that response, because the response is older than language itself. Older people, who are more likely to live alone, more likely to be isolated from the family members being impersonated, and more likely to have significant savings that can be moved in a single wire transfer, sit at the intersection of every vulnerability the attack model exploits.
Amy Nofziger, director of fraud victim support at AARP, has been telling anyone who will listen that the old mental model of the savvy-versus-gullible victim is obsolete. Her framing, delivered across AARP public briefings and research publications, is that this is not a matter of whether a person is smart enough to spot a scam but whether their nervous system can out-think a sound it has been trained to trust since birth. An AARP survey conducted in August 2024, fielded to a thousand adults aged 50 and over via a probability-based panel, found that 77 per cent of older Americans were concerned about becoming targets of AI-related fraud, and 85 per cent were worried about deepfakes generally. Concern, however, does not translate into immunity. In the same research, respondents massively overestimated their own ability to detect a cloned voice.
The regulatory scramble
Washington has noticed, but not at the speed the problem requires. The Federal Trade Commission finalised its rule on government and business impersonation in early 2024, following years of mounting complaints about scammers posing as the Internal Revenue Service, the Social Security Administration, Amazon customer service, and dozens of other familiar institutions. The rule was a genuine step forward. It gave the commission direct authority to sue impersonators and recover money for victims, and it laid the groundwork for an expansion to cover the impersonation of private individuals, an extension the agency had been seeking.
That expansion is where the fight has stalled. In February 2024, the FTC released a supplemental notice of proposed rulemaking that would cover the impersonation of individuals. Consumer Reports, joined by the Electronic Privacy Information Center, the National Consumers League, and other advocacy organisations, delivered a petition in August 2025 signed by more than 75,000 consumers demanding the commission finalise the individual impersonation rule and invoke its Section 5 powers to pursue the companies whose voice cloning products are enabling the scams in the first place. The petition called on the FTC to investigate the product-design failures, the absence of meaningful safeguards against cloning without consent, and the ease with which commercial voice tools can be turned into engines of fraud. As of April 2026, the rule remains under review. The commission has not yet moved to act against any of the major voice cloning vendors.
Congress has been marginally more active. In April 2025, Senators Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis reintroduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, universally referred to as the NO FAKES Act. Originally conceived as a tool to protect musicians, actors, and other creative professionals from having their voices and likenesses replicated without consent, the bill would establish a federal right for every American to their own voice and visual likeness, create a notice-and-takedown regime for unauthorised deepfakes, and preempt the patchwork of state laws that has grown up in the meantime. The legislation has drawn support from SAG-AFTRA, the Recording Industry Association of America, the Motion Picture Association, OpenAI, and YouTube, which is an unusual coalition by any measure.
It has also drawn criticism. The Foundation for Individual Rights and Expression has argued that the bill's takedown provisions could be abused to suppress legitimate speech, and the Electronic Frontier Foundation has raised concerns about the scope of secondary liability for platforms. The result is that the NO FAKES Act, now in its third legislative cycle, remains unpassed.
More narrowly focused is the Preventing Deep Fake Scams Act, introduced by Senators Jon Husted and Raphael Warnock in June 2025 and endorsed by AARP, which would establish a federal task force led by the Treasury Department and financial regulators to coordinate a response to AI-driven fraud against financial institutions and their customers. And on 17 December 2025, Senators Shelley Moore Capito and Amy Klobuchar introduced the Artificial Intelligence Scam Prevention Act, a bipartisan bill that would make it illegal to use AI to impersonate any person with intent to defraud, and would establish an interagency committee bringing together the FTC, the FCC, the Consumer Financial Protection Bureau, and the Justice Department to coordinate enforcement.
None of these bills has yet become law. All of them, read together, suggest that even the most energetic legislators understand the problem to be growing faster than the legislative response. The April 2026 issue of the Journal of Accountancy, in a feature authored by forensic accountants David Zweighaft and Howard Silverstone, framed the matter in terms the professional services industry cannot ignore. Certified public accountants, the article argued, now have a practical obligation to warn elderly clients about voice cloning, to help them establish family verification codewords, and to build transaction-review processes that can flag urgent-sounding wire requests before the money leaves the account. It was a notice, delivered to a constituency that does not panic easily, that this is no longer hypothetical.
Defences that cannot keep pace
The technical defences against voice cloning fraud fall into three categories, and in April 2026, none of them works reliably.
The first category is detection. Academic and industry researchers have, over the last two years, produced a growing literature on audio deepfake detection, using machine-learning classifiers trained to distinguish synthesised speech from natural speech. A systematic 2025 analysis published in the ACM Transactions on Internet Technology, along with a peer-reviewed survey appearing in Engineering Reports, concluded that detection models perform well on the datasets they were trained on and collapse, sometimes catastrophically, when confronted with audio generated by unseen models. A paper presented at the Network and Distributed System Security Symposium in 2025, introducing a system called VoiceRadar, demonstrated that micro-variations in synthetic speech can be detected under controlled conditions but noted that adversarial retraining by attackers can neutralise those signals within weeks. More disturbingly, a 2025 study on synthetic speech detection reported that current systems exhibit demographic bias, with markedly higher false-positive rates for elderly speakers, adolescents, and speakers of certain English dialects, precisely the populations most often impersonated or most often targeted.
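To make the detection approach concrete, here is a minimal sketch of the kind of pipeline the cited literature evaluates: summarise each clip with spectral features and train a simple binary classifier to separate natural from synthetic speech. The file paths, labels, and model choice are hypothetical placeholders, not drawn from any of the systems named above, and real detectors use far richer features. The same weakness the papers describe applies here: high accuracy on in-domain data, sharp degradation on audio from generators the model never saw.

```python
# Illustrative sketch of a spectral-feature deepfake classifier.
# Paths and labels below are hypothetical; this is not any published system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def embed(path: str, sr: int = 16000) -> np.ndarray:
    """Summarise a clip as the mean and standard deviation of its MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical corpus of (clip path, label) pairs, 1 = synthetic, 0 = natural.
corpus = [
    ("clips/natural_001.wav", 0),
    ("clips/cloned_001.wav", 1),
    # ... many more clips in practice
]

X = np.stack([embed(path) for path, _ in corpus])
y = np.array([label for _, label in corpus])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# In-domain scores are typically strong; the studies above find the same model
# collapses when tested on audio from an unseen voice generator.
print(classification_report(y_test, clf.predict(X_test)))
```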
In practical terms, this means that by the time a consumer-grade voice deepfake arrives on someone's phone, no widely deployed tool can reliably tell them it is fake. Phone carriers do not scan audio content in real time. Call authentication protocols such as STIR/SHAKEN, which the Federal Communications Commission mandated across the United States telecom industry to combat robocalling, verify the origin and legitimacy of a calling number but say nothing about whether the voice on the other end of a legitimately placed call is human, synthetic, or stolen. STIR/SHAKEN was built for a world before generative AI. In that world, if you knew who was calling, you had a reasonable chance of knowing what they were going to say. That assumption no longer holds.
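The gap is easy to see in what STIR/SHAKEN actually signs. The sketch below is loosely modelled on the PASSporT claims defined in RFC 8225, with key handling and claim names simplified for illustration: the originating carrier signs the calling number, the destination, and an attestation level, and the terminating carrier verifies that signature. Nothing in the token describes the audio carried on the call.

```python
# Loose illustration of what a STIR/SHAKEN-style token attests to.
# Claims are modelled on RFC 8225 PASSporT but simplified; not a spec-accurate implementation.
import time
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

carrier_key = ec.generate_private_key(ec.SECP256R1())

passport = {
    "attest": "A",                      # full attestation: carrier vouches for the caller
    "orig": {"tn": "15551230001"},      # the number the call claims to come from
    "dest": {"tn": ["15551230002"]},
    "iat": int(time.time()),
}
token = jwt.encode(passport, carrier_key, algorithm="ES256")

# The terminating side can verify who placed the call and how strongly it was attested...
claims = jwt.decode(token, carrier_key.public_key(), algorithms=["ES256"])
print(claims["orig"]["tn"], claims["attest"])

# ...but there is no claim anywhere about whether the voice on the line is
# human, synthetic, or stolen. Number authentication is not content authentication.
```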
The second category is provenance. The Coalition for Content Provenance and Authenticity, known as C2PA, has since 2021 been building an open technical standard for cryptographically signing media at the point of creation so that downstream consumers can verify its origin and editing history. A new version of the specification was published in May 2025, and the standard is expected to be adopted as an ISO international standard this year. Microsoft, Adobe, Intel, Google, and several major news organisations have committed to implementing it. In principle, a C2PA-signed audio file can be traced back to its source, and synthetic audio generated by a compliant tool can be flagged as such.
In practice, provenance solves a different problem. It works beautifully for controlled creative pipelines, where a tool like a professional video editor or a generative image app stamps its output with verifiable credentials. It is much less effective against the scam caller who records a voice from a TikTok clip, strips the metadata, feeds it into a compliant or non-compliant voice model, and then pipes the output through a telephony gateway that was never designed to preserve signed audio. The telephone network, the channel through which the overwhelming majority of voice fraud is delivered, strips content credentials by design. Provenance works upstream. Fraud happens downstream.
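The failure mode is structural, and a toy example makes it visible. The sketch below illustrates the hash-and-sign idea behind content credentials, not the actual C2PA manifest format: the creating tool signs a digest of the audio it produced, and a downstream verifier checks both the digest and the signature. Anything that re-encodes the audio or drops the manifest, which is exactly what the telephone network does, leaves the verifier with nothing useful to check.

```python
# Illustrative hash-and-sign provenance check (not the real C2PA format).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()

def issue_credential(audio: bytes) -> tuple[bytes, bytes]:
    """The compliant tool signs a digest of the audio it generated."""
    digest = hashlib.sha256(audio).digest()
    return digest, creator_key.sign(digest)

def verify_credential(audio: bytes, digest: bytes, signature: bytes) -> bool:
    if hashlib.sha256(audio).digest() != digest:
        return False  # the audio no longer matches the signed manifest
    try:
        creator_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False

original = b"...pcm samples from a compliant generator..."
digest, sig = issue_credential(original)

print(verify_credential(original, digest, sig))                       # True: intact pipeline
print(verify_credential(b"re-encoded for the phone network", digest, sig))  # False
# And if the manifest is stripped in transit, there is nothing to verify at all.
```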
The third category is authentication at the human level, which is the category most likely to actually help in the short term. The single most consistently effective defence that every major fraud researcher, from the FBI to AARP to the Journal of Accountancy to Consumer Reports, now recommends, is the family verification codeword: a simple, private phrase shared among relatives and used to confirm identity in emergencies. If a grandmother receives a call from her weeping grandson, she asks for the codeword. The real grandson knows it. The fraudster does not. This is, in the end, pre-industrial cryptography, a shared secret used to verify identity in a world where the cryptographic infrastructure we have built cannot reach the kitchen telephone.
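Because the codeword is a shared secret, its closest software analogue is a constant-time comparison against a stored secret. The snippet below is purely illustrative, and the phrase in it is obviously hypothetical: in practice this "protocol" runs over a kitchen telephone, not in code, which is precisely why it still works.

```python
# The family codeword as a shared secret, in cryptographic dress. Illustrative only.
import hashlib
import hmac

def normalise(phrase: str) -> bytes:
    """Hash a spoken phrase so minor differences in case and spacing do not matter."""
    return hashlib.sha256(phrase.strip().lower().encode()).digest()

# Agreed in person, in advance, and never posted online. Hypothetical example phrase.
FAMILY_CODEWORD = normalise("blue teapot on the windowsill")

def caller_is_verified(spoken_phrase: str) -> bool:
    # compare_digest is constant-time; overkill for a family ritual,
    # but it is the same shared-secret verification idea.
    return hmac.compare_digest(normalise(spoken_phrase), FAMILY_CODEWORD)

print(caller_is_verified("Blue Teapot on the Windowsill"))   # True: the real grandson knows it
print(caller_is_verified("whatever the cloned voice guesses"))  # False
```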
Who is supposed to stop this?
The hardest question is not technical. It is the question of responsibility, and the answer involves six constituencies that have historically preferred to pass the problem to one another.
The voice model developers, companies such as ElevenLabs, LOVO, Speechify, PlayHT, and the large foundation-model labs whose capabilities underpin all of them, are the closest to the attack surface. The product can do what it can do because they built it that way. Consumer Reports' March 2025 findings were unambiguous: the majority of commercial voice cloning products tested lacked meaningful safeguards against unauthorised cloning. Some, like Resemble AI and Descript, required more than a ticked box. Most did not. Senator Maggie Hassan, in formal letters sent to ElevenLabs and three competitors in April 2026, asked each company to explain exactly how they prevent their tools from being used for fraud. The replies have been limited. ElevenLabs, to its credit, blocks the cloning of certain high-profile celebrity voices and uses internal classifiers to monitor for misuse, but the systems are imperfect and the incentives to prioritise growth over gatekeeping are, for a venture-funded company, almost irresistible.
The platforms that host voice samples are the second constituency. TikTok, Meta, YouTube, and their peers have built their business models around the frictionless sharing of audiovisual content, including audio of children, grandparents, and other family members who have no knowledge that their voices are being harvested. Default privacy settings on most of these platforms are permissive. Third-party scraping tools are widely available. No major social platform has yet committed to audio-scraping countermeasures or to robust default privacy settings that would shield users' voices from mass harvesting. Until they do, the raw material for every cloning scam will remain trivially accessible.
The telecommunications carriers are the third. They are the pipeline through which every fraudulent call travels. Their current fraud-prevention investments focus on number-level authentication, not content-level scrutiny. A content-aware defence, one that could scan incoming audio for markers of synthesis in real time, is technically plausible but would raise serious questions about call-content surveillance and would require regulatory scaffolding the United States does not currently possess. In the absence of such a framework, carriers remain, essentially, neutral conduits. The fraud flows through them as surely as the legitimate calls do.
The banks and wire-transfer services sit at the fourth position, and they are arguably the most capable of intervening because they are the last line before the money is gone. Every large wire transfer, every emergency cash pickup, every cryptocurrency exchange purchase executed in a single afternoon by a panicked elderly customer is a signal. Some banks have built transaction-monitoring systems that flag such patterns, and some branch staff have been trained to ask verification questions when a long-standing customer suddenly demands a large cash withdrawal for an unknown recipient. The bank teller in the Canadian case cited by CBC News, who stopped a grandmother from wiring nine thousand dollars to a claimed kidnapper, is exactly the kind of intervention point that works. But training is inconsistent, oversight is voluntary, and liability for downstream losses remains, in most US jurisdictions, the customer's.
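A hedged sketch of the kind of rule such transaction monitoring might apply to the pattern described above follows; the thresholds, field names, and structure are illustrative assumptions, not any bank's actual logic, and real systems combine many more signals with human review.

```python
# Illustrative rule-based flag for the urgent-request fraud pattern.
# Thresholds and fields are assumptions for the sketch, not a real bank's rules.
from dataclasses import dataclass

@dataclass
class WireRequest:
    customer_age: int
    amount_usd: float
    recipient_known: bool     # has this account ever paid this recipient before?
    requested_same_day: bool  # the customer insists the money move today
    channel: str              # "branch", "phone", or "online"

def flag_for_review(req: WireRequest, large_amount: float = 5_000.0) -> list[str]:
    """Return the reasons, if any, that a human should review this transfer."""
    reasons = []
    if req.amount_usd >= large_amount and not req.recipient_known:
        reasons.append("large transfer to a first-time recipient")
    if req.requested_same_day and req.customer_age >= 65:
        reasons.append("same-day urgency on a senior account")
    if req.channel == "branch" and req.amount_usd >= large_amount:
        reasons.append("in-person withdrawal above threshold: ask verification questions")
    return reasons

# The panicked-grandparent pattern from the cases above trips every rule.
print(flag_for_review(WireRequest(82, 15_000, False, True, "branch")))
```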
The regulators are the fifth. The FTC, the FCC, the Consumer Financial Protection Bureau, the Securities and Exchange Commission, and state attorneys general all have jurisdictional claims somewhere in the fraud chain. None of them has yet been given the explicit statutory authority, the budget, or the coordinated mandate required to act as a unified response. The interagency committee envisioned by the December 2025 Capito-Klobuchar bill, if it becomes law, would be a start. Whether it would be adequate to a problem that compounds at twentyfold every two years is another question.
The sixth constituency is families themselves, and this is the uncomfortable part. The codeword, the verification call-back, the insistence on privacy settings, the education of elderly relatives about what is possible and what to watch for, all of this is work that falls on people who did not ask to become the first line of defence against a technology they do not fully understand. Shifting the burden of fraud prevention onto victims is, historically, the signature of a policy failure. And yet the codeword, today, works. No legislation, no regulator, no platform commitment currently works as reliably.
The trust question
There is a larger question beneath all of this, and it does not reduce cleanly to policy. It is a question about what happens to intergenerational trust in a world where the sound of a loved one's voice can no longer be presumed to be the loved one.
For the whole of human history, voice has been what philosophers have called a “warrant” of presence. Hearing someone's voice was evidence, perhaps the oldest form of evidence, that they were there, that they were real, that they were who they claimed to be. This assumption underwrote almost every important human relationship that was not conducted face to face. It underwrote the telephone as a technology, the voicemail as a convenience, the grandparent-grandchild call as a ritual. It was the reason, in the end, that voice scams worked at all. You believed the voice because voice was something that could be believed.
That assumption is now being eroded in real time, and the erosion is happening unevenly. Younger people, who have grown up around voice filters, autotune, AI-generated podcasts, and synthetic TikTok audio, have developed a generalised scepticism that older generations have not had the time or cultural context to acquire. An older person who has spent sixty years treating the telephone as a trusted channel cannot retool that instinct in an afternoon. The asymmetry is precisely what the attackers exploit.
What gets lost, if the trend continues, is not just money. It is a feature of family life that has existed since the invention of the telephone: the ability to hear a grandchild's voice and to know, instantly and without effort, that it is the grandchild. If every such call must now be preceded by a verification protocol, if every loved voice must be met with a reflexive “what is our codeword?”, something has changed about what it means to be in a family. The burden of suspicion, which used to live on the perimeter of our lives, has migrated inward.
AARP and its peers have tried, carefully, to describe this shift without causing the very panic that would make older adults withdraw from the phone altogether. Their advice in 2026 is consistent: have the codeword conversation, practise it, make it a ritual, do not treat it as a sign of mistrust but as a sign of love in a changed landscape. The framing matters. If the codeword is understood as a form of hygiene, like locking the front door, it becomes bearable. If it is understood as a sign that the voice itself can no longer be trusted, it becomes something else. It becomes an admission of defeat.
What effective protection would actually require
A serious response, one proportionate to the scale of the threat, would combine technical, legal, social, and platform-level interventions. It would begin at the model layer, where voice cloning companies would be required, by binding regulation, to implement robust consent-verification before cloning any voice, to watermark their output in forms that survive normal telephony transmission, and to maintain traceable provenance records accessible to law enforcement. Self-attestation is not a safeguard. It is a liability shield dressed up as a safety feature.
It would continue at the platform layer, where social networks hosting audio content would default to privacy settings that prevent mass scraping of users' voices, particularly those of minors, and would provide users with tools to audit and restrict the use of their voice recordings. This is not a speech issue. It is an infrastructure issue. The scale at which audio can currently be harvested from public platforms was not the intent of any of the people who uploaded that audio.
It would extend to the telecommunications layer, where carriers would be required to develop and deploy content-aware fraud-detection capabilities, with regulatory frameworks that make clear what such systems can and cannot do. STIR/SHAKEN is not enough. A protocol that authenticates numbers without authenticating voices is a partial answer to a problem that has fully evolved.
It would impose liability at the financial-services layer, making banks and wire services responsible for detecting and halting transactions that bear the signature of urgent-request fraud, and giving victims clear legal recourse when obvious warning signs are missed. It would give the FTC the final authority it has been asking for to pursue the individual-impersonation rule and to act against the product manufacturers whose tools are being used. It would pass the Artificial Intelligence Scam Prevention Act, or something like it, to create the federal criminal prohibition that does not currently exist. And it would fund, with federal dollars, the awareness and codeword-education campaigns that are currently being run, on shoestring budgets, by AARP and a handful of consumer advocacy organisations.
None of this is impossible. Most of it has been proposed, some of it in serious detail, in legislation already pending in the 119th Congress. What is missing is not the blueprint. What is missing is the velocity. The gap between the speed at which generative voice technology is proliferating and the speed at which the country's regulatory and platform responses are arriving is, right now, widening. Every additional month in which the FTC does not finalise its individual-impersonation rule, in which no federal statute criminalises AI-driven impersonation fraud, in which major voice platforms continue to rely on tick-box consent, in which social networks continue to default to public audio sharing, is a month in which more Sharon Brightwells empty envelopes of cash to couriers they will never see again.
The mechanism of the scam is new. The dilemma is old. A society has to decide how much of the burden of new technological harm it is prepared to place on the people least equipped to bear it, and how much it is prepared to impose on the institutions that built, distributed, and profited from the tools in the first place. So far, the distribution has been badly skewed. The voice cloners are paying pennies. The victims are paying with their savings, their dignity, and, in some cases, their ability to ever again pick up the telephone and believe what they hear.
If effective protection is possible, and it is, it will look like a deliberate rebalancing of that ledger. It will involve regulators willing to act before every piece of evidence is in, platforms willing to inconvenience their growth curves, carriers willing to be more than neutral, banks willing to own the moment of the transfer, model developers willing to build products that refuse to do the most dangerous things they are capable of doing, and families willing, as they have always been willing when pressed, to protect one another with the tools they have. The codeword is a start. It is not a strategy. The strategy is still, in April 2026, being written, and the clock is running at the speed of the next cloned voice.
References
- Federal Trade Commission, Protecting Older Consumers 2024-2025: A Report of the Federal Trade Commission, December 2025. https://www.ftc.gov/system/files/ftc_gov/pdf/P144400-OlderAdultsReportDec2025.pdf
- AARP, “$12.5 Billion Lost to Scams and Fraud in 2024, Older Adults Hit Hard,” 2025. https://www.aarp.org/money/scams-fraud/older-adults-ftc-fraud-report/
- Greg Iacurci, “Financial fraud cost older adults up to $81.5 billion in 2024, FTC estimates,” CNBC, 13 December 2025. https://www.cnbc.com/2025/12/13/financial-fraud-seniors-ftc.html
- Consumer Reports, “More than 75,000 consumers urge FTC to crack down on AI voice cloning fraud,” press release, August 2025. https://advocacy.consumerreports.org/press_release/more-than-75000-consumers-urge-ftc-to-crack-down-on-ai-voice-cloning-fraud/
- Consumer Reports, “Consumer Reports' Assessment of AI Voice Cloning Products,” press release, March 2025. https://www.consumerreports.org/media-room/press-releases/2025/03/consumer-reports-assessment-of-ai-voice-cloning-products/
- Federal Trade Commission, “FTC Proposes New Protections to Combat AI Impersonation of Individuals,” press release, February 2024. https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-proposes-new-protections-combat-ai-impersonation-individuals
- Federal Trade Commission, “Approaches to Address AI-enabled Voice Cloning,” April 2024. https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/04/approaches-address-ai-enabled-voice-cloning
- U.S. Senator Amy Klobuchar, “Klobuchar, Coons, Blackburn and Colleagues Reintroduce Bipartisan NO FAKES Act,” press release, April 2025. https://www.klobuchar.senate.gov/public/index.cfm/2025/4/klobuchar-coons-blackburn-and-colleagues-reintroduce-bipartisan-no-fakes-act
- H.R.2794, 119th Congress (2025-2026), NO FAKES Act of 2025. https://www.congress.gov/bill/119th-congress/house-bill/2794/text
- S.2117, 119th Congress (2025-2026), Preventing Deep Fake Scams Act. https://www.congress.gov/bill/119th-congress/senate-bill/2117/text
- U.S. Senator Amy Klobuchar, “Klobuchar, Capito Introduce Bipartisan Artificial Intelligence Scam Prevention Act,” press release, 17 December 2025. https://www.klobuchar.senate.gov/public/index.cfm/2025/12/klobuchar-capito-introduce-bipartisan-artificial-intelligence-scam-prevention-act
- AARP, “AARP Endorsement of Preventing Deep Fake Scams Act,” 2025. https://www.husted.senate.gov/wp-content/uploads/2025/06/AARP-Endorsement-of-Preventing-Deep-Fake-Scams-Act-119th-Senate1.pdf
- David Zweighaft and Howard Silverstone, “Elder fraud rises as scammers use AI,” Journal of Accountancy, April 2026. https://www.journalofaccountancy.com/issues/2026/apr/elder-fraud-rises-as-scammers-use-ai/
- Jennifer DeStefano, “Written Statement to the United States Senate Committee on the Judiciary,” 13 June 2023. https://www.judiciary.senate.gov/imo/media/doc/2023-06-13%20PM%20-%20Testimony%20-%20DeStefano.pdf
- AARP, “AI-Powered Scams Make Fraud Even Harder to Spot,” 2025. https://www.aarp.org/money/scams-fraud/detecting-ai-fraud/
- Biometric Update, “Voice cloning tools give rise to cacophony of impersonation fraud,” August 2025. https://www.biometricupdate.com/202508/voice-cloning-tools-give-rise-to-cacophony-of-impersonation-fraud
- ElevenLabs, “Safety.” https://elevenlabs.io/safety
- CBC Marketplace, “How con artists are using AI voice cloning to upgrade the grandparent scam,” 2025. https://www.cbc.ca/news/marketplace/marketplace-ai-voice-scam-1.7486437
- American Bar Association Senior Lawyers Division, “The Rise of the AI-Cloned Voice Scam,” September 2025. https://www.americanbar.org/groups/senior_lawyers/resources/voice-of-experience/2025-september/ai-cloned-voice-scam/
- Coalition for Content Provenance and Authenticity, Content Credentials White Paper, October 2025. https://c2pa.org/wp-content/uploads/sites/33/2025/10/content_credentials_wp_0925.pdf
- National Security Agency, “Strengthening Multimedia Integrity in the Generative AI Era,” January 2025. https://media.defense.gov/2025/Jan/29/2003634788/-1/-1/0/CSI-CONTENT-CREDENTIALS.PDF
- “Where are We in Audio Deepfake Detection? A Systematic Analysis over Generative and Detection Models,” ACM Transactions on Internet Technology, 2025. https://dl.acm.org/doi/10.1145/3736765
- H. Shaaban et al., “Audio Deepfake Detection Using Deep Learning,” Engineering Reports, 2025. https://onlinelibrary.wiley.com/doi/full/10.1002/eng2.70087
- Network and Distributed System Security Symposium, “VoiceRadar: Voice Deepfake Detection using Micro-frequency Variations,” 2025. https://www.ndss-symposium.org/wp-content/uploads/2025-3389-paper.pdf
- Federal Communications Commission, “Promoting Caller ID Authentication to Combat Illegal Robocalls.” https://docs.fcc.gov/public/attachments/DOC-366783A1.pdf

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 · Email: tim@smarterarticles.co.uk
Listen to the free weekly SmarterArticles Podcast