When AI Composes Your Calm: The Ethics of Generated Therapeutic Music

The playlist arrives precisely when you need it. Your heart rate elevated, stress hormones climbing, the weight of another sleepless night pressing against your temples. The algorithm has been watching, learning, measuring. It knows you're stressed before you fully register it yourself. Within moments, your headphones fill with carefully crafted soundscapes: gentle piano motifs layered over ambient textures, pulsing tones at specific frequencies perfectly calibrated to guide your brain toward a deeply relaxed state. The music feels personal, almost prescient in its emotional resonance. You exhale. Your shoulders drop. The algorithm, once again, seems to understand you.

This is the promise of AI-generated therapeutic music, a rapidly expanding frontier where artificial intelligence meets mental health care. Companies such as Brain.fm, Endel, and AIVA are deploying sophisticated algorithms that analyse contextual signals (your daily rhythms, weather patterns, heart rate changes) to generate personalised soundscapes designed to improve focus, reduce anxiety, and promote sleep. The technology represents a seductive proposition: accessible, affordable mental health support delivered through your existing devices, available on demand, infinitely scalable. Yet beneath this appealing surface lies a constellation of profound ethical questions that we're only beginning to grapple with.

If AI can now compose music that genuinely resonates with our deepest emotions and positions itself as a tool for mental well-being, where should we draw the line between technological healing and the commodification of solace? And who truly holds the agency in this increasingly complex exchange: the scientist training the algorithm, the algorithm itself, the patient seeking relief, or the original artist whose work trained these systems?

The Neuroscience of Musical Healing

To understand why AI-generated music might work therapeutically, we must first understand how music affects the brain. When we listen to music, we activate not just the auditory cortex, the brain's hearing centre, but also the limbic system, that ancient network of neural circuits governing emotion, memory, and motivation. Research published in the Proceedings of the National Academy of Sciences has shown that music lights up multiple brain regions simultaneously: the hippocampus and amygdala (activating emotional responses through remembered associations), the reward circuitry (the same regions that respond to food, sex, and other satisfying experiences), and numerous other areas, including regions involved in decision-making and attention.

The brain's response to music is remarkably widespread and deeply emotional. Studies examining music-evoked emotions have found that responses to pleasant and unpleasant music correlate with activity in the limbic and paralimbic regions that connect emotion to bodily responses. This isn't merely psychological; it's neurological, measurable, and profound. Recent research has demonstrated that live music can stimulate the emotional brain and entrain listeners in real time, producing shared emotional experiences through synchronised neural activity.

Traditional music therapy leverages these neural pathways systematically. Certified music therapists (who must complete a bachelor's degree in music therapy, 1,200 hours of clinical training, and pass a national certification examination) use various musical activities to intervene in mental health conditions. The evidence base is substantial. A meta-analysis of controlled clinical trials published in PLOS One found that music therapy produced a significant reduction in depressive symptoms: in plain terms, people receiving music therapy experienced improvements in their depression that researchers could measure reliably. For anxiety and stress, systematic reviews have found medium-to-large positive effects, with music therapy performing about as well as many established psychological interventions.

Central to traditional music therapy's effectiveness is what researchers call the therapeutic alliance, the quality of connection between therapist and client. This human relationship has been consistently identified as one of the most important predictors of positive treatment outcomes across all therapeutic modalities. The music serves not just as intervention but as medium for developing trust, understanding, and emotional attunement between two humans. The therapist responds dynamically to the patient's emotional state, adjusts interventions in real time, and provides the irreplaceable element of human empathy.

Now, algorithms are attempting to replicate these processes. AI music generation systems employ deep learning architectures (advanced pattern-recognition neural networks that can learn from examples) that can analyse patterns in millions of musical pieces and generate new compositions incorporating specific emotional qualities. Some systems use brain-wave-driven generation, directly processing electrical brain signals to create music responsive to detected emotional states. Others incorporate biological feedback loops, adjusting musical parameters based on physiological measurements such as heart rate patterns, skin conductivity, or movement data.
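
To make the biofeedback-loop pattern concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the read_heart_rate() sensor stub, the crude arousal estimate, and the mapping from arousal to tempo, brightness, and texture are invented, and no vendor's actual system is being described.

```python
import time

def read_heart_rate() -> float:
    """Hypothetical sensor read; a real app would pull this from a wearable's API."""
    return 72.0  # placeholder value for the sketch

def estimate_arousal(heart_rate_bpm: float, resting_bpm: float = 60.0) -> float:
    """Map heart rate to a crude 0..1 arousal score relative to a resting baseline."""
    return max(0.0, min(1.0, (heart_rate_bpm - resting_bpm) / 60.0))

def choose_music_parameters(arousal: float) -> dict:
    """Higher arousal nudges the generator toward slower, softer, sparser output."""
    return {
        "tempo_bpm": 90 - 30 * arousal,             # slow the pulse as stress rises
        "filter_cutoff_hz": 4000 - 2500 * arousal,  # dampen bright frequencies
        "layer_count": 3 if arousal < 0.5 else 2,   # thin the texture when stressed
    }

def biofeedback_loop(interval_seconds: float = 5.0, iterations: int = 3) -> None:
    """Periodically re-read physiology and re-parameterise the music generator."""
    for _ in range(iterations):
        arousal = estimate_arousal(read_heart_rate())
        params = choose_music_parameters(arousal)
        print(f"arousal={arousal:.2f} -> {params}")  # a real system would drive a synth here
        time.sleep(interval_seconds)

if __name__ == "__main__":
    biofeedback_loop(interval_seconds=0.5)
```

The design point is modest: the “personalisation” amounts to re-parameterising a generative process from physiological signals at a fixed interval, which is a long way from clinical judgement.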

The technology is genuinely sophisticated. Brain.fm uses what it describes as “rhythmic audio that guides brain activity through a process called entrainment,” with studies showing a 29% increase in deep sleep-related brain waves. Endel's system analyses multiple contextual signals simultaneously, generating soundscapes that theoretically align with your body's current state and needs.
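
The entrainment idea itself can be shown just as schematically. In the hedged sketch below, the EEG band boundaries are standard conventions from the literature, but the mapping from a target state to an amplitude-modulation rate is an assumption made for illustration, not a description of Brain.fm's or Endel's methods.

```python
# Conventional EEG frequency bands in Hz (approximate, widely used boundaries).
EEG_BANDS = {
    "delta": (0.5, 4.0),   # associated with deep sleep
    "theta": (4.0, 8.0),   # drowsiness, light meditation
    "alpha": (8.0, 12.0),  # relaxed wakefulness
    "beta": (12.0, 30.0),  # alert, focused attention
}

# Illustrative mapping from a desired state to a band; the choice is an assumption.
TARGET_STATE_TO_BAND = {
    "sleep": "delta",
    "relax": "alpha",
    "focus": "beta",
}

def modulation_rate_for(state: str) -> float:
    """Pick the midpoint of the band associated with the desired state."""
    low, high = EEG_BANDS[TARGET_STATE_TO_BAND[state]]
    return (low + high) / 2.0

for state in ("sleep", "relax", "focus"):
    print(state, f"{modulation_rate_for(state):.2f} Hz modulation")
```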

Yet a critical distinction exists between these commercial applications and validated medical treatments. Brain.fm explicitly states that it “was not built for therapeutic purposes” and cannot “make any claims about using it as a medical treatment or replacement for music therapy.” This disclaimer reveals a fundamental tension: the products are marketed using the language and aesthetics of mental health treatment whilst carefully avoiding the regulatory scrutiny and evidentiary standards that actual therapeutic interventions must meet.

The Commodification Problem

The mental health wellness industry has become a trillion-pound sector encompassing everything from meditation apps and biometric rings to infrared saunas and mindfulness merchandise. Within this sprawling marketplace, AI-generated therapeutic music occupies an increasingly lucrative niche. The business model is straightforward: subscription-based access to algorithmically generated content that promises to improve mental health outcomes.

The appeal is obvious when we consider the systemic failures in mental healthcare access. Traditional therapy remains frustratingly inaccessible for millions. Cost barriers are substantial; a single 60-minute therapy session can range from £75 to £150 in the UK, and a patient with major depression can spend an average of $10,836 annually on treatment in the United States. Approximately 31% of Americans feel mental health treatment is financially out of reach. Nearly one in ten have incurred debt to pay for mental health treatment, with 60% of them accumulating over $1,000 in debt on average.

Provider shortages compound these financial barriers. More than 112 million Americans live in areas where mental health providers are scarce. The United States faces an overall shortage of doctors, with the shortage of mental health professionals steeper than in any other medical field. Rural areas often have few to no mental health care providers, whilst urban clinics often have long waiting lists, with patients suffering for months before getting a basic intake appointment.

Against this backdrop of unmet need, AI music apps present themselves as democratising solutions. They're affordable (typically £5 to £15 monthly), immediately accessible, free from waiting lists, and carry no stigma. For someone struggling with anxiety who cannot afford therapy or find an available therapist, an app promising evidence-based stress reduction through personalised soundscapes seems like a reasonable alternative.

But this framing obscures crucial questions about what's actually being commodified. When we purchase a streaming music subscription, we're buying access to artistic works with entertainment value. When we purchase a prescription medication, we're buying a regulated therapeutic intervention with demonstrated efficacy and monitored safety. AI therapeutic music apps exist in an ambiguous space between these categories. They employ the aesthetics and language of healthcare whilst functioning legally as consumer wellness products. They make soft claims about mental health benefits whilst avoiding hard commitments to therapeutic outcomes.

Critics argue this represents the broader commodification of mental health, where systemic problems are reframed as individual consumer choices. Rather than addressing structural barriers to mental healthcare access (provider shortages, insurance gaps, geographic disparities), the market offers apps. Rather than investing in training more therapists or expanding mental health infrastructure, venture capital flows toward algorithmic solutions. The emotional labour of healing becomes another extractive resource, with companies monetising our vulnerability.

There's a darker edge to this as well. The data required to personalise these systems is extraordinarily intimate. Apps tracking heart rate, movement patterns, sleep cycles, and music listening preferences are assembling comprehensive psychological profiles. This data has value beyond improving your individual experience; it represents an asset for data capitalism. Literature examining digital mental health technologies has raised serious concerns about the commodification of mental health data through what researchers call “the practice of data capitalism.” Who owns this data? How is it being used beyond the stated therapeutic purpose? What happens when your emotional vulnerabilities become datapoints in a system optimised for engagement and retention rather than genuine healing?

The wellness industry, broadly, has been criticised for what researchers describe as the oversimplification of complex mental health issues through self-help products that neglect the underlying complexity whilst potentially exacerbating struggles. When we reduce anxiety or depression to conditions that can be “fixed” through the right playlist, we risk misunderstanding the social, economic, psychological, and neurobiological factors that contribute to mental illness. We make systemic problems about the individual, promoting a “work hard enough and you'll make it” ethos rather than addressing root causes.

The Question of Artistic Agency

The discussion of agency in AI music generation inevitably circles back to a foundational question: whose music is this, actually? The algorithms generating therapeutic soundscapes weren't trained on abstract mathematical principles. They learned from existing music, vast datasets comprising millions of compositions created by human artists over decades or centuries. Every chord progression suggested by the algorithm, every melodic contour, every rhythmic pattern draws from this training data. The AI is fundamentally a sophisticated pattern-matching system that recombines elements learned from human creativity.

This raises profound questions about artist rights and compensation. When an AI generates a “new” piece of therapeutic music that helps someone through a panic attack, should the artists whose work trained that system receive recognition? Compensation? The current legal and technological infrastructure says no. AI training typically occurs without artist permission or payment. Universal Music Group and other major music publishers have filed lawsuits alleging that AI models were trained without permission on copyrighted works, a position with substantial legal and ethical weight. As critics point out, “training AI models on copyrighted work isn't fair use.”

The U.S. Copyright Office has stated that music made entirely by AI, without human creative input, is not eligible for copyright protection. This creates a peculiar situation where the output is owned by no one, yet the input belonged to many. Artists have voiced alarm about this dynamic. The Recording Industry Association of America joined the Human Artistry Campaign to protect artists' rights amid the AI surge. States such as Tennessee have passed legislation (the ELVIS Act) offering civil and criminal remedies for unauthorised AI use of artistic voices and styles.

Yet the artist community is far from united on this issue. Some view AI as a threat to livelihoods; others see it as a creative tool. When AI can replicate voices and styles with increasing accuracy, it “threatens the position of need for actual artists if it's used with no restraints,” as critics have warned. The technology can deprive instrumentalists and performers of recording opportunities, leading to direct loss of work. Music platforms also have financial incentives to support this shift: Spotify paid out nine billion dollars in royalties in 2023, a sum that could shrink dramatically if AI-generated content displaces human recordings.

Conversely, some artists have embraced the technology proactively. Grimes launched Elf.Tech, explicitly inviting others to replicate her voice with AI and offering to split the resulting profits, believing that “creativity is a conversation across generations.” Singer-songwriter Holly Herndon created Holly+, a vocal deepfake of her own voice, encouraging artists to “take on a proactive role in these conversations and claim autonomy.” For these artists, AI represents not theft but evolution, a new medium for creative expression.

The therapeutic context adds another layer of complexity. If an AI system generates music that genuinely helps someone recover from depression, does that therapeutic value justify the uncredited, uncompensated use of training data? Is there moral distinction between AI-generated entertainment music and AI-generated therapeutic music? Some might argue that healing applications constitute a social good that outweighs individual artist claims. Others would counter that this merely adds exploitation of vulnerability to the exploitation of creative labour.

The cultural diversity dimension cannot be ignored either. Research examining algorithmic bias in music generation has found severe under-representation of non-Western music, with only 5.7% of existing music datasets coming from non-Western genres. Models trained predominantly on Western music perpetuate biases of Western culture, relying on Western tonal and rhythmic structures even when attempting to generate music for Indian, Middle Eastern, or other non-Western traditions. When AI therapeutic music systems are trained on datasets that dramatically under-represent global musical traditions, they risk encoding a narrow, culturally specific notion of what “healing” music should sound like. This raises profound questions about whose emotional experiences are centred, whose musical traditions are valued, and whose mental health needs are genuinely served by these technologies.

The Allocation of Agency

Agency, in this context, refers to the capacity to make autonomous decisions that shape one's experience and outcomes. In the traditional music therapy model, agency is distributed relatively clearly. The patient exercises agency by choosing to pursue therapy, selecting a therapist, and participating actively in treatment. The therapist exercises professional agency in designing interventions, responding to patient needs, and adjusting approaches based on clinical judgement. The therapeutic process is fundamentally collaborative, a negotiated space where both parties contribute to the healing work.

AI-generated therapeutic music disrupts this model in several ways. Consider the role of the patient. At first glance, these apps seem to enhance patient agency; you can access therapeutic music anytime, anywhere, without depending on professional gatekeepers. You control when you listen, for how long, and in what context. This is genuine autonomy compared to waiting weeks for an appointment slot or navigating insurance authorisation.

Yet beneath this surface autonomy lies a more constrained reality. The app determines which musical interventions you receive based on algorithmic assessment of your data. You didn't choose the specific frequencies, rhythms, or tonal qualities; the system selected them. You might not even know what criteria the algorithm is using to generate your “personalised” soundscape. As research on patient autonomy in digital health has documented, “a key challenge arises: how can patients provide truly informed consent if they do not fully understand how the AI system operates, its limitations, or its decision-making processes?”

The informed consent challenge is particularly acute because these systems operate as black boxes. Even the developers often cannot fully explain why a neural network generated a specific musical sequence. The system optimises for measured outcomes (did heart rate decrease? did the user report feeling better? did they continue their subscription?), but the relationship between specific musical qualities and therapeutic effects remains opaque. Traditional therapists can explain their reasoning; AI systems cannot, or at least not in ways that are meaningfully transparent.

The scientist or engineer training the algorithm exercises significant agency in shaping the system's capabilities and constraints. Decisions about training data, architectural design, optimisation objectives, and deployment contexts fundamentally determine what the system can and cannot do. These technical choices encode values, whether explicitly or implicitly. If the training data excludes certain musical traditions, the system's notion of “therapeutic” music will be culturally narrow. If the optimisation metric is user engagement rather than clinical outcome, the system might generate music that feels good in the short term but doesn't address underlying issues. If the deployment model prioritises scalability over personalisation, individual needs may be subordinated to averaged patterns.
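
The point about optimisation objectives can be made concrete with a toy example. In the sketch below, the field names and weights are invented for illustration; the purpose is only to show that the same session can look like a success under an engagement objective and a failure under a clinical one.

```python
from dataclasses import dataclass

@dataclass
class SessionOutcome:
    minutes_listened: float        # engagement signal
    renewed_subscription: bool     # retention signal
    anxiety_score_change: float    # negative values mean improvement on a clinical scale
    sought_professional_help: bool

def engagement_objective(s: SessionOutcome) -> float:
    """Rewards time-on-app and retention, regardless of clinical trajectory."""
    return 0.7 * (s.minutes_listened / 60.0) + 0.3 * float(s.renewed_subscription)

def clinical_objective(s: SessionOutcome) -> float:
    """Rewards symptom improvement and appropriate escalation to human care."""
    return -1.0 * s.anxiety_score_change + 0.5 * float(s.sought_professional_help)

session = SessionOutcome(
    minutes_listened=180.0,      # heavy use of the app
    renewed_subscription=True,
    anxiety_score_change=+2.0,   # symptoms slightly worse
    sought_professional_help=False,
)

print(engagement_objective(session))  # 2.4: the product looks successful
print(clinical_objective(session))    # -2.0: the user is not actually better
```

Whichever of these two numbers the training pipeline maximises is, in effect, a value judgement made by the engineer rather than the patient.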

Yet scientists and engineers typically don't have therapeutic training. They optimise algorithms; they don't treat patients. As research examining human-AI collaboration in music therapy has found, music therapists identify both benefits and serious concerns about AI integration. Therapists question their own readiness and whether they're “adequately equipped to harness or comprehend the potential power of AI in their practice.” They recognise that “AI lacks self-awareness and emotional awareness, which is a necessity for music therapists,” acknowledging that “for that aspect of music therapy, AI cannot be helpful quite yet.”

So does the algorithm itself hold agency? This philosophical question has practical implications. If the AI system makes a “decision” that harms a user (exacerbates anxiety, triggers traumatic memories, interferes with prescribed treatment), who is responsible? The algorithm is the immediate cause, but it's not a moral agent capable of accountability. We might hold the company liable, but companies frequently shield themselves through terms of service disclaimers and the “wellness product” categorisation that avoids medical device regulation.

Current regulatory frameworks haven't kept pace with these technologies. Of the approximately 20,000 mental health apps available, only five have FDA approval. The regulatory environment is what critics describe as a “patchwork system,” with the FDA reviewing only a small number of digital therapeutics using “pathways and processes that have not always been aligned with the rapid, dynamic, and iterative nature of treatments delivered as software.” Most AI music apps exist in a regulatory void, neither fully healthcare nor fully entertainment, exploiting the ambiguity to avoid stringent oversight.

This regulatory gap has implications for agency distribution. Without clear standards for efficacy, safety, and transparency, users cannot make genuinely informed choices. Without accountability mechanisms, companies face limited consequences for harms. Without professional oversight, there's no systemic check on whether these tools actually serve therapeutic purposes or merely provide emotional palliatives that might delay proper treatment.

The Therapeutic Alliance Problem

Perhaps the most fundamental question is whether AI-generated music can replicate the therapeutic alliance that research consistently identifies as crucial to healing. The therapeutic alliance encompasses three elements: agreement on treatment goals, agreement on the tasks needed to achieve those goals, and the development of a trusting bond between therapist and client. This alliance has been shown to be “the most important factor in successful therapeutic treatments across all types of therapies.”

Can an algorithm develop such an alliance? Proponents might argue that personalisation creates a form of bond; the system “knows” you through data and responds to your needs. The music feels tailored to you, creating a sense of being understood. Some users report genuine emotional connections to their therapeutic music apps, experiencing the algorithmically generated soundscapes as supportive presences in difficult moments.

Yet this is fundamentally different from human therapeutic alliance. The algorithm doesn't actually understand you; it correlates patterns in your data with patterns in its training data and generates outputs predicted to produce desired effects. It has no empathy, no genuine concern for your well-being, no capacity for the emotional attunement that human therapists provide. As music therapists in research studies have emphasised, the therapeutic alliance developed through music therapy “develops through them as dynamic forces of change,” a process that seems to require human reciprocity.

The distinction matters because therapeutic effectiveness isn't just about technical intervention; it's about the relational context in which that intervention occurs. Studies of music therapy's effectiveness emphasise that “the quality of the client's connection with the therapist is the best predictor of therapeutic outcome” and that positive alliance correlates with greater decrease in both depressive and anxiety symptoms throughout treatment. The relationship itself is therapeutic, not merely a delivery mechanism for the technical intervention.

Moreover, human therapists provide something algorithms cannot: adaptive responsiveness to the full complexity of human experience. They can recognise when a patient's presentation suggests underlying trauma, medical conditions, or crisis situations requiring different interventions. They can navigate cultural contexts, relational dynamics, and ethical complexities that arise in therapeutic work. They exercise clinical judgement informed by training, experience, and ongoing professional development. An algorithm optimising for heart rate reduction might miss signs of emotional disconnection, avoidance, or other responses that, while technically “calm,” indicate problems rather than progress.

Research specifically examining human-AI collaboration in music therapy has found that therapists identify “critical challenges” including “the lack of human-like empathy, impact on the therapeutic alliance, and client attitudes towards AI guidance.” These aren't merely sentimental objections to technology; they're substantive concerns about whether the essential elements of therapeutic effectiveness can be preserved when the human therapist is replaced by or subordinated to algorithmic systems.

The Evidence Gap

For all the sophisticated technology and compelling marketing, the evidentiary foundation for AI-generated therapeutic music remains surprisingly thin. Brain.fm has conducted studies, but the company explicitly acknowledges the product isn't intended as medical treatment. Endel's primary supporting reference is a non-peer-reviewed white paper describing a study conducted by Arctop, an AI company, and partially funded by Endel itself. This is advocacy research, not independent validation.

More broadly, the evidence for technologies commonly incorporated into these apps (specialised audio tones that supposedly influence brainwaves) is mixed at best. Whilst some studies show promising results, systematic reviews have found the literature “inconclusive.” A comprehensive 2023 review of studies on brain-wave entrainment audio found that only five of fourteen studies showed evidence supporting the claimed effects. Researchers noted that whilst these technologies represent “promising areas of research,” they “did not yet have suitable scientific backing to adequately draw conclusions on efficacy.” Many studies suffer from methodological inconsistencies, small sample sizes, lack of adequate controls, and conflicts of interest.

This evidence gap is problematic because it means users cannot make truly informed decisions about these products. When marketing materials suggest mental health benefits whilst disclaimers deny medical claims, users exist in a state of cultivated ambiguity. The products trade on the credibility of scientific research and clinical practice whilst avoiding the standards those fields require.

The regulatory framework theoretically addresses this problem. Digital therapeutics intended to treat medical conditions are regulated by the FDA as Class II devices, requiring demonstration of safety and effectiveness. Several mental health digital therapeutics have successfully navigated this process. In May 2024, the FDA approved Rejoyn, the first app for treatment of depression in people who don't fully respond to antidepressants. In April 2024, MamaLift Plus became the first digital therapeutic for maternal mental health approved by the FDA. These products underwent rigorous evaluation demonstrating clinical efficacy.

But most AI music apps don't pursue this pathway. They position themselves as “wellness” products rather than medical devices, avoiding regulatory scrutiny whilst still suggesting health benefits. This has prompted critics to call for better regulation of mental health technologies to distinguish “useful mental health tech from digital snake oil.”

Building an Ethical Framework

Given this complex landscape, where should we draw ethical lines? Several principles emerge from examining the tensions between technological innovation, therapeutic effectiveness, and human well-being.

First, transparency must be non-negotiable. Users of AI-generated therapeutic music should understand clearly what they're receiving, how it works, what evidence supports its use, and what its limitations are. This means disclosure about training data sources, algorithmic decision-making processes, data collection and usage practices, and the difference between wellness products and validated medical treatments. Companies should not be permitted to suggest therapeutic benefits through marketing whilst disclaiming medical claims through legal language. If it's positioned as helping mental health, it should meet evidentiary and transparency standards appropriate to that positioning.

Second, informed consent must be genuinely informed. Current digital consent processes often fail to provide meaningful understanding, particularly regarding data usage and algorithmic operations. Dynamic consent models, which allow ongoing engagement with consent decisions as understanding evolves, represent one promising approach. Users should understand not just that their data will be collected, but how that data might be used, sold, or leveraged beyond the immediate therapeutic application.
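
As a rough sketch of what dynamic consent could look like in software (the field names and granularity below are assumptions made for illustration, not a reference to any existing platform), each data use becomes a separate, revocable, time-stamped permission rather than a single blanket agreement.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentDecision:
    purpose: str          # e.g. "personalise soundscapes", "share with researchers"
    granted: bool
    decided_at: datetime

@dataclass
class DynamicConsent:
    user_id: str
    decisions: dict[str, ConsentDecision] = field(default_factory=dict)

    def update(self, purpose: str, granted: bool) -> None:
        """Record the latest decision; earlier answers can always be revisited."""
        self.decisions[purpose] = ConsentDecision(purpose, granted, datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        """Default to no: a purpose the user was never asked about is not consented to."""
        decision = self.decisions.get(purpose)
        return bool(decision and decision.granted)

consent = DynamicConsent(user_id="u123")
consent.update("personalise soundscapes", granted=True)
consent.update("sell aggregated mood data", granted=False)
print(consent.allows("personalise soundscapes"))   # True
print(consent.allows("share with third parties"))  # False: never asked, never granted
```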

Third, artist rights must be respected. If AI systems are trained on copyrighted works, artists deserve recognition and compensation. The therapeutic application doesn't exempt developers from these obligations. Industry-wide standards for licensing training data, similar to those in other creative industries, would help address this systematically. Artists should also have the right to opt out of having their work used for AI training, a position gaining legislative traction in various jurisdictions.

Fourth, cultural representation matters. AI systems trained predominantly on Western musical traditions should not be marketed as universal solutions. Developers have a responsibility to ensure their training data represents the cultural diversity of potential users, or to clearly disclose cultural limitations. This requires investment in expanding datasets to include marginalised musical genres and traditions, using specialised techniques to address bias, and involving diverse communities in system development.
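
One of the simpler mitigations gestured at above, re-weighting an imbalanced corpus so that under-represented traditions are not drowned out during training, can be sketched as follows. The genre counts are invented for illustration (chosen so that non-Western material makes up 5.7% of the corpus, echoing the figure cited earlier); the technique shown is ordinary inverse-frequency sampling, not any specific company's approach.

```python
from collections import Counter
import random

# Invented corpus composition for illustration: 5.7% of items are non-Western.
corpus = (["western_classical"] * 800 + ["western_pop"] * 143 +
          ["hindustani"] * 25 + ["maqam"] * 20 + ["gamelan"] * 12)

counts = Counter(corpus)
total = len(corpus)

# Inverse-frequency weights: rarer traditions get proportionally higher sampling weight.
weights = {genre: total / count for genre, count in counts.items()}

def sample_balanced(n: int) -> list:
    """Draw a training batch in which each tradition has a roughly even chance."""
    genres = list(counts)
    probs = [weights[g] for g in genres]
    return random.choices(genres, weights=probs, k=n)

print({g: f"{c / total:.1%}" for g, c in counts.items()})  # raw representation
print(Counter(sample_balanced(1000)))                       # roughly balanced batch
```

Re-sampling does not conjure missing musical knowledge out of thin air, which is why the deeper fix remains collecting and licensing genuinely diverse training data in the first place.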

Fifth, the therapeutic alliance cannot be fully replaced. AI-generated music might serve as a useful supplementary tool or stopgap measure, but it shouldn't be positioned as equivalent to professional music therapy or mental health treatment. The evidence consistently shows that human connection, clinical judgement, and adaptive responsiveness are central to therapeutic effectiveness. Systems that diminish or eliminate these elements should be transparent about this limitation.

Sixth, regulatory frameworks need updating. The current patchwork system allows products to exploit ambiguities between wellness and healthcare, avoiding oversight whilst suggesting medical benefits. Digital therapeutics regulations should evolve to cover AI-generated therapeutic interventions, establishing clear thresholds for what constitutes a medical claim, what evidence is required to support such claims, and what accountability exists for harms. This doesn't mean stifling innovation, but rather ensuring that innovation serves genuine therapeutic purposes rather than merely extracting value from vulnerable populations.

Seventh, accessibility cannot be an excuse for inadequacy. The fact that traditional therapy is expensive and inaccessible represents a systemic failure that demands systemic solutions: training more therapists, expanding insurance coverage, investing in community mental health infrastructure, and addressing economic inequalities that make healthcare unaffordable. AI tools might play a role in expanding access, but they shouldn't serve as justification for neglecting these deeper investments. We shouldn't accept algorithmic substitutes as sufficient simply because the real thing is too expensive.

Reclaiming Agency

Ultimately, the question of agency in AI-generated therapeutic music requires us to think carefully about what we want healthcare to be. Do we want mental health treatment to be a commodity optimised for scale, engagement, and profit? Or do we want it to remain a human practice grounded in relationship, expertise, and genuine care?

The answer, almost certainly, involves some combination. Technology has roles to play in expanding access, supporting professional practice, and providing tools for self-care. But these roles must be thoughtfully bounded by recognition of what technology cannot do and should not replace.

For patients, reclaiming agency means demanding transparency, insisting on evidence, and maintaining critical engagement with technological promises. It means recognising that apps can be useful tools but are not substitutes for professional care when serious conditions require it. It means understanding that your data has value and asking hard questions about how it's being used beyond your immediate benefit.

For clinicians and researchers, it means engaging proactively with these technologies rather than ceding the field to commercial interests. Music therapists, psychiatrists, psychologists, and other mental health professionals should be centrally involved in designing, evaluating, and deploying AI tools in mental health contexts. Their expertise in therapeutic process, clinical assessment, and human psychology is essential for ensuring these tools actually serve therapeutic purposes.

For artists, it means advocating forcefully for rights, recognition, and compensation. The creative labour that makes AI systems possible deserves respect and remuneration. Artists should be involved in discussions about how their work is used, should have meaningful consent processes, and should share in benefits derived from their creativity.

For technologists and companies, it means accepting responsibility for the power these systems wield. Building tools that intervene in people's emotional and mental states carries ethical obligations beyond legal compliance. It requires genuine commitment to transparency, evidence, fairness, and accountability. It means resisting the temptation to exploit regulatory gaps, data asymmetries, and market vulnerabilities for profit.

For policymakers and regulators, it means updating frameworks to match technological realities. This includes expanding digital therapeutics regulations, strengthening data protection specifically for sensitive mental health information, establishing clear standards for AI training data licensing, and investing in the traditional mental health infrastructure that technology is meant to supplement rather than replace.

The Sound of What's Coming

The algorithm is learning to read our inner states with increasing precision. Heart rate variability, keystroke patterns, voice tone analysis, facial expression recognition, sleep cycles, movement data: all of it feeding sophisticated models that predict our emotional needs before we're fully conscious of them ourselves. The next generation of AI therapeutic music will be even more personalised, even more responsive, even more persuasive in its intimate understanding of our vulnerabilities.

This trajectory presents both opportunities and dangers. On one hand, genuinely helpful tools might emerge that expand access to therapeutic interventions, support professional practice, and provide comfort to those who need it. On the other, we might see the further commodification of human emotional experience, the erosion of professional therapeutic practice, the exploitation of artists' creative labour, and the development of systems that prioritise engagement and profit over genuine healing.

The direction we move depends on choices we make now. These aren't merely technical choices about algorithms and interfaces; they're fundamentally ethical and political choices about what we value, whom we protect, and what vision of healthcare we want to build.

When the algorithm composes your calm, it's worth asking: calm toward what end? Soothing toward what future? If AI-generated music helps you survive another anxiety-ridden day in a society that makes many of us anxious, that's not nothing. But if it also normalises that anxiety, profits from your distress, replaces human connection with algorithmic mimicry, and allows systemic problems to persist unchallenged, then perhaps the real question isn't whether the music works, but what world it's working to create.

The line between technological healing and the commodification of solace isn't fixed or obvious. It must be drawn and redrawn through ongoing collective negotiation involving all stakeholders: patients, therapists, artists, scientists, companies, and society broadly. That negotiation requires transparency, evidence, genuine consent, cultural humility, and a commitment to human flourishing that extends beyond what can be captured in optimisation metrics.

The algorithm knows your heart rate is elevated right now. It's already composing something to bring you down. Before you press play, it's worth considering who that music is really for.


Sources and References

Peer-Reviewed Research

  1. “On the use of AI for Generation of Functional Music to Improve Mental Health,” Frontiers in Artificial Intelligence, 2020. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2020.497864/full

  2. “Advancing personalized digital therapeutics: integrating music therapy, brainwave entrainment methods, and AI-driven biofeedback,” PMC, 2025. https://pmc.ncbi.nlm.nih.gov/articles/PMC11893577/

  3. “Understanding Human-AI Collaboration in Music Therapy Through Co-Design with Therapists,” CHI Conference 2024. https://dl.acm.org/doi/10.1145/3613904.3642764

  4. “A review of artificial intelligence methods enabled music-evoked EEG emotion recognition,” PMC, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11408483/

  5. “Effectiveness of music therapy: a summary of systematic reviews,” PMC, 2014. https://pmc.ncbi.nlm.nih.gov/articles/PMC4036702/

  6. “Effects of music therapy on depression: A meta-analysis,” PLOS One, 2020. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0240862

  7. “Music therapy for stress reduction: systematic review and meta-analysis,” Health Psychology Review, 2020. https://www.tandfonline.com/doi/full/10.1080/17437199.2020.1846580

  8. “Cognitive Crescendo: How Music Shapes the Brain's Structure and Function,” PMC, 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10605363/

  9. “Live music stimulates the affective brain and emotionally entrains listeners,” PNAS, 2024. https://www.pnas.org/doi/10.1073/pnas.2316306121

  10. “Music-Evoked Emotions—Current Studies,” Frontiers in Neuroscience, 2017. https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2017.00600/full

  11. “Common modulation of limbic network activation underlies musical emotions,” NeuroImage, 2016. https://www.sciencedirect.com/science/article/abs/pii/S1053811916303093

  12. “Neural Correlates of Emotion Regulation and Music,” PMC, 2017. https://pmc.ncbi.nlm.nih.gov/articles/PMC5376620/

  13. “Effects of binaural beats and isochronic tones on brain wave modulation,” Revista de Neuro-Psiquiatria, 2021. https://www.researchgate.net/publication/356174078

  14. “Binaural beats to entrain the brain? A systematic review,” PMC, 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10198548/

  15. “Music Therapy and Therapeutic Alliance in Adult Mental Health,” PubMed, 2019. https://pubmed.ncbi.nlm.nih.gov/30597104/

  16. “Patient autonomy in a digitalized world,” PMC, 2016. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4800322/

  17. “Digital tools in the informed consent process: a systematic review,” BMC Medical Ethics, 2021. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00585-8

  18. “Exploring societal implications of digital mental health technologies,” ScienceDirect, 2024. https://www.sciencedirect.com/science/article/pii/S2666560324000781

Regulatory and Professional Standards

  1. Certification Board for Music Therapists. “Earning the MT-BC.” https://www.cbmt.org/candidates/certification/

  2. American Music Therapy Association. “Requirements to be a music therapist.” https://www.musictherapy.org/about/requirements/

  3. “FDA regulations and prescription digital therapeutics,” Frontiers in Digital Health, 2023. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1086219/full

Industry and Market Analysis

  1. Brain.fm. “Our science.” https://www.brain.fm/science

  2. “Mental Health Apps: Regulation and Validation Are Needed,” DIA Global Forum, November 2024. https://globalforum.diaglobal.org/issue/november-2024/

Healthcare Access and Costs

  1. “Access and Cost Barriers to Mental Health Care,” PMC, 2014. https://pmc.ncbi.nlm.nih.gov/articles/PMC4236908/

  2. “The Behavioral Health Care Affordability Problem,” Center for American Progress, 2023. https://www.americanprogress.org/article/the-behavioral-health-care-affordability-problem/

  3. “Exploring Barriers to Mental Health Care in the U.S.,” AAMC Research Institute. https://www.aamcresearchinstitute.org/our-work/issue-brief/exploring-barriers-mental-health-care-us

Ethics and Commodification

  1. “The Commodification of Mental Health: When Wellness Becomes a Product,” Life London, February 2024. https://life.london/2024/02/the-commodification-of-mental-health/

  2. “Has the $1.8 trillion Wellness Industry commodified mental wellbeing?” Inspire the Mind. https://www.inspirethemind.org/post/has-the-1-8-trillion-wellness-industry-commodified-mental-wellbeing

Copyright and Artist Rights

  1. “Defining Authorship for the Copyright of AI-Generated Music,” Harvard Undergraduate Law Review, Fall 2024. https://hulr.org/fall-2024/defining-authorship-for-the-copyright-of-ai-generated-music

  2. “Artists' Rights in the Age of Generative AI,” Georgetown Journal of International Affairs, July 2024. https://gjia.georgetown.edu/2024/07/10/innovation-and-artists-rights-in-the-age-of-generative-ai/

  3. “AI And Copyright: Protecting Music Creators,” Recording Academy. https://www.recordingacademy.com/advocacy/news/ai-copyright-protecting-music-creators-united-states-copyright-office

Algorithmic Bias and Cultural Diversity

  1. “Music for All: Representational Bias and Cross-Cultural Adaptability,” arXiv, February 2025. https://arxiv.org/html/2502.07328

  2. “Reducing Barriers to the Use of Marginalised Music Genres in AI,” arXiv, July 2024. https://arxiv.org/html/2407.13439v1

  3. “Ethical Implications of Generative Audio Models,” Montreal AI Ethics Institute. https://montrealethics.ai/the-ethical-implications-of-generative-audio-models-a-systematic-literature-review/

Artist Perspectives

  1. “AI-Generated Music: A Creative Revolution or a Cultural Crisis?” Rolling Stone Council. https://council.rollingstone.com/blog/the-impact-of-ai-generated-music/

  2. “How AI Is Transforming Music,” TIME, 2023. https://time.com/6340294/ai-transform-music-2023/

  3. “Artificial Intelligence and the Music Industry,” UK Music, 2024. https://www.ukmusic.org/research-reports/appg-on-music-report-on-ai-and-music-2024/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
