Your Body, Their Data: How Insurers Profit from Wellness Apps

Every morning, roughly 62 million Americans strap on a fitness tracker, open a wellness app, or tap through a mood journal before their first cup of coffee. They log sleep scores, heart rate variability, menstrual cycles, calorie counts, and anxiety levels with the casual ease of checking the weather. In the United Kingdom, 35 per cent of the population now owns and regularly uses a wearable health tracker, an 80 per cent increase from 2019 usage levels. The implicit bargain feels simple enough: hand over some personal data, receive personalised health insights in return. But that bargain contains a clause most users never read, and the consequences run far deeper than targeted adverts for protein powder.
The health information voluntarily uploaded to consumer wellness platforms occupies a regulatory void that would startle most people if they understood it. Unlike the records held by a hospital or a GP's surgery, this data sits almost entirely outside the protections of the Health Insurance Portability and Accountability Act, the 1996 law Americans have long assumed functions as a universal shield for medical information. It does not. And in that gap between assumption and legal reality, an entire industry has taken root, one that buys, aggregates, and resells the most intimate details of human biology to the highest bidder.
The question is no longer hypothetical. It is already happening.
The HIPAA Illusion
HIPAA was written for a world of paper charts and fax machines. It governs “covered entities,” a term of art that encompasses healthcare providers, health plans, and healthcare clearinghouses, along with the business associates who handle data on their behalf. When a hospital stores your blood test results, HIPAA applies. When your cardiologist emails your electrocardiogram to a specialist, HIPAA applies.
When you voluntarily upload that same electrocardiogram reading from your Apple Watch to a third-party wellness platform, HIPAA almost certainly does not apply.
The distinction matters enormously, yet most consumers do not grasp it. A survey reported by the HIPAA Journal found that a majority of Americans mistakenly believe that health app data is covered by HIPAA. The Department of Health and Human Services itself has published guidance clarifying the opposite: once health information is received from a covered entity at the individual's direction by an app that is neither a covered entity nor a business associate, the information is no longer subject to the protections of the HIPAA Rules.
That single sentence carries extraordinary implications. It means the covered entity bears no HIPAA responsibility or liability if the receiving app later experiences a breach, sells the data, or feeds it into an advertising algorithm. The data has, in regulatory terms, left the building.
Legal analysis from the law firm Dickinson Wright put it plainly: “It's important to keep in mind that HIPAA does not apply and was never intended to apply to general health and wellness applications.” While acknowledging that some of that information can be very sensitive and perhaps should be protected as a policy matter, the firm noted that “that's not HIPAA's job.”
There is also a grey zone around employer-sponsored wellness programmes. If a wellness programme is part of an employer's group health plan, such as a plan-sponsored biometric screening, HIPAA typically applies and participating vendors should operate under business associate agreements. But when the employer offers a wellness programme outside the health plan, such as a step challenge or a third-party coaching app paid for as a workplace perk, HIPAA generally does not apply. The distinction often depends on contractual fine print that employees never see.
For the hundreds of millions of people now tracking everything from glucose levels to panic attacks on their phones, this is not an academic distinction. It is the difference between having legal recourse when your data is misused and having none at all.
A Nine-Billion-Dollar Appetite for Your Body
The market for health data has matured into an industry worth billions. According to Scientific American, a $9 billion sector called health care commercial intelligence purchases data from insurance companies and pharmacies, assembling it into searchable databases that pharmaceutical companies and other buyers subscribe to on an ongoing basis.
The scale of these operations is staggering. Companies like Definitive Healthcare maintain profiles on more than 2.6 million physicians, nurses, and other health care professionals, along with billions of insurance claims covering hundreds of millions of patients. These profiles are updated daily and include granular details such as prescription activity and a physician's propensity to prescribe brand-name over generic drugs.
But the data flowing from wellness apps opens an entirely new frontier. Unlike insurance claims, which at least originate within the HIPAA-regulated ecosystem, app-generated data arrives pre-stripped of regulatory protection. A user who tracks their anxiety symptoms through a mood-logging app, for instance, may find that information categorised and sold without their meaningful knowledge. The data collection methods are diverse: cookies and tracking pixels embedded in app interfaces, integration with social media platforms, purchase history from pharmacy loyalty programmes, and even public records including birth and death certificates all feed into broker profiles.
Research from Duke University's Sanford School of Public Policy laid bare the mechanics of this trade. In a study led by Joanne Kim, a Sanford Technology Policy Fellow, researchers contacted 37 data brokers and asked to purchase bulk mental health data. Eleven of those brokers agreed to sell information that identified individuals by specific conditions, including depression, anxiety, and bipolar disorder, often sorted by demographics such as age, race, credit score, and location. Some brokers charged as little as $275 for 5,000 aggregated records. Others offered annual subscriptions ranging from $75,000 to $100,000 for ongoing access. The transactions required little to no screening of potential buyers and imposed few restrictions on how the purchased data could be used.
Justin Sherman, Senior Fellow and Research Lead of Sanford's Data Brokerage Project, testified before the House Committee on Energy and Commerce at a hearing titled “Who is Selling Your Data?” that was directly prompted by the Duke research. Representative Morgan Griffith of Virginia, who chaired the hearing, cited the Sanford research as a driving force behind the proceedings. Sherman warned that data brokers claiming people consented to the collection and sale of their mental health data “are twisting the term so much it becomes meaningless.” He added that the findings “raise all kinds of questions about privacy, potential algorithmic discrimination, and the risk of companies taking advantage of consumers in vulnerable positions.”
The implications extend beyond marketing. Health insurance companies purchase data from brokers to inform their underwriting algorithms. As the compliance firm Compliancy Group has documented, data broker profiles categorise individuals into segments based on health characteristics such as "likelihood of having anxiety," "diabetes," "bladder control issues," and "high blood pressure." A data broker might share such information with an insurer or retailer, potentially increasing a person's rate or shaping the products they are offered. A 2025 IBM report found that the average cost of a data breach in the healthcare industry exceeded $7.4 million, and that 97 per cent of organisations suffering AI-related security incidents lacked proper AI access controls, underscoring the vulnerability of these data pipelines.
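To make the abstraction concrete, here is a minimal Python sketch of what such a broker segment record might look like. Every field name and value is hypothetical; only the segment labels come from the documented broker categories above, and nothing here reflects any specific broker's schema.

```python
# Hypothetical shape of a data broker "audience segment" record.
# All field names and values are invented for illustration; only the
# segment labels mirror the documented broker categories above.

profile = {
    "hashed_email": "a1b2c3d4...",      # pseudonymous join key
    "age_band": "45-54",
    "credit_score_band": "580-649",
    "zip3": "981",                      # coarse location
    "segments": [
        "likelihood_of_anxiety",
        "diabetes",
        "high_blood_pressure",
    ],
}

# A buyer sees inferences, never a diagnosis, and because no covered
# entity produced this record, HIPAA never attaches to it.
print(profile["segments"])
```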
When the Firewall Fails Entirely
Even data that should be protected sometimes is not. In April 2025, Blue Shield of California disclosed that Google Analytics had been misconfigured in a way that shared the protected health information of approximately 4.7 million members with Google Ads for nearly three years, from April 2021 to January 2024. The exposed data included patient names, insurance plan details, city, postcode, gender, family size, medical claim service dates, and service providers.
Blue Shield has 4.8 million members total, meaning the breach affected virtually its entire membership. Security researchers called it the largest healthcare data breach of 2025. The duration of the exposure, nearly three years before it was identified, pointed to systemic failures in data flow visibility, audit logging, and vendor oversight. The root cause was a tracking pixel, a standard tool in e-commerce marketing that proved dangerously misapplied in a regulated healthcare environment. Many healthcare organisations unknowingly introduce similar risks through website trackers, pixel tags, and marketing scripts that silently funnel patient data to advertising platforms.
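The mechanics of such a leak are mundane, which is part of the danger. A tracking pixel is simply an HTTP request whose query string carries whatever the page template puts into it. The schematic Python sketch below illustrates the failure mode in general terms; the endpoint and field names are invented, and this is not a reconstruction of Blue Shield's actual configuration.

```python
# Illustrative only: how an over-broad analytics tag can leak data.
# A pixel "fires" by requesting a URL; anything injected into the
# query string travels to the third party. The endpoint and field
# names below are hypothetical.

from urllib.parse import urlencode

def build_beacon_url(page_context):
    """Assemble the GET request a naive tag template might fire."""
    return "https://ads.example.com/collect?" + urlencode(page_context)

# Fields a misconfigured member-portal page might expose:
print(build_beacon_url({
    "page": "/claims/detail",
    "member_name": "Jane Doe",       # should never leave the portal
    "plan": "Gold PPO",
    "service_date": "2023-06-14",
}))
```

Nothing in this flow looks like an exfiltration event in a log; it looks like ordinary web analytics, which is why the exposure can persist for years before anyone notices.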
The Blue Shield incident involved a traditional covered entity, one that should have been governed by HIPAA. It illustrated how even the regulatory protections that do exist can fail catastrophically when marketing technology intersects with healthcare infrastructure. For wellness apps that sit outside HIPAA entirely, the safeguards are often thinner still, and the oversight mechanisms are weaker.
The FTC Steps In, Partially
Recognising the regulatory vacuum, the Federal Trade Commission has expanded its enforcement toolkit. The agency's amended Health Breach Notification Rule, which took effect on 29 July 2024, extends breach notification requirements to apps and platforms not covered by HIPAA. Violations are treated as unfair or deceptive acts under Section 18 of the FTC Act, carrying civil penalties of up to $51,744 per violation. The updated rule specifically encompasses fitness, fertility, and mental health apps, closing at least some of the notification gap that previously left consumers in the dark about breaches involving their wellness data.
The FTC has already demonstrated willingness to act. In February 2023, GoodRx agreed to pay a $1.5 million civil penalty, the first enforcement action under the Health Breach Notification Rule, after the FTC alleged it had failed to notify customers and regulators of unauthorised disclosures of consumer health information.
In 2023, online therapy company BetterHelp agreed to pay $7.8 million over allegations that it shared consumers' health data with companies including Facebook and Snapchat for advertising purposes. Easy Healthcare, the parent company of the ovulation and period tracking app Premom, settled for $100,000 over similar concerns.
In April 2024, the FTC announced a $7.1 million penalty against Cerebral, a telehealth company that had provided sensitive information on nearly 3.2 million consumers to third parties such as LinkedIn, Snapchat, and TikTok through tracking tools embedded in its website and apps. The data shared included users' names, addresses, phone numbers, medical histories, and prescription information. The order permanently banned Cerebral from using or disclosing consumers' personal and health information to third parties for most marketing or advertising purposes.
The FTC has also moved against data brokers dealing in location data tied to health services. In 2024, the commission announced significant settlements with four data brokers, X-Mode, InMarket Media, Mobilewalla, and Gravy Analytics, resolving allegations of unlawful collection and sale of precise location information. The FTC challenged the practice of categorising consumers based on sensitive characteristics derived from location data, such as medical conditions and religious beliefs, calling it “far outside the expectations and experience of consumers.”
These actions represent meaningful enforcement, but they remain reactive. The FTC can penalise companies after breaches occur. It cannot prevent wellness apps from collecting and monetising health data in the first place, provided the apps disclose their practices somewhere within their terms of service.
The Genetic Blind Spot
The intersection of wellness app data and genetic information creates a particularly dangerous blind spot. The Genetic Information Nondiscrimination Act of 2008, known as GINA, was designed to prevent discrimination based on genetic information. Its protections, however, are narrower than most people realise.
GINA's Title I prohibits group and individual health insurers from using genetic information to determine eligibility or premiums. It also prohibits health insurers from requesting or requiring that a person undergo a genetic test. Its Title II prevents employers from using genetic information in hiring, firing, or job assignment decisions. These protections are significant but incomplete.
GINA does not apply to life insurance, long-term care insurance, or disability insurance. Insurers in these markets are legally permitted to use genetic, personal, or family health information to make coverage or premium decisions. They can rate premiums higher or refuse to offer coverage entirely based on genetic test results. The law also does not apply to employers with fewer than 15 employees or to individuals insured through military programmes such as TRICARE.
The American Medical Association has documented these gaps, noting that GINA's exclusions may cause reluctance among individuals to pursue genetic testing. The American Society of Human Genetics has similarly advocated for expanding GINA's protections to cover the insurance types currently excluded.
Research published in the European Journal of Human Genetics argued that because long-term care and disability insurance can be essential for wellbeing, there is no good reason to place them beyond GINA's reach. The authors noted that the ethical and economic implications of these exclusions grow more significant as genetic testing becomes cheaper, more accessible, and more predictive.
Now consider what happens when genetic information enters the wellness app ecosystem. Services like 23andMe and AncestryDNA have made direct-to-consumer genetic testing mainstream. Some wellness platforms integrate genetic data with fitness and health tracking to deliver personalised recommendations. But if that genetic data is held by a consumer app rather than a HIPAA-covered entity, it may lack even GINA's partial protections, and it certainly lacks protection from life, disability, and long-term care insurers who may eventually access it through data broker channels. Fewer than half of US states have laws providing additional protections against genetic discrimination beyond what GINA offers, leaving the majority of Americans reliant on federal protections that were designed before the wellness app era.
Your Steps, Your Identity
Even data that appears harmless can become a liability. Research highlighted by MobiHealthNews demonstrated that just six days of step counts are sufficient to uniquely identify an individual among 100 million other people. Fitness data, like DNA, forms a sequence. As the length of the sequence grows, the probability of someone else having exactly the same pattern for the same dates decreases exponentially.
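The arithmetic behind that exponential claim is easy to sketch. The toy model below assumes, purely for illustration, that an unrelated person's daily step count happens to match yours on any given day with probability p, and that days are independent; neither figure comes from the study itself.

```python
# Toy model of why step-count sequences become unique so quickly.
# Assumption (invented for illustration): an unrelated person matches
# your step count on any given day with probability p, independently
# across days, so a d-day sequence matches with probability p**d.

POPULATION = 100_000_000   # other people, per the study's framing
p = 0.01                   # assumed per-day match probability

for days in range(1, 9):
    expected_matches = POPULATION * p ** days
    print(f"{days} day(s): expected accidental matches = "
          f"{expected_matches:,.6f}")
```

With these made-up numbers, the expected count of accidental matches falls below one at around five days, echoing the study's six-day finding; the precise crossover depends on p, but the exponential decay does not.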
The re-identification risk is not theoretical. Researchers described a concrete attack scenario: a person with a heart condition participates in a study that collects physical activity data through a wearable device. The same person uses a social network to share workout outcomes. After the study data is anonymised and made publicly available, a malicious actor retrieves both datasets and matches them on the physical activity time series, re-identifying the participant and linking their social network identity to their medical condition.
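The attack reduces to a database join. The sketch below runs the scenario with fabricated data: an "anonymised" study table that retains the activity time series, and public workout posts that carry the same series under a real identity.

```python
# Sketch of the linkage attack described above, using made-up data.
# Matching the shared step sequence re-links an "anonymised" study
# record to a public identity. All names and values are hypothetical.

study_records = [
    {"participant": "P-0413", "condition": "arrhythmia",
     "steps": (8123, 11402, 9377, 10250, 7841, 12009)},
    {"participant": "P-0778", "condition": "hypertension",
     "steps": (4021, 3980, 4555, 5102, 3877, 4444)},
]

public_posts = [
    {"handle": "@jane_runs", "steps": (8123, 11402, 9377, 10250, 7841, 12009)},
    {"handle": "@couchfan",  "steps": (1200, 1100, 1350, 900, 1500, 1280)},
]

# Index the public posts by step sequence, then join the study table.
by_sequence = {post["steps"]: post["handle"] for post in public_posts}
for record in study_records:
    handle = by_sequence.get(record["steps"])
    if handle:
        print(f"{handle} re-identified as {record['participant']} "
              f"({record['condition']})")
```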
Platforms such as Garmin Connect and Fitbit have historically made certain user data, including daily step counts, publicly visible by default. Fitbit's own Research Pledge acknowledges the risk, requiring researchers who share datasets to mitigate re-identification and consider whether research datasets could be linked to publicly visible information. The NIH's All of Us Research Programme has implemented safeguards for Fitbit data used in research, including date-shifting timestamps by random numbers to reduce identification risks, but these protections apply only within the research context and not to commercially held data.
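Date-shifting, the All of Us safeguard mentioned above, can be sketched in a few lines. The idea is to move every timestamp for a participant by one random per-person offset, preserving intervals within the record while breaking calendar alignment with outside datasets; the offset range here is illustrative, not the programme's actual parameter.

```python
# Minimal sketch of date-shifting, assuming one random offset per
# participant. The offset range is illustrative only.

import random
from datetime import date, timedelta

def shift_dates(records, max_shift_days=365, seed=None):
    """Shift each participant's dates by one random per-person offset."""
    rng = random.Random(seed)
    shifted = {}
    for participant, days in records.items():
        offset = timedelta(days=rng.randint(-max_shift_days, max_shift_days))
        shifted[participant] = [(d + offset, steps) for d, steps in days]
    return shifted

raw = {"P-0413": [(date(2024, 3, 1), 8123), (date(2024, 3, 2), 11402)]}
print(shift_dates(raw, seed=42))
```

Note the limitation: the step values themselves are untouched, so a linkage attack that matches on the value sequence rather than the dates still works. This is why date-shifting reduces, but does not eliminate, re-identification risk.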
The consequence is stark. De-identification, the process that health data advocates have long relied upon as a privacy safeguard, is becoming increasingly unreliable. Modern artificial intelligence algorithms make re-identification substantially more feasible, and the proliferation of linked datasets means that a single data point from a wellness app can serve as the key that unlocks an entire medical profile. Genetic data, researchers have noted, is essentially impossible to completely de-identify, meaning that any database containing genetic markers carries an inherent and permanent privacy risk.
The sensitivity of fitness data also extends beyond step counts. Wearable devices now monitor heart function, respiratory patterns, sleep architecture, and blood oxygen levels. Researchers have argued that this data will soon encompass cognitive markers as well. Each additional data stream increases the precision with which an individual can be identified and the richness of the health profile that can be assembled without their knowledge.
The Insurance Pipeline
Life insurers are already building the infrastructure to use wearable data at scale. In August 2025, WTW and Klarity announced a collaboration to help life insurers improve pricing accuracy by integrating wearable technology data into their underwriting processes. The partnership draws on 12 years of health data spanning more than six million life-years, producing individual-level mortality scores that incorporate data from smartwatches and other wearable devices tracking physical activity, heart rate, and sleep patterns.
According to GlobalData's 2024 Emerging Trends Insurance Consumer Survey, over half of US consumers, 54.5 per cent, said they would be quite or very likely to wear an activity tracker and share results with a life insurer in return for a more tailored policy. The prospect of financial savings was the primary incentive for 56.6 per cent of respondents.
Munich Re's research has found that steps per day stratify mortality risk even after controlling for age, gender, smoking status, and various health indicators, providing segmentation beyond traditional underwriting attributes such as BMI, cholesterol, and blood pressure. Traditional mortality risk measures, the reinsurer noted, often misclassify applicants because they fail to capture individualised measures such as resting heart rate, heart recovery rate, sleep quality, and the ratio of activity to inactivity. Wearable data fills those gaps. Programmes like John Hancock's Vitality already offer policyholders premium discounts of up to 15 per cent for meeting fitness goals such as walking 15,000 steps per day.
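To see how such a programme cashes out in premiums, consider a deliberately crude model. Only the 15 per cent discount and the 15,000-step goal come from the text above; the tiers, rates, and the sedentary surcharge below are invented to show the mechanism, including its less-advertised upward side.

```python
# Toy premium model, not any insurer's actual formula. The 15%
# discount at 15,000 steps mirrors the Vitality-style goal described
# above; the neutral tier and the surcharge are hypothetical.

def adjusted_premium(base, avg_daily_steps):
    if avg_daily_steps >= 15_000:
        return base * 0.85        # fitness-goal tier: 15% discount
    if avg_daily_steps >= 8_000:
        return base * 1.00        # neutral tier
    return base * 1.10            # hypothetical sedentary surcharge

for steps in (16_000, 9_500, 3_000):
    print(f"{steps:>6} steps/day -> annual premium "
          f"${adjusted_premium(1_000.0, steps):,.2f}")
```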
The rewards side of this equation receives the marketing emphasis. The risk side receives rather less attention. If a person is sedentary, if their sleep data reveals chronic insomnia, if their heart rate variability suggests unmanaged stress, a wearable device will record it. Insurance industry analysis has noted plainly that such individuals “may pay more or even be denied coverage.”
There is also an equity dimension. The EEOC released guidance in January 2025 on wearable technologies in the workplace, noting that inaccuracies in wearable devices disproportionately affect certain groups. Biometric devices that fail to calibrate for darker skin tones or larger body sizes may produce skewed results, leading to discriminatory outcomes. The EEOC further warned that using heart rate data to infer pregnancy and then making adverse employment decisions based on that information could violate equal employment opportunity laws.
These concerns apply with equal force to insurance underwriting. If wearable data that is inaccurate for certain demographics feeds into actuarial models, the result is not personalisation but discrimination laundered through an algorithm. The individuals most likely to be disadvantaged are those who already face barriers in the insurance market: older adults, people with disabilities, and members of racial and ethnic minority groups.
The Family Problem
A question lingers beneath all of this: could your health data one day be used to deny coverage to your family members? The legal architecture already permits it in certain contexts.
GINA's protections in health insurance extend to genetic information about an individual and their family members. But GINA does not cover life, long-term care, or disability insurance. In those markets, family health history, including genetic information, can legally inform underwriting decisions. If a life insurer gains access to your genetic test results, whether directly or through a data broker chain that originates with a wellness app, that information could theoretically affect not only your premiums but the risk assessments applied to your blood relatives.
The data broker ecosystem compounds this risk. When mental health data, genetic information, and biometric data are aggregated, linked, and sold, the dossier assembled on one family member can reveal information about others. A genetic predisposition identified in one sibling's wellness app profile implies a statistical probability for the other. A family history of cardiac disease logged in one person's health tracker creates inferences about their children. Data brokers already categorise individuals by family size and household composition, as the Blue Shield breach demonstrated, making it straightforward to link related individuals within their databases.
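The inferential leap from one relative to another is not hand-waving; for genetic data it follows from textbook Mendelian arithmetic. The sketch below assumes a rare autosomal dominant variant inherited from one heterozygous parent, with full siblings and no other information; real-world inferences would be messier, but the direction is the same.

```python
# Back-of-envelope Mendelian inference under stated assumptions:
# a rare autosomal dominant variant found in one sibling almost
# certainly came from one heterozygous parent, so each full sibling
# independently carries it with probability 1/2.

p_sibling_carrier = 0.5
n_siblings = 3

p_at_least_one_more = 1 - (1 - p_sibling_carrier) ** n_siblings
print(f"P(any one sibling carries it): {p_sibling_carrier:.2f}")
print(f"P(at least one of {n_siblings} siblings carries it): "
      f"{p_at_least_one_more:.3f}")
```

One person's test result, in other words, is never only about that person.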
This is not a speculative future scenario. The infrastructure for such assessments already exists. The only barriers are regulatory, and as this article has documented, those barriers are full of gaps.
Legislative Attempts to Close the Gap
The patchwork of protections is slowly being addressed. On 4 November 2025, Senator Bill Cassidy of Louisiana, chair of the Senate Health, Education, Labor, and Pensions Committee, introduced the Health Information Privacy Reform Act, known as HIPRA. The bill seeks to extend protections similar to HIPAA to health information collected by entities not currently regulated by that law, including fitness apps, wearable device manufacturers, and wellness platforms.
HIPRA defines “applicable health information” broadly as any identifiable or reasonably identifiable data about an individual's health or healthcare, regardless of whether it was created by a healthcare provider, health plan, or clearinghouse. This definition would bring wellness app data squarely within the regulatory framework for the first time.
HIPRA would require regulated entities to implement physical, technical, and administrative safeguards for health information. It would establish breach notification requirements mirroring the HIPAA model. Crucially, it would grant consumers a right to deletion of their health data, something HIPAA itself does not provide. It would also require wellness apps and wearable device companies to notify consumers explicitly that their data is not protected by HIPAA and provide an opt-out mechanism for data generation.
The bill directs HHS, in consultation with the FTC, to publish guidance on applying the “minimum necessary” standard to artificial intelligence and machine learning technologies and to create national de-identification standards. It further calls for the National Academies of Sciences, Engineering, and Medicine to study the feasibility of compensating consumers for sharing their identified health data.
HIPRA remains early in the legislative process, filed as S.3097 in the 119th Congress. Its passage is far from guaranteed. The bill's federal standards would override any conflicting state laws that offer weaker protections, though states would remain free to adopt stricter rules.
At the state level, progress has been more tangible. Washington's My Health My Data Act, which came into force on 31 March 2024, became the first privacy-focused law in the United States explicitly designed to protect personal health data falling outside HIPAA. The law was introduced as part of a legislative response to the Supreme Court's 2022 decision in Dobbs v. Jackson Women's Health Organization, with the primary sponsor, Representative Vandana Slatter, describing it as part of a comprehensive package aimed at protecting health privacy, especially for reproductive healthcare.
The Act defines consumer health data broadly, covering past, present, or future physical or mental health status, prescribed medications, gender-affirming care information, reproductive health data, and even precise location information that could indicate a consumer's attempt to receive health services. It requires affirmative, opt-in consent for any collection of consumer health data and grants consumers sweeping deletion rights that go beyond what any other privacy law provides.
Unlike most state privacy laws, Washington's MHMDA includes a private right of action, allowing consumers to sue directly for violations. Courts may award up to three times actual damages, not exceeding $25,000. Nevada enacted a similar law effective the same date, though without the private right of action. Connecticut amended its Consumer Data Privacy Act to include consumer health data within its definition of sensitive data.
These state efforts are meaningful but create their own problems. A patchwork of differing state standards places compliance burdens on companies while leaving residents of states without such laws entirely unprotected. The uneven coverage means that a consumer in Washington enjoys significantly more protection than one in Texas or Florida, creating a geography of privacy that maps poorly onto a digital ecosystem where data flows freely across state lines.
The Ownership Question
At the centre of this entire debate sits a deceptively simple question: who owns your health data?
The answer, under current US law, is surprisingly unclear. HIPAA grants patients rights of access to their medical records but does not establish ownership per se. When data leaves the HIPAA ecosystem and enters the wellness app world, even those access rights evaporate unless state law intervenes.
Most wellness apps address data ownership in their terms of service, and most of those terms grant the company broad licences to use, aggregate, and share the data. Users technically consent to these arrangements when they tap “I agree” on a screen of dense legal text, but meaningful informed consent is a fiction in this context. As Justin Sherman of Duke University observed, the consent framework that underpins the entire data brokerage industry has been twisted beyond recognition.
The commercial implementation of healthcare AI further complicates ownership. As research published in BMC Medical Ethics has noted, AI systems require patient health information to be placed under the control of for-profit corporations. The structure of this public-private interface means that these corporations have an increased role in obtaining, utilising, and protecting patient health information, even as the patients who generated that information exercise diminishing control over it.
OpenAI's launch of ChatGPT Health, which encourages users to connect medical records and wellness app data to the platform, exemplifies this trend. The feature allows users to link services such as Apple Health, Function, and MyFitnessPal so that ChatGPT can help interpret test results and health data. OpenAI has stated that the feature includes purpose-built encryption, isolation, and additional layered protections. But experts at the Center for Democracy and Technology have warned that while these LLM health tools offer the promise of empowering patients, health data remains some of the most sensitive information people can share and requires correspondingly rigorous protection.
The fundamental tension is structural. Users want the benefits of AI-powered health insights. Companies want the data that makes those insights possible. And the legal framework that should mediate between these interests was designed for an era when health records were paper files locked in a cabinet.
The Unresolved Reckoning
The trajectory is clear even if the timeline is not. Wearable adoption is projected to grow from 62 million US users in 2024 to over 92 million by 2029. AI-powered health platforms are proliferating. Data broker networks are expanding. Insurance companies are investing heavily in wearable-driven underwriting models. And the regulatory framework remains, for most Americans, a patchwork of partial protections riddled with exceptions.
The people who stand to lose the most from this arrangement are those who are already vulnerable: individuals with pre-existing conditions, people managing mental health challenges, members of demographic groups for whom wearable devices produce less accurate readings, and families whose genetic information enters the commercial data ecosystem without their full understanding of the consequences.
There is nothing inherently wrong with using technology to improve health outcomes. Fitness trackers save lives. AI diagnostic tools catch diseases earlier. Personalised wellness recommendations help people make better choices. But the infrastructure that delivers these benefits is the same infrastructure that enables surveillance, discrimination, and the quiet erosion of medical privacy.
The question is not whether to use these tools. It is whether the legal and regulatory systems will evolve quickly enough to ensure that the most intimate details of human biology remain under the control of the humans who generate them, rather than the corporations that collect, aggregate, and sell them.
Right now, the answer to that question is no.
References and Sources
U.S. Department of Health and Human Services. “The Access Right, Health Apps, and APIs.” HHS.gov. https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access-right-health-apps-apis/index.html
Dickinson Wright. “App Users Beware: Most Healthcare, Fitness Tracker, and Wellness Apps Are Not Covered by HIPAA.” Dickinson Wright Insights. https://www.dickinson-wright.com/news-alerts/app-users-beware
HIPAA Journal. “Majority of Americans Mistakenly Believe Health App Data is Covered by HIPAA.” https://www.hipaajournal.com/americans-mistakenly-believe-health-app-hipaa/
Kim, Joanne. “Data Brokers and the Sale of Americans' Mental Health Data.” Duke University Sanford School of Public Policy, Tech Policy Program, 2023. https://techpolicy.sanford.duke.edu/data-brokers-and-the-sale-of-americans-mental-health-data/
Scientific American. “How Data Brokers Make Money Off Your Medical Records.” https://www.scientificamerican.com/article/how-data-brokers-make-money-off-your-medical-records/
Compliancy Group. “Health Data Brokers: Data Collection Methods and Practices.” https://compliancy-group.com/health-data-brokers-sell-lists-of-depression-anxiety-sufferers/
Blue Shield of California / HIPAA Journal. “Blue Shield of California Announces Impermissible Disclosure of PHI to Google Ads: 4.7 Million Affected.” April 2025. https://www.hipaajournal.com/blue-shield-of-california-google-ads-data-breach/
Federal Trade Commission. “Updated FTC Health Breach Notification Rule Puts New Provisions in Place to Protect Users of Health Apps and Devices.” April 2024. https://www.ftc.gov/business-guidance/blog/2024/04/updated-ftc-health-breach-notification-rule-puts-new-provisions-place-protect-users-health-apps
Federal Trade Commission. “Proposed FTC Order Will Prohibit Telehealth Firm Cerebral from Using or Disclosing Sensitive Data for Advertising Purposes.” April 2024. https://www.ftc.gov/news-events/news/press-releases/2024/04/proposed-ftc-order-will-prohibit-telehealth-firm-cerebral-using-or-disclosing-sensitive-data
Genetic Information Nondiscrimination Act (GINA) Overview. National Human Genome Research Institute. https://www.genome.gov/about-genomics/policy-issues/Genetic-Discrimination
American Medical Association. “Genetic Discrimination.” https://www.ama-assn.org/public-health/population-health/genetic-discrimination
PMC / European Journal of Human Genetics. “Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-Term Care and Life Insurance.” https://pmc.ncbi.nlm.nih.gov/articles/PMC6354179/
MobiHealthNews. “When Fitness Data Becomes Research Data, Your Privacy May Be at Risk.” https://www.mobihealthnews.com/news/contributed-when-fitness-data-becomes-research-data-your-privacy-may-be-risk
WTW. “WTW and Klarity Collaborate to Boost Insurance Underwriting Accuracy by Harnessing Wearable Health Technology.” August 2025. https://www.wtwco.com/en-us/news/2025/08/wtw-and-klarity-collaborate-to-boost-insurance-underwriting-accuracy-by-harnessing-wearable-health
GlobalData. “2024 Emerging Trends Insurance Consumer Survey.” Referenced via Life Insurance International. https://www.lifeinsuranceinternational.com/analyst-comment/over-half-of-us-consumers-share-wearable-data-tailored-life-insurance/
Munich Re. “The Future Is Now: Wearables for Insurance Risk Assessment.” https://www.munichre.com/us-life/en/insights/future-of-risk/wearables-the-future-is-now-wearables-for-insurance-risk-asses.html
U.S. Equal Employment Opportunity Commission. “Wearables in the Workplace.” January 2025. Referenced via Disability, Leave & Health Management Blog. https://www.disabilityleavelaw.com/2025/01/articles/eeoc-guidance/eeoc-issues-new-guidance-on-wearable-technologies-key-points-for-employers/
U.S. Senate Committee on Health, Education, Labor and Pensions. “Chair Cassidy Introduces Bill to Protect Americans' Private Health Data.” November 2025. https://www.help.senate.gov/rep/newsroom/press/chair-cassidy-introduces-bill-to-protect-americans-private-health-data
Congress.gov. “S.3097 – Health Information Privacy Reform Act.” 119th Congress (2025-2026). https://www.congress.gov/bill/119th-congress/senate-bill/3097/all-actions
Washington State Legislature. “Chapter 19.373 RCW: Washington My Health My Data Act.” https://app.leg.wa.gov/RCW/default.aspx?cite=19.373&full=true
Center for Democracy and Technology. “AI Health Tools Pose Risks for User Privacy.” https://cdt.org/insights/ai-health-tools-pose-risks-for-user-privacy/
OpenAI. “Introducing ChatGPT Health.” https://openai.com/index/introducing-chatgpt-health/
BMC Medical Ethics. “Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era.” https://link.springer.com/article/10.1186/s12910-021-00687-3
Beinsure. “Wearable Technology in Insurance Use Cases.” https://beinsure.com/wearable-technology-smart-watches-fitness-devices-changing-insurance/
Cyberscoop. “Your AI Doctor Doesn't Have to Follow the Same Privacy Rules as Your Real One.” https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/
IBM. “Cost of a Data Breach Report 2025.” Referenced via Wolters Kluwer. https://www.wolterskluwer.com/en/expert-insights/health-system-size-impacts-ai-privacy-and-security-concerns
FTC Privacy and Security Enforcement. “Privacy Law Recap 2024: Regulatory Enforcement.” Referenced via Perkins Coie. https://perkinscoie.com/insights/update/privacy-law-recap-2024-regulatory-enforcement
Goodwin Law. “Washington's My Health My Data Act Comes Into Force.” March 2024. https://www.goodwinlaw.com/en/insights/publications/2024/03/alerts-technology-hltc-my-health-my-data-act-mhmda
Wilson Sonsini. “Senator Cassidy Introduces Sweeping Health Privacy Bill.” November 2025. https://www.wsgr.com/en/insights/senator-cassidy-introduces-sweeping-health-privacy-bill.html
FACING OUR RISK (FORCE). “GINA Overview.” https://www.facingourrisk.org/privacy-policy-legal/laws-protections/privacy-nondiscrimination/GINA/overview

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk