Posted on: 10 January 2026
It's Saturday. As you read this, your smartwatch is measuring your heart rate, last night's sleep quality, the steps you took to reach the café where you're drinking your coffee. Your pharmacy app holds your prescription history. Your fitness tracker knows how much you moved this week and, if it's a recent model, your heart rate variability: a marker that medical literature correlates with longevity and cardiovascular risk.
Until yesterday, these data existed in separate silos. Your GP saw one thing, your cardiologist another, your pharmacy a third. Inefficient fragmentation, certainly, but also an accidental form of protection: nobody had the complete picture.
On 26 March 2025, that changed. EU Regulation 2025/327, known as the European Health Data Space (EHDS), came into force, and its obligations phase in over the coming years: European health systems must certify interoperability, which means your electronic health data, from medical records to prescriptions, from diagnostic imaging to lab results, will flow through a common infrastructure. Patients will access their clinical history anywhere in Europe. Researchers will request access to aggregated datasets for scientific studies. Pharmaceutical companies will use this data to develop new drugs and train artificial intelligence algorithms.
The official narrative is impeccable: better healthcare, faster research, personalised medicine, eleven billion euros in savings over the next decade. And technically, it's all true.
But there's a pattern that anyone who's lived through other technological transitions recognises immediately.
The Regulation contains explicit prohibitions. It forbids using health data for marketing, for insurance risk profiling, for credit decisions, for excluding people from policy access. On paper, the protections exist. Anyone reading the text can verify this.
The clinical question isn't whether the prohibitions exist. The question is how long they hold when the infrastructure has been built and the market has learned to value it.
This is where historical memory comes in. The American credit score was born in the 1950s with a precise purpose: letting banks assess loan applicants' reliability. Today that number influences apartment rentals, car insurance premiums, hiring decisions in certain sectors. Nobody planned this expansion. The mechanism is simple: when predictive data exists and a market can benefit from it, the pressure to use it becomes structural. Policies adapt, exceptions multiply, boundaries shift.
Mobile phone location data was protected by stringent policies. Today it's sold through chains of intermediaries so long that tracing the source is nearly impossible. Social media data was supposed to stay private according to terms of service. Then came Cambridge Analytica and dozens of lesser-known cases. Not through conspiracy: through economic logic.
The global healthcare big data market is worth approximately $130 billion today. Projections for 2035 range between $145 and $645 billion depending on estimates. The annual growth rate of health data volume exceeds 36 percent, faster than any other sector including finance. These numbers aren't theory: they're capital seeking returns.
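Those projections are easy to sanity-check. A few lines of arithmetic, taking the figures quoted above at face value (the 2035 endpoints are the range of published estimates, not independently verified), show what they imply:

```python
import math

# Figures quoted in the text (USD billions); treated as given, not verified.
market_today = 130.0
market_2035_low, market_2035_high = 145.0, 645.0
years = 10  # roughly 2025 -> 2035

# Implied compound annual growth rate (CAGR) for each endpoint.
cagr_low = (market_2035_low / market_today) ** (1 / years) - 1
cagr_high = (market_2035_high / market_today) ** (1 / years) - 1
print(f"implied CAGR: {cagr_low:.1%} to {cagr_high:.1%}")  # ~1.1% to ~17.4%

# At 36% annual growth, the volume of health data doubles roughly every 2.3 years.
doubling_time = math.log(2) / math.log(1.36)
print(f"data volume doubling time: {doubling_time:.1f} years")
```

Even the midpoint of that range implies a market that more than doubles in a decade, while the data feeding it doubles four times over.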
While Europe builds its protected infrastructure, the future is already visible in the United States. Since 2018, John Hancock, one of America's largest life insurers, has sold only policies that include wearable device monitoring. Customers who hit physical activity targets receive premium discounts. UnitedHealthcare offers similar programmes. Cigna rewards those who complete fitness challenges tracked by smartwatches.
The rhetoric is positive incentives: we help you stay healthy, we reward you for moving. But the underlying mechanism is risk reclassification based on continuous behavioural data. Those who move little, sleep poorly, or have elevated resting heart rates don't violate any rule. They simply pay more. Or, in insurance logic, they pay the correct price for their risk profile. The difference between these two formulations is purely semantic.
Medical research confirms that resting heart rate, heart rate variability, sleep quality, and physical activity levels are mortality and morbidity predictors. Not perfect, but statistically significant. Significant enough that Munich Re, one of the world's largest reinsurers, has published technical papers on integrating this data into actuarial models. Significant enough that insurers are investing in infrastructure to collect and process it.
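The mechanism the last two paragraphs describe, continuous behavioural data feeding a risk multiplier, reduces to something uncomfortably simple. The sketch below is illustrative only: the thresholds, loadings, and discounts are invented for this post, not taken from any insurer's actual model.

```python
from dataclasses import dataclass

@dataclass
class WearableProfile:
    avg_daily_steps: int
    avg_sleep_hours: float
    resting_heart_rate: int  # beats per minute

def premium_multiplier(p: WearableProfile) -> float:
    """Toy risk reclassification: each 'unhealthy' signal adds a loading,
    each 'healthy' signal earns a discount. All numbers are hypothetical."""
    m = 1.0
    m *= 0.90 if p.avg_daily_steps >= 8000 else 1.10
    m *= 0.95 if p.avg_sleep_hours >= 7.0 else 1.05
    m *= 0.95 if p.resting_heart_rate <= 65 else 1.10
    return m

active = WearableProfile(avg_daily_steps=11000, avg_sleep_hours=7.5, resting_heart_rate=58)
sedentary = WearableProfile(avg_daily_steps=3500, avg_sleep_hours=5.5, resting_heart_rate=78)

base = 1200.0  # hypothetical annual premium in euros
print(base * premium_multiplier(active))     # discount for the "good" profile
print(base * premium_multiplier(sedentary))  # loading for the "bad" profile
```

Nobody in this model is refused a policy and no rule is broken; the two customers simply face different prices for the same cover, which is exactly the reclassification the text describes.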
Back to Europe and its Regulation with explicit prohibitions. Who defends these prohibitions? Data protection authorities, structurally underfunded. Patient associations, with limited bargaining power. Politicians, as long as the issue remains salient in public opinion.
Who has incentive to erode these prohibitions? A pharmaceutical industry that can accelerate drug development with access to real-world data. An insurance sector that can refine risk models. A startup ecosystem building its value on health data analysis. A market worth hundreds of billions growing at double digits annually.
No conspiracy theories required. Just observe the incentives and ask which configuration is more stable over time.
The EHDS allows citizens to opt out, to withdraw consent for secondary use of their data. The mechanism exists, it's documented, it's reversible. But anyone who has studied opt-out rates in systems requiring active action knows that the vast majority of people never take it. The default is inclusion. Architecture determines behaviour more than policy does.
Then there's pseudonymisation. Data for secondary use is stripped of direct identifiers: name, tax identification number, address. But the literature on re-identification demonstrates that sufficiently rich datasets allow identity reconstruction by cross-referencing apparently anonymous variables. A detailed health profile, combined with geographic and temporal data, can be traced back to a specific individual with surprisingly high probability. Not always, not easily, but often enough to constitute a systemic risk.
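The re-identification argument is concrete enough to demonstrate in a few lines. The records below are fabricated; the point is the mechanism: a "pseudonymised" release that keeps quasi-identifiers such as postcode, birth year, and sex can be joined against any auxiliary source that shares them.

```python
# A pseudonymised health release: direct identifiers removed,
# quasi-identifiers (postcode, birth year, sex) retained.
health_release = [
    {"pid": "a91f", "postcode": "20121", "birth_year": 1978, "sex": "F", "dx": "arrhythmia"},
    {"pid": "c044", "postcode": "20121", "birth_year": 1991, "sex": "M", "dx": "diabetes"},
    {"pid": "7be2", "postcode": "50123", "birth_year": 1978, "sex": "F", "dx": "asthma"},
]

# An auxiliary dataset with names -- an electoral roll, a leaked
# marketing database (entries invented for this sketch).
public_roll = [
    {"name": "M. Rossi", "postcode": "20121", "birth_year": 1978, "sex": "F"},
    {"name": "L. Bianchi", "postcode": "20121", "birth_year": 1991, "sex": "M"},
]

def reidentify(release, roll):
    """Link pseudonymous records to names via shared quasi-identifiers."""
    keys = ("postcode", "birth_year", "sex")
    index = {tuple(r[k] for k in keys): r["name"] for r in roll}
    return {rec["pid"]: index.get(tuple(rec[k] for k in keys)) for rec in release}

matches = reidentify(health_release, public_roll)
print(matches)  # two of the three pseudonyms resolve to names
```

Real datasets make the join noisier, but the classic results in this literature found that date of birth, sex, and postcode alone uniquely identified most of a national population. The richer the health profile, the fewer auxiliary attributes the attacker needs.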
Experts interviewed in EHDS preparatory research raise precise concerns. They fear data will be used disproportionately for private profit rather than public benefit. They fear patients will "pay twice: first with their data for big companies, then again for expensive drugs." They fear hospitals will start treating patient data as assets to monetise. These aren't conspiracy theorist speculations: they're the words of sector professionals collected in peer-reviewed academic studies.
Predictive medicine, the kind that promises to identify diseases before they manifest, relies on the same infrastructure. In theory, it serves better treatment. In practice, it creates a category of people who aren't sick but are classified as probably sick. What does being probably sick mean for credit access, employment, insurance? The regulation says it shouldn't mean anything. The market says predictive data always finds a way to be used.
Some argue Europe is different, that GDPR created a unique data protection culture, that EHDS prohibitions will hold. Possible. But it's worth noting that GDPR itself is circumvented daily through dark patterns, consent buried in pages of terms and conditions, and extra-EU transfers of dubious legality. The rules exist. Enforcement is another matter.
Healthcare's digital transition is inevitable and in many ways desirable. Carrying your clinical history on holiday and having a foreign doctor read it is objectively useful. Allowing researchers access to large datasets to study rare diseases is objectively valuable. The problem isn't digitalisation itself.
The problem is that we're building permanent infrastructure on a foundation of temporary protections. Infrastructure outlasts governments, political majorities, regulations. Today's prohibitions can become tomorrow's exceptions and the day after's established practice.
When health data is discussed, debate almost always focuses on privacy as an abstract value. But privacy isn't the only issue. The issue is who controls the predictive power that emerges from aggregating this data and how that power is used to allocate resources: credit, insurance, access to care.
Your body produces data twenty-four hours a day. Every heartbeat, every step, every night of disturbed sleep. Until yesterday, this data was biological background noise. From today, it's an asset in a market worth hundreds of billions and growing faster than any other.
The cold question, the one deserving an answer before delegating everything to system trust, is simple: who is building the infrastructure to read that noise and transform it into decisions about you? And will that infrastructure, once built, answer to today's policies or to those who, tomorrow, will have the power to rewrite them?