Health care cybersecurity policy rests on a fundamental misunderstanding of what cybercriminals actually want. For years, regulators and providers have assumed that medical records — diagnoses, lab results, treatment histories — are the crown jewels hackers are after. This assumption has shaped everything from HIPAA compliance strategies to hospital security budgets.

But it’s wrong.

Cybercriminals targeting health care providers aren’t interested in your cholesterol levels or prescription history. They’re after your Social Security number, your insurance details, and your payment information. This isn’t just an academic distinction — it’s a policy blind spot that’s making health care both less secure and less collaborative than it needs to be.

In my research on health care data breaches, my team and I found that the vast majority involved financial and demographic information, not clinical records. When criminals steal data from hospitals or clinics, they’re not looking to embarrass patients or leak sensitive medical histories. They’re seeking information they can immediately monetize: identifiers to commit identity theft, insurance details to submit fraudulent claims, and credit card data to drain accounts.

This makes economic sense. A Social Security number has clear, transferable value on the dark web. A medication list? Not so much. Clinical data depreciates quickly, requires specialized buyers, and has no obvious payoff in most cases.

Yet under current U.S. regulations, all protected health information (PHI) is treated the same. HIPAA offers equal protection to a vaccination history and a billing address. That blanket approach has serious consequences.

By treating all PHI as equally sensitive, current policy discourages legitimate and often life-saving data sharing. Hospitals are reluctant to share clinical records with specialists. Researchers run into hurdles accessing patient outcomes data. Public health agencies face delays in getting the information they need to track disease and target interventions.

Even when data sharing is technically allowed under HIPAA — such as for treatment or operational improvement — fear of regulatory reprisal often leads institutions to err on the side of caution. That includes declining to participate in health information exchanges, limiting collaboration with research networks, or hesitating to share even deidentified data. In California, liability and cybersecurity concerns have slowed participation in the state’s new health data exchange framework, particularly among smaller and unaffiliated providers.

The result is a chilling effect on data sharing — even in moments of national crisis. During the Covid-19 pandemic, for example, only 38% of hospitals surveyed by the American Hospital Association agreed that they could electronically receive the patient information they needed from outside providers to effectively treat Covid-19 cases.

This overcaution can be dangerous. If a patient arrives unconscious in an emergency room, providers may not have access to their allergy list, past surgeries, or medication history — not because the data doesn’t exist, but because it can’t be easily shared. Rather than streamline care, privacy rules have, in many cases, paralyzed it.

The regulatory overreach also stifles innovation. As AI tools become increasingly central to health care — from diagnosing rare diseases to managing chronic conditions — they require large, diverse, and detailed datasets to function effectively. But because clinical data is treated with the same sensitivity as financial data, access for model training and validation is heavily restricted, even when all identifying information is removed. While concerns about re-identification are valid, current policies often overcorrect, limiting the potential of these tools to improve care.

Meanwhile, patients are already turning to large language models like ChatGPT for basic medical questions. With better data, these tools could level the playing field — offering more accurate guidance to underserved populations, improving early detection, and reducing misdiagnosis. But only if we can responsibly train them on real-world health data.

As economist John Cochrane wrote in a 2018 Wall Street Journal commentary, the U.S. is squandering a massive opportunity to improve care and lower costs by locking up data that could safely be used for public good. Countries with fewer constraints may leap ahead — not because they have better technology, but because they allow their AI systems to learn from real-world data.

The fix sounds counterintuitive: Health care providers should flip their security priorities. Treat financial and demographic information as the highest-risk assets requiring maximum protection. Allow more flexibility in sharing clinical data for treatment, coordination, and research.

This doesn’t mean abandoning privacy. Patients still deserve confidentiality for their health information. It’s also true that in rare cases, such as for public figures, medical details can be specifically targeted.


But for most patients, the greater systemic risk is the theft and misuse of financial and demographic data. Hospital security strategy should reflect this difference in threat levels.

In practical terms, this would mean two key changes:

First, HIPAA should distinguish among types of data. Financial and demographic information should carry stricter access rules, higher encryption standards, and more frequent audits. Clinical data, while still protected, should be easier to share when used for legitimate purposes.

Second, hospitals should restructure their cybersecurity efforts accordingly. Rather than spreading defenses evenly, they should concentrate their strongest protections on billing systems, patient registration databases, and insurance verification platforms — the systems criminals actually target.

Today’s policy creates a false choice between privacy and progress. By assuming that all data is equally sensitive, we’ve built a system where it’s easier to keep information locked up than to use it to improve care.

But it doesn’t have to be this way. Recognizing that different types of data carry different risks would allow health care providers to collaborate more freely, researchers to innovate more quickly, and patients to benefit from more accurate and efficient care — all while better protecting the information hackers actually want to steal.

To move forward, we must face a simple truth: Cybersecurity policy built on the wrong assumptions will produce the wrong outcomes. It’s time to stop protecting the wrong things — and start protecting what truly matters.

John X. Jiang is the Eli Broad endowed professor of accounting and information systems at Michigan State University, with research spanning financial reporting and health care cybersecurity.