
Why doctors using ChatGPT are unknowingly violating HIPAA

Your doctor’s new assistant could be a chatbot – and that could be a privacy problem.

With the rise of artificial intelligence, clinicians are turning to chatbots like OpenAI’s ChatGPT to organize notes, produce medical records, or write letters to health insurers. But clinicians deploying this new technology may be violating health privacy laws, according to Genevieve Kanter, an associate professor of public policy at the USC Sol Price School of Public Policy.

Kanter, who is also a senior fellow at the Leonard D. Schaeffer Center for Health Policy & Economics, a partner organization of the USC Price School, recently co-authored an article in the Journal of the American Medical Association explaining the emerging issue. To learn more, we spoke to Kanter about how clinicians are using chatbots and why they could run afoul of the Health Insurance Portability and Accountability Act (HIPAA), a federal law that protects patient health information from being disclosed without the patient’s permission.

How are doctors using chatbots?

Physicians are using ChatGPT for many things, mainly to consolidate notes. There has been a lot of focus on using AI to quickly find answers to clinical questions, but a lot of practical interest among physicians has also been in summarizing visits or writing correspondence that everybody has to do, but nobody wants to do. A lot of that content has protected health information in it.

When physicians meet with their patients, they want to be fully engaged instead of taking notes. They may take brief notes but then have to elaborate on them for the medical record later. It’s much easier, and better for the patient-physician interaction, to have the physician focus on the patient and have the encounter recorded and transcribed. The transcription can then go into the ChatGPT window, which reorganizes and summarizes it.
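To make that data flow concrete, here is a minimal sketch, assuming the OpenAI Python SDK with a hypothetical model choice and prompt, of what that summarization step looks like in code. Nothing in it is specific to any health system; the point is simply that the transcript leaves the local machine as soon as the request is made.

```python
# Minimal sketch only -- model name, prompt, and client setup are assumptions,
# not a recommended workflow. The transcript text is transmitted to OpenAI's
# servers the moment this request is sent, which is the HIPAA concern at issue.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def summarize_visit(transcript: str) -> str:
    """Ask the chatbot to turn a raw visit transcript into a draft note."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Summarize this visit transcript as a concise clinical note."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```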

What’s the privacy risk? 

The protected health information is no longer internal to the health system. Once you enter something into ChatGPT, it is on OpenAI servers and they are not HIPAA compliant. That’s the real issue, and that is, technically, a data breach.

It’s mostly a legal and financial risk: the Department of Health and Human Services (HHS) could investigate and fine you and the health system. But there are also real security risks with the data sitting on third-party servers.

Physicians can opt out of having OpenAI use the information to train ChatGPT. But regardless of whether you’ve opted out, you’ve just violated HIPAA because the data has left the health system.

What protected health information could be in these notes? 

There are 18 identifiers that are considered protected health information, so if those are included, then it would be a HIPAA violation. If you don’t have any of those, you’re fine. A lot of those identifiers are things like geographic regions smaller than a state – information that you wouldn’t normally think of as identifiable.

Other examples are patient names, including nicknames; dates of birth; admission or discharge dates; and Social Security numbers.

Would a patient know if a doctor put their data in ChatGPT? 

The only people who would know would be the person who accidentally entered it into the chat, and the clinic or health system they work for if the error is reported there, so a regular consumer wouldn’t necessarily know. Patients usually only hear about it when newspapers get wind of an incident through the grapevine and report on it. Class action lawsuits from patients are a possibility once that happens, but the Office for Civil Rights – the office within HHS that enforces HIPAA – is usually already investigating the incident by the time patients find out.

How can physicians and health systems stay HIPAA compliant?

Clinicians should avoid entering any protected health information into a chatbot, but that can be harder than it sounds. Transcripts of patient encounters can include friendly chit chat that includes personal information – like where people live and the last time they were admitted to the hospital – that is considered identifying information. All of these identifiers need to be scrubbed before any chatbot is used.
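As a rough illustration of what scrubbing might involve, here is a minimal Python sketch, with made-up regex patterns, that redacts a few obvious identifiers before any text would reach a chatbot. It is only a sketch: catching all 18 HIPAA identifiers, including names, nicknames, and small geographic areas, takes dedicated de-identification tooling and human review, not a handful of patterns.

```python
import re

# Illustrative patterns only -- a real de-identification pipeline needs far
# more than regexes (names, nicknames, and geographic details don't follow
# fixed formats), but this shows the general idea of scrubbing before use.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def scrub(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text


print(scrub("Pt. admitted 3/14/2023, SSN 123-45-6789, call 555-867-5309."))
# -> Pt. admitted [DATE REMOVED], SSN [SSN REMOVED], call [PHONE REMOVED].
```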

As for health systems, they need to provide training on chatbot risks. They need to start now, as people are beginning to try the chatbots, and the training should continue as part of annual HIPAA and privacy training. A more restrictive approach would limit chatbot access to only employees who’ve been trained, or block network access to chatbots altogether. Along with the rest of us, health systems are trying to figure this out.

To learn more, read the “viewpoint” article co-authored by Kanter and Eric Packel, “Health Care Privacy Risks of AI Chatbots.”



