Consider HIPAA Implications When Using PHI to Train AI Models, Experts Say

Health care entities and technology companies seeking to use health data within artificial intelligence (AI) systems need a good grasp of the HIPAA implications to avoid inadvertently creating privacy risks, experts say.

Ty Kayam, principal corporate counsel for digital health, artificial intelligence, and technology transactions at Microsoft, and Jodi Daniel, an attorney with Crowell & Moring, spoke on the use of protected health information (PHI) in AI systems at the 41st National HIPAA Summit Feb. 27.[1]

“In the space of using data for AI purposes, from where I sit or what I do day to day, I really ask myself four big questions,” Kayam said. “What are you trying to accomplish? What sort of universe or technology are you in? What is the use case that you want to accomplish? And when do you need data, and then what do you need?”

To answer these questions, it’s essential to determine whether the data needs to be identifiable or can be de-identified, and whether mechanisms exist to mitigate privacy risks, she said. Any entity looking to use data for AI needs to know which laws apply at the state, federal, and global levels, and should consider deploying effective AI privacy risk mitigation strategies, Kayam added.
