Artificial intelligence: Compliance considerations for provider organizations

Artificial intelligence (AI) is nothing new to the healthcare industry; many organizations and clinicians have used such tools in some capacity for years. Imaging-related AI that supports radiologists is one familiar example. More recently, however, interest in these tools has increased markedly in healthcare (and across all industry sectors), including in generative AI, i.e., technology that creates new output based on existing data, and the range of uses continues to expand. AI can create potential efficiencies in care delivery and administrative activities and open new touchpoints for patient engagement. For instance, beyond serving as a clinical decision support tool for practitioners, AI can act as a virtual assistant for practice management and power interactive symptom checkers for consumers. AI tools also have the potential to significantly improve healthcare outcomes, for example by enabling earlier detection of a disease or condition. More generally, it is likely that some individuals in every organization's workforce have tried ChatGPT since its launch in late 2022 to research or draft content as part of their responsibilities. All this innovation makes for an exciting time in healthcare, but the opportunities it presents must be balanced with efforts to mitigate risk.
