Generative AI: Risks, responsibilities, and regulation

Bill Gates recently announced that the age of artificial intelligence (AI) has begun, noting that recent developments in generative AI—such as OpenAI’s ChatGPT and Google’s Bard—are “the most important advance in technology since the graphical user interface.”[1] Meanwhile, a Goldman Sachs report predicted that 300 million jobs could be changed or replaced by advancements in generative AI, though most roles will be “complemented rather than substituted.”[2]

Over the past five years, the use cases for AI have become more apparent, and many compliance teams now employ AI-driven tools to assist with run-of-the-mill tasks such as regulatory change management and surveillance. However, while AI is becoming mainstream, new advances in generative AI are taking automated capabilities to a new level, posing fresh challenges as they evolve.
