Do we need a policy for our use of AI chatbots?


Since its public release in November 2022, ChatGPT, an artificial intelligence (AI) language model, has spawned a range of large language models (LLMs) that are increasingly relied upon for work-related activities. While there is no doubt LLMs and AI chatbots can support our work by editing, fact-checking, drafting, researching, and coding, growing concerns suggest they should not be treated as foolproof tools.

ChatGPT and other LLMs can produce inaccurate results, may infringe the intellectual property rights of third parties, and may breach the data privacy rights of individuals. If they are used to create output at work, their use may breach confidentiality obligations and expose trade secrets. Remember that information you share with an AI chatbot may be retained and reused.
