Introduction to AI and Current Hot Topics
Everyone seems to be talking about AI these days. There is no shortage of news stories about new advances in AI, the latest missteps of people relying on "bad" information generated by AI tools, and conjecture about what the future holds as the use of AI spreads.
But does your organization need to worry about AI? For that matter, what is AI anyway?
What is AI?
There is currently no universally accepted definition of "AI" or "artificial intelligence." It is a broad term used to describe technologies that simulate functions we typically associate with human intelligence, such as making predictions, streamlining decisions, generating recommendations, or even creating new content. The concept of AI is not new, however; computers have been able to perform increasingly complex tasks since the 1940s.
Why is everyone talking about AI now?
In late 2022, OpenAI launched a publicly available version of ChatGPT, which quickly went viral. ChatGPT is a type of "generative AI," meaning the technology can generate new, unique content such as documents, images, songs, computer code, and more. Users can engage with ChatGPT through human-like dialogue, which makes the technology accessible to the general public. Given ChatGPT's success, other companies have been scrambling to release their own generative AI tools. The wider use of generative AI technology and its oftentimes impressive outputs have brought generative AI to the forefront of public discourse.
Does my organization need to start thinking about AI?
The short answer is “yes.”
Even if your organization is not a “tech company,” employees and third-party vendors are likely already using generative AI technologies in their work for your organization.
Potential pitfalls of using generative AI
When evaluating the potential risks of using AI technology in your organization, specifically generative AI, it can be helpful to consider the three aspects of generative AI technology where problems are most likely to arise: (1) the information or data used to develop and train the tool; (2) how the tool works (i.e., the algorithms used); and (3) the information (or output) the tool generates. Below are some of the most common issues to consider when evaluating whether to use AI:
IP issues
Infringement claims can arise if an AI tool was trained on a third party's IP without permission or if the tool generates output that is identical or nearly identical to the material that was input. The law also remains unsettled as to exactly when output from a generative AI tool can receive copyright protection, given that copyright protection requires human authorship. IP issues such as these can result in costly litigation or resources wasted on content that cannot be protected.
Confidentiality and privacy issues
Confidentiality issues can also arise if an organization's confidential or proprietary information is input into an AI tool. Depending on the tool, this may be considered a public disclosure of that information: the information may lose its trade secret protection, or the disclosure may impede the future patentability of products. Savvy competitors may also be able to prompt the AI tool to reproduce your organization's inputs. Further, if information subject to a confidentiality agreement is input into an AI tool, breach of contract issues may arise. In a similar vein, privacy issues can arise if personal information is input into an AI tool without proper notice to, or consent from, the data subject, which may violate applicable privacy laws.
Bias
Issues of bias (such as discrimination) can arise if the information input into an AI tool was biased (even unintentionally) or if the tool was not properly designed or tested to prevent biased outputs. In the U.S., the FTC and some state legislatures are already examining bias in the context of AI tools used in the recruiting and hiring process.
False information or “hallucinations”
AI tools can generate false or incorrect information that appears true (known as "hallucinations"), a phenomenon that was the subject of numerous news stories in 2023. Aside from the obvious problem of relying on false or incorrect information, hallucinations can also cause reputational harm to people and organizations. It is important to always have a human double-check the output generated by an AI tool before using it.
Where to start?
Understanding where and how an organization's employees and third-party vendors may already be using generative AI tools is the first step. Implementing policies, providing training, and updating contract provisions are all important steps to help mitigate the risks associated with the use of AI. Each organization's policies and training should be tailored to that organization's industry and risk tolerance. Keeping abreast of new and changing legislation is also key.
Laws addressing AI
In recent years, some laws and regulations specifically targeting the development and use of AI have been passed in the U.S., though not many, most notably in the healthcare and employment spaces. Further, in October 2023, President Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," so more regulation is coming as federal agencies draft AI guidelines in response. The European Union is on the verge of formally adopting the EU AI Act (effective in 2026), the first sweeping AI legislation of its kind.
Despite the limited passage of new AI-specific legislation to date, the use of AI is already regulated under existing laws. The technology may be new(er), but the issues are not. For example, the FTC has already begun regulating issues related to AI under Section 5 of the FTC Act, which prohibits "unfair or deceptive acts or practices." It is important to revisit the regulations that impact your specific industry and consider how they could apply to the use of AI technologies.
Join us for Privacy Day!
If you are interested in learning more about AI, please join us on February 1, 2024, for a one-hour webinar, "Privacy in AI: Hot Issues and Priorities," presented by Heather Buchta, Meghan O'Connor, and Johanna Wilbert. The webinar will take a deeper dive into the privacy implications of AI, including input, contracting, and risk management considerations. You can also reach out to Elizabeth Wamboldt, Ashleigh Giovannini, or your Quarles data privacy attorney for more information.
For a recap of our Privacy Week events, please click here.