August 7, 2024

Addressing the AI in MAIne

When ChatGPT was released in November 2022, it felt like the world would change. A powerful technology, generative artificial intelligence (GAI), emerged beneath our fingertips. GAI is a subset of artificial intelligence (AI) that allows users to rapidly create new content in multimodal formats with simple commands. ChatGPT, a popular GAI product, promised to fill our workdays with less drudgery and more productivity. It largely delivered on that promise, streamlining to-do lists: with acuity, it drafted letters notifying customers of outstanding bills, created personalized marketing brochures, analyzed large datasets, and conversed about actionable insights, among many other things.

Justin B. Cary, Attorney at Drummond Woodsum | AI Practice Group

But now, close to two years after the release of ChatGPT, an informal survey reveals that fewer than 25% of Maine businesses have experimented with GAI. When asked why, respondents most commonly cite the uncertainty associated with the nascent state of the technology and the risks of misuse.

As an attorney at Drummond Woodsum, I appreciate skepticism of lofty promises. In fact, last year, as I presented locally and nationally on my experience in AI/GAI law, I urged restraint in adopting GAI technology in businesses. I spoke to hundreds of employers about GAI's susceptibility to misuse, the unreliability of its outputs, the myriad data privacy and intellectual property issues it raises, and its propensity for bias.

There is good reason to be cautious. Over the last year and a half, a parade of high-profile cases about GAI misuse has marched through the headlines. A few of these cases became canon. In Mata v. Avianca, a lawyer was sanctioned for submitting a legal brief that included non-existent citations generated by ChatGPT. In the Samsung GAI Conversational Leak, Samsung employees accidentally exposed sensitive company information by inputting confidential data into ChatGPT. The list goes on. These cases seem to confirm that GAI is Pandora's box: with increased capability comes misuse, lawsuits, corner-cutting, and unauthorized disclosures.

But closer examination reveals a more important lesson. The throughline in each case is an individual's unfamiliarity with the technology. In Avianca, the lawyers failed to grasp that GAI models are prone to hallucinations (the creation of fake but plausible information without disclosure to the user). In the Samsung incident, employees misunderstood the data privacy implications of submitting a prompt to ChatGPT, and Samsung lacked a policy to guide employee conduct. In every case of GAI run amok, decisionmakers failed to understand a basic capability or deficiency of the technology.

As with the internet, the true risk of GAI lies not in the technology itself but in how humans understand it, or fail to. In many cases, the problem is not the presence of artificial intelligence but the absence of human understanding.

Taking a passive approach to GAI carries legal risks: you might not use GAI to streamline work, but your employees will. GAI also materializes unbidden, in the form of GAI-created deepfakes and scams, GAI-interfacing software with lax data privacy protections, and undisclosed subcontractor use of GAI, catching the uninitiated decisionmaker flat-footed. The passive approach carries business risks as well: money spent on repackaged GAI services that are free elsewhere, time wasted on tasks that GAI could speed up, and competitive edges lost to GAI-savvy competitors.

To remedy this, our Artificial Intelligence Practice Group has been training businesses on best practices for using GAI, drafting policies for GAI in the workplace, counseling clients on negotiating Data Privacy Agreements and Service Agreements with an eye to GAI-related issues, and providing presentations and webinars that teach the first-draft approach to GAI and prompt engineering that safeguards intellectual property and confidential information.

While clients tell me that these services help, the first step is free: create an account with a trusted GAI service such as ChatGPT, Google's Gemini, or Microsoft Copilot, and experiment safely. Ask questions you already know the answers to and check the GAI's work. Then, if you find yourself perplexed, concerned, or excited about what GAI means for your business, consider reaching out to Drummond Woodsum's Artificial Intelligence Law Practice Group. Call 207-253-0568 or email jcary@dwmlaw.com to start the conversation.