Whether it’s a high school student frantically completing last night’s homework in the few minutes before class, an overworked, underpaid intern desperately trying to finish a research paper, or a government official who simply does not have enough caffeine in his system to write that damn speech, there are many ways to benefit from ChatGPT. Since its release in November 2022, ChatGPT has been a saving grace for many. From large enterprises to smaller companies to law firms, many organisations have adopted ChatGPT to improve their productivity and efficiency. But for all of its convenience and benefits, some need to be reminded of its flaws.
May 2023 brought that stark reminder for Steven A. Schwartz, a US attorney of over thirty years. After his legal team took on a case involving a lawsuit against an airline, Schwartz used ChatGPT to find previous cases that would support the current one. In a brief submitted to the court, Schwartz and his team ended up citing several cases sourced from ChatGPT. Upon reviewing the brief, however, Judge Kevin Castel, a senior federal judge for the Southern District of New York, was shocked to find that six of the cited cases were completely fabricated. As this was a previously unheard-of scenario for the court, Judge Castel demanded that the legal team explain themselves.
Screenshots of the exchange between Schwartz and ChatGPT show that Schwartz asked ChatGPT whether the cases it provided were authentic; ChatGPT responded “yes.” Schwartz’s explanation for how this happened is that he was “unaware its content could be false.”
In the end, Schwartz was fined $5,000 for misleading the court. While his use of AI led to the mishap, it was not the reason for his punishment; rather, it was his mishandling of the information. Instead of verifying the AI-generated results himself, Schwartz relied on ChatGPT’s self-verification. In more severe cases, careless use of AI in legal practice could result in a lawyer’s suspension or even the revocation of a law licence.
How can employees using ChatGPT avoid this? Here are a few tips:
- Transparency and disclosure statements. Be upfront and transparent in disclosing the use of AI in business functions; always inform clientele when AI is being used.
- Legal and compliance considerations. Non-compliance with governmental regulations surrounding AI can result in hefty fines. Understand the applicable policies and follow them accordingly.
- Human oversight. Verify AI-generated results to ensure that the information is accurate and relevant.
- Corporate policy. Create a set of guidelines for how AI must be used in the workplace, including decision-making and accountability.
- AI insurance coverage. Many businesses are considering AI insurance coverage to protect against liabilities arising from AI-related issues: claims of negligence or inadequate work stemming from AI systems, data breaches or cyberattacks facilitated by AI vulnerabilities, and regulatory fines due to non-compliance with AI-related regulations.
- Assessing vendor AI usage. Understand how AI is used in suppliers’ processes to mitigate potential risks.
Don’t miss our upcoming webinar, AI compliance and ethical practices – Ensuring the responsible use of AI in your organisation. Register here.