Artificial stupidity: Can GPT handle the truth?
As a legal tech application, generative AI lacks a few of the attributes that tend to come in handy when practicing law—namely, the ability to think, reason, or exercise judgment.
“Not that you lied to me, but that I no longer believe you,
has shaken me.” —Friedrich Nietzsche, Beyond Good and Evil
Imagine you’re a litigation partner at a law firm. You have a brand-new associate who seems to surpass all expectations: works 24 hours a day, seven days a week, 365 days a year; reads and absorbs millions of pages of material in a fraction of a second.
What’s the catch? The associate is prone to random hallucinations, occasionally making things up out of whole cloth in ways that are extremely convincing but totally false. In short, that’s AI at work for you, at least as it currently exists. More specifically, it’s the generative pre-trained transformer, or GPT, as you’ve probably heard it referred to.
Yet the messaging in the legal technology market is very different. Amazing. Groundbreaking. Game-changing. Revolutionary. While some healthy skepticism has begun to creep into the larger cultural milieu surrounding the AI revolution, when it comes to marketing the newest generation of artificial intelligence tools rapidly making their way into lawyers’ toolboxes, hyperbole prevails, to put it mildly. If you listen to even a fraction of the unabashed market razzmatazz surrounding new legal AI tools, you’d think it’s the greatest boon to the legal profession since the copy machine.
But is it that straightforward? Are glory and prosperity just a few well-placed keystrokes away? Have we done our due diligence, weighing every possible pitfall with prudence and deliberation? Let’s just say the jury is still out.
Putting aside all the nifty demos and incessant cheerleading, if this technology is going to have the impact on the legal industry predicted for it, one issue needs to be understood and addressed: Can generative AI handle the truth? As AI becomes more pervasive, that question is one that more than just a few good lawyers should be taking a hard look at.
As much as the hyperventilating hype purveyors may want to avoid or gloss over the subject, there’s no algorithm for determining whether something is true or false, accurate or misleading, or completely made up. In fact, highly convincing, completely fabricated statements are regular occurrences. Indeed, a whole industry has sprung up attempting to vet GPT-generated content for veracity.
Yet, figuring out what the facts are—i.e., seeking the truth—is fundamental to our legal system. We refer to juries (and judges in bench trials) as triers or finders of fact. Along with applying the law to the facts, determining what really happened (i.e., the truth, again) is the raison d’être of the legal process.
Regardless of your position on where AI currently falls on its evolutionary spectrum, there are some irrefutable truths to keep in mind as you are confronted with the marketing machine trying to sell you on the AI utopia just around the corner . . . if you will just part with a subscription fee:
- The underlying technology is not new. The technology collectively referred to as GPT has been around for a long time: machine learning and neural networks since the 1950s, generative AI since the 1960s, natural language processing since the 1970s, and deep learning since the 1980s. Advances in computing power, together with the vast amount of content available on the internet (garbage in, anyone?), have enabled what is being portrayed as a revolution in computer thinking.
- GPT chooses each word based on the probability that it follows the one before. It puts together sentences by analyzing vast amounts of text (which have been converted into numbers) and then calculating the probability that a given word will follow another, with a random element built in to avoid complete replication. It has been described as autocomplete on steroids, and that’s a pretty good analogy; a sketch of the idea appears after this list.
- GPT doesn’t think, comprehend, understand, reason, or exercise any judgment whatsoever. It views your questions and its answers as numerical patterns devoid of meaning or context. Truth, falsity, reliability, and accuracy never enter into it. To sum it up, you might say, it’s not intelligent.
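To make the “autocomplete on steroids” point concrete, here is a minimal Python sketch of next-word sampling. The vocabulary and probability table are invented purely for illustration; a real GPT derives its probabilities from billions of learned parameters and conditions on far more context than a single preceding word. But the core move, picking each next word by weighted chance, is the same.

```python
import random

# Toy "language model": for each word, the probability of each next word.
# Every number here is invented for illustration; a real GPT learns its
# probabilities from vast amounts of text, not a hand-written table.
NEXT_WORD_PROBS = {
    "the":   {"court": 0.5, "jury": 0.3, "truth": 0.2},
    "court": {"held": 0.6, "found": 0.4},
    "jury":  {"found": 0.7, "held": 0.3},
    "held":  {"that": 1.0},
    "found": {"that": 1.0},
    "that":  {"the": 1.0},
    "truth": {"matters.": 1.0},
}

def next_word(context, temperature=1.0):
    """Sample the next word from the probability table.

    The temperature knob is the built-in "random element" mentioned
    above: below 1.0, the likeliest word dominates; above 1.0, the
    choice gets more random. Nothing here checks whether the output
    is true; it is probability all the way down.
    """
    probs = NEXT_WORD_PROBS.get(context)
    if not probs:
        return None  # no known continuation; stop generating
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]

# Build a "sentence" one word at a time, exactly as described above.
word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the court held that the truth"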
When I asked ChatGPT whether it understands the truth, I got the following response:
I don’t possess personal beliefs or consciousness to understand truth or falsity in the way humans do. My responses are based on patterns in the data I’ve been trained on, but I don’t have the capacity to discern truth or falsity in a subjective or philosophical sense.
At a time when truth and facts are increasingly viewed by some as rare commodities, our legal system needs to maintain trust and faith in its integrity. It is imperative, therefore, that lawyers using this new generation of AI tools exercise the highest level of care and diligence in their application and work hard to understand the technology’s strengths as well as its limitations.
In the end, GPT is just a tool, nothing more and nothing less. It can be used well or badly. There are some potentially very helpful uses of this technology, but important issues remain to be worked out. The probability that it will replace human lawyers any time soon is about the same as the odds that your smartphone and your garage-door opener will conspire to steal your self-driving electric car.