Manage GenAI tools like a smart intern
To gain real value, use this approach as a best practice in evaluating and adopting new generative AI technologies.
Many conversations about how generative artificial intelligence (GenAI) will impact the workplace seem to focus on one of two opposing views: AI will replace humans, or AI projects should be abandoned now because they aren’t delivering sufficient business value. In the legal world, research shows that companies are increasingly incorporating AI-powered tools, but at the same time believe their employees are unprepared for the impact.
The reality right now is that GenAI and some professions (legal, for one) aren’t working together as well as they could. Some users trust AI outputs too much, others not at all. It feels hard to strike the right balance. That may be due in part to legal professionals’ misunderstanding of how AI works, a wariness perhaps catalyzed into full-blown fear by recent, high-profile failures in both the courtroom and the data science lab.
In navigating these uncertainties, we’re seeing the rise of a middle ground: an understanding that practitioners using AI and GenAI tools can gain demonstrable, quantifiable value from them when their output is managed like the work product of a smart intern. It’s an idea that’s easy for anyone to get their arms around. It’s also part of a three-step approach your company can use to get comfortable with the fact that GenAI doesn’t work quite like more conventional software.
3 Steps to gain trust––and value
Here are three steps any organization can take to build trust in GenAI solutions––the necessary foundation of any business value. To help ensure that using the tool will build trust, rather than erode it, organizations should:
1. Choose tools that embed GenAI improvements into existing workflows: Users should not have to go to a separate website or exit the task they’re doing to use the GenAI tool. Any prospective tool should arrive with precise interaction points already integrated into existing workflows. Curated integration, out of the box, can increase a GenAI tool’s reliability by:
- Avoiding open-ended, unreliable user interactions––think clicking a single button, rather than requiring users to master prompt engineering, or risk a chatbot going off the rails.
- Ensuring that all the relevant context can be included automatically, rather than relying on users to find and upload the necessary supporting documents each time.
2. Reshape users’ mental model of the tech: GenAI solutions are different from algorithmic tools such as search because they don’t produce the same results every time. Instead, they’re flexible, powerful and sometimes inconsistent. It’s best to think of GenAI tools’ capabilities in human terms—like a smart intern. As with a smart intern, these tools can help users do their daily jobs better. And also just like an intern, they sometimes make mistakes.
A mindset shift is critical to successful GenAI adoption: use of the tool must be accompanied by a change in the tacit belief most users hold that “the computer is always right” (or the converse, that if it’s wrong once, it can never be relied on again). Users will need to get comfortable with the indeterminate aspects of this still-new technology to better grasp its potential utility and its limitations.
In practice, an intern mentality encourages users to think about working with GenAI as the evolution of trust in a relationship. When you first start using GenAI, just like on an intern’s first day on the job, you’re going to want to check every bit of work it produces. Over time, analogous to being a couple of months into the summer internship, you may find some tasks that the AI intern performs well enough to accept as a first pass, but still need to check and make your own. There may be other tasks the intern performs so reliably that you don’t even need to check its work. And there may be still other tasks that you don’t want to entrust to the intern at all.
This kind of approach can be implemented at a personal or organizational level, and can be emphasized when training new users on new GenAI tools. Importantly, it accounts for variances in different AI models’ capabilities and performance, the particular tasks at hand, and user risk tolerances.
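As a loose illustration only (the task names and policy tiers below are hypothetical, not any vendor’s feature), the evolving-trust approach can be sketched as a per-task review policy that a team tunes over time, defaulting to full review for anything unproven:

```python
from enum import Enum

class ReviewPolicy(Enum):
    CHECK_EVERYTHING = "check every output"       # the intern's first day
    SPOT_CHECK = "accept as a first pass, then verify"
    TRUSTED = "accept without review"             # trust earned over time
    DO_NOT_DELEGATE = "keep fully human"

# Hypothetical policy table: each organization tunes this to its
# models, tasks, and risk tolerance.
policy = {
    "summarize_deposition": ReviewPolicy.SPOT_CHECK,
    "draft_client_advice": ReviewPolicy.CHECK_EVERYTHING,
    "ocr_cleanup": ReviewPolicy.TRUSTED,
    "final_legal_judgment": ReviewPolicy.DO_NOT_DELEGATE,
}

def review_required(task: str) -> bool:
    # Default to full review for unknown tasks: trust is earned, not assumed.
    return policy.get(task, ReviewPolicy.CHECK_EVERYTHING) != ReviewPolicy.TRUSTED

print(review_required("ocr_cleanup"))        # False
print(review_required("new_untested_task"))  # True
```

The point of the sketch is the default: any task not yet promoted through the trust tiers gets the first-day treatment.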
3. Be able to immediately verify results: As a critical corollary to the point above, users should be able to immediately confirm the validity of any output a GenAI tool produces, within the platform. This capability should be a requirement for any GenAI tool your company considers, so that users can check the AI’s work and build and evolve their trust in it over time. Correctness concerns can be addressed when the output contains citations, links, or other quick ways to trace the tool’s generated work product back to the facts that support it.
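To make the “verifiable output” idea concrete, here is a minimal sketch (the class names and example sources are invented for illustration, not a real product’s API) of pairing each generated claim with the source passages that support it, so unsupported claims can be flagged for extra scrutiny:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # e.g. a document or exhibit identifier
    excerpt: str     # the passage that supports the claim

@dataclass
class GeneratedClaim:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A claim a user can check immediately must point at at least one source.
        return len(self.citations) > 0

answer = [
    GeneratedClaim(
        "The contract was signed on March 3.",
        [Citation("exhibit-12", "Executed this 3rd day of March")],
    ),
    GeneratedClaim("The parties later amended the term."),  # no support attached
]

# Surface only the unverifiable claims for extra scrutiny.
needs_review = [c.text for c in answer if not c.is_verifiable()]
print(needs_review)  # ['The parties later amended the term.']
```

Whatever the implementation, the user-facing requirement is the same: every piece of generated output should carry a one-click path back to its supporting facts.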
Professionals in many fields, including law, are finding it difficult to figure out exactly where and how GenAI can make tasks easier. If you consider the three steps above as a best practice in how to evaluate and adopt GenAI tools, I believe your organization will be much closer to achieving trustworthy results that deliver real business value. At the end of the day, users need to think of GenAI not as infallible, but as a powerful collaborative partner in which you build trust the more you understand and work with it––just like a smart intern.
AJ Shankar is CEO and founder of Everlaw.