
Can generative AI master emotional intelligence?

It’s possible AI chatbots will achieve humanlike ‘intuition’ and ‘agency,’ researchers say.

[Source photo: Florian Olivo/Unsplash; Baran Lotfollahi/Unsplash; Milad Fakurian/Unsplash]

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

GENERATIVE AI’S NEXT FRONTIERS INCLUDE EMOTIONAL INTELLIGENCE AND ADVANCED INFERENCE

Even though the tech world is still working through early kinks with generative AI—training large language models (LLMs) to understand, for example, when a user prompt calls for a factual answer and when it calls for a creative one—researchers are already thinking about the next frontiers of communication skills they hope to program into LLMs.

Compared to humans, LLMs still lack complex cognitive and communicative skills. We humans have intuitions that take into account factors beyond the plain facts of a problem or situation. We can read between the lines of the verbal or written messages we receive. We can imply things without explicitly saying them, and we understand when others are doing the same. Researchers are working on ways to imbue LLMs with such capabilities. They also hope to give AIs a far better understanding of the emotional layer that shapes how we humans communicate and interpret messages.

AI companies are also thinking about how to make chatbots more “agentic”—that is, better at autonomously taking a set of actions to achieve a larger goal. (For example, a bot might arrange all aspects of a trip or carry out a complex stock-trading strategy.) But this raises obvious safety questions: What if a chatbot goes off to work toward a goal without a clear understanding of both the letter and the intent of the human’s commands? Can a bot be trained to recognize when it doesn’t properly understand the human’s real intentions? And, most importantly, shouldn’t AI companies give chatbots the skills to fully understand the finer points of human commands before training them to act autonomously?
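To make “agentic” concrete, here is a minimal sketch of the loop such systems run: a model proposes the next action toward a goal, a harness carries it out, and a confirmation gate pauses before consequential steps. Everything below is hypothetical; the hard-coded “plan” stands in for a real model call, and none of the names refer to an actual vendor API.

    # A minimal, illustrative agent loop. The "model" is a hard-coded
    # stand-in; Action and propose_next_action are hypothetical names.
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        args: dict = field(default_factory=dict)

    def propose_next_action(goal, history):
        # Stand-in for an LLM call that picks the next step toward the goal.
        plan = [Action("search_flights", {"dest": "Lisbon"}),
                Action("book_hotel", {"nights": 3})]
        return plan[len(history)] if len(history) < len(plan) else None

    def run_agent(goal, confirm=True):
        history = []
        while (action := propose_next_action(goal, history)) is not None:
            if confirm:
                # The safety gate the questions above point at: a human
                # signs off before the agent takes any consequential step.
                print(f"Proposed: {action.name} {action.args} -- approve?")
            history.append(action)  # a real harness would execute and log results
        print(f"'{goal}': {len(history)} steps proposed.")

    run_agent("arrange a three-day trip to Lisbon")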

HOW MICROSOFT AND SATYA NADELLA ARE WINNING BIG TECH’S AI WAR

In his latest feature story, Fast Company global tech editor Harry McCracken sheds light on Microsoft’s early courtship of OpenAI, with insights from Microsoft CEO Satya Nadella and others who were directly involved in the deal. As McCracken shows, the two companies’ relationship wasn’t always smooth sailing:

Seeing a prospective customer for its Azure cloud platform, Microsoft had given the fledgling company [OpenAI] some credits for complimentary computing time. As those freebies dwindled, OpenAI began shifting its workload to Google Cloud, seemingly winding down its relationship with Microsoft before it really got underway.

By 2017, OpenAI was working on its version of the transformer models that had been developed at Google. OpenAI dramatically increased the size of the model, as well as the corpus of training data and the compute power used. If there’s a “secret sauce” in OpenAI’s work, it’s the way the company’s scientists foresaw and planned for the stunning performance increases the dramatic scale-up delivered:

After running into OpenAI CEO Sam Altman at a conference and briefly discussing the possibility of official collaboration, he [Nadella] asked Microsoft CTO Kevin Scott to visit the company and assess GPT with a dispassionate eye. “I went there definitely a little bit skeptical,” Scott recalls. “And they had such excellent clarity of vision about where they thought things were headed, and some experimental data to show that it wasn’t just ungrounded hypothesizing about the future—that something was really happening.”
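The “experimental data” Scott mentions is of a piece with the empirical scaling laws OpenAI later published in 2020: a language model’s loss falls predictably, as a power law, as parameters, data, and compute grow, the kind of result that lets a lab extrapolate from small training runs to very large bets. Below is a toy sketch of that extrapolation; the constants are invented for illustration, not OpenAI’s numbers.

    # Hedged sketch of power-law extrapolation: loss ~ c0 * compute^(-alpha).
    # c0 and alpha are made-up values chosen only for illustration.
    def predicted_loss(compute, c0=6.0, alpha=0.05):
        return c0 * compute ** -alpha

    # Fit on small runs, then read off what a 1,000x scale-up should buy.
    for c in (1e18, 1e20, 1e21):
        print(f"compute {c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")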

As McCracken writes, Scott immediately saw how the GPT model could enhance Microsoft products. This led Microsoft to invest an initial $1 billion in OpenAI in July 2019, which bought it “preferred partner” status for commercializing OpenAI’s models. OpenAI got a cash infusion and access to ample computing power on Microsoft’s Azure servers.

After that, Microsoft’s GitHub began experimenting with GPT-3 to generate computer code from plain-language prompts, and the results were astonishingly good (though not perfect). This led to GitHub’s release of the “Copilot” coding assistant, now used by more than a million developers to take some of the grunt work out of coding (a toy illustration of the prompt-to-code pattern appears at the end of this section). This was an important event because it offered Microsoft proof that OpenAI’s models could be productized. Microsoft would later adopt the “copilot” concept to brand its new GPT-driven features:

In the late summer of 2022, Microsoft executives had yet another “holy shit” experience when OpenAI engineers showed them a rough draft of its most capable LLM to date. Code-named Davinci 3 (and later called GPT-4), it generated text that was far more fluid and factual than that of its predecessors.

This was just a few months before generative AI’s “big bang”—the public release of ChatGPT in late November 2022. In January of this year, Microsoft locked in its priority access to the OpenAI models by acquiring an estimated 49% of the startup for a reported $10 billion. By then, numerous Microsoft teams were sprinting to build GPT-4-powered integrations for everything from Bing search to Microsoft 365.
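As for what that prompt-to-code pattern looks like in practice: the developer writes a plain-language comment, and the assistant suggests code beneath it. The snippet below is our own toy illustration of the interaction, not actual Copilot output.

    # Developer's plain-language prompt, written as a comment:
    # Return the n most common words in a text, ignoring case.
    from collections import Counter

    def top_words(text, n):
        # A Copilot-style completion of the comment above might look like this.
        return Counter(text.lower().split()).most_common(n)

    print(top_words("the cat sat on the mat the end", 2))  # [('the', 3), ('cat', 1)]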

A FEDERAL COURT DEALS ANOTHER BLOW TO AI-GENERATED ART

The U.S. legal system is beginning to hash out a jurisprudence around AI-generated art, and it looks like bad news for artists who eschew paintbrushes for Stable Diffusion.

Back in 2018, when AI artist Stephen Thaler sought to copyright an image generated by an AI tool he himself created, the Copyright Office refused, stating that the work “lacked human authorship.” Now a federal court has upheld the Copyright Office’s denial, saying that human authorship “is an essential part of a valid copyright claim.”

Last year, AI artists appeared to have gained a victory when Adobe AI evangelist Kris Kashtanova was granted copyright protection for a comic book containing images generated by the AI tool Midjourney. Kashtanova applied for the copyright with the intention of setting a precedent for AI works. But the Copyright Office later revoked protection from the AI-generated images, leaving only the human-created words and layout protected.

The Copyright Office’s two rejections, now backed by a federal court decision, could form the outlines of a legal doctrine that will be both cited as precedent and challenged in the many generative-art ownership cases that are sure to come.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
