
POV: Let’s not forget Isaac Asimov’s advice on developing ethical AI

With the proper infrastructure in place, AI can be a force to lift humanity to new heights—but if left to develop without rules or ethical considerations, it could be disastrous.


For decades, intellectual leaders have anticipated a revolution in artificial intelligence. Going back to the 1950s, writers like Isaac Asimov imagined man-made computer systems that could do human tasks better than humans could. Many saw the ways this technology could uplift humanity—but they also saw the harmful potential of such a powerful tool.

That insight sparked a conversation about AI ethics that has lasted 70 years and led to guidelines like Asimov’s three laws of robotics—the first and highest of which was that no robot should harm a human or allow a human to be harmed. Asimov’s laws of robotics are designed to ensure that AI is built for the benefit and advancement of humans.

Now that the AI revolution has arrived—and just as importantly, been made available to the public—there has been an explosion of generative AI that can write essays, paint pictures, and even crack jokes. We also know that it has the capacity to hack systems, spread disinformation, and infringe on privacy and intellectual property.

The giddy rush to explore the possibilities presented by such a powerful tool has left AI ethics in the dust. Seven decades of thought, planning, and rule development went out the window in practically a moment—and now, we find ourselves in a dangerous position. Many of these popular programs have woefully inadequate infrastructure in place to help them vet data and ethically process it.

That is creating a few systemic problems. AI is subject to hallucination, in which models consume so much undifferentiated information that they cannot discern fact from fiction. They are prone to copyright infringement, which is made more likely by an AI system that doesn’t show its sources or guard against plagiarism. And because these models rarely have infrastructure to keep walls between where information comes from and where it goes, they are likely to ingest and share proprietary information.

These problems are serious. Yet perhaps most troubling is that at a moment when we are seeing the potential damage that can result when AI is misused or irresponsibly developed and deployed, we are also seeing some of the largest organizations on earth make the decision to cut their AI ethics teams and budgets. It’s as if an airline were alerted to safety issues with a new jetliner—and instead of investing in fixes and better controls, it fired its safety teams and doubled down on producing more of the same flawed plane at even faster rates.

The risks we face stem from a lack of rules—and while we have not felt the full consequences of these flaws yet, the repercussions are enormous. Without rules, AI can be used to influence elections and to spread misinformation that deceives voters. Without rules, these models can be used to steal proprietary information and undermine intellectual property. Without rules, these models can be used to erode the most faithful and accurate consensus on what is and is not true.

It is clear that, by collectively rushing headfirst into transformational technology without a strong sense of how to use it responsibly, we have lost track of our collective ethical accountability. That’s why leading tech figures have called for a halt to this kind of AI development, with more leaders sounding the alarm daily. And with widely respected artificial intelligence leader Geoffrey Hinton recently departing Google in order to speak openly about the dangers of AI, the alarm bell has grown deafening.

The good news is that there is a better way to approach this challenge: by creating our AI models with a core set of ethics. Any AI creator should start by defining core principles and building structures around those rather than building the program first and dealing with the ramifications later. By thinking through guardrails before deploying technology, we can mitigate or prevent the most problematic uses and abuses of these systems.

We can impose boundaries on our AI models through rules to make them safer and more responsible. Instead of using open systems that pull information from everywhere, we can use closed systems that draw only on the information they are given, and then cite their sources. That helps prevent AI hallucinations by keeping control over what the system learns, and helps avoid plagiarism by making it clear what is and what is not original language. At the same time, it helps protect proprietary information by placing guardrails around where that information can go.
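To make the closed-system idea concrete, here is a minimal, hypothetical sketch of what such a guardrail might look like in code. It is not drawn from any particular product; the document names, the simple keyword-overlap scoring, and the refusal threshold are all illustrative assumptions standing in for real retrieval and citation machinery.

```python
# A minimal, hypothetical sketch of a "closed" question-answering loop:
# the system may draw only on the documents it is explicitly given,
# must cite the source of anything it returns, and refuses to answer
# when the corpus has no confident match. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Document:
    source: str   # where the text came from (used for the citation)
    text: str     # the approved content the system is allowed to use

def tokenize(text: str) -> set[str]:
    """Lowercase word set; a stand-in for real retrieval scoring."""
    return {w.strip(".,!?\"'").lower() for w in text.split() if w.strip()}

def answer(question: str, corpus: list[Document], min_overlap: int = 2) -> str:
    """Return the best-matching passage with its citation, or refuse.

    Because the answer is always a quoted passage from the supplied corpus,
    the system cannot invent facts it was never given, and the citation
    makes clear which language is original and which is sourced.
    """
    q_words = tokenize(question)
    best_doc, best_score = None, 0
    for doc in corpus:
        score = len(q_words & tokenize(doc.text))
        if score > best_score:
            best_doc, best_score = doc, score

    if best_doc is None or best_score < min_overlap:
        # Guardrail: no confident match in the approved corpus, so refuse
        # rather than guess from information the system was never given.
        return "I can't answer that from the documents I've been given."

    return f"{best_doc.text} [source: {best_doc.source}]"

if __name__ == "__main__":
    corpus = [
        Document("employee-handbook.pdf",
                 "Employees accrue 1.5 vacation days per month of service."),
        Document("security-policy.md",
                 "Proprietary data may not be shared outside approved systems."),
    ]
    print(answer("How many vacation days do employees accrue?", corpus))
    print(answer("Who won the 1998 World Cup?", corpus))
```

In a real deployment the keyword matching above would be replaced by proper retrieval, but the design choice is the same: the system answers only from an approved corpus, cites where each answer came from, and refuses rather than guessing.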

The reality is that artificial intelligence is going to change the world whether we like it or not. Reflecting on the dangers AI poses, some have said that while software is eating the world, AI is its teeth. The truth is, AI was always supposed to be the heart. AI was originally envisioned to improve accessibility for people with disabilities, bridge world cultures through machine translation, and enable hands-free experiences behind the wheel that reduce accidents.

With the proper infrastructure in place, it can be a force to lift humanity to new heights—but if left to develop without rules or ethical considerations, it could be disastrous. In the early days of computers, Asimov presented a clear, simple mandate for the fields of robotics and artificial intelligence: Do no harm. We should honor that rule—and optimize our efforts to do good.


ABOUT THE AUTHOR

Igor Jablokov is founder and CEO of Pryon, an AI company focused on enterprise knowledge management.
