2024 will be the pivotal year for AI regulation
Companies big and small should be held accountable by governing bodies for doing more than just moving the needle.
ChatGPT sparked a new wave of interest, innovation, and investment in AI. But plenty of questions about the technology still need to be answered.
One of the biggest is: How do we regulate AI? That question raises others: How do we ensure that AI is safe, secure, accurate, and accountable? How will current and future legislation govern a technology we are still testing and learning about daily?
Last month, Meta, OpenAI, Microsoft, Google, and other major players signed an agreement with the White House, promising to invest in more responsible AI. Not long after, several of these companies formed an industry coalition, the Frontier Model Forum, whose stated aim is to “promote the safe and responsible use of frontier AI systems.”
Big names, small promises.
We are so far behind the E.U., and publicity stunts won’t help us catch up. Just two months ago, E.U. lawmakers passed a draft of a law known as the AI Act. It would place new restrictions on “risky” use cases and require companies like OpenAI to disclose how data is used to create their programs. If passed, the AI Act would be the first major law to regulate artificial intelligence.
Here in the United States, we have released plans and statements, but nothing enforceable or binding. In case you missed it, in October 2022 the White House released its AI Bill of Rights, a set of guidelines created to encourage companies “to make and deploy AI more responsibly and limit AI-based surveillance.” The next debates are likely to be over whether certain applications should be banned and whether we will see more comprehensive tech legislation.
As someone who is in the trenches daily, I think that the AI Bill of Rights is missing the point. Before we can put pen to paper, we must recognize big tech is the problem, not the solution.
This points to another issue: larger companies get more headlines and more seats at the table when discussions such as AI regulation take place. What about the startups and smaller companies that are also making headway in the AI space with their apps and platforms? How do we bring them into the mix to ensure they have a voice on regulation? It’s only fair, since they will face the same scrutiny as the bigger companies.
Quite frankly, we can’t afford to waste any more time before passing comprehensive AI regulation in the U.S. If 2023 is the Year of AI, then 2024 needs to be the Year of AI Regulation. Companies big and small should be held accountable by governing bodies for doing more than just moving the needle. We all need to hold each other responsible and walk the walk.
Although there isn’t a consensus on how regulation should be handled and rolled out in the U.S., it’s fair to say that we all want comprehensive tech legislation and a federal privacy law. Some states have passed their own laws (e.g., the Biometric Information Privacy Act in Illinois), but these are only enforceable at the state level. How will we enforce these laws, and perhaps more importantly, who will enforce them? Will the U.S. impose hefty fines like the E.U.?
As we enter the last quarter of the year, there are many milestones and moments to look forward to. Despite the fear and concern around AI regulation, let’s not discount the exciting opportunities that AI brings to the table. Companies in the U.S. and around the world have introduced cutting-edge platforms and initiatives to transform their industries. We just need to help everyone understand and refine AI creations to ensure a lasting impact.