AI is still in its early stages, but it is developing fast. And while tech companies are already embracing the technology and injecting it into their products and services, no clear regulations or standards yet govern it. OpenAI, one of the leading companies in the AI industry, recently outlined its view on the matter, along with some ideas that could serve as a starting point for lawmakers.
In a blog post on Monday, OpenAI executives Sam Altman, Greg Brockman, and Ilya Sutskever expressed the company's support for AI regulation. The post compared AI to nuclear energy and synthetic biology as examples of technologies that could benefit humanity despite their risks. As such, the executives argued that AI creators should be compelled to follow certain regulations in order to “manage risk.”
The leaders also detailed three points that could guide AI companies in developing the technology: coordination, a governing body and standards, and the “technical capability to make a superintelligence safe.” Nonetheless, they added that the public should have a say in the matter.
“But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight,” the execs wrote. “We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.”
OpenAI’s views in the post are no surprise, given that the company has long been vocal about AI regulation. During this month’s Senate hearing on AI, Altman voiced the same support. Other private companies, however, have their own varying positions on the subject, including Microsoft and Google, neither of which sent official representatives to the hearing. Microsoft, which has invested billions of dollars in OpenAI, has its own approach to deploying AI across its business. According to reports in March, the company eliminated its “ethics and society” team, a decision employees believed was made to allow faster shipping of AI features in Microsoft products. Meanwhile, crimes involving AI have already been reported.
All of this makes regulation and standards for AI development necessary. And with more businesses investing in AI, pointing toward an increasingly AI-centered future, they are needed as soon as possible. However, this will take more than a supportive voice from one of the industry’s biggest AI leaders: it also requires proactive effort from governments and lawmakers. For now, we are still waiting for bills that would effectively prevent potential AI-related dangers.