
Amazon, Google, Meta, Microsoft and other leading developers of artificial intelligence technology have agreed to a series of non-binding safeguards proposed by President Joe Biden's administration.
The White House said on Friday that seven US companies have committed to having their artificial intelligence products verified as safe before making them public.
Part of that commitment is accepting third-party oversight of AI systems intended for widespread use, although no details have been provided on who will audit the technology or hold the companies accountable.
A surge of investment in generative AI tools that produce human-like text and new images has captivated the public, but has also fueled concern about their ability to deceive people and spread misinformation.
The four tech giants, along with ChatGPT creator OpenAI and the firms Anthropic and Inflection, have committed to security testing “conducted in part by independent experts” to guard against key risks, including to cybersecurity, the White House said in a statement.
That testing will also examine the potential for harm to society, such as from prejudice and discrimination, as well as more theoretical risks of advanced artificial intelligence systems taking control of physical systems.
The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.
The non-binding commitments are intended as an immediate way to address AI risks before Congress passes laws to regulate the technology, a step that remains part of longer-term plans.
Tech executives are expected to meet with President Biden at the White House on Friday.
Some critics who want artificial intelligence to be regulated by law said the Biden administration’s decision is a good start, but more needs to be done to hold companies and their products accountable.
Some experts and upstart competitors worry that the type of regulation being floated could be a boon to deep-pocketed incumbents such as OpenAI, Google and Microsoft, while smaller firms are squeezed out by the high cost of developing AI systems that meet the requirements.
Several countries are weighing ways to set AI rules, including European Union lawmakers, who are negotiating comprehensive AI regulations for the 27-nation bloc that could restrict applications deemed to carry the highest risks.
The pledge is heavily focused on security risks, but does not address other concerns, including the impact on jobs and market competition, the environmental resources needed to build AI models, and copyright issues around the writing, art and other human-created works used to teach AI systems how to produce human-like content.