AI startups OpenAI and Anthropic have entered into agreements with the United States government for the research, testing, and evaluation of their artificial intelligence models, the U.S. Artificial Intelligence Safety Institute said on Thursday.
The first-of-their-kind agreements come as both companies face regulatory scrutiny over the safe and ethical use of AI technologies. California lawmakers are poised to vote this week on a comprehensive AI regulation bill that would govern the development and deployment of AI within the state. Under the agreements, the U.S. AI Safety Institute will gain access to major new models from both OpenAI and Anthropic, both before and after their public release. The collaborations will support research to assess the models' capabilities and associated risks.
"We believe the institute plays a crucial role in establishing U.S. leadership in the responsible development of artificial intelligence, and we hope our joint efforts provide a framework for global adoption," said Jason Kwon, chief strategy officer at OpenAI, the maker of ChatGPT. Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.
"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the U.S. AI Safety Institute. The institute, part of the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), will also collaborate with the U.K. AI Safety Institute and provide feedback to the companies on potential safety improvements. The U.S. AI Safety Institute was established last year under an executive order from President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.