OpenAI and Anthropic Partner with US AI Safety Institute for Enhanced AI Safety Measures

OpenAI and Anthropic, two prominent AI companies, have agreed to allow the US government's AI Safety Institute to assess their new AI models. The evaluations will focus on the models' capabilities and risks, and the parties will work together to develop methods to mitigate any potential issues, according to a Bloomberg report.

The agreements were announced Thursday by the Commerce Department's National Institute of Standards and Technology (NIST). "Safety is a crucial component in driving technological innovation," said Elizabeth Kelly, director of the US AI Safety Institute, adding that the agreements mark a significant milestone in ensuring responsible stewardship of AI's future. Working in close collaboration with the UK's AI Safety Institute, the government will offer feedback on potential safety enhancements.

OpenAI's chief strategy officer, Jason Kwon, expressed strong support for the mission of the US AI Safety Institute, saying it plays a critical role in defining US leadership in the responsible development of AI. He also expressed hope that the collaborative work would provide a framework for the rest of the world to build upon.

Anthropic likewise emphasized the importance of building the capacity to test AI models effectively. Jack Clark, the company's co-founder and head of policy, said that safe, trustworthy AI is crucial to maximizing the technology's positive impact, and that the collaboration would strengthen Anthropic's ability to identify and mitigate risks, advancing responsible AI development.

The US AI Safety Institute was established in 2023 as part of the Biden-Harris administration's Executive Order on AI. It is responsible for developing tests, evaluations, and guidelines for responsible AI innovation.

In other news, OpenAI, which is backed by Microsoft, is reportedly planning to raise at least $1 billion in a new funding round, which would value the company at more than $100 billion. Microsoft is entitled to a 49% share of OpenAI's profits, having invested $13 billion in the company since 2019. The news follows OpenAI's announcement last month that it is testing SearchGPT, which combines AI technology with real-time search data to let users search the internet through ChatGPT.

Bottom Line

The collaboration between OpenAI, Anthropic, and the US AI Safety Institute marks a significant step toward ensuring the safety and responsible development of AI technologies. It underscores the importance of cooperation between government and the private sector in setting safety standards and best practices for AI models. Do you think it will effectively mitigate the potential risks associated with AI?

Some articles credit or partially credit other authors even when we do not repost the original article and are only inspired by its content.