White House boosts safety cooperation with eight additional AI companies

The Biden-Harris Administration Secures Voluntary Commitments from Eight AI Companies to Manage the Risks Posed by AI

The Biden-Harris Administration has secured a second round of voluntary safety commitments from eight prominent AI companies. These commitments are aimed at promoting the development of safe, secure, and trustworthy AI technology.

Companies Involved in the Commitments

  • Adobe
  • Cohere
  • IBM
  • Nvidia
  • Palantir
  • Salesforce
  • Scale AI
  • Stability AI

The Administration’s Efforts in Responsible AI Development

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure that the US leads in responsible AI development while managing its risks.

Commitments Made by the Companies

1. Ensure products are safe before introduction:

– The companies will conduct rigorous internal and external security testing of their AI systems before releasing them to the public.

– They will involve independent experts in the assessment process to guard against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal effects.

– Information on managing AI risks will be shared with governments, civil society, academia, and across the industry.

– This sharing includes best practices for safety, information on attempts to circumvent safeguards, and ongoing technical collaboration.

2. Build systems with security as a top priority:

– The companies will invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

– Model weights will only be released when intended and when security risks are adequately addressed.

– The companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems.

3. Earn the public’s trust:

– The companies will develop robust technical mechanisms, such as watermarking systems, to indicate when content is AI-generated (a simplified illustration follows this list).

– The companies will publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use to increase transparency and accountability.

– Research on the societal risks posed by AI systems, including bias and discrimination, will be prioritized.
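
To illustrate what a content-provenance mechanism of this kind might look like, below is a minimal sketch in Python. It is not drawn from any company’s actual watermarking system; the signing key, tag format, and function names are hypothetical, and production systems typically embed watermarks in the generated media itself rather than attaching external metadata.

```python
# Minimal, illustrative sketch of tagging AI-generated content with a signed
# provenance record. All names and the key below are hypothetical examples,
# not any company's real watermarking implementation.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical signing key


def tag_content(text: str, model_id: str) -> dict:
    """Attach a signed provenance record marking the content as AI-generated."""
    record = {"content": text, "generator": model_id, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_tag(record: dict) -> bool:
    """Check that a provenance record was signed by the holder of SECRET_KEY."""
    claimed = record.get("signature", "")
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)


if __name__ == "__main__":
    tagged = tag_content("Example model output.", model_id="example-model-v1")
    print(verify_tag(tagged))  # True for an untampered record
```

In deployed systems, this role is typically played by cryptographically signed content credentials or by statistical watermarks embedded during generation, which survive copying of the content itself rather than relying on attached metadata.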

Global Engagement and Collaboration

The Biden-Harris Administration’s work on these commitments extends beyond the US, with consultations involving international partners and allies. The commitments complement global initiatives such as the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

Conclusion

The voluntary safety commitments from these AI companies mark a significant milestone in responsible AI development. Through collaboration between industry leaders and government, the aim is to ensure that AI technology benefits society while its inherent risks are mitigated.
