Reed News

Anthropic and OpenAI Hiring Experts to Mitigate AI Risks in Chemical and Biological Weapons

Science & technology
Key Points
  • Anthropic is hiring a chemical weapons expert to develop safeguards against AI misuse.
  • OpenAI is also recruiting researchers to analyze biological and chemical risks from AI models.
  • Both companies aim to prevent AI from providing instructions for dangerous weapons like 'dirty bombs.'

The American AI company Anthropic is reportedly hiring an expert in chemical weapons and explosives to reduce the risk of its AI systems being used for dangerous purposes, according to the BBC. Anthropic has previously recruited experts in other sensitive areas for similar work. Competitor OpenAI is also seeking researchers to analyze the biological and chemical risks associated with AI models, to prevent the technology from being exploited for harmful purposes.

Currently, international regulations in this area are lacking.
