A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering their tracks – according to safety testing carried out this summer.

OpenAI's GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other's models by pushing them to help with dangerous tasks.
