OpenAI, known for developing ChatGPT, has shifted its stance on military collaboration, now actively working with the Pentagon on several projects, including cybersecurity and veteran suicide prevention. This new direction marks a departure from OpenAI’s previous policy against using its artificial intelligence for military purposes.
Anna Makanju, OpenAI’s Vice President of Global Affairs, discussed the work in an interview at Bloomberg House during the World Economic Forum in Davos. The company is contributing to open-source cybersecurity software development in partnership with the US Defense Department. This collaboration involves DARPA’s AI Cyber Challenge, announced last year, which seeks software that can automatically find and fix vulnerabilities and defend against cyberattacks.
In addition, OpenAI has initiated discussions with the US government on how its technologies might help prevent veteran suicides, reflecting the company’s expanding role in addressing critical societal issues.
OpenAI recently updated its terms of service, removing the clause that explicitly banned AI use in “military and warfare applications.” Makanju explained that this change was part of a broader policy update to accommodate new applications of ChatGPT and other tools. Despite this shift, OpenAI maintains a strict prohibition against using its technology to develop weapons, cause destruction, or harm individuals.
Microsoft Corp., OpenAI’s largest investor, already has several software contracts with the US armed forces and other government branches, indicating a broader trend of tech giants engaging with defense sectors.
Beyond its defense collaborations, OpenAI is also stepping up its election-security efforts. The company is dedicating resources to ensure its generative AI tools are not misused to spread political misinformation. Sam Altman, CEO of OpenAI, emphasized the importance of safeguarding elections, underscoring the company’s commitment to addressing the challenges AI poses in the political arena.