OpenAI, the creator of the ChatGPT chatbot, has quietly removed its ban on the military use of its artificial intelligence tools.
The change comes even as the company's policies continue to state that users should not "use our service to harm yourself or others," including by developing or using weapons. Until Wednesday, OpenAI's policies page also specified that the company did not allow its AI models to be used for any activity carrying a high risk of physical harm.
The revision also comes as OpenAI begins working with the United States Department of Defense (DOD) on AI tools, including open-source cybersecurity tools. OpenAI's VP of global affairs, Anna Makanju, announced the collaboration during an interview at the World Economic Forum in Davos, Switzerland.
While OpenAI has removed the specific reference to the military from its policies page, a company spokesperson said the aim of the policy shift is to provide clarity and to allow for military use cases the company does agree with, according to CNBC.
The spokesperson added that while the company's policy still prohibits using its tools to harm people, develop weapons, conduct communications surveillance, or injure others, some national security use cases do align with OpenAI's mission.
The shift follows several years of controversy over tech companies developing technology for military use, fueled in large part by the public concerns of tech workers, particularly those working on artificial intelligence.
Workers at nearly every tech giant involved in military contracts have voiced concerns since thousands of Google employees protested Project Maven, a Pentagon program that used Google AI to analyze drone surveillance footage.
OpenAI's cooperation with the DOD spans cybersecurity projects as well as exploring ways to prevent veteran suicide. The changes to the company's policy appear to have been made on Jan. 10, 2024, according to Ars Technica.
The move appears to align OpenAI more closely with the needs of various government departments, including efforts such as veteran suicide prevention.
A Shift in Policy
During the interview, Makanju said the company has been working with the DOD on cybersecurity tools for the open-source software that secures critical infrastructure. The effort marks a significant change from the company's initial stance on military partnerships.
In an email, OpenAI spokesperson Niko Felix said the company aims to create a set of universal principles that are easy to both remember and apply, particularly now that its tools are used globally by everyday users who can also build GPTs themselves.
He added that a principle such as "Don't harm others" is broad yet easily grasped and relevant in numerous contexts, and that the policy cites weapons and injury specifically to offer clear examples.
However, the spokesperson declined to say whether the vaguer "harm" ban encompasses all military use. In a separate email, Felix said OpenAI plans to create "cybersecurity tools" with DARPA, The Intercept reported.