One of the biggest concerns about artificial intelligence is how it can be used to alter or manipulate an election.
ChatGPT maker OpenAI says it shares those concerns. The company recently outlined plans to prevent its tools from being used to spread election misinformation to voters.
This comes as more than 50 countries prepare to hold national elections in 2024, including the United States. The preventative measures combine preexisting policies with newer initiatives aimed at curbing misuse of OpenAI's widely used AI tools.
AI tools are novel in that they can transform, manipulate, and combine data from multiple media formats in seconds. That capability is dangerous: images and even video created this way can be convincing enough to fool the keenest eye.
The new steps apply only to OpenAI's own tools. The company says it plans to continue its platform safety work by elevating accurate voting information, enforcing its policies, and improving transparency.
Recently, OpenAI became embroiled in a legal dispute with The New York Times after it was revealed that ChatGPT was trained on the Times' subscription-based content.
The World Economic Forum has identified false and misleading information, amplified by artificial intelligence, as an immediate threat to the global economy.
OpenAI's Legal Issues
OpenAI is bracing for a legal battle with The New York Times and several authors over the "fair use" of copyrighted works. Separately, as part of the new election measures, OpenAI says it will not allow people to use its technology to create chatbots that impersonate real candidates or governments, misrepresent how voting works, or discourage people from voting.
The company further stated that it will not allow its tools to be used for political campaigning or fundraising until more is known about how persuasive the technology can be.
OpenAI said it will digitally watermark AI images created with its DALL-E image generator and tag content with information about its origin, making it easier to determine whether an image appearing elsewhere on the web was created with an AI tool.
The company also stated that it will partner with the National Association of Secretaries of State to steer ChatGPT users who ask logistical questions about voting to accurate information on the group's nonpartisan website, CanIVote.org.