Earlier this year, after long and tense negotiations, the EU became the first in the world to implement sweeping rules governing artificial intelligence, specifically targeting powerful programs such as OpenAI's ChatGPT.
Although the rules were first proposed in 2021, they took on new urgency when ChatGPT launched in 2022, demonstrating AI's human-like ability to produce articulate text in a matter of seconds.
Other generative AI systems include Dall-E and Midjourney, both of which can generate images in almost any style from a simple prompt written in everyday language.
"With our artificial intelligence act, we create new guardrails not only to protect people and their interests but also to give businesses and innovators clear rules and certainty," European Commission President Ursula von der Leyen said.
By 2026, companies will have to comply with the AI Act, which strictly bans the use of AI for predictive policing based on profiling, as well as systems that use biometric information to infer an individual's race, religion, or sexual orientation.
If a system is classified as high-risk, the company will face a stricter set of obligations to protect citizens' rights.
"The geographic scope of the AI Act is very broad, so organizations with any connections to the EU in their business or customer base will need an AI governance program in place to identify and comply with their obligations," said Marcus Evans, partner at law firm Norton Rose Fulbright.
Companies that fail to comply with the rules on banned practices or data obligations will face fines of up to seven percent of worldwide annual revenue.
In May, the EU established an "AI Office" of tech experts, lawyers, and economists under the new law to ensure compliance, according to France24.