After Meta CEO Mark Zuckerberg announced his intention for the company to build an artificial general intelligence (AGI) system and make it open source, several tech experts sounded the alarm over how such a move could harm or even threaten humanity.
In his Facebook post, Zuckerberg explained that it had become clear the next generation of tech services required "building full general intelligence," and that Meta planned to open source its AGI system much as it did with its Llama 2 AI model.
While AGI was not a strictly defined term, it commonly referred to a theoretical AI system that could carry out a wide array of tasks at a level of intelligence matching or exceeding that of humans.
In an interview on Thursday with the tech news website The Verge, Zuckerberg said he would lean towards open sourcing for as long as it made sense and was the "safe and responsible thing to do."
Tech Experts Slam Zuckerberg, Other Tech Giants
However, University of Southampton computer science professor Wendy Hall told The Guardian that the prospect of open-sourcing AGI was "very scary" and called Zuckerberg "irresponsible" for considering it.
"Thankfully I think it's still many years away before such an aspiration [AGI] can be achieved in any meaningful way, so we have time to put the regulation systems in place," she said. "But it is a matter of public safety that we progress this work with some urgency."
University of Surrey's Dr. Andrew Rogoyski also emphasized that the open-sourcing of current AI models could either save or condemn the world, meaning that the decisions to do so "need to be taken by international consensus" and not in the boardrooms of tech giants.
Aside from Meta, OpenAI, the developer of ChatGPT, was also building its own AGI, which the company defined as AI systems that are "generally smarter than humans."