Google's CEO, Sundar Pichai, has expressed concerns over the negative side of artificial intelligence and its potential for harm if deployed wrongly. He has called for a global regulatory framework for AI similar to the treaties used to regulate nuclear arms use, and warned that safety concerns may be pushed aside in the competition for technological advancement. In an interview, Pichai also acknowledged that society may not be ready for rapid advances in AI.

Pichai believes that governments must establish global regulatory frameworks to govern AI as it develops. He has also called for a pause of at least six months in the creation of 'giant' AIs, amid concerns that the technology could spiral out of control. Pichai admitted that Google does not fully understand how its AI technology produces certain responses, describing it as a 'black box.'

He warned that AI could cause harm through disinformation, for example by generating convincing videos of people appearing to say things they never said. The pace at which AI technologies are advancing, he added, appears mismatched with society's capacity to adapt, making it essential to ensure the technology is safe for public use.

Google's parent company, Alphabet, has already launched Bard, an AI-powered chatbot that Pichai says is safe for public use. However, more advanced versions of the Bard chatbot are being held back for further testing.