
OpenAI's CEO Sam Altman Discusses Potential Dangers and Benefits of GPT-4 AI Language Model

The CEO of OpenAI, Sam Altman, has stated that AI has the potential to become "the greatest technology humanity has yet developed," but that society and regulators must be involved in its rollout to prevent potential negative consequences.

"We've got to be careful here," Altman said. "I think people should be happy that we are a little bit scared of this."

Picture: Sam Altman, CEO of OpenAI (Credit: Kumaon Jagran)

ChatGPT, the chatbot built on OpenAI's language models, is already considered the fastest-growing consumer application in history, reaching 100 million monthly active users within about two months of launch and surpassing the growth rates of TikTok and Instagram. GPT-4, the latest iteration of the underlying model, was released only a few months later.

While Altman celebrated the success of his product, he also acknowledged the potential dangers of AI technology that keep him up at night. One of his biggest concerns is that these models could be used for large-scale disinformation, or even for offensive cyberattacks as they become better at writing computer code.

Altman dismissed the common sci-fi fear of AI models that don't need humans, make their own decisions, and plot world domination. "This is a tool that is very much in human control," he said.

However, he does worry about which humans could be in control of AI technology. "There will be other people who don't put some of the safety limits that we put on," he added. "Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it."

Altman also addressed the issue of misinformation with AI language models, noting that the program can give users factually inaccurate information. He cautioned users about the "hallucinations problem," where the model will confidently state entirely made-up things as if they were facts.

Altman and his team hope that GPT-4 will become a reasoning engine over time, eventually able to use the internet and its own deductive reasoning to separate fact from fiction. The model is 40% more likely to produce accurate information than its predecessor, but Altman still advises users to double-check its results rather than relying on it as a primary source of accurate information.

Another concern with AI language models is the kind of information they can divulge. Altman said that ChatGPT and other OpenAI models are built with safety measures intended to prevent bad actors from using them for malicious purposes, such as learning how to make a bomb.

Despite the potential dangers, Altman remains optimistic about the future of AI technology and its ability to amplify human will. "What I hope, instead, is that we successively develop more and more powerful systems that we can all use in different ways that integrate it into our daily lives, into the economy, and become an amplifier of human will," he said.
