‘AI hallucinates’: Sam Altman warns users against putting blind trust in ChatGPT
Ever since its first public rollout in late 2022, ChatGPT has become not just the most popular AI chatbot on the market but also a fixture in the daily lives of many users. However, OpenAI CEO Sam Altman has warned against putting blind trust in ChatGPT, given that the AI chatbot is prone to hallucinations (making things up).
Speaking on the first-ever episode of the OpenAI podcast, Altman said, “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much.”
Talking about the limitations of ChatGPT, Altman added, “It’s not super reliable… we need to be honest about that.”
Notably, AI chatbots are prone to hallucination, i.e. confidently generating information that isn’t true. There are several reasons why large language models (LLMs), the technology underpinning AI chatbots, hallucinate: biased training data, a lack of grounding in real-world knowledge, pressure to always produce a response, and the predictive nature of text generation. The problem appears to be systemic, and no major AI company currently claims that its chatbots are free from hallucination.
Altman also reiterated his previous prediction during the podcast, stating that his kids will never be smarter than AI. However, the OpenAI CEO added, “But they will grow up like vastly more capable than we grew up and able to do things that would just, we cannot imagine.”
Are ads coming to ChatGPT?
The OpenAI CEO was also asked whether ads would be coming to ChatGPT in the future, to which he replied, “I’m not totally against it. I can point to areas where I like ads. I think ads on Instagram, kinda cool. I bought a bunch of stuff from them. But I think it’d be very hard to I mean, take a lot of care to get right.”
Altman then went on to talk about the ways in which OpenAI could implement ads inside ChatGPT without totally disrupting the user experience.
“The burden of proof there would have to be very high, and it would have to feel really useful to users and really clear that it was not messing with the LLM’s output,” he added.