Artificial hallucinations – what’s real?
What explains these incidents?
AI is more complicated and less predictable than one might think. Hallucinations and strange behavior have occurred in multiple AI-based services, including OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing chatbot (known by its codename Sydney). All of these systems are built on the same technology: large language models (LLMs).
LLMs employ deep learning and self-supervised learning, and they consist of many layers of interconnected networks. Although this structure enables AI to answer complicated questions, it also leaves room for error.
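To make “self-supervised learning with layers of linked networks” a little more concrete, here is a minimal sketch in PyTorch. Everything in it is illustrative: the toy vocabulary size, the stack of plain feed-forward layers standing in for real attention blocks, and the class and variable names. The key point is the training signal: the text itself supplies the labels, because each position simply has to predict the word that follows it.

```python
# Minimal sketch of the self-supervised "predict the next token" objective.
# All sizes and names here are toy examples, not taken from any real model.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # toy vocabulary
EMBED_DIM = 64
NUM_LAYERS = 4      # "layers of linked networks", greatly simplified

class ToyLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        # Simple feed-forward layers stand in for the attention
        # blocks a real LLM would use.
        self.layers = nn.Sequential(
            *[nn.Sequential(nn.Linear(EMBED_DIM, EMBED_DIM), nn.ReLU())
              for _ in range(NUM_LAYERS)]
        )
        self.to_vocab = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, sequence, embedding)
        x = self.layers(x)
        return self.to_vocab(x)     # scores over the vocabulary

# Self-supervision: no human labels are needed, the text is its own label.
# Each position is trained to predict the token that comes next.
model = ToyLanguageModel()
tokens = torch.randint(0, VOCAB_SIZE, (1, 16))   # stand-in for real text
logits = model(tokens[:, :-1])                   # inputs: all but the last token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE),
    tokens[:, 1:].reshape(-1)                    # targets: the same text shifted by one
)
loss.backward()
```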
Some hallucinations can be traced to the way AI is trained. When composing a text answer, the model predicts each new word from the words it has already generated. This creates a bias, and the longer the text gets, the greater the chance of error. Long chat sessions can also confuse the model, and its answers may shift in style as the AI tries to mirror the tone of the questions it is being asked. Errors can also creep in when text is encoded into, and decoded from, the model’s internal representation. Another cause can be the training data itself, especially when the source content of large datasets is inconsistent or contradictory.
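The feedback loop described above can be shown in a few lines of Python. The sketch below is purely illustrative: predict_next_word is a hypothetical stand-in for the model’s real sampling step, and the 2 % per-word error rate is an assumed number chosen only to show how small slips accumulate.

```python
# Illustrative sketch of autoregressive generation: each new word is chosen
# from the words produced so far, so an early mistake is fed back into every
# later prediction and errors accumulate as the text grows.
import random

def predict_next_word(context, error_rate=0.02):
    """Pretend model step: usually returns a plausible word,
    occasionally a wrong one, based only on the context so far."""
    if random.random() < error_rate:
        return "<wrong-word>"        # a small per-step mistake
    return f"word{len(context)}"     # placeholder for a sensible continuation

def generate(prompt, max_words=50):
    context = list(prompt)
    for _ in range(max_words):
        next_word = predict_next_word(context)
        context.append(next_word)    # the output becomes future input
    return " ".join(context)

print(generate(["Once", "upon", "a", "time"]))
```

Even with a small per-word slip rate, say the illustrative 2 % used above, the chance that a 50-word answer contains at least one slip is 1 − 0.98^50, roughly 64 %, which is why longer answers drift off course more often than short ones.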
Moreover, AI has no real-world experience. A great deal of human knowledge is never written down and is only passed on orally, yet AI has to learn everything from text alone, which is slower than learning through observation. LLMs are therefore limited in how smart and accurate their output can be.