The Hallucinating Machine: Why AI Needs to Dream

Image created by Elisa Castagnari with ChatGPT

Core argument: Hallucinations in Large Language Models (LLMs) may not be merely errors. They may emerge from the same cognitive dynamics that make human dreaming useful, according to neuroscientist Erik Hoel’s overfitted brain hypothesis, which posits that dreams act as a counterweight to rigid learning. In both biological and artificial systems, hallucination may be the cost and catalyst of creativity and generalisation.

Tagline: LLMs, like human brains, hallucinate when they overfit to reality. What if that’s not a bug, but a spark of creativity? 

Humans dream; machines hallucinate. Both imagine what isn’t there, and both get criticised for it. We are told dreams are irrational, while AI is told its hallucinations are inaccurate. Yet the more we build machines that think like us, the more their mistakes start to resemble our own. 

Neuroscience proposes that dreaming serves as a corrective mechanism, helping the brain reorganise and refine its internal models. On this view, dreaming is not a random by-product of sleep but a critical cognitive process. During REM sleep, the brain replays, reorganises, and integrates the experiences, emotions, and memories gathered during the day. This “self-correction” helps resolve emotional conflicts, smooth out overly rigid associations, and restore cognitive flexibility. The resulting internal simulation space allows the mind to safely process uncertainties, rehearse responses, and break down maladaptive patterns of thought. When the process is disrupted, cognition becomes brittle, repetitive, and overly constrained by the patterns formed during waking life.

This idea aligns closely with neuroscientist Erik Hoel’s overfitted brain hypothesis, which argues that dreams function much like regularisation in machine learning. Regularisation is a technique used in AI to stop a model from becoming too “perfect” on the data it already knows. A model that memorises everything too precisely performs poorly when faced with something new; adding some noise or unpredictability forces it to generalise instead of overfitting. Hoel suggests that the brain faces a similar problem: our waking experiences can become overly predictable and repetitive, risking “overfitting” to familiar patterns. Dreams act as a kind of biological regularisation by introducing noise and randomness, generating unusual, surreal scenarios unlike anything we normally encounter. This nightly injection of surprise keeps the brain’s generative models (its systems for imagining, predicting, and interpreting the world) flexible and able to handle novelty. In short, dreams prevent the brain from becoming too narrow or specialised, helping us remain adaptable in a constantly changing environment.
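To make the analogy concrete, here is a minimal sketch of regularisation, using only NumPy; the data, the degree-9 polynomial model, and the penalty strength are invented for illustration, not taken from Hoel’s paper. Fitted without any penalty, the model memorises its noisy training points; an L2 (“ridge”) penalty nudges it toward a smoother curve that generalises better:

```python
# A minimal regularisation sketch (all values are illustrative).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 12)
y = np.sin(np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy observations

degree = 9
X = np.vander(x, degree + 1)  # polynomial feature matrix

def fit(X, y, lam):
    # Ridge regression: minimise ||Xw - y||^2 + lam * ||w||^2
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_overfit = fit(X, y, lam=0.0)  # memorises the training points
w_regular = fit(X, y, lam=0.1)  # penalty forces a simpler, general curve

x_new = np.linspace(-0.9, 0.9, 5)  # unseen inputs
X_new = np.vander(x_new, degree + 1)
print("overfit predictions:    ", X_new @ w_overfit)
print("regularised predictions:", X_new @ w_regular)
```

In Hoel’s account, dreams play the role of that penalty term: a nightly source of noise that stops the brain’s models from clinging too tightly to what they have already seen.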

When combined, the two perspectives highlight a unified function of dreaming:

  • Emotionally, dreams help weaken maladaptive loops and soften intense memories.
  • Computationally, they act as a biological regulariser, ensuring that our cognitive models do not become rigid or over-trained on repetitive daily stimuli.

Without sufficient dreaming, both functions degrade: emotional loops harden, and cognitive models grow rigid.

In the digital realm, LLMs hallucinate when pressed for facts they were never truly taught. Hallucination in this context refers to the model generating text that is fluent and confident but factually incorrect, unsupported by its training data, or logically incoherent. Because LLMs do not retrieve information but instead predict the most likely continuation of text, they sometimes fabricate details, people, events, citations, or entire explanations when the underlying knowledge is missing or ambiguous.
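A toy sketch makes the mechanism visible; the prompt, vocabulary, and scores below are invented, not drawn from any real model. The model ranks candidate continuations by plausibility and samples one, and when no candidate is grounded in its training data, the output is still fluent and confident:

```python
# Toy next-token sampling (vocabulary and logits are invented).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scores for completing "The paper was written by ..."
# All candidates look plausible to the model; none is grounded in fact.
candidates = ["Smith", "Hoel", "Garcia", "Chen"]
logits = np.array([2.1, 1.9, 1.7, 1.5])

def sample(logits, temperature=1.0):
    # Softmax over logits; lower temperature -> more confident choices
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(len(logits), p=p)

token = candidates[sample(logits)]
print(f"The paper was written by {token}.")  # fluent, confident, unverified
```

Nothing in this mechanism distinguishes a remembered fact from a plausible invention; plausibility is the only currency.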

LLMs fill gaps with imagination, weaving continuity from uncertainty. These inventions can be dangerous: AI can produce falsehoods, misinformation, and fake citations with the same stylistic confidence as true information, making errors harder to detect. Yet they can also generate novelty: unexpected connections, creative hypotheses, and imaginative ideas, something we rarely acknowledge.

Creativity itself is a controlled hallucination: combining fragments of knowledge and experiences into something new. Humans do this constantly. Dreams, for instance, recombine pieces of memory, emotion, and sensory impressions into novel scenes that never literally happened. They are not hallucinations in a clinical sense, but rather the mind’s way of stitching together disparate elements to explore possibilities and rehearse meaning.

When an LLM hallucinates a plausible but nonexistent connection, it is rehearsing a similar cognitive trick: speculative synthesis. Like dreaming, the model generates new configurations from old parts. While its “hallucinations” can be problematic in factual contexts, the underlying mechanism mirrors a fundamental aspect of human creativity: the ability to imagine beyond what is strictly known.

The challenge is not to eradicate hallucinations, but to govern them. Just as REM sleep helps regularise the brain, perhaps artificial “dreaming cycles” could train LLMs to hallucinate productively: to imagine beyond their data without betraying truth. Researchers are already exploring this idea through several techniques:

  • Self-supervised refinement: a model evaluates and improves its own outputs without needing human labels.
  • Chain-of-thought distillation: a smaller model is taught to reproduce the reasoning steps of a larger one.
  • Synthetic data rehearsal: models generate new training examples for themselves.
  • Adversarial training: models are fed intentionally tricky, misleading, or noisy inputs to strengthen their resistance.
  • Sleep-phase fine-tuning: models periodically receive controlled, off-distribution data to stabilise their reasoning.

The first three let systems create their own scenarios, critique them, and learn from the mismatch, much like a brain evaluating the strangeness of its dreams. The last two echo the biological logic of dreaming by injecting structured unpredictability to build resilience. Think of these methods as placing the model inside a Hunger Games-style simulation arena where it battles unusual situations, confronts its own mistakes, and learns to recover, just as the brain uses bizarre dream content to stay flexible and robust.
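As a sketch of what such a dreaming cycle might look like in code, consider the loop below. The functions generate, critique, and fine_tune are hypothetical stand-ins, not any real library’s API; the point is the structure, dream, judge, learn, repeat, rather than the internals:

```python
# A hypothetical "dreaming cycle" loop (all functions are stand-ins).
import random

random.seed(0)

def generate(prompt, noise):
    # Stand-in for the model dreaming up an off-distribution scenario;
    # higher noise means a stranger scenario.
    return f"{prompt} [scenario with noise={noise:.2f}]"

def critique(scenario):
    # Stand-in for a self-evaluation pass: score how instructive the dream was.
    return random.random()

def fine_tune(model, scenario, score):
    # Stand-in for an update step: keep only the instructive dreams.
    if score > 0.5:
        model["lessons"].append(scenario)
    return model

model = {"lessons": []}
for night in range(5):                      # one "sleep phase" per round
    noise = random.uniform(0.5, 1.0)        # structured unpredictability
    dream = generate("What if", noise)      # the model hallucinates on purpose
    score = critique(dream)                 # ...then judges its own output
    model = fine_tune(model, dream, score)  # ...and learns from the mismatch

print(f"Retained {len(model['lessons'])} useful dreams.")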

We once thought intelligence meant perfect recall and factual precision. But both evolution and computation suggest otherwise. Flexibility, imagination, and creativity arise only when a system dares to wander from its inputs. 

The dream, human or artificial, is not a flaw in intelligence. It is its proof. 


Written by Elisa Castagnari, a PhD Student in the AI4BI CDT, University of Edinburgh.


Article edited by Priscilla Wong, a Fourth-Year Biological Sciences (Immunology) student at the University of Edinburgh, and an Online News Editor for EUSci.


References: 

  1. Hoel, E., 2021. The overfitted brain: Dreams evolved to assist generalization. Patterns, 2(5), p.100244. doi:10.1016/j.patter.2021.100244.
  2. Bentley, S. and Naughtin, C., 2023. Both humans and AI hallucinate, but not in the same way. The Conversation, 19 June. Available at: https://theconversation.com/both-humans-and-ai-hallucinate-but-not-in-the-same-way-205754 [Accessed 8 Nov 2025].
  3. Raieli, S., 2025. Can Machines Dream? On the Creativity of Large Language Models. Towards Data Science, 31 Jan. Available at: https://towardsdatascience.com/can-machines-dream-on-the-creativity-of-large-language-models-d1d20cf51939/ [Accessed 8 Nov 2025].
  4. Deperrois, N., Petrovici, M.A., Senn, W. and Jordan, J., 2022. Learning cortical representations through perturbed and adversarial dreaming. eLife, 11, e76384. doi:10.7554/eLife.76384.
