The brain's probabilistic nature and how it differs from LLMs

Publication date: 2025-09-02

Yesterday, in a debate on a Telegram channel, I suggested that the main difference between artificial and natural intelligence is that an LLM chooses the next token probabilistically when composing a response. Today I understand that this was wrong: in the brain, probability is built in at the biological level, into every synapse, and each neuron has hundreds or thousands of synapses.
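To make the claim about LLMs concrete, here is a minimal sketch of temperature-based next-token sampling. The vocabulary, logit values, and temperature are illustrative assumptions, not taken from any particular model:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw logits via temperature-scaled softmax."""
    # Lower temperature sharpens the distribution, higher flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy vocabulary and logits, purely for illustration.
vocab = ["sea", "sky", "ripple", "wave"]
logits = [2.0, 0.5, 1.0, 1.5]
token = vocab[sample_next_token(logits, temperature=0.8)]
```

The point of the sketch: randomness enters an LLM at exactly one place, the final draw from the softmax distribution. Everything before that draw is deterministic arithmetic.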

The probability of signal transmission is regulated on three levels: local (the synapse's state and its vesicle supply), modulatory (long-term potentiation and depression, LTP and LTD, as learning mechanisms), and global (the background of neurotransmitters: dopamine, serotonin, acetylcholine, norepinephrine). Yes, the brain has no random number generator, but probability permeates the very fabric of its computation.
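The three levels can be caricatured as a toy model of a single synapse. This is not a biophysical simulation: the multiplicative form and all parameter names are my own assumptions, chosen only to show how several regulators can shape one transmission probability:

```python
import random

def transmits(p_local, plasticity_gain, neuromodulation_gain):
    """Toy model: a spike crosses a synapse with a probability shaped at three levels.

    p_local              - baseline release probability (synapse state, vesicle supply)
    plasticity_gain      - slow learning multiplier (LTP > 1, LTD < 1)
    neuromodulation_gain - global factor from the neurotransmitter background
    The multiplicative combination is an illustrative assumption.
    """
    p = min(1.0, p_local * plasticity_gain * neuromodulation_gain)
    return random.random() < p

# Estimate the effective transmission rate of one synapse over many spikes.
trials = 100_000
hits = sum(transmits(0.3, 1.5, 1.1) for _ in range(trials))
rate = hits / trials  # hovers near 0.3 * 1.5 * 1.1 = 0.495
```

Even in this caricature, a single spike either crosses or it doesn't, and the answer is a coin flip whose bias three separate mechanisms keep adjusting.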

The main difference between the brain and LLMs is not probability but representation. An LLM operates on tokens and works linearly, step by step; even the "reasoning" models just run a loop with an exit check. The brain instead holds activity patterns: ensembles of excited neurons, either stable (memory) or wandering (imagination). One neuron can belong to many patterns at once, and a thought arises as an interference of waves of activity, like ripples on the surface of the sea.
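The "loop with an exit check" remark can be sketched in a few lines. `model_step` and `is_done` are hypothetical stand-ins for one generation step and a stopping predicate (say, an end-of-thought marker); no real API is implied:

```python
def reasoning_loop(model_step, is_done, prompt, max_steps=16):
    """Sketch: a 'reasoning' model as a plain loop with an exit check."""
    state = prompt
    for _ in range(max_steps):
        state = model_step(state)  # one linear generation step
        if is_done(state):         # the exit check
            break
    return state

# Toy usage: append dots until the string ends with an ellipsis.
out = reasoning_loop(lambda s: s + ".", lambda s: s.endswith("..."), "hm")
```

However many iterations the loop runs, each step is still the same sequential token-by-token process; nothing in it resembles many overlapping patterns being active at once.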

In an LLM, a thought resembles a domino effect: each falling tile pushes the next. In the brain, meanings "flare up" all at once, like a hologram: the medium stores only a simple interference structure, yet when illuminated, a complete image emerges.