Publication on Habr: when an LLM becomes predictable

Publication date: 2025-08-19

A note titled "When an LLM becomes predictable" has been published. It examines the idea that when prompts are formulated as executable specifications, models begin to behave more deterministically. This opens the possibility of using them as an engineering tool for code generation, rather than merely a "probabilistic interlocutor".

For ADSM this idea is especially important: formulating rules and specifications becomes the key to predictable agent behavior.

The note included a poll: "Can an LLM be used as an engineering tool for generating code?"
73% responded "no, the model's probabilistic nature is too strong"; 27% chose "yes, a prompt can precisely describe the desired result".

Most people don't believe that probabilistic models can be controlled. I, however, am convinced they are deterministic enough for engineering use. Flip-flops in computer memory also flip state spontaneously under hard radiation, yet that problem was solved with error-correcting codes. The same applies to models: the issue is not that they are probabilistic, but how we work within their predictable zones.
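The point about predictable zones can be illustrated with a toy sketch: reproducibility is largely a property of the decoding policy, not of the model's probabilistic nature alone. The distribution returned by `next_token_probs` below is entirely hypothetical, standing in for a real model's output; the contrast is between greedy (argmax) decoding, which is deterministic, and sampling, which is reproducible only when the random seed is pinned.

```python
import random

def next_token_probs(context):
    """Stand-in for a model's next-token distribution.
    The numbers are hypothetical, for illustration only."""
    return {"return": 0.7, "yield": 0.2, "raise": 0.1}

def decode_greedy(context):
    """Greedy decoding: always pick the most probable token.
    Same input -> same output, regardless of any randomness."""
    probs = next_token_probs(context)
    return max(probs, key=probs.get)

def decode_sample(context, rng):
    """Sampling: probabilistic, but reproducible with a fixed seed."""
    probs = next_token_probs(context)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding is fully reproducible:
assert decode_greedy("def f(x):") == decode_greedy("def f(x):")

# Sampling is reproducible too, once the seed is fixed:
assert decode_sample("def f(x):", random.Random(0)) == \
       decode_sample("def f(x):", random.Random(0))
```

Engineering around an LLM works the same way in spirit: pin the decoding parameters (temperature, seed where the API supports one) and constrain the prompt, and the "probabilistic" system stays inside a zone where its behavior is effectively specified.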