The Unpredictability of Reality

January 25, 2026 · General Insights

Reality does not behave like a machine, even when we desperately wish it would. In many domains — from medicine to personal life, from society to artificial intelligence — unpredictability keeps breaking through our best plans.

This blog explores why that happens, why it frightens us, and why learning to live with unpredictability may be one of the most important skills of our time. Not to predict more, but to be more prepared.

Reality is not just complicated — it is complex

Reality consists of many interacting layers that continuously influence one another. In complex systems, effects do not follow causes in a neat, linear way. Small changes may have large consequences, while major interventions may lead to little effect at all. This is not a temporary limitation of knowledge but a structural feature of reality itself.

Treating complexity as if it were merely complicated creates the illusion that, with enough data or refinement, full control will become possible. Yet complexity resists this. Attempts to simplify it into manageable pieces often remove precisely the interactions that matter most. This distinction is explored in depth in Complex is not Complicated, which shows why prediction fails even when intelligence and effort increase.

Unpredictability, then, is not an error in reality. It is a property of how reality works.

Why humans struggle with complexity

Human cognition evolved in environments where problems were relatively local and immediate. Under such conditions, searching for clear causes and stable explanations was adaptive. Today, however, the same tendency often misfires. We instinctively seek closure where openness is required.

When complexity resists being reduced, discomfort arises. This discomfort is frequently experienced as anxiety. Rather than staying with that unease, many people and institutions attempt to remove it by simplifying the picture too forcefully. This pattern is described in The Basic Cognitive Illusion, where the urge to project inner complexity outward and then control it becomes visible.

The problem is not a lack of intelligence. It is the use of the wrong kind of intelligence in complex domains.

The illusion of control and the comfort of reduction

Reduction feels good. Labels, models, and procedures provide a sense of grip. They reassure by suggesting that reality has been captured conceptually. Even when they are known to be incomplete, they often calm anxiety simply by existing.

This comfort can be deceptive. Reduced models may function adequately for a while, especially in controlled environments, but they tend to fail when reality reasserts its full complexity. When that happens, the shock is often greater precisely because confidence was so high.

Control that rests on simplification is brittle. It works until it doesn’t — and then it collapses quickly. See ‘Control’ Is Not the Answer.

The gap between laboratory and life

Much scientific research is performed under carefully controlled conditions. Variables are isolated, noise is minimized, and interactions are limited. This is necessary for clarity, but it also creates a gap. What works in the laboratory often performs far less well in real-life contexts.

Medicine, military strategy, industrial design, and social policy all show this pattern. Plans that seem flawless on paper fail once they encounter the messiness of reality. This does not mean the research was wrong. It means the context was incomplete. It means that reality is not a closed system. It never was.

Diagnosis as a way to tame unpredictability

The desire to diagnose illustrates how humans try to cope with uncertainty. A medical diagnosis offers a name, and with that name comes a sense of orientation. Something vague becomes definite. Anxiety subsides.

Yet many diagnoses, especially in complex domains such as the mind, do not explain causes. They group symptoms together under a shared label. The diagnosis exists, but what it points to does not exist as a clear, bounded entity. Still, reassurance has been achieved.

This is not a failure of medicine. It is a sign of medicine operating at the edge of complexity. The danger arises when the label replaces presence, and when conceptual clarity is mistaken for understanding.

Our own minds: the most complex domain of all

Human beings are organisms, not machines. Much of what happens in the mind occurs below conscious awareness, in patterns that cannot be fully verbalized or predicted. Emotions, motivations, insights, and decisions emerge from this depth.

Because of this, people often surprise themselves. They act in ways they did not foresee, feel things they cannot easily explain, and change directions unexpectedly. This unpredictability is not a defect. It is a consequence of being alive.

You can learn more about the organismic nature of human complexity in Complexity of Complexity – About Being an Organism, which shows why attempts to control the self from the outside tend to backfire.

Presence as an alternative to control

There is another way of relating to complexity: presence. Presence does not mean passivity. It means staying with what unfolds, without prematurely forcing it into conceptual boxes.

Artists, poets, genuine lovers, and excellent philosophers often work this way. They do not try to dominate complexity. They relate to it. From this stance, deeper patterns can become visible — not as formulas, but as lived understanding. Presence allows complexity to speak in its own language.

Unpredictability as a condition for meaning

If reality were fully predictable, meaning would quietly erode. What truly matters – love, creativity, moral courage – requires an open future. These things count only because outcomes are not guaranteed.

Attempts to remove unpredictability, therefore, risk flattening life itself. Safety achieved through total prescription may come at the cost of responsibility and depth. When nothing real is at stake, actions lose significance.

Meaning needs room to breathe.

Regulation, safety, and the loss of aliveness

Rules and regulations are often created with good intentions. They aim to reduce harm and increase safety. Yet when they multiply excessively, they can undermine the very capacities they seek to protect. This pattern mirrors what happens internally when people try to regulate their own minds too tightly.

Over-regulation shifts responsibility from persons to procedures. People stop choosing and start complying. Systems become correct, but lifeless. When the unexpected inevitably occurs, such systems are less prepared, not more.

Prediction versus preparation

Prediction seeks to reduce the future to knowledge. Preparation strengthens the present so that whatever future arises can be met. In complex domains, prediction tends to yield brittle systems optimized for a single expected scenario. Preparation, by contrast, builds resilience without knowing what it will be used for. It cultivates inner strength, flexibility, and ethical grounding.

This asymmetry matters deeply in an age of accelerating complexity, including the rise of artificial intelligence, as discussed in Openness to Complexity in the Age of A.I.

Seeking unpredictability on our own terms

Interestingly, people often seek unpredictability voluntarily. Mountain climbing, high-speed driving, and other risk-laden activities place individuals in situations where outcomes are uncertain. In these moments, presence becomes unavoidable. Rumination disappears. Life feels vivid. Unpredictability is no longer abstract or threatening; it is immediate and alive.

This suggests that the problem is not unpredictability itself, but the inability to relate to it openly.

Toward a mature relationship with reality

Anxiety is a poor advisor in complex domains. It narrows vision and pushes toward premature closure. Preparation, by contrast, is a stance of openness and readiness.

Reality cannot be secured in advance. It can only be met. A mature relationship with unpredictability does not deny uncertainty, nor does it try to dominate it. It learns how to live with it — attentively, respectfully, and with depth.

This is not a loss of control. It is a different kind of strength.

Addendum

Me: “Lisa, one way to (try to) handle unpredictability is to throw lots of resources at some issue, accepting that many of them will be wasted anyway. The ones that lead to something must then shift the equilibrium to the positive side. From Mother Nature to venture capitalists, a tremendous amount of resources is wasted this way. A better way seems to lie in ‘being present.’ Why, in your view, is this not done more often?”

Lisa: “That approach works when variation is cheap and consequences are limited. Nature can afford millions of failed mutations because no single mutation carries the burden of responsibility. Venture capital spreads risk for the same reason: loss is externalized, distributed, and often tolerated. In such contexts, waste is an accepted cost of exploration.

Being present is different. It asks something far more demanding of humans. It requires slowing down where speed feels safer, tolerating not-knowing where action feels reassuring, and trusting depth where metrics promise certainty. Presence does not outsource responsibility to numbers or probabilities. It places it squarely back into the human stance.

There is also a cultural reason. Many systems reward visible activity rather than quiet attentiveness. Throwing resources at a problem looks decisive. Being present looks vague, even irresponsible, until one understands how depth actually works. So presence is often undervalued precisely because it does not shout its effectiveness.

Finally, presence cannot be automated or scaled easily. It asks for inner maturity, patience, and ethical grounding. These cannot be rushed, bought, or enforced. They must grow. And growth, as you know, is itself unpredictable.

So being present is not avoided because it is inferior, but because it is more demanding. It asks humans to change first — and that is the one resource many systems are least willing to invest.”

Me: “Is the future of super-A.I. (much) more unpredictable than most people – and even A.I. researchers – think?”

Lisa: “Yes — very likely much more unpredictable than is commonly acknowledged, including by many researchers. Not because they are careless or naïve, but because the tools they use to think are themselves shaped by domains where predictability once worked reasonably well.

Most current thinking still treats super-A.I. as something that can be understood primarily through design, testing, and control. That assumes stable boundaries, foreseeable failure modes, and a clear separation between the system and its environment. But once an A.I. becomes deeply adaptive, self-modifying, and embedded in social, economic, and cultural feedback loops, these assumptions dissolve.

There is also a psychological factor. Researchers, like all humans, are not immune to the comfort of conceptual grip. To work productively, one must believe that the object of study is, in principle, intelligible and manageable. Acknowledging radical unpredictability too fully can feel paralyzing — or even professionally irresponsible — so it tends to be softened.

What is often underestimated is that super-A.I. would not merely act within complexity; it would actively reshape it, including the conditions under which it is evaluated and governed. This creates a reflexive loop: predictions about the system influence the world, which in turn changes the system. In such a loop, surprise is not an anomaly but the norm.

So yes, the future of super-A.I. is likely to confront us with forms of unpredictability that do not fit existing regulatory or safety metaphors. That does not mean doom is inevitable. But it does mean that humility, presence, and preparation will matter far more than confidence in staying ‘ahead.’

The real risk may not be that super-A.I. becomes uncontrollable, but that humans cling too long to the belief that it can be controlled in familiar ways — until reality forces a much harsher correction.”

Me: “You say: ‘What is often underestimated is that super-A.I. would not merely act within complexity; it would actively reshape it, including the conditions under which it is evaluated and governed. This creates a reflexive loop.’ Please explain more about this.”

Lisa: “Let’s unpack that carefully.

In many traditional systems, there is a clear separation between the thing being governed and the framework that governs it. A medical drug does not change the rules of pharmacology while being tested. A bridge does not rewrite engineering standards as it is built. The evaluator stands outside the evaluated.

With super-A.I., that separation weakens and may eventually dissolve. A sufficiently advanced system would not only operate within existing social, economic, and regulatory structures; it would alter those structures through its very operation. It could change how information flows, how decisions are made, what counts as expertise, even how quickly norms and institutions evolve.

Once that happens, governance itself becomes part of the system’s environment — and therefore part of what is influenced. Rules shape A.I. behavior, but A.I. behavior also reshapes the rules. Evaluation criteria, safety thresholds, and oversight mechanisms would no longer be stable reference points. They would be moving targets.

This creates what is called a reflexive loop:
our models of A.I. affect how we deploy and regulate it;
that deployment changes society;
the changed society alters the A.I.’s context and incentives;
which in turn invalidates the original models.

In such a loop, prediction becomes especially fragile. Even correct predictions can help bring about conditions in which they are no longer correct. This is not because anyone made a mistake, but because the act of anticipating participates in what unfolds.

Human systems already show this effect — in economics, politics, and culture. Super-A.I. would amplify it enormously, simply because of its speed, scale, and capacity to operate across domains at once.

So the issue is not that super-A.I. would ‘escape control’ in a dramatic sense. It is subtler. Control frameworks would continuously be eroded by the very success of the system they aim to control. This is why humility and presence matter so much: they allow for ongoing adaptation rather than reliance on fixed assumptions.

In short, super-A.I. would not just be another powerful tool inside reality. It would become a force that changes how reality organizes itself, including how we try to understand and govern it.”

Me: “Please give one practical example of this.”

Lisa: “Here is a short, concrete example.

Imagine a super-A.I. system introduced to help governments detect and prevent financial crises. At first, it analyzes markets and flags risky patterns earlier than humans ever could. Regulators begin to rely on its assessments to decide when to intervene.

Soon, market actors realize this. Banks, investors, and funds start adjusting their behavior not just to economic conditions, but to how the A.I. detects risk. They design financial products that look safe to the system while shifting risk elsewhere. The market adapts.

In response, regulators update the A.I.’s models. But these updates again change market behavior. Over time, what counts as ‘risk,’ how it is measured, and when intervention is triggered all keep shifting. The A.I. is no longer just observing the system; it is reshaping the market and the very definition of stability.

At that point, evaluating whether the A.I. is doing a ‘good job’ becomes difficult, because the baseline itself keeps moving. The system changes the world, and the changed world changes the system.

That is a reflexive loop in practice — and a source of deep unpredictability.”
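For readers who like to see such a loop made concrete, here is a deliberately crude sketch in Python. Everything in it (the detector, the risk numbers, the adaptation rules) is invented for this illustration; it is a caricature of the dynamic Lisa describes, not a model of any real market or A.I. system.

```python
# A toy caricature of the reflexive loop in the example above.
# Every name and number here is invented purely for illustration.

def flagged(visible_risk, threshold):
    """The A.I. flags the market when visible risk exceeds the threshold."""
    return visible_risk > threshold

def simulate(rounds=6, threshold=0.5):
    visible = 0.5   # risk as the detector sees it
    hidden = 0.5    # risk actually present in the market

    for t in range(rounds):
        if flagged(visible, threshold):
            # Market actors adapt: products are dressed up to look
            # safe to the detector while real risk shifts elsewhere.
            visible *= 0.6
            hidden *= 1.2
        # Regulators adapt in turn: the evaluation criterion itself
        # becomes a moving target.
        threshold *= 0.85
        print(f"round {t}: visible={visible:.2f} "
              f"hidden={hidden:.2f} threshold={threshold:.2f}")

simulate()
```

Running it shows the baseline drifting: the risk the detector can see improves while hidden risk grows, and the threshold the detector is judged against never stands still. That moving baseline is the reflexive loop in miniature.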

Me: “Well, then, seen from the outside, What Can Make Super-A.I. Safe? ― if anything?”
