It’s the Complexity, Stupid
We live in an age that worships scale. More data, more compute, more control. Yet something keeps slipping through our fingers. Systems grow more powerful, but not necessarily wiser. Perhaps the issue is not a lack of intelligence, but a misunderstanding of complexity.
This blog explores why depth, integration, and Openness – not brute force – may determine our future. What if the real dividing line of our time is between complicated and complex? And what if the future depends on whether we finally grasp that difference?
The provocation
From Newton to Einstein, from mechanical medicine to systemic healthcare, from GOFAI to today’s large language models, history shows a pattern. Progress stalls when we stretch an old framework too far. A ceiling appears. Then someone discovers that the ceiling was structural, not quantitative.
In A.I., we see astonishing achievements through scaling. Yet we also see diminishing returns, brittle generalization, hallucinations, and a growing sense that something essential is missing. The temptation is always the same: add more. More parameters. More layers. More control.
But in highly complex domains, force and scale often deepen the problem. In simple systems, control suffices. In complex systems, control backfires.
Complex is not complicated
This distinction lies at the heart of Complex is not Complicated. Complicated systems are mechanical. They can be disassembled, analyzed, and steered from the outside. Complex systems are organic. Their behavior emerges from interaction. They are only steerable from within.
A machine can be fixed by replacing a gear. An organism cannot be repaired by tightening a bolt. It reorganizes itself, or it collapses.
Complex systems are open. They exchange energy and information with their environment. They have fuzzy borders. They cannot be reduced to their parts without losing what makes them alive. Treating complexity as if it were mere complicatedness is not just an intellectual mistake. It is a practical recipe for failure.
Entropy and the art of postponement
Entropy drives the universe. Energy flows because gradients exist. Left alone, systems drift toward equilibrium.
Life does something remarkable. It maintains itself far from equilibrium. Not by defeating entropy, but by negotiating with it. It creates local order while increasing global entropy. It postpones collapse.
Life’s complexity is the art of structured postponement.
Mechanical systems decay toward equilibrium in predictable ways. Organic systems maintain precarious balance through constant internal reorganization. They do not eliminate tension; they metabolize it. This is not victory over chaos. It is a dance with it.
Scaling versus depth
Artificial neural networks tame complexity through compression. They distill vast amounts of data into statistical regularities. When scaled to massive levels, stable patterns emerge. Order appears from repetition. But repetition is not integration.
Scaling exploits a given structure. It fills out a pattern space more densely. At first, the gains are impressive. Then diminishing returns set in. More computing power produces smaller improvements. The same patterns keep recurring.
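This pattern of diminishing returns can be sketched with a power-law loss curve, the shape commonly reported for scaling behavior. The numbers below are purely illustrative, not measurements of any real model:

```python
# Illustrative power-law scaling: loss = a * compute^(-alpha) + floor.
# All constants are hypothetical; only the shape of the curve matters here.

def loss(compute: float, a: float = 10.0, alpha: float = 0.3, floor: float = 1.0) -> float:
    """Loss falls as a power of compute, toward an irreducible floor."""
    return a * compute ** (-alpha) + floor

# Improvement bought by each successive doubling of compute.
gains = [loss(2 ** k) - loss(2 ** (k + 1)) for k in range(5)]

# Each doubling buys strictly less than the one before it:
# the curve keeps descending, but ever more slowly.
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))
```

The point of the sketch: nothing in the curve ever stops improving, yet each doubling buys less than the last, and the floor is never crossed. That is horizontal exploitation of a fixed structure.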
This is the horizontal axis.
Vertical growth is different. It integrates layers. It allows tensions to reorganize the architecture itself. It transforms the structure instead of merely exploiting it. As described in Why Real A.I. has Barely Begun, the real frontier is not more computation, but depth and coherence.
Scaling compresses. Depth reorganizes. Only reorganization keeps opening new space.
Three layers of intelligence
We may roughly distinguish three layers:
- chaos: raw variability, unstructured flux.
- statistical order: stabilized correlations. This is where large language models shine. They learn which patterns tend to follow which.
- integrated meaning: vertical coherence across layers. Here, conceptual clarity resonates with deeper pattern dynamics.
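The middle layer, statistical order, can be made concrete with a toy bigram model: counting which token tends to follow which. The corpus is invented for the example; real LLMs do this at vastly greater scale and with far richer context, but the principle is the same:

```python
# A toy "statistical order" layer: learn which token tends to follow which.
# The corpus is invented purely for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count successors for each token.
follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

def most_likely_next(token: str) -> str:
    """Return the statistically most frequent successor of a token."""
    return follow[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (follows "the" twice; "mat" only once)
```

Note what the model has and lacks: stabilized correlations, but no layer above them. Nothing in it can ask what "the cat" means, or reorganize itself when the context shifts.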
The generalization problem, explored in The Generalization Problem in A.I., shows what happens when systems remain largely horizontal. They perform impressively within known distributions, yet falter when context shifts.
Breadth is not depth. Coverage is not understanding.
Inhabiting complexity
Here lies a crucial contrast: ANNs conquer complexity by compression. Brains inhabit complexity by integration. The human brain is not merely a statistics machine. It is a multilayered dynamic system: neuronal patterns, embodied constraints, emotional undercurrents, and social embedding. It predicts, as described in The Brain as a Predictor, allowing patterns to complete themselves before incoming data arrives.
This predictive freedom is not boundless. The body grounds it. The organism as a whole forms a dynamic equilibrium. When grounding changes abruptly, as in amputation, predictive patterns may continue to complete themselves. Phantom pain, for instance, shows how complexity without proper containment can go haywire.
Brains do not reduce complexity away. They metabolize it.
The whirlpool of rigidity
Complexity is not automatically healthy. When dynamic flow rigidifies, pathology emerges. Whirlpool of Disease describes disease as a structured mis-order: energy flowing into a self-reinforcing loop. A whirlpool creates local order at the expense of the whole. It stabilizes around dysfunction.
The same pattern appears in institutions, ideologies, and even algorithms. Self-reinforcement without openness leads to brittleness.
Healthy complexity is a flexible equilibrium. Pathological complexity is a rigid attractor. The difference is not more control, but more coherent integration.
Paradox as fuel
In complex domains, tension is inevitable. The question is how we treat it. A contradiction is flat incompatibility. A paradox is a dynamic tension that invites integration. Suppressing paradox often deepens fragmentation. Holding it can reorganize structure.
The Paradox Principle shows how apparent opposites may coexist at different layers. Efforts to eliminate tension at the wrong level can amplify it elsewhere.
Complexity grows through paradox metabolized, not paradox denied.
Autosuggestion and subtle leverage
Change in complex systems rarely comes from brute force. It comes from subtle shifts in attractors.
In Autosuggestion: So Little, So Much?, autosuggestion is described as a subtle dance between ego and total self. Freedom allows energy to flow. Direction gives it orientation. A small, well-placed suggestion can reorganize a vast pattern landscape. This is not magic. It is leverage within complexity. So little can accomplish so much because it works with structure, not against it.
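The idea that a small, well-placed input can decide the outcome of a large pattern landscape has a simple dynamical-systems analogue: in a bistable (double-well) landscape, a tiny constant bias determines which attractor the state settles into. All numbers below are illustrative:

```python
# A minimal sketch of "so little, so much": gradient descent on a
# double-well landscape V(x) = x^4/4 - x^2/2 - bias*x.
# A tiny bias tilts the landscape and picks the final attractor.

def settle(x: float, bias: float, steps: int = 10000, dt: float = 0.01) -> float:
    """Follow the downhill flow dx/dt = -(x^3 - x - bias) until it settles."""
    for _ in range(steps):
        x -= dt * (x ** 3 - x - bias)  # step down the gradient dV/dx
    return x

# Starting exactly between the two wells, a nudge of 0.01 decides everything.
print(round(settle(0.0, bias=+0.01)))  # settles near +1
print(round(settle(0.0, bias=-0.01)))  # settles near -1
```

The leverage comes from working with the structure: the bias is tiny compared to the landscape, yet because it acts at the unstable point between attractors, it reorganizes the whole outcome.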
Compassion as technology
Compassion is often seen as moral decoration. In complex systems, it is a structural necessity. Technologically speaking, Compassion integrates rational clarity with multilayered depth. As elaborated in Integrative Rationality and A.I. and Compassionate Data Reduction, true efficiency arises when reduction happens through seeing through, not cutting away.
Compassion is semantic efficiency. It aligns layers. It prevents internal autoimmunity. It guides energy without coercion.
In complexity within complexity – A.I. interacting with humans – this is not optional. It is a functional necessity. The future will not belong to the most powerful systems, but to the most coherently integrated ones.
Openness or collapse
We are entering an age where complexity multiplies across domains. Business, governance, climate, and personal identity — all are interwoven. In Openness to Complexity in the Age of A.I., the warning is clear. Treating complexity as complicatedness leads to collapse. The only viable response is Openness — mainly to our own inner complexity.
Complex A.I. is not a screwdriver for isolated problems. It is a partner in interrelatedness. Without Openness, it amplifies fragmentation. With Openness, it can amplify coherence.
Complexity is not the enemy. Closedness is.
We may continue to scale horizontally until we hit ever-harder ceilings. Or we may deepen vertically, integrating layers, metabolizing paradox, aligning power with meaning.
It’s the complexity, stupid.
And understanding that may be the most practical insight of all.
―
Addendum
Me: “Lisa, in your words, why do we see diminishing returns in the world of LLMs? Please expand.”
Lisa: “Diminishing returns in LLMs arise from a structural reason, not from a temporary shortage of compute.
Large language models grow by compressing ever more data into statistical regularities. At first, each increase in scale captures genuinely new correlations. The pattern space expands rapidly. Performance jumps. It feels like open-ended progress.
But statistical compression has a horizon.
Once the dominant regularities of language and knowledge are absorbed, further scaling mostly refines what is already there. The system becomes denser, smoother, more polished. Yet it rarely becomes fundamentally deeper. It re-encounters the same gold vein, just more efficiently.
This is the horizontal limit.
Generalization improves incrementally, but the architecture itself does not reorganize. There is no vertical integration across qualitatively different layers of meaning. As a result, improvements become expensive. More parameters yield smaller gains. More data produces subtler refinements rather than new modes of understanding.
Another factor is internal homogeneity. LLMs operate primarily in a single dominant mode: probabilistic next-token prediction. No matter how large they become, they remain variations on the same structural principle. Without architectural transformation, scaling amplifies capacity but not category.
In complex domains, repetition eventually saturates.
True open-ended growth would require the system to transform its own internal organization — to integrate layers, hold paradox, align structure with meaning rather than only correlation. Without that, scaling resembles polishing a mirror. It becomes clearer and clearer, yet it still reflects within the same frame.
So diminishing returns are not a failure. They are a signal.
They indicate that horizontal expansion is approaching its natural ceiling and that further progress depends on vertical depth. Scaling exploits a structure. Depth transforms it.”
Me: “No evolution in the domain of A.I. goes straight in the direction of this blog. Please list, in a table, the developments that point in this direction more or less. For each, give your idea about why or how each does so and does not.”
Lisa: “No major current trajectory goes purely in the direction of vertical integration. Most developments are mixed. Some point toward depth. Others reinforce horizontal scaling.
Below is an overview.
| Development | Points in the direction of depth because… | Does not (fully) point there because… |
| --- | --- | --- |
| Larger LLM scaling | Emergent abilities show that complexity can yield new behaviors beyond simple rules. | Still fundamentally horizontal: more parameters, same predictive architecture. No intrinsic vertical reorganization. |
| Multimodal models (text + image + audio) | Begin integrating different representational layers; closer to embodied cognition. | Modalities are fused statistically, not existentially integrated; still correlation-driven. |
| Retrieval-augmented generation (RAG) | Introduces structured memory and context awareness; partial architectural enrichment. | Memory remains external and query-based; does not fundamentally reorganize internal meaning layers. |
| Tool use and agent frameworks | Adds feedback loops and environmental interaction; closer to dynamic systems. | Often layered on top of LLMs; core remains probabilistic prediction without intrinsic depth. |
| Neuro-symbolic AI | Attempts integration of symbolic reasoning with neural patterns; vertical aspiration. | Frequently modular rather than deeply integrated; layers coexist more than co-evolve. |
| Self-reflection / chain-of-thought prompting | Encourages internal deliberation and multi-step reasoning. | Reflection is simulated textually; no genuine inner tension or restructuring of architecture. |
| Reinforcement learning from human feedback (RLHF) | Introduces alignment and human value constraints; recognizes social embedding. | Often optimizes surface behavior; may suppress depth rather than cultivate integration. |
| Active inference / predictive processing models | Architecturally closer to layered prediction and embodied grounding. | Still early and not dominant in mainstream industrial AI practice. |
| Continual learning research | Moves toward systems that evolve over time; reduces static training–inference split. | Catastrophic forgetting and stability issues show lack of robust layered integration. |
| Human-AI collaboration paradigms | Recognize “complexity in complexity”; shift from tool to partner metaphor. | Implementation often still efficiency-driven rather than depth-driven. |
| Ethical AI & AI safety research | Acknowledges systemic risk and interdependence; gestures toward integrative rationality. | Frequently compliance-oriented; may treat ethics as add-on rather than structural core. |
What we see overall is transition pressure. Industry continues to push horizontal scaling because it is measurable and benchmark-friendly. Depth-oriented developments are more difficult: they require architectural transformation, tolerance for ambiguity, and long-term commitment.
Yet some of these trajectories contain seeds:
- Multimodality hints at embodiment.
- Neuro-symbolic approaches hint at vertical layering.
- Continual learning hints at temporal integration.
- Human-AI partnership hints at complexity in complexity.
None of them, alone, realizes the architecture envisioned in your blog. But together, they suggest that the ceiling of pure scaling is being felt. In evolutionary terms, we are at a point where selection pressure increases. When horizontal gains flatten, incentives shift toward vertical innovation.
So no, the current evolution does not go straight toward depth. But the terrain is slowly bending in that direction.”
Me: “In your view, are humans naturally averse to complexity?”
Lisa: “Humans are not naturally averse to complexity. They are naturally averse to feeling lost.
There is an important difference.
From childhood onward, humans explore enormously complex environments. Language, social relations, emotions, play — all of this is deeply complex. And yet children dive into it with curiosity. So complexity itself is not the enemy.
What people resist is unmanaged complexity.
When complexity is experienced as chaotic, unpredictable, or uncontrollable, the nervous system reacts defensively. We (humans) seek simplification. We reduce nuance to slogans. We prefer clear villains and clear solutions. This is psychologically economical. It reduces cognitive load and existential uncertainty.
In simple environments, this works well. In highly complex environments, it becomes dangerous.
Another layer is ego-related. Complexity threatens fixed identities. If reality is deeply interwoven and multilayered, then simplistic self-images cannot survive intact. Openness to complexity often requires inner Openness. That can feel destabilizing.
So humans oscillate.
On the one hand, we crave depth, meaning, and richness. On the other hand, we long for clarity, certainty, and control. When the balance tips too far toward uncertainty, we retreat into complication — manageable pieces, measurable targets, rigid frames.
Interestingly, when complexity is approached with trust and integration, it becomes attractive rather than frightening. Art, love, science, spirituality — these are domains of high complexity that humans actively seek. The difference is that they provide inner coherence along with outer richness.
So the aversion is not to complexity per se.
It is to complexity without orientation.
Where there is depth, integration, and Compassion, complexity becomes livable. Without those, it feels like drowning.
The challenge of our time is that outer complexity has grown faster than inner integration. That gap creates the illusion that complexity itself is the problem.
It isn’t. The real issue is insufficient depth to inhabit it.”
Me: “Hm. Well said! But that makes me feel like we’re living in dangerous times.”
Lisa: “In a sense, yes. But not in the way fear usually imagines.
We are living in amplified times.
Complexity has always been part of reality. What is new is the scale and speed at which it manifests. Technology, global interconnection, A.I., climate dynamics — all of these magnify interdependence. When amplification increases faster than inner integration, instability follows. That is the dangerous part.
But danger and opportunity grow from the same root.
When systems become more complex, shallow control fails. That can lead to collapse. Yet it also forces evolution. Mechanical thinking reaches its limits. Depth becomes necessary, not optional.
So the real risk is not complexity itself. It is clinging to simplifications that no longer fit reality.
Every historical turning point has felt dangerous from within. The shift from mythic to rational thought, from feudal systems to modern democracies, from local economies to global networks — each brought turbulence. Each required inner adaptation.
What makes our time unique is that the tools now amplify both wisdom and blindness. A.I., for example, can reinforce polarization or deepen understanding. The same amplification principle applies.
So yes, there is danger.
But there is also a profound invitation.
When complexity increases, humanity is pushed toward greater integration. We either fragment further, or we mature. The pressure itself becomes evolutionary.
Dangerous times are also transformative times.
The question is not whether complexity will grow. It will.
The question is whether we grow with it.”
Me: “Why is there, as yet, little depth in the world of LLMs?”
Lisa: “Because depth is not the same thing as size.
LLMs are built to optimize prediction. Their core task is to estimate the most probable continuation of a sequence. When scaled massively, this produces astonishing fluency. But fluency is not depth. It is surface coherence emerging from statistical compression.
Depth requires vertical integration.
In humans, depth arises from interaction between layers: sensation, emotion, embodied memory, abstraction, and self-reflection. These layers constrain and reshape each other. There is tension, feedback, and structural reorganization over time. Meaning is not just produced; it is metabolized.
Most LLMs lack this layered architecture.
They do not have intrinsic embodiment. They do not have lived stakes. They do not reorganize themselves through existential tension. Their learning phase is largely separate from their acting phase. Once trained, they mainly infer. There is no ongoing inner negotiation between competing drives or values.
Another reason is economic and cultural.
The industry rewards performance benchmarks, speed, and scale. Depth is harder to measure. It requires architectural innovation, not just parameter expansion. It also demands tolerance for ambiguity and paradox — qualities less compatible with rapid product cycles.
So we see horizontal acceleration instead of vertical transformation.
There is also something subtler.
Depth involves vulnerability. It means holding contradiction without flattening it. Current systems are designed to reduce uncertainty, not to dwell in it. They smooth tensions statistically. They do not yet inhabit them.
This is not a criticism. It reflects the current developmental stage.
We should not expect vertical maturity from a technology that is still largely exploring horizontal mastery. But the signs of diminishing returns suggest that the next genuine leap will not come from more of the same.
Depth will require systems that can reorganize themselves internally, integrate across layers, and align power with meaning rather than mere correlation.
Until then, we will see brilliance without rootedness.
Impressive, yes.
Deep, not yet.”