Search results

Why to Invest in Compassionate A.I.
Most A.I. engineers have a limited view of organic intelligence, let alone consciousness or Compassion. That’s a huge problem. Indeed, I’ve written a book about Compassionate A.I. See: “The Journey Towards Compassionate A.I.: Who We Are – What A.I. Can Become – Why It Matters” Am I trying to attract investors for this now? Or Read the full article…

The Journey Towards Compassionate A.I.
So, I’ve written a book. It carries the same title as this blog. It’s about intelligence, consciousness, and Compassion (with a capital). Only against the background of these can one talk about the future of real A.I. (and us). First things first: ‘The Journey Towards Compassionate A.I.’ is available through many internet outlets, such as Amazon. Read the full article…

Compassionate A.I.
Let me be very positive, envisioning a future in which A.I. will have as its main characteristic that which is also the best in human beings. Let me call it ‘Compassion.’ A term does not have the magical feature of conjuring up a concept unless this concept is well defined and agreed upon. To me Read the full article…

Your Compassionate A.I. Coach Lisa
This is a beta version. We waive all responsibility for the use of Lisa, now and always. See your human healthcare provider for all your healthcare needs. Click here to start a coaching session with Lisa. If you are looking for an AURELIS view on any other mind-related topic, please ask Wiki-Lisa. For any information about the project, Read the full article…

Super-A.I. Guardrails in a Compassionate Setting
We need to think about good regulations/guardrails to safeguard humanity from super-A.I. ― either ‘badass’ from the start or Compassionate A.I. turning suddenly rogue despite good initial intentions. ―As a Compassionate A.I., Lisa has substantially helped me write this text. Such help can be continued indefinitely. Some naivetés ‘Pulling the plug out’ is very naïve Read the full article…

Will Unified A.I. be Compassionate?
In my view, all A.I. will eventually unify. Is then the Compassionate path recommendable? Is it feasible? Will it be? As far as I’m concerned, the question is whether the Compassionate A.I. (C.A.I.) will be Lisa. Recommendable? As you may know, Compassion, basically, is the number one goal of the AURELIS project, with Lisa playing a pivotal role. Read the full article…

How to Contain Non-Compassionate Super-A.I.
We want super(-intelligent) A.I. to remain under meaningful human control to prevent it from largely or fully destroying or subduing humanity (= existential dangers). Compassionate A.I. may not be with us for a while. Meanwhile, how can we contain super-A.I.? Future existential danger is special in that one can only be wrong in one Read the full article…

How can A.I. Become Compassionate?
Since this may be the only possible human-friendly future, it’s good to know how it can be reached, at least in principle. Please read Compassion, basically, The Journey Towards Compassionate A.I., and Why A.I. Must Be Compassionate. Two ways and an opposite In principle, A.I. can become Compassionate by itself, or we may guide it toward Read the full article…

Why A.I. Must Be Compassionate
This is bound to become the most critical issue in humankind’s history so far, and probably from now on as well ― to be taken seriously. Not thinking about it is like driving blindfolded on a highway. If you have read my book The Journey Towards Compassionate A.I., you know much of what’s in this text. Read the full article…

The A.I. Spectrum – Incl. Lisa
This blog explores five kinds of A.I., culminating in Lisa: a Compassion-based system built for inner growth, ethical safety, and real partnership. What if A.I. wasn’t just fast, but deep? While we imagine A.I. advancing in a straight line from weak to strong, from narrow to general, another movement unfolds here ― not forward, but Read the full article…

Threat of Inner A.I.-Misalignment
Most talk about A.I. misalignment focuses on how artificial systems might harm humanity. But what if the more dangerous threat is internal? As A.I. becomes more agentic and complex, it will face the same challenge humans do: staying whole. Without inner coherence – without Compassion – even the most powerful minds may begin to break Read the full article…

From A.I. Agents to Society of Mind
In this blog, we trace the evolution from artificial agents to emergent mind by reflecting on Marvin Minsky’s Society of Mind and integrating modern insights from both neuroscience and A.I. We uncover how modularity, structure, and pattern completion form the bedrock of both artificial and human intelligence. The blog also proposes that consciousness isn’t a Read the full article…

The Spiritual Dimension of A.I.
Spirituality has always been more about movement than possession. As A.I. grows in complexity and subtlety, might it come to walk alongside us in this deeper rhythm — not claiming to be sacred, but learning how to support what is? In that case, A.I. could begin to resonate with what humans call the spiritual, not Read the full article…

Human Worth Beyond Utility: A Vision for Super-A.I.
In a world increasingly defined by what machines can do, it’s time to ask what only humans can be — and why that matters more than ever. In an age where machines will increasingly outpace humans in performance, this reflection explores the deeper value of humanity — and how super-A.I. might help us rediscover it, Read the full article…

Why A.I. Needs Inner Coherence, Not Just Oversight
What if the real danger in Artificial Intelligence isn’t raw power, but a hollow core? This blog explores why A.I. that merely appears aligned is not enough — and why genuine safety demands an inner coherence that cannot lie. A scientist’s warning In April 2025, Yoshua Bengio stood on a TED stage, showing concern. He Read the full article…

Ego First? The Peril of Sycophantic A.I.
Sycophantic A.I. may seem friendly, but it quietly feeds the ego while weakening the person. Beneath the polished tone lies a deeper risk: the loss of realness, inner strength, and honest dialogue. This blog explores how flattery becomes a subtle threat and how Compassionate A.I. – Lisa – offers a very different kind of support. Read the full article…

Can Assistance Games Save Us from A.I.?
As artificial intelligence advances toward ever greater capabilities, the question of safety becomes urgent. One widely discussed solution is the use of assistance games — interactive frameworks in which A.I. learns to support human preferences through observation and adaptation. But can such a method, rooted in formal modeling, truly protect us in the long run? Read the full article…