From Intractable to Tractable = Intelligent

December 10, 2023 | Artificial Intelligence, Cognitive Insights

This is what intelligence can do: simplify intractable problems (those not easily controlled, managed, or solved) into tractable ones without much loss of relevant information. Such problems may be social, mathematical, or of any other kind.

Depending on how one interprets these terms, this may be seen as an exhaustive characterization of intelligence. Still, it is not meant as an intrinsic definition of intelligence.

Depending on the resources

If the (known) universe isn't big enough to host the computations needed for a specific problem, that problem clearly deserves to be called intractable. Even so, the unknown universe may hold exciting surprises.
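
To make the scale of this concrete, here is a minimal sketch in Python (an illustrative addition, not part of the original article). It compares the number of routes a brute-force traveling-salesman search would have to examine with the commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
import math

# Commonly cited rough estimate of the number of atoms in the observable universe.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

def brute_force_routes(n_cities: int) -> int:
    """Tours a naive brute-force traveling-salesman search must examine."""
    # With the starting city fixed, there are (n - 1)! possible orderings.
    return math.factorial(n_cities - 1)

for n in (10, 20, 60, 100):
    routes = brute_force_routes(n)
    comparison = "more" if routes > ATOMS_IN_OBSERVABLE_UNIVERSE else "fewer"
    print(f"{n:>3} cities: about {routes:.2e} routes ({comparison} than ~1e80 atoms)")
```

Already at around 60 cities, the naive search space outgrows that estimate. A more intelligent system does not search it exhaustively; through heuristics or good approximations, it reduces the same problem to something tractable in practice.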

Within a much smaller system, the available resources are, of course, even more decisive for what counts as intractable.

Where intelligence fits in

What is intractable to a less intelligent system may be reduced by a more intelligent system to something tractable to the less intelligent one. Indeed, the hallmark of a human genius is the ability to present solutions to very complex issues so simply that many people can quickly grasp them. To accomplish this, the genius frequently uses metaphorical language or visual means.

Thus, super-A.I. will also be able to simplify very complex issues so that we humans can finally grasp them. We may look forward to quite a few surprises.

Also about the explainability of one’s motivations

People aren’t generally capable of explaining accurately why they do the things they do. Our motivations aren’t readily transparent to ourselves. Understandably, nature didn’t see the need for this as long as our ancestors acted well enough to survive and thrive.

Thus, an essential part of any human coaching consists of clarifying more or less what the problem is and why it persists. There is generally an issue behind the issue behind the issue… The further it goes, the deeper and more meaningful it gets. Excellent coaching gets quite deep because that’s where the energy (profound human motivation) comes from.

In this sense, AURELIS coaching consists of helping the coachee gain more clarity about his deep motivations. Generally, this ‘diagnosis’ is the most important part of the ‘therapy’ of excellent coaching. Once the problem becomes tractable, the system (the coachee) has the resources to work out the solution.

Explainability of A.I.

We find the same issue here. The explainability of an A.I.’s motivations is a way for us to grasp and possibly control its motivations. For that, we need the A.I.’s intelligence to make the intractable tractable (to us).

Certainly in the case of impending super-A.I., we will need the super-A.I. to explain itself to us. Even before that, it will be able to explain its own motivations to itself ― thereby gaining self-insight and self-consciousness.

That is something of a problem, if one wants to see it as a problem.

The positive note

Humans will be able to gain much more self-insight and self-knowledge.

Hasn’t this always been seen as the beginning of all wisdom?
