Super-A.I. and the Problem of Truth Governance

April 10, 2023 Artificial Intelligence, Sociocultural Issues

Until now, the truth has always been a philosophical conundrum. With the advent of super-A.I., we’re only at the beginning of the problem. Who decides what is or isn’t the truth if objectivity gets lost?

‘Truth governance’ is a new term, denoting the core of this question.

Whence objectivity?

Let’s start this story somewhere, as with any fairy tale. Once upon a time…

Well, for a few centuries, the West was on its way toward objective objectivity. We call it Western Enlightenment. It was/is an excellent reaction against religious obscurantism, in which anything goes as long as some religious authority lays some claim to it. Meanwhile, as never before, we see that there are many religious authorities with different and incompatible claims ― no semblance of objectivity.

The modern/modernist view of objectivity – as that which is true, that which is based on facts independent of individual subjectivity, or that which many unbiased people hold as the truth – is a nice try. It would be super if the truth were that simple. In some domains – such as classical physics, well, OK – this is good enough to approximate a truth we can all live by, at least until an Einstein comes along. Even then, we can, for the most part, keep living by it.

Is objectivity getting lost?

In many domains (notoriously, the humanities), it was never there in the way most laypeople thought. Classical physics (never mind Einstein and the bunch that followed him) is a positive/positivist exception. In many other domains, the ‘many unbiased people’ are profoundly problematic. For instance, religious authorities mostly see themselves as unbiased, too. Great God, help us in this hour of need!

So, as a reaction to modernist obscurantism comes post-modernist… obscurantism. In the best case – and there is also a worst – the latter tends to flush objectivity down a relativistic drain. That is dangerous because it enables any individual to abuse any notion of objectivity for personal gain.

We can still see objectivity as what we can strive toward. This is easier to do in some domains than in others. Still, the striving is worthwhile, worthy, and excellent even if the ‘thing’ cannot be reached.

This way, we may not be losing objectivity. With increased openness to the problem, we may be gaining it at last. Nothing gets relativized away. On the contrary, it gets relativized into interesting stuff. There is work to be done. This may motivate many to keep doing the work.

Then comes super-A.I.

Here, the objectivity problem that was always there becomes harsher because of the sheer speed at which content is generated.

The problem is already with us. ChatGPT answers many questions with the air of a scientific oracle, but it is more like an ‘idiot savant’ ― very knowledgeable and very unwise. ‘It’ doesn’t know how to distinguish truth from nonsense. Everything goes through the same pipeline at immense speed.

And this is just the beginning.

Truth for sale?

This story, just begun, will go further than we can imagine. ‘The system’ will be able to present almost anything in a way that ‘many unbiased people’ will find acceptable as an old or new truth. Then, these people will generate (or let A.I. generate) new text that will serve as new material upon which ‘the system’ founds new answers to further questions.

This loop may break down objectivity at lightning speed.
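The loop described above – generated text feeding back in as source material – can be sketched in a toy simulation. Everything here is an illustrative assumption, not a measurement: each document gets a ‘groundedness’ score, each round the system generates new documents that lose a fraction of their source’s groundedness (10% is an arbitrary choice), and the generated documents join the corpus that the next round draws from.

```python
import random

def feedback_loop(rounds=10, corpus_size=1000, loss=0.9, seed=42):
    """Toy model of the text-feedback loop: generated text becomes
    source material for the next generation. The 'groundedness'
    scores and the 10% loss per generation are purely illustrative
    assumptions, not data about any real system.
    """
    rng = random.Random(seed)
    corpus = [1.0] * corpus_size          # start: fully grounded sources
    history = []
    for _ in range(rounds):
        # Each new document inherits a sampled source's score, degraded.
        generated = [rng.choice(corpus) * loss for _ in range(corpus_size)]
        corpus += generated                # generated text joins the corpus
        history.append(sum(corpus) / len(corpus))
    return history

avg = feedback_loop()
# The average groundedness of the corpus drifts downward round after round,
# even though the original, fully grounded sources never disappear.
```

The point of the sketch is only the direction of drift: as long as generation loses even a little grounding per pass and its output is recycled as input, the corpus average can only fall.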

We (who?) may identify objective sources as forever truthful, as ‘standards of truth.’ Truth governance will then be needed to decide upon such sources. As indicated above, this is highly problematic in many domains.

Will truth then be something for sale? The highest bidder may come with money, unfounded status, or weaponry ― is that our philosophical future?

Over my dead body.

Those who know me already know that I will now bring up the concept of Compassion, basically. Indeed, good guess.

An additional advantage at this point is that Compassion lets humans and super-A.I. work together on the same basis. Humans can forever have their human say on things ― especially those concerning humans themselves in any possible way. It’s always interesting just because it’s human. Super-A.I. will support humanity in being even more humane.

This will not be realized tomorrow. It takes time, so we should already be busy thinking it over.

Will objectivity then be the result of Compassion? Yes!

Can’t wait.
