Super-A.I. and the Problem of Truth Governance

April 10, 2023 | Artificial Intelligence, Sociocultural Issues

Truth has always been a philosophical conundrum. With the advent of super-A.I., we’re only at the beginning of the problem: who decides what is or isn’t true if objectivity gets lost?

‘Truth governance’ is a new term, denoting the core of this question.

Whence objectivity?

Let’s start this story somewhere, as with any fairy tale. Once upon a time…

Well, for a few centuries, the West was on its way toward objective objectivity. We call it Western Enlightenment. It was/is an excellent reaction against religious obscurantism, in which anything goes as long as some religious authority lays some claim to it. Meanwhile, as never before, we see that there are many religious authorities with different and incompatible claims ― no semblance of objectivity.

The modern/modernist view of objectivity – as that which is true, or that which is based on facts independent of individual subjectivity, or that which many unbiased people hold to be true – is a nice try. It would be super if the truth were that simple. In many domains – classical physics, well, OK – it is good enough to approximate a truth we can all live by, at least until an Einstein comes along. Even then, we can, for the most part, keep living by it.

Is objectivity getting lost?

In many domains (notoriously, the humanities), it was never there in the way most laypeople think. Classical physics (never mind Einstein and the bunch that followed him) is a positive/positivist exception. In many other domains, the ‘many unbiased people’ are profoundly problematic. For instance, religious authorities also mostly see themselves as unbiased. Great God, help us in this hour of need!

So, as a reaction to modernist obscurantism comes post-modernist… obscurantism. In the best case – and there is also a worst – the latter tends to drive objectivity down a relativistic drain. That is dangerous because it enables any individual to abuse any notion of objectivity for personal gain.

We can still see objectivity as something to strive toward. This is easier in some domains than in others. Still, the striving is worthwhile, even excellent, even if the ‘thing’ itself can never quite be reached.

This way, we may not be losing objectivity. With increased openness to the problem, we may be gaining it at last. Nothing gets relativized away. On the contrary, it gets relativized into interesting stuff. There is work to be done. This may motivate many to keep doing the work.

Then comes super-A.I.

Here, the objectivity problem that was always with us becomes harsher because of the sheer speed at which content is now generated.

The problem is already with us. ChatGPT answers many questions with the air of a scientific oracle, but it’s more like an ‘idiot savant’ ― very knowledgeable and very unwise. ‘It’ doesn’t know how to distinguish truth from nonsense. Everything goes through the same pipeline at immense speed.

And this is just the beginning.

Truth for sale?

This story, just begun, will go further than we can imagine. ‘The system’ will be able to present almost anything in a way that ‘many unbiased people’ will find acceptable as an old or new truth. Then, these people will generate (or let A.I. generate) new text that will serve as new material upon which ‘the system’ founds new answers to further questions.

This loop may break down objectivity at lightning speed.
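The loop just described – model output becoming the training material for the next round of answers – can be caricatured numerically. The sketch below is a toy model of my own invention (the function name, parameters, and numbers are illustrative assumptions, not anyone’s actual training setup): it tracks how far a system’s ‘answer’ drifts from a fixed ground truth when each generation learns partly from its own previous, slightly erroneous output.

```python
import random

def simulate_feedback_loop(truth=1.0, rounds=20, self_share=0.8,
                           noise=0.05, seed=42):
    """Toy model of the self-referential loop: each generation 'trains' on
    a corpus mixing fresh ground truth with the previous generation's own
    output, which carries a small random error. Returns the absolute drift
    from the truth after each round."""
    rng = random.Random(seed)
    estimate = truth  # generation 0 is trained on pristine data
    drift = []
    for _ in range(rounds):
        # The model reproduces its current estimate with a small error.
        generated = estimate + rng.gauss(0, noise)
        # The next corpus: (1 - self_share) ground truth, self_share self-output.
        estimate = (1 - self_share) * truth + self_share * generated
        drift.append(abs(estimate - truth))
    return drift

drift = simulate_feedback_loop()
print(f"drift after 1 round:   {drift[0]:.4f}")
print(f"drift after 20 rounds: {drift[-1]:.4f}")
```

With `self_share=0` (every round retrained on ground truth), the drift stays at zero; with `self_share=1` (pure self-feeding), it becomes a random walk with no anchor at all. The point is qualitative, not quantitative: the more the loop feeds on itself, the less any round is pulled back toward what was once objective.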

We (who?) may identify objective sources as forever truthful, as ‘standards of truth.’ Truth governance will then be needed to decide upon such sources. As indicated above, this is highly problematic in many domains.

Will truth then be something for sale? The highest bidder may come with money, unfounded status, weaponry ― is that our philosophical future?

Over my dead body.

Those who know me already know that I will now bring up the concept of Compassion, basically. Indeed, good guess.

An additional advantage is that Compassion lets humans and super-A.I. work together on the same basis. Humans can forever have their human say on things ― especially those concerning humans themselves in any possible way. It’s always interesting just because it’s human. Super-A.I. will support humanity in being even more humane.

This will not be realized tomorrow. It takes time, so we should already be busy thinking it over.

Will objectivity then be the result of Compassion? Yes!

Can’t wait.


