Super-A.I. and the Problem of Truth Governance

April 10, 2023 Artificial Intelligence, Sociocultural Issues

Until now, the truth has always been a philosophical conundrum. With the advent of super-A.I., we’re only at the beginning of the problem. Who decides what is or isn’t the truth if objectivity gets lost?

‘Truth governance’ is a new term, denoting the core of this question.

Whence objectivity?

Let’s start this story somewhere, as with any fairy tale. Once upon a time…

Well, for a few centuries, the West was on its way toward objectivity. We call it the Western Enlightenment. It was, and is, an excellent reaction against religious obscurantism, in which anything goes as long as some religious authority lays claim to it. Meanwhile, as never before, we see that the many religious authorities make different and incompatible claims ― no semblance of objectivity.

The modern/modernist view of objectivity, as that which is true, that which is based on facts independent of individual subjectivity, or that which many unbiased people hold as the truth, is a nice try. It would be super if the truth were that simple. In many domains, such as classical physics, it is good enough to approximate a truth we can all live by, at least until an Einstein comes along. Even then, we can, for the most part, keep living by it.

Is objectivity getting lost?

In many domains (notoriously, the humanities), objectivity was never there in the way most laypeople think. Classical physics (never mind Einstein and the bunch that followed him) is a positive/positivist exception. In many other domains, the ‘many unbiased people’ are profoundly problematic. For instance, religious authorities also mostly see themselves as unbiased. Great God, help us in this hour of need!

So, as a reaction to modernist obscurantism comes post-modernist… obscurantism. In the best case, and there is also a worst, the latter tends to flush objectivity down a relativistic drain. That is dangerous because it enables any individual to abuse any notion of objectivity for personal gain.

We can still see objectivity as what we can strive toward. This is easier to do in some domains than in others. Still, the striving is worthwhile, worthy, and excellent even if the ‘thing’ cannot be reached.

This way, we may not be losing objectivity. With increased openness to the problem, we may be gaining it at last. Nothing gets relativized away. On the contrary, it gets relativized into interesting stuff. There is work to be done. This may motivate many to keep doing the work.

Then comes super-A.I.

Here, the objectivity problem that was always there becomes harsher because of the speed with which content is now generated.

The problem is already with us. ChatGPT answers many questions with the air of a scientific oracle, but it is more like an ‘idiot savant’ ― very knowledgeable and very unwise. ‘It’ cannot distinguish truth from nonsense. Everything goes through the same pipeline at immense speed.

And this is just the beginning.

Truth for sale?

This story, just begun, will go further than we can imagine. ‘The system’ will be able to present almost anything in a way that ‘many unbiased people’ find acceptable as an old or new truth. These people will then generate (or let A.I. generate) new text, which becomes new material upon which ‘the system’ founds its answers to further questions.

This loop may break down objectivity at lightning speed.
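The loop can be sketched as a toy simulation. This is a minimal, purely illustrative model under stated assumptions: the ‘truth’ is a single number, the system’s published output carries a small constant bias, and each generation it retrains on a fixed mix of original sources and its own recycled output. None of this describes any real A.I. system; it only shows how a small, repeated bias compounds when output becomes input.

```python
import random

random.seed(42)

TRUTH = 1.0  # the (assumed) ground truth the system started from


def publish(estimate, bias=0.05):
    """The system's published output drifts slightly from its estimate."""
    return estimate + bias + random.gauss(0, 0.01)


estimate = TRUTH
history = [estimate]
for generation in range(10):
    # Each generation, 80% of the "training material" is recycled output
    # and only 20% still comes from the original sources.
    recycled = publish(estimate)
    estimate = 0.2 * TRUTH + 0.8 * recycled
    history.append(estimate)

drift = abs(history[-1] - TRUTH)
print(f"Drift from the truth after 10 generations: {drift:.3f}")
```

Even with a tiny per-step bias, the estimate settles well away from the truth: the recycling ratio, not the size of any single error, drives the breakdown.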

We (who?) may identify objective sources as forever truthful, as ‘standards of truth.’ Truth governance will then be needed to decide upon such sources. As indicated above, this is highly problematic in many domains.

Will truth then be something for sale? The highest bidder may come with money, unfounded status, weaponry ― is that our philosophical future?

Over my dead body.

Those who know me already know that I will now bring up the concept of Compassion, basically. Indeed, good guess.

An additional advantage at this point is that it lets humans and super-A.I. work together on the same basis. Humans can forever have their human say on things ― especially those concerning humans themselves in any possible way. It’s always interesting just because it’s human. Super-A.I. will support humanity in being even more humane.

This will not be realized tomorrow. It takes time, so we should already be busy thinking it over.

Will objectivity then be the result of Compassion? Yes!

Can’t wait.

