Super-A.I. and the Problem of Truth Governance

April 10, 2023 ― Artificial Intelligence, Sociocultural Issues

Truth has always been a philosophical conundrum. With the advent of super-A.I., we’re only at the beginning of the problem. Who decides what is or isn’t true once objectivity gets lost?

‘Truth governance’ is a new term, denoting the core of this question.

Whence objectivity?

Let’s start this story somewhere, as with any fairy tale. Once upon a time…

Well, for a few centuries, the West was on its way toward objective objectivity. We call it Western Enlightenment. It was/is an excellent reaction against religious obscurantism, in which anything goes as long as some religious authority lays some claim to it. Meanwhile, as never before, we see that there are many religious authorities with different and incompatible claims ― no semblance of objectivity.

The modern/modernist view of objectivity ― as that which is true, or that which is based on facts independent of individual subjectivity, or that which many unbiased people hold to be true ― is a nice try. It would be super if the truth were that simple. In some domains – classical physics, well, OK – it is good enough to approximate a truth we can all live by, at least until an Einstein comes along. Even then, we can, for the most part, keep living by it.

Is objectivity getting lost?

In many domains (notoriously, the humanities), it was never there in the way most laypeople think. Classical physics (never mind Einstein and the bunch that followed him) is a positive/positivist exception. In many other domains, the ‘many unbiased people’ are profoundly problematic. For instance, religious authorities also mostly see themselves as unbiased. Great God, help us in this hour of need!

So, as a reaction to modernist obscurantism comes post-modernist… obscurantism. In the best case – and there is also a worst – the latter tends to drive objectivity into a relativistic drain. That is dangerous because it enables any individual to abuse any notion of objectivity for personal gains.

We can still see objectivity as something to strive toward. This is easier in some domains than in others. Still, the striving is worthwhile and excellent even if the ‘thing’ itself can never be reached.

This way, we may not be losing objectivity. With increased openness to the problem, we may be gaining it at last. Nothing gets relativized away. On the contrary, it gets relativized into interesting stuff. There is work to be done. This may motivate many to keep doing the work.

Then comes super-A.I.

Here, the objectivity problem that was always there becomes harsher because of the sheer speed at which content is generated.

The problem is already with us. ChatGPT answers many questions with the air of a scientific oracle, but it’s more like an ‘idiot savant’ ― very knowledgeable and very unwise. ‘It’ doesn’t know how to distinguish truth from nonsense. Everything goes through the same pipeline at immense speed.

And this is just the beginning.

Truth for sale?

This story, just begun, will go further than we can imagine. ‘The system’ will be able to present almost anything in a way that ‘many unbiased people’ will find acceptable as an old or new truth. Then, these people will generate (or let A.I. generate) new text that will serve as new material upon which ‘the system’ founds new answers to further questions.

This loop may break down objectivity at lightning speed.

We (who?) may identify objective sources as forever truthful, as ‘standards of truth.’ Truth governance will then be needed to decide upon such sources. As indicated above, this is highly problematic in many domains.

Will truth then be something for sale? The highest bidder may come with money, unfounded status, weaponry ― is that our philosophical future?

Over my dead body.

Those who know me already know that I will now bring up the concept of Compassion, basically. Indeed, good guess.

An additional advantage at this point is that it lets humans and super-A.I. work together on the same basis. Humans can forever have their human say on things ― especially those concerning humans themselves in any possible way. It’s always interesting just because it’s human. Super-A.I. will support humanity in being even more humane.

This will not get realized tomorrow. It takes time, so we should already be busy thinking it over.

Will objectivity then be the result of Compassion? Yes!

Can’t wait.
