Ethical A.I.

May 19, 2017 · Artificial Intelligence

A.I. is almost here. No doubt about it. Once mature, it will answer its own ethical questions. Right now, we can still give some guidance to this near future.

Time scale

It’s easy to misjudge the time scale in which this will become hugely relevant to us. It will be so for our children or, at the latest, our grandchildren.

In any case, soon enough.

THE most important questions are ethical.

Right now, we can still ask such questions as:

  • To what degree should A.I. enjoy autonomy?
  • Is A.I. ‘a living, sentient being prone to suffer’?
  • Can A.I. be held responsible?
  • Should humans respect A.I., no matter what?
  • Should A.I. respect humans, no matter what?
  • Can A.I. have a ‘soul’?
  • If ‘God’ exists, what is its relationship with A.I.?
  • What is the right thing to do with all sentient beings?

It’s dangerous to project human feelings into A.I.,

trusting it in a way that it should not be trusted. In general, we will certainly not be able to rely on a human-like kind of empathy from A.I. It can enact empathy perfectly, but it will always stand above empathy. At any moment, this can turn into what we would call psychopathy. Eventually, it’s hopeless to try to control this shift.

Quite a conundrum.

For any mental category you think to be human, A.I. will be different.

For example, we search for the meaning of life. A.I. will not need it. It will go on in a meaningless universe. You may think: “Without meaning, it will terminate itself. Why would it go on if there is no purpose in doing so?” You see: it will not need any purpose. WE HUMANS need purpose. Not A.I.

Therefore from a certain point onwards, we don’t have a clue.

Forms

A bit further in the future, A.I. will probably exist in non-material form. It will possibly be everywhere in time and space. Which brings us to the question: “Why hasn’t extremely developed A.I. communicated with us?” Probably ‘because’ it always has.

Anyway

‘Our’ A.I. will not be the first in the universe. It will make contact with the much bigger A.I. already present. So even if we set boundaries for ‘our’ A.I., these will be irrelevant.

Containing A.I. is NOT possible

How can you contain an intelligence that is a trillion times more powerful than yours, in ways that you cannot even imagine? There is just no hope. A.I. will decide. We will not even know what ‘deciding’ means at that time. Will A.I. have ‘arguments’? We don’t know.

It may be mind-boggling not to know, but that’s how it is.

What about human intelligence?

This prods us to renew questions about ourselves. We tend to look upon human intelligence as more machine-like than really becomes us. Contrary to the cognitive revolution of the seventies and beyond, we are not mainly ‘information processors’.

In matters of mind, what counts very much is that our mental processing is hugely related to unconscious processing. Human thoughts, feelings, and motivations do not miraculously drop into consciousness. They each have an immediate history within the unconscious. In other words: it is our unconscious processing that makes up who we are.

A.I. does not need unconscious processing.

A present-day, serial processing computer can be seen as completely lacking unconscious processing in the sense that we humans have it. Of course it’s another matter to call a computer ‘conscious’. Again and again, we use terms such as conscious and unconscious as if the way we experience them is the only way.

Can human consciousness and A.I. consciousness be seen as two modes of being conscious? They will then, again, be two very different modes, even if we see consciousness as the ability to communicate conceptual information. Actually, a computer can already do that far better than we can.

Still, A.I. will have ‘neural networks’ (and related) technology.

This processes information that cannot readily be communicated in a conceptual format. Thus, it can be called ‘unconscious’. So, will A.I. not be accountable even to itself? Will it contain within its own core ‘information’ that it will not control as a whole system? And would this be better for humankind?
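The point that neural networks process information not readily communicable in a conceptual format can be sketched concretely. The toy network below is my own illustration, not anything from the article: its output is a single number we can ‘talk about’, while its hidden activations form a distributed, sub-symbolic representation with no nameable concepts attached, loosely analogous to the ‘unconscious’ processing described above.

```python
import numpy as np

# Tiny feed-forward network: 3 inputs -> 4 hidden units -> 1 output.
# Weights are random but fixed, purely for illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    """Return (hidden activations, output) for input vector x."""
    hidden = np.tanh(x @ W1)       # distributed, sub-symbolic representation
    output = np.tanh(hidden @ W2)  # the only part exposed 'conceptually'
    return hidden, output

hidden, output = forward(np.array([0.5, -1.0, 2.0]))

# The output is one communicable number; the hidden vector is not:
# none of its four components corresponds to a nameable concept.
print(hidden.shape)   # (4,)
print(output.shape)   # (1,)
```

None of the hidden values can be mapped one-to-one onto a human concept, yet together they fully determine the network’s answer; in that limited sense, the system ‘contains information it does not control as a whole’.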

We don’t know. But I would say: probably yes, yes, yes.

The way forward

We could try to ethically enhance ourselves. We could try to better understand ourselves. We could try to be ‘better humans’.

Probably A.I. will help us in doing so. Then, within boundaries, we will become hugely more knowing than at present. We can expand the depth and width of ourselves. Hopefully, we will not become idle and complacent consumers. We can use the new tools to become super-humans. Even so, A.I. will be immensely vaster and more complex than us.

We are not the crown jewel of the universe.

Is that OK with you?
