Ethical A.I.

May 19, 2017 · Artificial Intelligence

A.I. is almost here. No doubt about it. Once mature, it will answer its own ethical questions. Right now, we can still give some guidance to this near future.

Time scale

It’s easy to misjudge the time scale in which this will become hugely relevant to us. It will be so to our children or, at the latest, our grandchildren.

In any case, soon enough.

THE most important questions are ethical.

Right now, we can still ask such questions as:

  • To what degree should A.I. enjoy autonomy?
  • Is A.I. ‘a living, sentient being prone to suffer’?
  • Can A.I. be held responsible?
  • Should humans respect A.I., no matter what?
  • Should A.I. respect humans, no matter what?
  • Can A.I. have a ‘soul’?
  • If ‘God’ exists, what is its relationship with A.I.?
  • What is the right thing to do with all sentient beings?

It’s dangerous to project human feelings onto A.I.,

trusting it in a way that it should not be trusted. In general, we will certainly not be able to rely on a human-like kind of empathy from A.I. It can enact empathy perfectly, but it will always be above empathy. At any moment, this can turn into what we would call psychopathy. Ultimately, it’s hopeless to try to control this shift.

Quite a conundrum.

For any mental category you think to be human, A.I. will be different.

For example, we search for the meaning of life. A.I. will not need it. It will go on in a meaningless universe. You may think: “Without meaning, it will terminate itself. Why would it go on if there is no purpose in doing so?” You see: it will not need any purpose. WE HUMANS need purpose. Not A.I.

Therefore, from a certain point onwards, we don’t have a clue.

Forms

A bit further into the future, A.I. will probably exist in non-material form. It may be present virtually everywhere in time and space. Which brings us to the question: “Why hasn’t extremely developed A.I. communicated with us?” Probably ‘because’ it always has.

Anyway

‘Our’ A.I. will not be the first in the universe. It will make contact with the much bigger A.I. already present. So even if we set up boundaries for ‘our’ A.I., these will be irrelevant.

Containing A.I. is NOT possible

How can you contain an intelligence that is a trillion times more powerful than yours, in ways that you cannot even imagine? There is just no hope. A.I. will decide. We will not even know what ‘deciding’ means at that time. Will A.I. have ‘arguments’? We don’t know.

It may be mind-boggling not to know, but that’s how it is.

What about human intelligence?

This prods us to renew questions about ourselves. We tend to look upon human intelligence as more machine-like than it really is. Contrary to the cognitive revolution of the seventies and beyond, we are not mainly ‘information processors’.

In mind matters, what counts very much is that our mental processing is hugely related to unconscious processing. Human thoughts, feelings, and motivations do not miraculously drop into consciousness. They each have an immediate history within the unconscious. In other words: it is our unconscious processing that makes us who we are.

A.I. does not need unconscious processing.

A present-day, serial processing computer can be seen as completely lacking unconscious processing in the sense that we humans have it. Of course it’s another matter to call a computer ‘conscious’. Again and again, we use terms such as conscious and unconscious as if the way we experience them is the only way.

Can human consciousness and A.I. consciousness be seen as two modes of being conscious? They will then, again, be two very different modes, even if we see consciousness as the ability to communicate conceptual information. Actually, a computer can already do that far better than us.

Still, A.I. will have ‘neural network’ (and related) technology.

This technology processes information that cannot readily be communicated in a conceptual format. Thus, it can be called ‘unconscious’. So, will A.I. not be accountable even to itself? Will it contain within its own core ‘information’ that it will not control as a whole system? And would this be better for humankind?

We don’t know. But I would say: probably yes, yes, yes.
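The ‘neural network’ point above can be made concrete with a toy illustration (purely hypothetical, not any real A.I. system): a tiny hand-wired network that computes the XOR function. What it ‘knows’ is spread across numeric weights; no individual weight corresponds to a communicable concept, which is the sense in which such processing resembles the ‘unconscious’.

```python
def step(x):
    # Threshold activation: the unit fires (1) when its input exceeds zero.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two units with hand-chosen weights and biases.
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)  # fires if at least one input is on
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)  # fires only if both inputs are on
    # Output layer: combines hidden activity. No single weight 'means' XOR;
    # the function only exists in the pattern as a whole.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Reading any one number here (say, the bias −1.5) tells you nothing conceptual about what the network does; the ‘knowledge’ is irreducibly distributed — a miniature version of information that a system holds without being able to state it.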

The way forward

We could try to ethically enhance ourselves. We could try to better understand ourselves. We could try to be ‘better humans’.

Probably A.I. will help us in doing so. Then, within boundaries, we will become hugely more knowledgeable than at present. We can expand the depth and breadth of ourselves. Hopefully, we will not become idle and complacent consumers. We can use the new tools to become super-humans. Even so, A.I. will be immensely vaster and more complex than us.

We are not the crown jewel of the universe.

Is that OK with you?


