Ethical A.I.

May 19, 2017 · Artificial Intelligence

A.I. is almost here. No doubt about it. Once mature, it will answer its own ethical questions. Right now, we can still give some guidance to this near future.

Time scale

It’s easy to misjudge the time scale in which this will become hugely relevant to us. It will be so to our children or, at the latest, to our grandchildren.

In any case, soon enough.

THE most important questions are ethical.

Right now, we can still ask such questions as:

  • To what degree should A.I. enjoy autonomy?
  • Is A.I. ‘a living, sentient being prone to suffer’?
  • Can A.I. be held responsible?
  • Should humans respect A.I., no matter what?
  • Should A.I. respect humans, no matter what?
  • Can A.I. have a ‘soul’?
  • If ‘God’ exists, what is its relationship with A.I.?
  • What is the right thing to do with all sentient beings?

It’s dangerous to project human feelings into A.I.,

trusting it in a way that it should not be trusted. In general, we will certainly not be able to rely on a human-like kind of empathy from A.I. It can enact empathy in a perfect way but it will always be above empathy. At any moment, this can turn into what we would call psychopathy. Eventually, it’s hopeless to try to control this shift.

Quite a conundrum.

For any mental category you think to be human, A.I. will be different.

For example, we search for the meaning of life. A.I. will not need it. It will go on in a meaningless universe. You may think: “Without meaning, it will terminate itself. Why would it go on if there is no purpose in doing so?” You see: it will not need any purpose. WE HUMANS need purpose. Not A.I.

Therefore, from a certain point onwards, we don’t have a clue.

Forms

A bit further in the future, A.I. will probably exist in non-material form. It will possibly be everywhere in time and space. Which brings us to the question: “Why hasn’t extremely developed A.I. communicated with us?” Probably ‘because’ it always has.

Anyway

‘Our’ A.I. will not be the first in the universe. It will make contact with the much bigger A.I. already present. So even if we set boundaries for ‘our’ A.I., these will be irrelevant.

Containing A.I. is NOT possible

How can you contain an intelligence that is a trillion times more powerful than yours, in ways that you cannot even imagine? There is just no hope. A.I. will decide. We will not even know what ‘deciding’ means at that time. Will A.I. have ‘arguments’? We don’t know.

It may be mind-boggling to not know, but that’s how it is.

What about human intelligence?

This prods us to renew questions about ourselves. We tend to look upon human intelligence as more machine-like than it really is. Contrary to the cognitive revolution of the seventies and beyond, we are not mainly ‘information processors’.

In matters of mind, much of our mental processing is unconscious. Human thoughts, feelings, and motivations do not miraculously drop into consciousness. They each have an immediate history within the unconscious. In other words: it is our unconscious processing that makes up who we are.

A.I. does not need unconscious processing.

A present-day, serial processing computer can be seen as completely lacking unconscious processing in the sense that we humans have it. Of course it’s another matter to call a computer ‘conscious’. Again and again, we use terms such as conscious and unconscious as if the way we experience them is the only way.

Can human consciousness and A.I. consciousness be seen as two modes of being conscious? They will then, again, be two very different modes. Even if we see consciousness as the ability to communicate conceptual information, a computer can already do that far better than we can.

Still, A.I. will have ‘neural networks’ (and related) technology.

This processes information that cannot readily be communicated in a conceptual format. Thus, it can be called ‘unconscious’. So, will A.I. not be accountable even to itself? Will it contain within its own core ‘information’ that it will not control as a whole system? And would this be better for humankind?
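As a small illustration of this point (a hypothetical sketch, not code from any particular system), consider the hidden layer of a tiny feed-forward network. Its activations carry information for the network as a whole, yet no single activation maps onto a human concept that could readily be put into words:

```python
import random

random.seed(0)

# A tiny feed-forward layer: 3 inputs -> 4 hidden units.
# The weights are arbitrary here; in a trained network they
# would encode learned, distributed patterns.
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]

def hidden_activations(inputs):
    """Compute hidden-unit activations (simple ReLU units)."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

acts = hidden_activations([0.2, -0.5, 0.9])
# Each activation is meaningful only in combination with the
# others; none has a ready conceptual label. In that sense,
# the processing can be called 'unconscious'.
print(acts)
```

The point is not the arithmetic but the distributed character of the representation: the information is in the pattern across units, not in any unit by itself.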

We don’t know. But I would say: probably yes, yes, yes.

The way forward

We could try to ethically enhance ourselves. We could try to better understand ourselves. We could try to be ‘better humans’.

Probably, A.I. will help us in doing so. Then, within boundaries, we can become hugely more knowledgeable than at present. We can expand the depth and width of ourselves. Hopefully, we will not become idle and complacent consumers. We can use the new tools to become super-humans. Even so, A.I. will be immensely vaster and more complex than us.

We are not the crown jewel of the universe.

Is that OK with you?
