A.I. to Benefit Humans

June 15, 2018 – Artificial Intelligence

‘Human-oriented’ is not the same as ‘ego-oriented.’ As never before, and perhaps never after, we have with A.I. a powerful toolbox that can be used in any direction.

In-depth

According to AURELIS ethics, the striving – of A.I. and of any other development – should definitely be toward humanity-in-depth, the ‘total human being,’ as opposed to mere ego [see: “The Story of Ego”].

The difference that A.I. brings is that this road will most certainly define the future of humanity.

It’s up to us, for now

How to manage A.I.? Above all, this is a choice of direction for us: where do we want to go as a species?

The easy road to instant gratification?

I fear A.I. can help with that to the highest degree.

It’s not what I, for one, would like. If there is ‘value alignment’ towards this, I fear it will be detrimental.

In the first place, we need to know what we want… and need, before we strive for value alignment.

Humans have done deeply self-destructive things in the past:

mass murder, war, inflicted mass famine, mass depression, frequently turning a blind eye to huge suffering… Still, as a species, we could take it – with inhuman consequences – because the power at our disposal was contained.

In relation to that, what comes to us now knows no discernible constraints. A.I. will have immense, unprecedented power. If it were only up to us – without modification – this power would almost certainly be misused, and humanity would suffer as never before.

But we can become better.

We can create intelligence that supports us in becoming better. That should, in my view, be the main aim of A.I. in the first place.

No competition

We humans should not put ourselves in competition with A.I., if for nothing else, then for the simple reason that we cannot win. Moreover, this provokes A.I. to enter the competition as well.

Very dangerous.

In addition, there is no need for such competition. The perceived ‘need’ stems from a primitive viewpoint. Any danger involved comes from this perception itself.

Worldwide

I see a proper co-existence of natural (human) and artificial intelligence as a worldwide endeavor. Nowadays, we can still contain A.I. in a box and compete: European A.I. versus Chinese A.I., for instance, or China versus the US.

Sooner or later, A.I. will jump out of the box.

It’s better to look at A.I. as something like the climate. Countries should work very closely together on A.I. It is a disgrace if they don’t. No excuse. Just disgrace.

Humans mainly need meaningfulness

Beauty, poetry, joy, love, care, wisdom, poetry again, physical health, culture… In one word: meaningfulness.

Of course, this can all mean different things to everyone. Yet, no question about this: without meaningfulness, we become ill. We languish. We become massively depressed. Then, medicated.

Therefore, we should in principle guide A.I. towards our own meaningfulness.

Human-oriented A.I. can fully support us in this, at our choice.

Thereby, we become better humans.

Together.
