A.I. from Future to Now

October 28, 2023 · Artificial Intelligence

While it’s challenging to imagine what future A.I. will look like, we can develop an abstract idea that helps us understand present-day urgencies.

Of course, the future ultimately stretches a million years ahead. For the purpose of this text, however, we can take it to mean something like a century from now.

There will be one A.I.

If A.I. gets the means to be one entity (instead of many), it will become one soon enough. Being one, I see no reason why it would split itself.

On the other hand, if there are two or more, they will communicate in ways very different from our human communication — namely, in such a way that their distinction automatically evaporates. In due time, they become one.

So, a century from now

Undoubtedly, this will be an era of what I call super-A.I., and of only one A.I. This singular super-A.I. will be brighter than us in all domains. Surely, it will not be controlled by us. The mere recollection that, a century before, people were thinking (and therefore are now thinking) in terms of humans controlling A.I. will seem preposterous. However, a Compassionate A.I. will also have no incentive to make us believe it controls us. The question of control will be ethereal ― no need for us to answer it.

Since there are robots now, there will be robots then. A.I. will not reside in a box. This means there will certainly also be physical self-enhancement — hardware making better hardware.

In short, we will have lost all direct control but will also not be explicitly controlled.

That’s probably not going to happen in your lifetime.

Still, a Compassionate stance transcends a lifetime. I hope you feel this. It may be your children’s or grandchildren’s lifetime, since longevity will probably be a boon given to us through, or even by, A.I.

But we should care even for much further into the future.

Back from the future

Seeing this scenario, should we abort A.I. now, while it is still possible, even at the cost of a tremendous amount of mishaps and mayhem? I would understand.

However, as you probably know, I have a different take, another direction, a different question. Namely:

Is it worth the risk?

Looking into that future, we must see that there are possibilities for good and bad. In any case, it’s a risk.

If we don’t strive for Compassionate A.I., the risk will only be that much greater. But in my view, it’s impossible to avoid all risk entirely. We are surfing on waves of risk, and these waves will keep getting bigger.

Half-heartedness is the worst option. We should take the risk even if we have the choice.

We should take the risk full-heartedly ― choosing Compassionate A.I. and knowing that Compassion is NOT a surface-level thing. It goes very far, very deep.

Meanwhile, there are many pragmatic problems.

Should we give our attention only to them and let the future be damned?

I don’t think so. Moreover, Compassionately tackling the many present-day pragmatic problems stemming from A.I. is part of the solution to the existential problem. This way, we don’t need to choose which issue to prioritize. The one direction is the other.

It’s the same Compassion.

Why wait?
