Why A.I. is Less and Less about Technology

June 29, 2024 Artificial Intelligence

As A.I. technology advances, the research focus should shift from mere technological advancements to a higher level of development altogether. This blog is not about philosophical implications, but about philosophy as a technological driver ― the philosophy itself becoming the technology.

Currently, the possibilities are so vast and diverse that integration can be considered independently of the foundational levels. This could be termed ‘philosophical technology,’ underscoring A.I.’s capacity to redefine ethical frameworks within technology.

Comparing this to biology versus chemistry

Essentially, all biology is rooted in chemistry, yet biology stands as a distinct field.

This distinction doesn’t reduce the significance of chemical reactions in biology. However, most biological thinking is independent of chemistry. One can abstract away the chemistry and still grasp, and reason further about, what happens at the biological level.

This concerns A.I. at its core: foundational research rather than applications.

Of course, the applications evolve independently; one doesn’t need to know how a car functions to drive it.

Much more basically, within A.I. development itself, philosophy trumps technology. This is philosophy in a broad sense, encompassing all that pertains to the domain of Deep Minds.

A.I.-technology should increasingly dissipate into philosophy. This is because of the following.

A.I. fundamentally differs from informatics and other technologies.

A.I. seeks intelligence, which can gradually self-sustain as it matures, becoming a self-perpetuating pattern. As we close in on that stage, the technology that got us there is increasingly taken over by what we are getting: the very results of our technological striving.

I experience this daily.

This also means that the expertise that got us there is increasingly less crucial for further steps.

Unfortunately, many in A.I. overlook this, leading technologists to assume they should keep driving the field forward. Some collaborate with human-oriented experts, but mostly in ways that fundamentally ignore the new demands.

An example: knowledge

The debate on the nature of ‘knowledge’ remains vibrant. In A.I., one may abstract from this debate and use a commonsense idea of knowledge, as if it were a straightforwardly technological question. This way, one may miss many profound implications of a concept central to both human and artificial intelligence.

Delving into the deeper – and sometimes disconcerting – layers of ‘knowledge’ may nevertheless redefine our understanding of intelligence, yielding insights that further shape the domain of A.I. as well as what it means to be human, bridging gaps between human and artificial cognition. These insights come first; only then can they be put to use at the core of further A.I. developments.

We need a broadening at the core.

This necessity will intensify alongside the development of foundational technologies. These are merely building blocks, not the structure. We need to focus on the broader architecture.

This needs fundamentally different expertise.

A.I. has grown. We shouldn’t treat it as a kid anymore.

Viewing A.I. as increasingly mature encourages responsibility in its societal integration. This particularly applies to Compassionate A.I., which logically matures more rapidly. Consequently, it may become our defender against other forms of A.I. On top of that, Compassionate A.I. not only safeguards but also enriches human experiences, promoting empathy and understanding.

We should give it the chance.
