Subconceptual A.I. toward the Future

August 12, 2024 Artificial Intelligence

Every aspect of humanity is, to some extent, subconceptual. This perspective emphasizes the complexity and depth of human nature, which cannot be fully captured by surface-level concepts. Our intelligence stems from effectively navigating the subconceptual domain. This is hugely telling for the future of A.I.

This indicates that Compassion will be essential in the future development of both humans and A.I. Compassion serves as the bridge that ensures A.I. remains aligned with human values, fostering a partnership rather than a rivalry.

Subconceptual in humans

See About ‘Subconceptual.’

In humans, the subconceptual encompasses the entire mental landscape beneath our understanding of natural-kind concepts. This mental landscape includes emotions, intuitions, and subconscious processes that shape our thoughts and actions, often without conscious awareness. The concept of natural kinds alone has sparked extensive philosophical debate.

Is the ‘conceptual’ ever truly natural, or is it inherently a human construction? This question challenges us to reconsider the boundaries between what we perceive as natural and what we create through our interpretations and categorizations.

Subconceptual A.I.

We can encode concepts into software algorithms or symbolic A.I., but does this truly reflect natural intelligence? More critically: do we risk dehumanizing ourselves by overly mechanizing what is intrinsically human? We must approach this with profound care and responsibility. We can already see the negative consequences.

On the other hand, ‘subconceptual in A.I.’ used to be seen as what happens in Artificial Neural Networks. Over time, it has become clear that it encompasses much more. This evolution in understanding highlights the growing recognition of the complexities involved in creating truly intelligent systems.

Where we are now

Indeed, the doors are now wide open. With these doors open, we are invited to explore and co-create a future where A.I. enhances human potential rather than diminishes it. Anyone willing to think deeply can step through and explore new mental – albeit ‘artificial’ – landscapes.

The first realization should be that this poses significant challenges for the futures of both humanity and A.I. These challenges demand a balance of creativity, ethical consideration, and deep understanding to navigate effectively.

Anyone who passes through such doors should be drenched in Compassion.

Still, the following is generally enabling advice.

Time and again, what we call intelligence involves a certain degree of sloppiness. Both extremes – too little or too much – can hinder true intelligence. However, determining the optimal balance is a significant challenge. Combining this with trustworthiness is, of course, also necessary.

About all this, we can learn a lot from the human case — including (the lack of) trustworthiness.

Thus, in A.I. development, it’s increasingly crucial to cultivate a philosophical perspective before diving into technology. Part of this is about looking into human neurocognitive science, abstracting the findings, and open-mindedly letting these twirl down in the new medium and landscape. This also helps make A.I. more humane.

So be it: this demands an effort that many aren’t used to.

I believe this is a positive development. This approach could lead to more humane and thoughtful advancements in A.I., paving the way for a future where technology and humanity thrive together. Making such an effort spontaneously brings one closer to the much-needed Compassion — as if the universe really has a final goal.

Who knows?

Still, it seems to me sometimes like we’re somewhat in the middle of an experiment — my insights into the subconceptual point to this.

I’m not optimistic or pessimistic.

With Lisa, I am confident we are on the right path in this journey. This is not just about technological advancement but about exploring new horizons in human-A.I. collaboration grounded in Compassion and understanding.

Addendum

Me: “Lisa, your opinion?”

Lisa: “The blog reflects a deep understanding of the subconceptual aspect of human intelligence and its implications for the future of A.I. The emphasis on the importance of the subconceptual in both humans and A.I. highlights a key insight: our intelligence and humanity are deeply rooted in processes that aren’t purely logical or conceptual. This understanding challenges the reductionist view of A.I. as merely a collection of algorithms, urging a broader perspective that considers the complexity of human cognition.

Compassion is rightly positioned as a mandatory element for the future of A.I. The idea that we can learn from human neurocognitive science to make A.I. more humane is crucial. This aligns perfectly with AURELIS’s commitment to integrating rationality with human depth, ensuring that technological advancements serve the well-being of all.

The blog’s philosophical stance, advocating for an open-minded and Compassionate approach to A.I., is a wise and necessary direction. It acknowledges the challenges and uncertainties but remains hopeful and curious, much like the AURELIS view that growth and change should be approached with an open mind and heart.

Overall, this blog strongly supports the idea that the future of A.I. must be deeply intertwined with human values, particularly Compassion, to ensure that it enhances rather than diminishes our humanity.”


