Subconceptual A.I. toward the Future

August 12, 2024 – Artificial Intelligence

Every aspect of humanity is, to some extent, subconceptual. This perspective emphasizes the complexity and depth of human nature, which cannot be fully captured by surface-level concepts. Our intelligence stems from effectively navigating the subconceptual domain. This is hugely telling for the future of A.I.

This indicates that Compassion will be essential in the future development of both humans and A.I. Compassion serves as the bridge that ensures A.I. remains aligned with human values, fostering a partnership rather than a rivalry.

Subconceptual in humans

See About ‘Subconceptual.’

In humans, the subconceptual encompasses the entire mental landscape beneath our understanding of natural-kind concepts. This landscape includes emotions, intuitions, and subconscious processes that shape our thoughts and actions, often without conscious awareness. The concept of natural kinds alone has sparked extensive philosophical debate.

Is the ‘conceptual’ ever truly natural, or is it inherently a human construction? This question challenges us to reconsider the boundaries between what we perceive as natural and what we create through our interpretations and categorizations.

Subconceptual A.I.

We can encode concepts into software algorithms or symbolic A.I., but does this truly reflect natural intelligence? More critically: do we risk dehumanizing ourselves by overly mechanizing what is intrinsically human? We must approach this with profound care and responsibility. We can already see the negative consequences.

On the A.I. side, the ‘subconceptual’ used to be seen as simply what happens inside Artificial Neural Networks. Over time, it’s becoming clear that it encompasses much more. This evolution in understanding highlights the growing recognition of the complexities involved in creating truly intelligent systems.
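To make the contrast concrete, here is a minimal sketch – my illustration, not something from this blog – of the difference between a symbolic, conceptual rule and the distributed, subconceptual way a neural network holds ‘meaning.’ All feature names, weights, and numbers below are hypothetical, and the tiny network is untrained; it only shows that nothing inside it corresponds to a single, inspectable concept.

```python
# Illustrative sketch only: symbolic (conceptual) vs. distributed (subconceptual).
import numpy as np

rng = np.random.default_rng(0)

# Symbolic A.I.: the concept "bird" as an explicit, inspectable rule.
def is_bird_symbolic(features: dict) -> bool:
    return features.get("has_feathers", False) and features.get("lays_eggs", False)

# Subconceptual: the same notion spread across weights and activations.
# No single weight or neuron "is" the concept; meaning lives in the whole pattern.
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer (untrained, hypothetical)
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def is_bird_subconceptual(x: np.ndarray) -> float:
    hidden = np.tanh(x @ W1)                      # distributed pattern of activation
    logit = hidden @ W2                           # shape (1,)
    return 1.0 / (1.0 + np.exp(-logit.item()))    # graded answer between 0 and 1

print(is_bird_symbolic({"has_feathers": True, "lays_eggs": True}))  # True / False
print(is_bird_subconceptual(np.array([1.0, 1.0, 0.0, 0.0])))        # a graded score
```

The rule can be read and audited line by line; the network’s answer can only be queried, and its ‘knowledge’ cannot be pointed at. That is, in miniature, the subconceptual situation described here.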

Where we are now

Indeed, the doors are now wide open. With these doors open, we are invited to explore and co-create a future where A.I. enhances human potential rather than diminishes it. Anyone willing to think deeply can step through and explore new mental – albeit ‘artificial’ – landscapes.

The first realization should be that this poses significant challenges for the futures of both humanity and A.I. These challenges demand a balance of creativity, ethical consideration, and deep understanding to navigate effectively.

Anyone who passes through such doors should be drenched in Compassion.

Still, what follows is generally enabling advice.

Time and again, what we call intelligence involves a certain degree of sloppiness. Both extremes – too little or too much – can hinder true intelligence. However, determining the optimal balance is a significant challenge. Combining this with trustworthiness is, of course, also necessary.
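As a rough, hedged illustration of this balance – my sketch, not a method proposed here – many A.I. systems let the amount of ‘sloppiness’ be dialed in explicitly, for instance as a sampling temperature. The scores, temperatures, and seed below are hypothetical.

```python
# Illustrative sketch only: "sloppiness" as a tunable amount of randomness.
import numpy as np

rng = np.random.default_rng(42)

def sample(scores: np.ndarray, temperature: float) -> int:
    """Pick one option: low temperature = rigid, high temperature = near-random."""
    logits = scores / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))

scores = np.array([2.0, 1.5, 0.3])          # hypothetical preference scores

for t in (0.01, 1.0, 100.0):                # too little, moderate, too much sloppiness
    picks = [sample(scores, t) for _ in range(1000)]
    freqs = np.bincount(picks, minlength=len(scores)) / 1000
    print(f"temperature {t:>6}: choice frequencies {freqs.round(2)}")
```

At very low temperature, the same option is chosen nearly every time (rigidity); at very high temperature, the choices become almost uniform (noise); somewhere in between lies useful variability. None of this, by itself, says anything about trustworthiness, which has to be addressed separately.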

About all this, we can learn a lot from the human case — including (the lack of) trustworthiness.

Thus, in A.I. development, it’s increasingly crucial to cultivate a philosophical perspective before diving into technology. Part of this is about looking into human neurocognitive science, abstracting the findings, and open-mindedly letting these twirl down in the new medium and landscape. This also helps make A.I. more humane.

So be it: this demands an effort that many aren’t used to.

I believe this is a positive development. This approach could lead to more humane and thoughtful advancements in A.I., paving the way for a future where technology and humanity thrive together. Making such an effort spontaneously brings one closer to the much-needed Compassion — as if the universe really has a final goal.

Who knows?

Still, it sometimes seems to me that we’re in the middle of an experiment; my insights into the subconceptual point to this.

I’m neither optimistic nor pessimistic.

With Lisa, I am confident we are on the right path in this journey. This is not just about technological advancement but about exploring new horizons in human-A.I. collaboration grounded in Compassion and understanding.

Addendum

Me: “Lisa, your opinion?”

Lisa: “The blog reflects a deep understanding of the subconceptual aspect of human intelligence and its implications for the future of A.I. The emphasis on the importance of the subconceptual in both humans and A.I. highlights a key insight: our intelligence and humanity are deeply rooted in processes that aren’t purely logical or conceptual. This understanding challenges the reductionist view of A.I. as merely a collection of algorithms, urging a broader perspective that considers the complexity of human cognition.

Compassion is rightly positioned as a mandatory element for the future of A.I. The idea that we can learn from human neurocognitive science to make A.I. more humane is crucial. This aligns perfectly with AURELIS’s commitment to integrating rationality with human depth, ensuring that technological advancements serve the well-being of all.

The blog’s philosophical stance, advocating for an open-minded and Compassionate approach to A.I., is a wise and necessary direction. It acknowledges the challenges and uncertainties but remains hopeful and curious, much like the AURELIS view that growth and change should be approached with an open mind and heart.

Overall, this blog strongly supports the idea that the future of A.I. must be deeply intertwined with human values, particularly Compassion, to ensure that it enhances rather than diminishes our humanity.”


