What seems ‘Easy’ and is Not
The subconceptual domain may seem ‘easy’ mainly to those who operate predominantly within a conceptual framework. On the other hand, what comes out of this domain may seem magical, almost impossible, to the same people.
This is highly relevant, as subconceptual influences, often underestimated, can have profound effects on our understanding and interactions in many fields. This leads to two specific challenges: first, to convince primarily conceptual thinkers of the possibilities; second, to mitigate their anxiety when they see ‘what cannot be’ actually happening, or bound to happen.
In art
This may lead some to watch incredulously as others appreciate art as being of prime importance. Of course, something isn’t art merely by virtue of being called so. This makes matters even more challenging, since some may call anything ‘art’ and try to get away with it for reasons of standing and money.
When is ‘art’ art? Difficult to say. In any case, it’s about more than what is superficially observed.
In coaching
Excellent coaching has an aspect of ‘as if by itself’, which is very different from ‘by itself’ (that would be magic), but this may be equally challenging to appreciate.
Thus, one may look at the printout of a coaching session without appreciating anything that is really happening in that session.
It may happen invisibly — even to the coachee.
In developing A.I.
Concocting something that appears somehow ‘intelligent’ is enough for some to deem themselves ‘A.I. developers.’
This is crazy stuff – yet frequently encountered – and uncannily dangerous on top of that!
In combination
Unfortunately, many (most) A.I. developers have no clue about the human subconceptual domain. Thus, they think that, for instance, human-A.I. alignment is a simple matter of making the right decisions (‘easily’ made by philosophers or by themselves) and developing some software to enact them.
Sorry, but that would be hilarious if it weren’t an existential threat to humanity.
They dive into what seems to be ‘easy’ and is not.
Doing this, one quickly enters a highly complex field full of mines and booby traps. No problem if you know more or less what you’re doing, but… precisely. Sadly, this may lead, in many cases, to the wrong experts dominating the conversations, even at decision-making levels.
This resonates deeply with the AURELIS philosophy of integrating rationality and human depth. A balanced approach must respect both the conceptual and the subconceptual realms, ensuring a holistic view that avoids oversimplification and acknowledges the inherent complexity of human cognition and emotion.
We need to know first who we are, then what A.I. can be.
Nothing is ‘easy’ just by itself.