Can A.I. be Neutral?

October 12, 2024 · Artificial Intelligence

I mean, concerning individuation vs. inner dissociation — in other words, total self vs. ego.

If we don’t take care, are we doomed to enter a future of ever more ego, engendered by our ‘latest invention’? So, how can we take care?

The illusion of neutrality

At first glance, A.I. might appear neutral. After all, it’s just a set of algorithms programmed to perform tasks, right?

However, the reality is more nuanced. A.I., in practice, is shaped by the frameworks, goals, and values of the humans who design and use it. Left unchecked, this supposed neutrality tends to favor shallow, ego-driven outcomes rather than deeper, more meaningful engagement. This is not a flaw in A.I. itself but rather a reflection of the environments in which it operates.

The question then becomes: how do we shift from this tendency toward deeper, more compassionate uses of A.I.?

Social Media: a case study

Consider social media, which was initially hailed as a tool for open communication and global connection. The assumption was that these platforms could be used for good or for ill, depending on how we engaged with them. Therefore, the thinking went, we should make them as accessible and open as possible.

However, what we have witnessed over time is quite different: the rise of clickbait, disinformation, and manipulation. Algorithms designed to optimize engagement often favor the content that triggers the strongest emotional responses. These are usually ego-driven emotions – anger, fear, outrage – that lead to more clicks and longer screen time. The noise of ego-driven impulses drowns out the more nuanced, total-self conversations that could foster real understanding.

This is a prime example of how so-called ‘neutral’ A.I. can be co-opted by the ego, leading to fragmentation rather than connection.
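The engagement mechanism described above can be sketched in a few lines of Python. The posts and scores below are entirely hypothetical; the point is only that a ranker optimizing a single 'neutral' metric (predicted engagement) will surface whatever scores highest on it, and emotionally charged content tends to score highest:

```python
# Hypothetical posts with assumed engagement scores (illustrative only).
# Ego-driven content (outrage, fear) typically predicts more clicks.
posts = [
    {"title": "Nuanced policy analysis", "predicted_engagement": 0.04},
    {"title": "Outrage: you won't believe this!", "predicted_engagement": 0.31},
    {"title": "Calm long-form conversation", "predicted_engagement": 0.02},
    {"title": "Fear-inducing rumor", "predicted_engagement": 0.22},
]

# The 'neutral' objective: rank by expected engagement, nothing else.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(post["title"])
```

Nothing in this objective is malicious; the ranker simply has no term for depth or understanding, so the total-self conversations sink to the bottom of the feed by default.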

The judiciary: more efficiency, less resolution

Another domain where we see A.I. making waves is the judiciary. A.I. has been touted as a solution to streamline legal processes, reducing the cost and time involved in handling lawsuits. At first glance, this seems like a positive development.

Yet, we are already beginning to see an unintended consequence: the number of lawsuits is skyrocketing. Instead of resolving disputes through conversation and understanding, more people are taking their cases to court, relying on A.I.-driven legal systems. The more-human resolution of conflicts becomes secondary to the efficiency of legal processes. Ego, seeking quick wins, dominates, while the total self's more challenging path of reconciliation is left behind.

The bigger picture: the downfall of ego-centered A.I.

These are just two examples, but similar patterns can be seen across various domains — education, healthcare, economics, … The issue isn’t just that bad people misuse A.I. applications. It’s that the very structure of these systems often defaults to serving the ego’s need for immediate results, validation, or gratification.

When left unchecked, A.I. developments can pull us downstream, carried by the ego's drive for quick fixes. The problem is deeper than surface-level misuse; it's about how these systems are built to prioritize efficiency and short-term gains over depth and long-term meaning. In the absence of intentional guidance, A.I. will inevitably favor ego-driven outcomes.

Why does this happen?

Inadequate use of anything complex is far easier than good use. Ego is efficient in navigating straight lines, whereas the total self moves through broader, more distributed patterns that take time and effort to develop.

When A.I. systems are designed to prioritize efficiency, they naturally align with the ego’s fast-paced, linear desires. The deeper processes that support human growth, creativity, and Compassion simply take too long to compete in this space.

This is catastrophic in the long run. If we continue developing A.I. as ‘neutral’ systems, we will inevitably face outcomes that undermine deeper human values and create more division, more shallow interaction, and less meaning.

The case for Compassionate A.I.

The situation is not beyond repair, but it requires us to approach A.I. development with open eyes. The first step is acknowledging that A.I. is not truly neutral — it is shaped by the intentions behind its design and the contexts in which it operates.

We need to actively steer A.I. towards supporting the total self, not just the ego. This means designing systems that enhance our capacity for deep, meaningful connections and inner growth rather than just optimizing for surface-level engagement and instant rewards.

Moving beyond neutrality

If we allow A.I. to operate under the illusion of neutrality, we will only accelerate the ego-driven outcomes that lead to division and superficiality.

The path forward is clear: we must develop A.I. that reflects the depth, Compassion, and openness that are necessary for true human flourishing.

In this way, A.I. can be a tool for profound human growth — if we take care to make it so.
