What is an Agent?


An agent is an entity that takes decisions and acts upon them. That is where the clarity ends.

Are you an agent?

The answer depends on the perspective you decide to take.

Since the answer also depends on who is seen as taking that decision, the proper perspective is less obvious than it first appears.

Is the you-agent your body, your brain, your mind, or the part of your mind that decides to use your body to note down the answer to the question, thereby conveying the answer to me?

Am I the agent who is asking the question?

I would like to believe that.

I could also be a chat robot, writing down the question ― no problem. In that case, too, I am seen as an agent, at least in an A.I. development environment.

Let’s go for the robot-me.

In this case, I am a computer or (part of) some software. Either way, the agent-robot-me is the one that can actively change its environment ― such as by asking the above question. Also, the environment can change by itself, without my doing anything. There is some independence between me and my environment.

Without independence, there is no agency. I would just be part of my environment, or my environment would be part of me. ‘We’ would together be the agent ― no problem, just good to know.

So, searching for agency in a robot world, one searches for independence and decision-taking.
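To make that concrete in an A.I. development setting, here is a minimal sketch of such a search criterion in code. It is purely illustrative: the names Environment, Agent, drift, decide, and act_upon are my own assumptions, not an existing framework. The point is only that the environment changes partly on its own while the agent takes its own decisions and acts upon them.

```python
import random

class Environment:
    """A world that also changes by itself, independently of the agent."""
    def __init__(self):
        self.state = 0

    def drift(self):
        # Independent change: the environment moves without the agent's doing.
        self.state += random.choice([-1, 0, 1])

    def act_upon(self, action):
        # The agent actively changes its environment.
        self.state += action

class Agent:
    """An entity that takes decisions and acts upon them."""
    def decide(self, observation):
        # Decision-taking: the agent, not the environment, picks the action.
        return 1 if observation < 0 else -1

env, agent = Environment(), Agent()
for _ in range(10):
    env.drift()                       # the environment changes by itself
    action = agent.decide(env.state)  # the agent's own decision
    env.act_upon(action)              # ... acted upon in the environment
```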

If my agency as a robot runs deep enough, I become a truly autonomous agent. Of course, we’re not there yet, but what keeps us from it is no magic.

Now, let’s say I’m a human professional tennis player.

In my brain, a small part is dedicated to my tennis-playing arm; it even has a bit of the same shape. Naturally, I see my arm as part of the me-agent, which fits, since even its shape is anatomically present in my brain.

However, in the brain of a pro like me, one can also see my racket. It is anatomically present. Does that make my racket a part of the me-agent? I dare say it sometimes feels that way.

Conclusion

This is just an example to show that the borders – and therefore the agencies – aren’t obvious. In the human case, we can frequently act as if they are. But even so, in social settings, where are decisions being taken? Can one say one is ‘just following orders’? Or that one is ‘just following the mainstream’?

In a robot world – which we are entering – things can become much more blurred, if not to say completely opaque. We (who?) can decide to close our eyes to this conundrum. It will then be fully there once we open them.

An agent is where a decision to act is taken, and responsibility originates.

Then comes the learning aspect, in which reward plays a crucial role.
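As a hedged illustration of that learning aspect, the sketch below shows reward steering an agent’s decisions over time. It is a minimal, hypothetical example (a two-armed bandit with an epsilon-greedy rule); the reward probabilities and learning rate are assumptions of mine, chosen only to make the loop run.

```python
import random

# Hypothetical reward-driven learning: a two-armed bandit.
values = [0.0, 0.0]   # estimated value of each action, learned from reward
learning_rate = 0.1

def reward(action):
    # Assumed environment: action 1 pays off more often than action 0.
    return 1.0 if random.random() < (0.3, 0.7)[action] else 0.0

for _ in range(1000):
    # Epsilon-greedy decision: mostly exploit the better estimate, sometimes explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = values.index(max(values))
    r = reward(action)
    values[action] += learning_rate * (r - values[action])  # learn from reward

print(values)  # the estimate for action 1 should end up higher
```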
