What is an Agent?

March 18, 2023 Artificial Intelligence, Cognitive Insights

An agent is an entity that takes decisions and acts upon them. That is where the clarity ends.

Are you an agent?

The answer depends on the perspective you decide to take.

Since the answer also depends on who is seen as taking this decision, the proper perspective is not obvious from the start.

Is the you-agent your body, your brain, your mind, or the part of your mind that decides to use your body to note down the answer, thereby answering the question for me?

Am I the agent who is asking the question?

I would like to believe that.

I could also be a chat robot, writing down the question ― no problem. In that case, I am also seen as an agent, at least in an A.I. development environment.

Let’s go for the robot-me.

In this case, I am a computer or (part of) some software. In any case, the agent-robot-me is the one that can actively change its environment ― such as by asking the above question. Also, the environment can change itself without my doing. There is some independence between me and my environment.

Without independence, there is no agency. I would just be part of my environment, or my environment would be part of me. ‘We’ would together be the agent ― no problem, just good to know.

So, searching for agency in a robot world, one searches for independence and decision-taking.

If my agency as a robot runs deep enough, I become a truly autonomous agent. Of course, we’re not there yet, but nothing magical is keeping us from it.
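To make the robot-me a bit more concrete, here is a minimal sketch in Python ― my own illustration, not any particular framework ― of the agent–environment split described above: the environment changes partly on its own, the agent observes and takes a decision, and each exists independently of the other.

```python
import random

class Environment:
    """A world that changes partly on its own, independent of the agent."""
    def __init__(self):
        self.state = 0

    def drift(self):
        # The environment changes itself without the agent's doing.
        self.state += random.choice([-1, 0, 1])

    def apply(self, action):
        # The agent can actively change its environment.
        self.state += action
        return self.state


class Agent:
    """An entity that observes, takes a decision, and acts upon it."""
    def decide(self, observation):
        # A trivially simple decision rule: push the state back toward zero.
        return -1 if observation > 0 else 1


env, agent = Environment(), Agent()
for step in range(5):
    env.drift()                       # independent change of the environment
    action = agent.decide(env.state)  # decision-taking
    new_state = env.apply(action)     # acting upon the decision
    print(f"step {step}: state = {new_state}")
```

The point of the sketch is only the separation itself: as long as the two can be pulled apart like this, one can meaningfully speak of an agent and its environment.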

Now, let’s say I’m a human professional tennis player.

In my brain, a small part is related to my tennis-playing arm. It even has a bit of the same shape. Naturally, I see my arm as part of the me-agent, which is good since even its shape is anatomically present in my brain.

However, in the brain of a pro, one can also see my racket. It is anatomically present. Does that make my racket a part of the me-agent? I dare say sometimes it feels that way.

Conclusion

This is just an example to show that the borders – and therefore the agencies – aren’t obvious. In the human case, we can frequently act as if they are. But even so, in social cases, where are decisions being taken? Can one say one is ‘just following orders’? Or that one is ‘just following the mainstream’?

In a robot world – which we are entering – things can become much more blurred, not to say completely opaque. We (who?) can decide to close our eyes to this conundrum. It will then be fully there once we open them.

An agent is where a decision to act is taken and where responsibility originates.

Then comes the learning aspect, in which reward plays a crucial role.
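As a small illustration of that learning aspect ― again just a sketch of the general idea, not anyone’s specific method ― reward can steer an agent’s future decisions by making rewarded actions more likely to be chosen again. The environment below is hypothetical, chosen only so that one action pays off more often than the other.

```python
import random

# A minimal reward-driven learner: estimated values of two possible actions.
values = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

def reward(action):
    # Hypothetical environment: action "B" pays off more often than "A".
    return 1.0 if random.random() < (0.3 if action == "A" else 0.7) else 0.0

for _ in range(1000):
    if random.random() < 0.1:
        action = random.choice(list(values))      # sometimes explore
    else:
        action = max(values, key=values.get)      # mostly exploit the best estimate
    r = reward(action)
    counts[action] += 1
    # Incremental average: nudge the value estimate toward the received reward.
    values[action] += (r - values[action]) / counts[action]

print(values)  # the more rewarding action ends up with the higher estimate
```

Even in such a toy setting, the decisions gradually shift toward what the environment rewards ― which is where the learning, and with it a new layer of agency, begins.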
