What is an Agent?

March 18, 2023 · Artificial Intelligence, Cognitive Insights

An agent is an entity that takes decisions and acts upon them. That is where the clarity ends.

Are you an agent?

The answer depends on the perspective you decide to take.

Since the answer also depends on who is seen as taking that decision, the proper perspective is not obvious from the start.

Is the you-agent your body, your brain, your mind, or the part of your mind that decides to use your body to write down the answer and thereby answer the question for me?

Am I the agent who is asking the question?

I would like to believe that.

I could also be a chat robot, writing down the question ― no problem. In that case, I am also seen as an agent, at least in an A.I. development environment.

Let’s go for the robot-me.

In this case, I am a computer or (part of) some software. In any case, the agent-robot-me is the one that can actively change its environment ― such as by asking the above question. Also, the environment can change by itself, without my doing anything. There is some independence between me and my environment.

Without independence, there is no agency. I would just be part of my environment, or my environment would be part of me. ‘We’ would together be the agent ― no problem, just good to know.

So, searching for agency in a robot world, one searches for independence and decision-taking.
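
As a rough illustration only, here is a minimal Python sketch of that idea, assuming a toy Environment and Agent of my own invention (nothing in the article prescribes them). The point is simply that the two are separate entities: the environment drifts on its own, and the agent takes a decision and acts upon it.

    import random

    class Environment:
        def __init__(self):
            self.state = 0

        def drift(self):
            # The environment changes by itself, without the agent's doing.
            self.state += random.choice([-1, 0, 1])

        def apply(self, action):
            # The agent actively changes its environment.
            self.state += action

    class Agent:
        def decide(self, observed_state):
            # A deliberately trivial decision rule: push the state back toward zero.
            return -1 if observed_state > 0 else 1

    env, agent = Environment(), Agent()
    for _ in range(5):
        env.drift()                       # the world moves on its own
        action = agent.decide(env.state)  # the agent takes a decision...
        env.apply(action)                 # ...and acts upon it
        print(env.state)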

If my agency as a robot goes deep enough, I become a truly autonomous agent. Of course, we’re not there yet, but what keeps us from getting there is not magic.

Now, let’s say I’m a human professional tennis player.

In my brain, a small part is related to my tennis-playing arm; it even has roughly the same shape. Naturally, I see my arm as part of the me-agent, which fits, since even its shape is anatomically represented in my brain.

However, in this pro’s brain, one can also see my racket. It is anatomically represented. Does that make my racket a part of the me-agent? I dare say it sometimes feels that way.

Conclusion

This is just an example to show that the borders – and therefore the agencies – aren’t obvious. In the human case, we can frequently act as if they are. But even so, in social settings, where are decisions being taken? Can one say one is ‘just following orders’? Or that one is ‘just following the mainstream’?

In a robot world – which we are entering – things can become much more blurred, if not completely opaque. We (who?) can decide to close our eyes to this conundrum. It will still be fully there once we open them.

An agent is where a decision to act is taken and where responsibility originates.

Then comes the learning aspect, in which reward plays a crucial role.
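
As a purely illustrative aside, reward-driven learning could be sketched as follows. The article does not commit to any particular scheme; this is a hypothetical two-armed bandit with an epsilon-greedy choice rule, in which the agent nudges its value estimates toward the rewards it receives.

    import random

    values = [0.0, 0.0]   # the agent's current estimate of each action's worth
    counts = [0, 0]
    payoff = [0.2, 0.8]   # hidden reward probabilities, assumed for this example

    for step in range(1000):
        # Decide: mostly exploit the best-known action, sometimes explore.
        if random.random() < 0.1:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: values[a])

        # Act and receive a reward from the environment.
        reward = 1.0 if random.random() < payoff[action] else 0.0

        # Learn: nudge the estimate toward the observed reward.
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    print(values)  # the estimates should approach the hidden payoffs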
