What is an Agent?

March 18, 2023 · Artificial Intelligence, Cognitive Insights

An agent is an entity that takes decisions and acts upon them. That is where the clarity ends.

Are you an agent?

The answer depends on the perspective you decide to take.

Since the answer also depends on who is seen as the one taking this decision, the proper perspective is less obvious than it first appears.

Is the you-agent your body, your brain, your mind, or the part of your mind that decides to use your body to note down the answer, thereby answering my question?

Am I the agent who is asking the question?

I would like to believe that.

I could also be a chatbot, writing down the question ― no problem. In that case, I am also seen as an agent, at least in an A.I. development environment.

Let’s go for the robot-me.

In this case, I am a computer or (part of) some software. In any case, the agent-robot-me is the one that can actively change its environment ― such as by asking the above question. Also, the environment can change by itself, without my doing. There is some independence between me and my environment.

Without independence, there is no agency. I would just be part of my environment, or my environment would be part of me. ‘We’ would together be the agent ― no problem, just good to know.

So, searching for agency in a robot world, one searches for independence and decision-taking.
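The two marks of agency just named ― an environment that changes by itself, and an agent that decides and acts upon that environment ― can be sketched in a few lines of code. This is a minimal conceptual sketch of my own, not taken from any particular A.I. framework; the thermostat-like setting and all names are illustrative assumptions.

```python
import random

class Environment:
    """A world that changes on its own, independently of the agent."""
    def __init__(self):
        self.temperature = 20.0

    def drift(self):
        # The environment changes by itself, without the agent's doing.
        self.temperature += random.uniform(-1.0, 1.0)

class Agent:
    """An entity that observes, takes a decision, and acts upon it."""
    def decide(self, observation):
        # The decision is taken inside the agent, not dictated by the world.
        return "heat" if observation < 20.0 else "cool"

    def act(self, action, env):
        # Acting changes the environment: the mark of agency.
        env.temperature += 0.5 if action == "heat" else -0.5

env, agent = Environment(), Agent()
for _ in range(10):
    env.drift()                            # world changes by itself
    action = agent.decide(env.temperature) # agent decides
    agent.act(action, env)                 # agent changes the world
```

The independence lies in the separation: neither loop body line fully determines the other. Remove either the drift or the decision, and one side collapses into the other ― 'we' become one system, and the question of agency dissolves with the border.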

If my agency as a robot runs deep enough, I become a truly autonomous agent. Of course, we’re not there yet, but nothing magical is keeping us from it.

Now, let’s say I’m a human professional tennis player.

In my brain, a small region corresponds to my tennis-playing arm; it even roughly mirrors the arm’s shape. Naturally, I see my arm as part of the me-agent ― fitting, since its very shape is anatomically present in my brain.

However, in the brain of a pro like me, one can also find the racket represented. It is anatomically present. Does that make my racket part of the me-agent? I dare say it sometimes feels that way.

Conclusion

This is just an example to show that the borders – and therefore the agencies – aren’t obvious. In the human case, we can frequently act as if they are. Even so, in social settings, where are decisions being taken? Can one say one is ‘just following orders’? Or ‘just following the mainstream’?

In a robot world – which we are entering – things can become much more blurred, if not completely opaque. We (who?) can decide to close our eyes to this conundrum. It will still be fully there once we open them.

An agent is where a decision to act is taken, and where responsibility originates.

Then comes the learning aspect, in which reward plays a crucial role.
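How reward can drive an agent's learning is easy to illustrate with a standard toy setup: an agent repeatedly choosing between two actions, nudging its estimate of each action's value toward the reward it receives (a simple epsilon-greedy bandit). This is a generic sketch of reward-driven learning, not the author's specific proposal; the comfort-seeking reward and all parameter values are assumptions for illustration.

```python
import random

# Estimated value of each action, learned from reward alone.
values = {"heat": 0.0, "cool": 0.0}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

def take_action(action, temperature):
    # Hypothetical reward: comfort is highest near 21 degrees.
    temperature += 0.5 if action == "heat" else -0.5
    return -abs(temperature - 21.0), temperature

temperature = 18.0
for _ in range(200):
    if random.random() < epsilon:
        action = random.choice(list(values))   # explore
    else:
        action = max(values, key=values.get)   # exploit current estimates
    r, temperature = take_action(action, temperature)
    # Incremental update: the estimate moves toward the received reward.
    values[action] += alpha * (r - values[action])
```

Nothing outside the reward signal tells the agent what to do; the decision-taking itself is shaped by the consequences of earlier decisions.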
