What is an Agent?

March 18, 2023 – Artificial Intelligence, Cognitive Insights

An agent is an entity that makes decisions and acts on them. That is where the clarity ends.

Are you an agent?

The answer depends on the perspective you decide to take.

Since the answer also depends on who is seen as the taker of the decision, the proper perspective is not obvious from the start.

Is the you-agent your body, your brain, your mind, or the part of your mind that decides to use your body to write down the answer, thereby answering my question?

Am I the agent who is asking the question?

I would like to believe that.

I could also be a chatbot writing down the question ― no problem. In that case, I am still seen as an agent, at least in an A.I. development environment.

Let’s go for the robot-me.

In this case, I am a computer or (part of) some software. Either way, the agent-robot-me is the one that can actively change its environment ― such as by asking the above question. The environment can also change without my doing. There is some independence between me and my environment.

Without independence, there is no agency. I would just be part of my environment, or my environment would be part of me. ‘We’ would together be the agent ― no problem, just good to know.

So, searching for agency in a robot world, one searches for independence and decision-taking.

If my agency as a robot runs deep enough, I become a truly autonomous agent. Of course, we’re not there yet, but nothing magical is keeping us from it.
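The two criteria above ― independence and decision-taking ― can be made concrete in a toy sketch. This is purely illustrative: the `Environment` and `Agent` classes below are hypothetical constructs of mine, not from any particular A.I. framework. The environment changes on its own (independence), while the agent observes it and decides how to act on it (decision-taking).

```python
import random

class Environment:
    """A toy world: a single number that drifts on its own."""
    def __init__(self, state=0):
        self.state = state

    def drift(self):
        # The environment changes without the agent's doing.
        self.state += random.choice([-1, 1])

class Agent:
    """An agent in the minimal sense: it observes, decides, and acts."""
    def decide(self, observation):
        # Decision-taking: try to push the world back toward zero.
        return -1 if observation > 0 else 1

    def act(self, env):
        # Acting changes the environment.
        action = self.decide(env.state)
        env.state += action
        return action

env = Environment(state=3)
agent = Agent()
for _ in range(10):
    env.drift()     # the environment's own change
    agent.act(env)  # the agent's change
```

If the agent's state changes were simply identical to the environment's drift, there would be nothing to separate the two ― which is exactly the point made above: without that independence, 'we' would together be one agent.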

Now, let’s say I’m a human professional tennis player.

In my brain, a small region is dedicated to my tennis-playing arm. It even roughly mirrors the arm’s shape. Naturally, I see my arm as part of the me-agent, which fits, since even its shape is anatomically represented in my brain.

However, in the brain of a pro, one can also see the racket. It is anatomically represented. Does that make my racket part of the me-agent? I dare say it sometimes feels that way.

Conclusion

This is just an example to show that the borders – and therefore the agencies – aren’t obvious. In the human case, we can frequently act as if they are. Even so, in social settings, where are decisions being taken? Can one say one is ‘just following orders’? Or ‘just following the mainstream’?

In a robot world – which we are entering – things can become much more blurred, if not completely opaque. We (who?) can decide to close our eyes to this conundrum. It will still be fully there once we open them.

An agent is where a decision to act is taken, and responsibility originates.

Then comes the learning aspect, in which reward plays a crucial role.
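How reward shapes an agent’s decisions can also be sketched minimally. The function below is a hypothetical illustration of mine, assuming the classic epsilon-greedy bandit setting (not a method claimed by this article): the agent repeatedly chooses among options, receives a noisy reward, and updates its value estimates, so that reward gradually steers its future decisions.

```python
import random

def learn_from_reward(true_values, steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy value estimation: reward gradually shapes choices."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_values)
    counts = [0] * len(true_values)
    for _ in range(steps):
        if rng.random() < epsilon:
            # Explore: try a random option.
            a = rng.randrange(len(true_values))
        else:
            # Exploit: pick the option currently estimated best.
            a = max(range(len(true_values)), key=lambda i: estimates[i])
        # The environment returns a noisy reward for the chosen option.
        reward = true_values[a] + rng.gauss(0, 0.1)
        # Incremental mean update of the estimate.
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]
    return estimates

estimates = learn_from_reward([0.2, 0.8, 0.5])
```

After enough steps, the agent’s estimates converge toward the true values, and its decisions concentrate on the most rewarding option ― reward playing exactly the crucial role mentioned above.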


