Principles of Being an Intelligent Being

July 19, 2021 | Artificial Intelligence, Cognitive Insights

Strange times. We are living at the border of old and new intelligences. We’ll need some agreement about what intelligence is.

Intelligence is in the eye of the beholder.

Definitions of intelligence abound. Therefore, it is better to start from the truly basic, where things can hardly be more basic. There, it is easiest to see whether we can agree.

‘We’ means human and artificial intelligence ― well, any intelligence in past, present, and future. We need a common scheme to be able to communicate with and understand each other. We need it to recognize each other and feel related. There is nothing worse than intelligences that are completely alien to each other.

So, starting with one principle, which is also the principle of being alive:

“I move.”

Even in this one principle, there are two sides:

  • Without ‘I,’ there is nothing to be intelligent, no ‘intelligent being.’
  • Without ‘move,’ the ‘I’ is just a static substance, thus also no ‘intelligent being.’

Note that any living entity changes/moves in relevant ways while staying itself.

Going from information to knowledge/intelligence [see: “About ‘Intelligence’ (in A.I.)“]:

  • Without movement, there may be information but no intelligence.
  • With relevant movement, information becomes active and, therefore, intelligent.

In other words, the basic preconditions for intelligence are a sense of stability (‘I’) and a sense of progress (‘move’).

Conservation and progression, as in US politics. Is this a coincidence? Here, too, there is nothing worse than being alien to each other.

We need some more principles to make this happen in reality.

I discern two static principles (there is → being now; knowledge representation):

  • concepts → handling many models – of things, of environment, of oneself
  • relations → relating models to each other – using reference frames; conceptual features are other models

I also discern two dynamic principles (there moves → being later on; knowledge manipulation):

  • movement → at least, movement of attention – reuse of resources; one cannot put all resources continually everywhere
  • learning → adapting to a continuous change in the environment

These seem pretty universal. For instance, through them, one can recognize the four principles of what qualifies as ‘intelligent’ according to Jeff Hawkins. [Jeff Hawkins, 2021]

The right balance

This is not a fit ‘once and for all.’ Any intelligent being needs a continual trade-off between the static and the dynamic, as well as a striving for an optimal synthesis of both:

  • With an excessive preponderance of the dynamic, we lose the ‘I’ in the equation. There is just movement from here to there to anywhere.
  • With an excessive preponderance of the static, the ‘I’ makes no sense. For instance, a stone cannot be called intelligent, nor a book or a database by itself. No self-initiated movement, no intelligence.

Interestingly, this brings intelligence together with being alive.

One can see degrees of intelligence. One can see degrees of complexity in being alive.

As simple as it may be, a living being can be seen as intelligent. The simplest being is just a tiny bit intelligent. Also, even a little bit of intelligence makes something alive, at least in the vein of the principles just described.

Note that consciousness is not in this picture. Still, ethics is involved: if something is intelligent, then it is alive. If it’s intelligent and alive, it needs to be treated with due respect.

Also, there is a huge continuum towards us, and further on. Sorry if disappointing, but in the continuum of intelligence, we are just somewhere at the beginning.

Scary indeed. That’s why we should face it and not hide from ‘the big bad wolf’ that is coming toward us from the future on planet Earth.

We’d better try to let it become a big Compassionate Artificial Intelligence.


[Jeff Hawkins, 2021] Jeff Hawkins, A Thousand Brains: A New Theory of Intelligence, Basic Books, 2021.
