Forward-Forward Neur(on)al Networks

April 19, 2023 Artificial Intelligence, Cognitive Insights

Rest assured, I don’t stuff technical details into this blog. Nevertheless, this new framework lies closer to how the brain works, which makes it interesting enough to go into it somewhat.

Backprop

In Artificial Neural Networks (ANN) – the subfield that sways a big scepter in A.I. nowadays – backpropagation (backprop) is one of the main buzzwords and is used ubiquitously. Present-day A.I. would be nowhere without it. However, that may change.

ANN schema with several layers from left to right

Briefly put, an explanation of backprop:

A multilayered ANN-in-training tries to accurately categorize, for instance, images of cats and dogs. But it makes many errors. After each pass forward through the network, from entry layers to exit layers, an error estimate is fed into the system at the output side and, from there, propagates backward to the earlier layers. So, signals go forward; error estimates go backward through the same network. ANNs work well this way for some problems, not for others.
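
For those who like to see this concretely, below is a minimal sketch of one backprop step in code (NumPy, with made-up toy data); the network and the numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                # a tiny batch of 4 inputs
y = np.eye(2)[rng.integers(0, 2, size=4)]  # one-hot targets: 'cat' or 'dog'

W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(16, 2)) * 0.1

# Forward pass: signals travel from the entry layer to the exit layer
h = np.maximum(0, x @ W1)                          # hidden activations (ReLU)
scores = h @ W2
p = np.exp(scores - scores.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)                  # softmax 'cat'/'dog' probabilities

# Backward pass: the error estimate enters at the output and propagates back
d_out = (p - y) / len(x)                           # error at the exit layer
dW2 = h.T @ d_out
d_hid = (d_out @ W2.T) * (h > 0)                   # error pushed back through the hidden layer
dW1 = x.T @ d_hid

lr = 0.1
W1 -= lr * dW1                                     # weights change using the backpropagated error
W2 -= lr * dW2

Notice that the backward half needs the stored forward activations (h): exactly the kind of cell-level memory discussed below.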

In any case, the human brain hardly works this way.

Firstly, backprop involves a lot of mathematics, while no calculator exists in the brain. There are only neurons – and other cells – and groups of them. The brain may simulate a calculator, approaching one functionally to a small degree. But there is no way the brain could perform as many complex calculations as backprop requires.

Secondly, backprop would mean that, at the cellular level, there is a kind of memory of each forward pass’s activity that definitely isn’t there.

Thus, although low-level information in brainy neuronal networks can go in many directions, a backprop direction can hardly be one of them. Also, there is no evidence of it in reality. Backprop is especially (extremely) implausible with information that evolves over time.

The brain goes forward.

Neurons in the human brain are unidirectional. As a general flow, information goes in at one side (the dendrites), to the cell body, and from there along the axon, which connects to other cells’ dendrites.

Neuron with flow from left to right

Thus, all in all, and taking many loops into account, forward processing is what the brain does, even when information travels in the ‘backward’ direction. For instance, much information travels from the brain to the eye, influencing how the eye itself reacts to what it ‘sees.’ This is not backprop but a forward feed in the backward (or ‘top-down’) direction.

A few months ago, Geoffrey Hinton proposed the Forward-Forward algorithm (FF) for neural networks.

Briefly put: suppose an error is made (‘cat’ instead of ‘dog’). FF handles this network experience in a forward fashion, erroneous and non-erroneous passes alike. The forward trails of both (thus ‘forward-forward’) are compared, and the system learns.
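
For readers who want the mechanics, here is a minimal sketch of FF’s local learning rule for a single layer, assuming toy Gaussian data; Hinton’s paper constructs the ‘bad’ (negative) examples differently, for instance by embedding wrong labels into images.

import numpy as np

rng = np.random.default_rng(1)
x_pos = rng.normal(loc=1.0, size=(32, 20))   # stand-ins for real ('good') examples
x_neg = rng.normal(loc=0.0, size=(32, 20))   # stand-ins for fabricated ('bad') examples
W = rng.normal(size=(20, 50)) * 0.1
theta, lr = 2.0, 0.03                        # goodness threshold and learning rate

def forward(x):
    h = np.maximum(0, x @ W)                 # a forward pass through this one layer
    return h, (h ** 2).sum(axis=1)           # activations and their 'goodness'

for step in range(200):
    h_pos, g_pos = forward(x_pos)
    h_neg, g_neg = forward(x_neg)
    # Logistic pressure: push positive goodness above theta, negative goodness below it
    c_pos = -1.0 / (1.0 + np.exp(np.clip(g_pos - theta, -30, 30)))
    c_neg = 1.0 / (1.0 + np.exp(np.clip(theta - g_neg, -30, 30)))
    grad = 2 * (x_pos.T @ (c_pos[:, None] * h_pos)
                + x_neg.T @ (c_neg[:, None] * h_neg)) / len(x_pos)
    W -= lr * grad                           # a purely local update: no error arrives from later layers

Each layer repeats this for itself; no error signal ever travels back from later layers.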

This is more aligned with how the brain works ― even to such a degree that both cases (natural and artificial) can be described pretty much in the same way. I just did so and will do some more.

Given good and bad examples, a neur(on)al network can proceed in two ways:

  • It can process each pretty much separately from the other. Hinton proposes this may be a main reason why humans sleep: a period in which the brain can generate its own erroneous ‘input.’
  • Alternatively, a system can compare good and bad examples as overlays and automatically find out where the overlays mainly differ. This way, the differences are found subconceptually. The conceptual interpretation results from emergence.

Are you following? For the sake of comprehension, imagine two transparent sheets. On one is drawn a cat; on the other a dog that overlaps as much as possible with the cat. Then you can look at this and immediately notice the differences without working anything out. It happens automatically.
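
As a toy illustration of this transparent-sheet picture in code: two made-up activation patterns are ‘overlaid’ by simple subtraction, and the biggest differences pop out directly.

import numpy as np

cat_sheet = np.array([0.9, 0.1, 0.8, 0.2, 0.7])   # made-up activation pattern for 'cat'
dog_sheet = np.array([0.9, 0.7, 0.2, 0.2, 0.7])   # made-up activation pattern for 'dog'

contrast = np.abs(cat_sheet - dog_sheet)          # the overlay's visible differences
salient = np.argsort(contrast)[::-1]              # features that differ most, automatically
print(salient[:2])                                # -> the two most discriminating features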

The brain intrinsically likes automatic stuff.

Things that happen automatically don’t require much energy or other resources. This makes the brain very power-efficient and responsive, as can also be the case with FF neural networks. One promise of the latter is that no massive amounts of memory or distinct processing are needed, contrary to what is generally the case with backprop. In short, purely forward processing – using a suitable architecture – can make do with much less energy and memory.

Just as it is easier to drop an object on your toe than to calculate how much that may hurt.
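
A rough sketch of the memory point, with train_layer standing in as a placeholder for the local rule sketched earlier: layers are trained one at a time, so earlier activations can simply be thrown away instead of being stored for a backward sweep. The length-normalization between layers follows Hinton’s setup.

import numpy as np

rng = np.random.default_rng(2)

def train_layer(x_pos, x_neg, out_dim):
    # Placeholder for the local FF-style rule sketched earlier; returns this layer's weights.
    return rng.normal(size=(x_pos.shape[1], out_dim)) * 0.1

def normalize(h):
    # Length-normalizing a layer's output forces the next layer to find new evidence
    # rather than simply reusing the previous layer's goodness.
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)

x_pos = rng.normal(loc=1.0, size=(32, 20))
x_neg = rng.normal(loc=0.0, size=(32, 20))

layers = []
for out_dim in (50, 50):
    W = train_layer(x_pos, x_neg, out_dim)        # purely local; nothing flows back
    layers.append(W)
    # Only the current outputs are kept; earlier activations can be discarded.
    x_pos = normalize(np.maximum(0, x_pos @ W))
    x_neg = normalize(np.maximum(0, x_neg @ W))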

It is additionally interesting that no magic is involved: we can explain more and more of the brain’s inner secrets in relatively simple terms that can be experimentally investigated in an artificial medium.

Society of FF-networks

This is a personal addition to FF. At least, I haven’t encountered it elsewhere yet, but it’s logical. Such a society cannot but be close to the brain’s way of working as a network of networks (of networks).

One macro function (for instance, dancing the tango, or even one single move) is accomplished by many small brainy networks working together, each performing a part of the macro task. One network’s forward pass can relatively easily be combined with others in many parallel and serial ways, which is what we see happening in the brain to a huge degree. Of course, nature’s playing field has been immense in this regard.
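
As a purely hypothetical sketch of this ‘society’ idea (the module names and the wiring are my own illustration, not anything from the FF paper): small forward modules, each imagined as locally trained, combined in parallel and then in series.

import numpy as np

rng = np.random.default_rng(3)

def make_module(in_dim, out_dim):
    # Imagine these weights were trained with a local, forward-only rule.
    W = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(0, x @ W)

posture = make_module(20, 16)   # one small network for one part of the macro task
rhythm = make_module(20, 16)    # another small network, working in parallel
combine = make_module(32, 8)    # a downstream network reading both outputs

x = rng.normal(size=(4, 20))    # some shared input
out = combine(np.concatenate([posture(x), rhythm(x)], axis=1))   # parallel, then serial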

In such a society, many conglomerates of networks can work in different ways while accomplishing the same macro-functional result. In the human case, this is a well-investigated phenomenon. The same principle may be brought to the artificial world, creating many additional possibilities for efficiency and effectiveness.

(Only) two (of many possible) lessons

  1. We can learn much from the brain when developing genuine artificial intelligence. The natural case (us) gives much inspiration.
  2. We can learn from the above abstract thinking how humans are concocted, including the positives and the flaws. Significantly, such insights may explain why our thinking does an excellent job while simultaneously being such a bundle of biases. It shows how hard it is to keep the intelligence while reducing bias. The biases are part of the intelligence. This explains why many people frequently act stupidly in so many interesting ways.

It also shows how we can ameliorate the situation.

In my view, we will direly need this more than ever soon enough.
