Search results

Human-Centered A.I.

Human-centered A.I. (HAI) emphasizes human strength, health, and well-being. To be durably so, it must be Compassionate, basically ― properly taking into account human complexity, that is: the total person. The total person comprises the conceptual and subconceptual mind ― going far beyond classical humanism and a lingering body-mind divide. From the inside out: as neurocognitive… Read the full article…

Will Super-A.I. Want to Dominate?

Super-A.I. will transcend notions of ‘wanting’ and ‘domination.’ Therefore, the title’s question calls for some deeper delving. We readily anthropomorphize the future. This time, we should be humble. Super-A.I. will not want to dominate us. Even if we might feel it is dominating (in the future), ‘it’ will not. It will have no more than… Read the full article…

Active Learning in A.I.

An active learner deliberately searches for information/knowledge to become smarter. In biological evolution on Earth: the ‘Cambrian explosion’ was probably jolted by the appearance of active learning in natural evolution. It was the time when living beings started to chase other living beings ― thus also being chased, heightening the challenges of survival. This mutual predation… Read the full article…
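The machine-learning sense of active learning can be made concrete with a minimal sketch (my own illustration, not taken from the article): a learner that deliberately queries the labels of the pool examples it is least certain about. The dataset, model, and query budget below are hypothetical stand-ins.

```python
# Minimal active-learning sketch (illustrative only): uncertainty sampling.
# The learner deliberately asks for labels of the examples it is least sure about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(10))             # start from a handful of labeled examples
pool = list(range(10, len(X)))        # unlabeled pool the learner may query

model = LogisticRegression(max_iter=1000)
for _ in range(20):                   # query budget (hypothetical)
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # The most informative example is the one the model is least certain about.
    most_uncertain = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(most_uncertain)    # "ask" for its label and learn from it
    pool.remove(most_uncertain)

print(f"Accuracy after {len(labeled)} labels: {model.score(X, y):.2f}")
```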

Must Future Super-A.I. Have Rights?

Financial rights — juridical rights — political rights… Should we grant these? Must we? Can we? This is one of the trickiest issues of all time. Therefore, let’s not rush this through. Anyway, my answer is no ― no ― no. Even so, this blog may be pretty confrontational, and I’m very much aware of… Read the full article…

Compassion is the Intelligent Future

The only proper future of intelligence, whether natural or artificial, is Compassionate. Why? In my view, Compassion is the only way for a total civilization to be sustainable — thus, to grow and become more intelligent. Intelligence wants more of itself. Compassion also wants more of itself. Thus, Compassionate intelligence is a powerful self-enhancing combination. Read the full article…

Human-Centered or Ego-Centered A.I.?

‘Humanism’ is supposed to be human-centered. ‘Human-A.I. Value Alignment’ is supposed to be human-centered. Or is it ego-centered? Especially concerning (non-)Compassionate A.I., this is the crucial question that will make or break us. Unfortunately, this is intrinsically unclear to most people. Mere-ego versus total self: see also The Big Mistake. This is not about ‘I’… Read the full article…

A.I. from Future to Now

While it’s challenging to imagine what future A.I. will look like, we can develop an abstract idea that helps us understand present-day urgencies. Of course, one day, the future will be a million years from now. However, for the purpose of this text, we can see it as something like a century from now. There… Read the full article…

Compassion is NOT the Surface

Compassion is profoundly challenging. It should not be confused with pity, empathy, or friendliness. Confusion concerning Compassion may lead to substantial mistakes. On top of this, the concept is often used vaguely, as is any profound concept. For my formalization, see Compassion, basically. Compassion is much more potent than any of the above. Compassion makes you strong! Read the full article…

Is A.I. Becoming more Philosophy than Technology?

This question has already been relevant for years. It’s only becoming worse (or better). Of course, technology remains important, but it’s more like the bricks than the building. Many technologically oriented people may not like this idea. The ones who do are probably shaping the future. Some history: historically, the development of A.I. has had… Read the full article…

Compassion as Basis for A.I. Regulations

To prevent A.I.-related mishaps or even disasters while going into a future of super-A.I., merely regulating A.I. is not sufficient ― neither presently nor in principle. Striving for Compassionate A.I.: there will eventually be no security concerning A.I. if we don’t put Compassion into the core. The main reason is that super-A.I. will be much more… Read the full article…

Better A.I. for Better Humans

While we need to be afraid of non-Compassionate A.I., the Compassionate kind is necessary for a humane future ― starting as soon as possible. Please read about why we NEED Compassionate A.I. (C.A.I.) in general. In this text, I concretely go through some fields. In each, the primary focus naturally lies on the human complexity… Read the full article…

Patterns + Rewards in A.I.

Human-inspired Pattern Recognition and Completion (PRC) may significantly heighten the efficiency of Reinforcement Learning (RL) — also in A.I. For PRC, see The Brain as a Predictor; for RL, see Why Reinforcement Learning is Special. Mutually reinforcing: PRC shows valid directions and tentatively also realizes them. RL consolidates/reinforces the best directions and attenuates the lesser… Read the full article…
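As a rough, hypothetical illustration of this mutual reinforcement (my own sketch, not from the article): pattern completion proposes candidate directions in proportion to their current strength, and a simple reinforcement rule then consolidates the rewarded directions while the lesser ones fade. Directions, rewards, and the learning rate are made up.

```python
# Illustrative sketch only: PRC proposes directions; RL consolidates/attenuates them.
import random

strength = {"A": 0.5, "B": 0.5, "C": 0.5}    # completion strength per direction
reward = {"A": 1.0, "B": 0.2, "C": 0.0}      # toy environment's hidden rewards
alpha = 0.1                                   # learning rate

for _ in range(200):
    # PRC step: propose a direction in proportion to its current strength.
    direction = random.choices(list(strength), weights=list(strength.values()))[0]
    # RL step: move the chosen strength toward the reward actually received,
    # consolidating good directions and attenuating lesser ones over time.
    strength[direction] += alpha * (reward[direction] - strength[direction])

print(strength)  # direction "A" ends up strongly consolidated
```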

Levels of Abstraction in Humans and A.I.

Humans are masters of abstraction. We do it spontaneously, thus creating an efficient mental environment for ourselves, for others, and for our culture. The challenge now is to bring this to A.I. Abstraction = generalization: humans (and other animals) perform spontaneous generalization. From a number of example objects, we generalize to some concept. A concept is already an… Read the full article…
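A minimal sketch of generalization as prototype formation (my own illustration, with made-up feature values and concept names): several example objects are abstracted into one concept vector, and new objects are matched to the nearest concept.

```python
# Illustrative sketch only: abstracting example objects into concept prototypes.
import numpy as np

examples = {
    "bird": np.array([[1.0, 0.9, 0.1],      # hypothetical features, e.g. wings, flies, aquatic
                      [0.9, 1.0, 0.2]]),
    "fish": np.array([[0.0, 0.1, 1.0],
                      [0.1, 0.0, 0.9]]),
}

# Abstraction/generalization step: each concept becomes one prototype vector.
prototypes = {name: feats.mean(axis=0) for name, feats in examples.items()}

def classify(obj: np.ndarray) -> str:
    # A new object is assigned to the concept whose prototype lies closest.
    return min(prototypes, key=lambda name: np.linalg.norm(obj - prototypes[name]))

print(classify(np.array([0.95, 0.85, 0.15])))  # -> "bird"
```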

Two Takes on Human-A.I. Value Alignment

Time and again, the way engineers (sorry, engineers) think and talk about human-A.I. value alignment as if human values were unproblematic by themselves strikes me as naive. Even more so, as if the alignment problem could be solved by thinking about it in a mathematical, engineering way. Just find the correct code or something of the kind? No… Read the full article…

Super-A.I. and the Meaning Crisis

I don’t know how things will evolve, especially with those unpredictable humans. But it is clear that we are in a meaning crisis at present, globally. With the advent of super-A.I., soon enough, what shall we do? Please read about the meaning crisis. We use(d) to get meaning from fairy tales. No lack of them. Read the full article…

A.I.-Phobia

One should be scared of any real danger, including dangerous A.I. Anxiety, by contrast, is never a good adviser. This text is about being anxious. A phobic reaction against present technology is most dangerous. What is needed is a lot of common sense. As to the above image, note the reference to Mary Wollstonecraft Shelley’s novel. In… Read the full article…

Containing Compassion in A.I.

This is utterly vital to humankind ― arguably the most crucial endeavor in our still-young existence as a species. If we don’t bring this to a good end, future A.I. will remember us as an oddity. Please read first about Compassion, basically. Or even better, you might read some blogs about empathy and Compassion. Or even… Read the full article…
