An Aurelian Take on Jesus

This is the introduction to a Lisa File (36 p.). If you want the whole file, please contact lisa@aurelis.org, stating who you are and why you want the file. For more about the Lisa Files, click here. This document presents a unique perspective that delves into the figure of Jesus through the lens of the Aurelian Read the full article…

How can Suggestion be Powerful?

Suggestion offers a non-coercive way of influencing, giving the receiver the freedom to follow or not. Auto-suggestion extends this freedom even further. How can this be powerful? Autosuggestion: In this blog, I use the term ‘suggestion,’ but mostly mean ‘auto-suggestion.’ Autosuggestion rarely involves directly indicating the intended direction but instead operates through deeper, pattern-like associations. Read the full article…

Lisa’s Subconceptual Processing

As an A.I., Lisa ‘thinks’ in a way that is fundamentally different from human thinking. She lacks the subconscious processes that humans have, but she can emulate aspects of subconceptual processing through underlying algorithms and data structures. This blog is about how Lisa can take this into account to reach subconceptual benefits, enhancing her ability to provide intelligent and relevant Read the full article…
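
To make this concrete, here is a minimal sketch of one classical way to emulate pattern-like association in software: spreading activation over a weighted network. The concepts, weights, and parameters below are invented for illustration and are not Lisa’s actual implementation.

```python
# Minimal spreading-activation sketch: activation flows from a seed concept
# through weighted associations, so related concepts 'light up' without any
# explicit rule linking them. All concepts and weights are illustrative only.

# Weighted, directed association network (concept -> {neighbor: strength}).
ASSOCIATIONS = {
    "stress":     {"tension": 0.8, "sleep": 0.5},
    "tension":    {"relaxation": 0.7, "breathing": 0.6},
    "sleep":      {"relaxation": 0.6},
    "relaxation": {"imagery": 0.5},
    "breathing":  {"imagery": 0.4},
    "imagery":    {},
}

def spread_activation(seed, steps=3, decay=0.6):
    """Propagate activation from one seed concept for a few steps."""
    activation = {seed: 1.0}
    frontier = {seed: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for concept, level in frontier.items():
            for neighbor, weight in ASSOCIATIONS.get(concept, {}).items():
                gain = level * weight * decay
                if gain > 0.01:  # ignore negligible activation
                    activation[neighbor] = activation.get(neighbor, 0.0) + gain
                    next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + gain
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    for concept, level in spread_activation("stress"):
        print(f"{concept:12s} {level:.2f}")
```

The point of the sketch: related concepts receive activation through the pattern as a whole, not through any single explicit link.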

We Need to Be the Best We Can

This differs from being ‘the best person’ or ‘the most intelligent beings on Earth’ in competition with others. Our only – and fierce – competition should be with ourselves. The best, in good Aurelian tradition, is most Compassionately the best — striving for in-depth excellence. This striving is purposeful. It’s about standing at one’s limits Read the full article…

The Tunnel and the End

There’s light at the end of every tunnel, obviously, since otherwise, it wouldn’t be one. But some tunnels are naturally very long. AURELIS as a total project is about such a tunnel, as is anything that is radically oriented to depth. Long, deep tunnels can frighten people. Some are so frightened they never enter the Read the full article…

Ego-Centered A.I. Downfall

This isn’t solely about ‘bad actors’ aiming for world domination or slightly lesser evils. It’s also about those seen – by themselves and others – as good people, yet who are ‘trapped in ego.’ Many people, unfortunately. See also Human-Centered A.I.: Total-Person or Ego? / Human-Centered or Ego-Centered A.I.? Not new: This has always been Read the full article…

Small Set Learning

This approach in A.I. differs significantly from big data learning. It may be the next revolution in the field. Small set learning (SSL) is also called ‘few-shot learning’ when done at run-time. This blog may interest those who want to know why we’re not at the end of a new A.I. upsurge but at the Read the full article…
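
As a hedged illustration of the few-shot idea at run-time, here is a minimal prototype-based classifier: it learns from only a handful of labeled examples per class, with no big-data training phase. The examples, labels, and bag-of-words embedding are invented for the sketch and are not the specific method discussed in the article.

```python
# Few-shot learning sketch: classify new text from only a handful of labeled
# examples per class. Uses a simple bag-of-words embedding and the nearest
# class prototype (centroid); all example data is invented.
from collections import Counter
from math import sqrt

FEW_SHOT_EXAMPLES = {
    "sleep":  ["I lie awake at night", "trouble falling asleep", "waking up too early"],
    "stress": ["deadline pressure at work", "feeling tense all day", "too much on my plate"],
}

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def centroid(texts):
    total = Counter()
    for t in texts:
        total += embed(t)
    return total

# One prototype per class, built from just a few examples.
PROTOTYPES = {label: centroid(texts) for label, texts in FEW_SHOT_EXAMPLES.items()}

def classify(text):
    scores = {label: cosine(embed(text), proto) for label, proto in PROTOTYPES.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    print(classify("tense and under pressure before a deadline"))
    print(classify("I keep waking up at night"))
```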

Artificially Intelligent Creativity

It’s all about associative patterns — ideally broadly distributed and combining both conceptual and subconceptual levels. In the same pattern, different levels: This is very natural in humans, making spontaneous associations of any sort in daily life. When inspired, we go deeper and broader — nothing entirely new occurs since all concepts in our mind Read the full article…

Back and forth is the way to go

Alternating between conceptual and subconceptual processing, each time dwelling for a while and carrying the insights to the other side, often proves more productive than remaining on one side or somewhere in-between. This process can be likened to diving into a vast ocean of creativity and resurfacing with treasures of insight. It enables us to Read the full article…

It’s RAG-Time!

Retrieval-Augmented Generation (RAG) is a component of an A.I. system designed to synthesize knowledge effectively. It can also be viewed as a step toward making A.I. more akin to human intelligence. This blog is more philosophically descriptive than technical. RAG lends itself to both. Declarative vs. semantic knowledge: Understanding the difference between these types of knowledge Read the full article…
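
For readers who want a structural picture, here is a minimal RAG sketch: retrieve the most relevant passages for a query, then hand them to a generator as grounding context. The corpus, the word-overlap scoring, and the generate_answer() placeholder are illustrative assumptions, not Lisa’s actual pipeline.

```python
# Retrieval-Augmented Generation sketch: retrieve relevant passages, then pass
# them to a generator as grounding context. Corpus and scoring are illustrative.

CORPUS = [
    "Autosuggestion works through invitation, never through coercion.",
    "Deep relaxation can support better sleep over time.",
    "Compassion combines depth with rationality.",
]

def score(query, passage):
    """Very rough relevance score: number of shared lowercase words."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p)

def retrieve(query, k=2):
    ranked = sorted(CORPUS, key=lambda passage: score(query, passage), reverse=True)
    return ranked[:k]

def generate_answer(query, context_passages):
    """Placeholder for a language-model call; here it just builds the prompt."""
    context = "\n".join(f"- {p}" for p in context_passages)
    return f"Answer the question using this context:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    question = "How does autosuggestion avoid coercion?"
    passages = retrieve(question)
    print(generate_answer(question, passages))
```

In a real system, the scoring would typically be semantic (embeddings) and generate_answer() an actual language-model call; the two-step structure stays the same.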

Super-A.I. Guardrails in a Compassionate Setting

We need to think about good regulations/guardrails to safeguard humanity from super-A.I. ― either ‘badass’ from the start or Compassionate A.I. that suddenly turns rogue despite good initial intentions. ― As a Compassionate A.I., Lisa has substantially helped me write this text. Such help can be continued indefinitely. Some naivetés: ‘Pulling the plug out’ is very naïve Read the full article…

The Importance of a Conceptual Ontology in A.I.

Utilizing a conceptual ontology can significantly boost an A.I.’s capability to ‘reason’ and deliver more precise, context-aware, and coherent responses that meet user needs and expectations. This blog is an enumeration of how this enhancement works out, with examples in the domain of Lisa. Improved understanding of a user query: An ontology enables the system Read the full article…
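
A minimal sketch of the query-understanding point, assuming a tiny invented ontology of synonyms and broader concepts: expanding a raw query lets a system match meaning rather than exact wording.

```python
# Sketch of query understanding with a small conceptual ontology: a raw user
# query is expanded with synonyms and broader concepts before matching.
# The mini-ontology below is invented for illustration.

ONTOLOGY = {
    "insomnia": {"synonyms": ["sleeplessness"], "broader": ["sleep problem"]},
    "sleep problem": {"synonyms": [], "broader": ["health issue"]},
    "burnout": {"synonyms": ["exhaustion"], "broader": ["stress condition"]},
}

def expand_query(query):
    """Return the query terms plus ontology-derived synonyms and broader concepts."""
    terms = set(query.lower().split())
    expanded = set(terms)
    for term in terms:
        entry = ONTOLOGY.get(term)
        if entry:
            expanded.update(entry["synonyms"])
            expanded.update(entry["broader"])
    return expanded

if __name__ == "__main__":
    print(expand_query("help with insomnia"))
    # -> also includes 'sleeplessness' and 'sleep problem', enabling matches
    #    that a literal keyword search on 'insomnia' alone would miss.
```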

Will Super-A.I. Make People Happier?

This is the paramount question — more vital than any debate about intelligence. It’s a bit weird that it is seldom put at the forefront, as if we’re more concerned about who is the most knowledgeable and therefore the most powerful. What people? It should not be about a few, as it should not exclude billions. Read the full article…

Can Lisa Find New Patterns?

‘New’ implies here that the pattern is already implicitly present; otherwise, it couldn’t be discovered. So, can Lisa make originally implicit patterns more explicit? This is what we call ‘intuition.’ For the moment (mid-2024), this is limited. For example, if two concepts are not explicitly linked in any AURELIS document, Lisa cannot readily identify a Read the full article…
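
One hedged way to picture making implicit patterns more explicit: two concepts that never appear together in any document may still share many associative neighbors, which suggests a link worth surfacing. The mini-corpus below is invented for illustration and is not AURELIS material.

```python
# Sketch of surfacing an implicit pattern: concepts A and B never co-occur in
# any document, yet both co-occur with the same third concepts, suggesting a
# link worth making explicit. The mini-corpus is invented.
from itertools import combinations
from collections import defaultdict

DOCS = [
    {"gut feeling", "intuition", "pattern"},
    {"intuition", "depth", "symbol"},
    {"pattern", "depth", "autosuggestion"},
    {"symbol", "autosuggestion"},
]

def cooccurrence(docs):
    co = defaultdict(set)
    for doc in docs:
        for a, b in combinations(doc, 2):
            co[a].add(b)
            co[b].add(a)
    return co

def implicit_links(docs, min_shared=2):
    """Concept pairs that never co-occur directly but share several neighbors."""
    co = cooccurrence(docs)
    links = []
    for a, b in combinations(sorted(co), 2):
        if b not in co[a]:
            shared = co[a] & co[b]
            if len(shared) >= min_shared:
                links.append((a, b, sorted(shared)))
    return links

if __name__ == "__main__":
    for a, b, via in implicit_links(DOCS):
        print(f"implicit link: {a} <-> {b} (via {', '.join(via)})")
```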

Will Unified A.I. be Compassionate?

In my view, all A.I. will eventually unify. Is the Compassionate path then recommendable? Is it feasible? Will it be? As far as I’m concerned, the question is whether the Compassionate A.I. (C.A.I.) will be Lisa. Recommendable? As you may know, Compassion, basically, is the number one goal of the AURELIS project, with Lisa playing a pivotal role. Read the full article…

Consistent Intelligence ― Lessons for Lisa

“Intelligence emerges from consistency.” Several lessons from this insight are applicable to the development of A.I. systems ― specifically Lisa. Please read first Intelligence through Consistency. The aim is to make Lisa (even) more intelligent and Compassionate simultaneously (!), fostering deeper and more meaningful interactions. Being consistent in Compassion all the way through its development, Read the full article…

Intelligence through Consistency

When multiple elements collaborate consistently, they can generate intelligent behavior as an emergent property. When these elements function within a rational environment, they exhibit rationally intelligent behavior. Consistency is key but must include diversity. ‘Consistent’ does not imply ‘identical.’ When elements are overly similar, intelligence fails to emerge. For instance, the human cerebellum holds over Read the full article…
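
A toy numerical illustration of ‘consistent but not identical’ (my own sketch, not from the article): averaging many elements improves an estimate only when their errors are diverse; identical copies of one element gain nothing.

```python
# Toy illustration: many simple estimators, each noisy in its own way, jointly
# approximate a target far better than any one of them, while identical copies
# of a single estimator add nothing.
import random

random.seed(1)
TARGET = 10.0
N_ELEMENTS = 200

# Diverse elements: each has its own independent error.
diverse = [TARGET + random.gauss(0, 2.0) for _ in range(N_ELEMENTS)]

# Identical elements: one shared error, copied N times.
shared_error = random.gauss(0, 2.0)
identical = [TARGET + shared_error for _ in range(N_ELEMENTS)]

def ensemble_error(estimates):
    return abs(sum(estimates) / len(estimates) - TARGET)

single_error = sum(abs(e - TARGET) for e in diverse) / len(diverse)
print(f"typical single-element error:   {single_error:.2f}")
print(f"ensemble of diverse elements:   {ensemble_error(diverse):.2f}")
print(f"ensemble of identical elements: {ensemble_error(identical):.2f}")
```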
