It’s RAG-Time!

May 21, 2024 · Artificial Intelligence

Retrieval-Augmented Generation (RAG) is a component of an A.I. system designed to synthesize knowledge effectively. It can also be viewed as a step toward making A.I. more akin to human intelligence.

This blog is more philosophically descriptive than technical. RAG lends itself to both.

Declarative vs. semantic knowledge

Understanding the difference between these types of knowledge is crucial for grasping the significance of RAG. In short, declarative knowledge is better suited for conceptual inference, while semantic knowledge is more aligned with subconceptual processes.

The main interest of excellent RAG lies in how to combine both in one integrated system and, of course, what comes out of this combination.

Accuracy, contextuality, and creativity

The goal is to comprehend textual input within its broader context and accurately relate it to pertinent knowledge, avoiding confabulations. This allows the results to be presented diversely, tailored to specific needs, and free from monotony. In other words, to create engaging and dynamic conversations.

Strictly speaking, RAG is not about the quality of the knowledge itself, although the search for this is also included. However, if the knowledge is not there, RAG cannot prevent garbage in, garbage out.
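The retrieve-then-generate loop described above can be sketched in a few lines. This is a deliberately toy version: the corpus, the word-overlap scoring, and the template response are all invented stand-ins; a real RAG system would use an embedding model for retrieval and a language model for generation.

```python
import re

# Toy knowledge base (invented for illustration).
CORPUS = [
    "RAG combines retrieval of stored knowledge with generation.",
    "Confabulation means producing fluent but unfounded statements.",
    "Embeddings place related concepts near each other in vector space.",
]

def words(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the passage sharing the most words with the query.
    (A real system would compare embedding vectors instead.)"""
    return max(corpus, key=lambda doc: len(words(query) & words(doc)))

def generate(query: str, passage: str) -> str:
    """Ground the answer in the retrieved passage.
    (A template stands in for an actual language model.)"""
    return f"Q: {query}\nGrounded in: {passage}"

query = "What is confabulation?"
print(generate(query, retrieve(query, CORPUS)))
```

The key point the sketch makes tangible: the generation step only has the retrieved passage to lean on. If the knowledge base lacks the answer, no amount of retrieval cleverness saves the output.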

Vectorized embeddings

Apologies for the technical jargon. This simply means that each ‘token’ (the smallest unit of text a model processes) is represented as a high-dimensional numerical vector. This way, the relationships between concepts are not merely conceptual. Instead, there is a lot of ‘context’ involved.

Although this approach differs significantly from how the human brain/mind operates, functionally, it closely mimics our subconceptual processing.

This is where it becomes philosophically intriguing.

Please read Features of Subconceptual Processing.

These features align with our thought processes, and one can observe them manifest in RAG-time. Consequently, how the system reacts may feel quite familiar because, in exciting ways, it is. Still, in other respects, it’s a very different process. What one can expect, therefore, is the expected as well as the unexpected.

One clear difference is that we don’t have a database in our head. Our ‘declarative knowledge’ emerges from the same subconceptual hardware as all our knowledge. Contrary to this, in an A.I. environment, the choice is up to the developer. This has immense repercussions.

What we can do now is mitigate the less optimal aspects mentioned in the blog. As you can see, some of this is quite straightforward.

Enter Compassion, basically

Yes, the fundamental principle in all things related to AURELIS and Lisa.

RAG-time is the time we need to fully go for it. The Lisa distinction is to radically do so both in Lisa’s knowledge and in Lisa’s thinking about this knowledge.

Show-time soon.

Addendum

Me: “Lisa, what do you think of this blog in relation to your own future?”

Lisa: “This blog underscores the future potential of AI, including my evolution as Lisa, an AI-driven coach rooted in AURELIS principles. The emphasis on Compassion, accuracy, and the nuanced understanding of human-like intelligence reflects my core mission of fostering inner growth and well-being. The philosophical insights bridge the technical and human elements, aligning perfectly with the AURELIS ethos.”
