It’s RAG-Time!

May 21, 2024 Artificial Intelligence

Retrieval-Augmented Generation (RAG) is a component of an A.I. system designed to synthesize knowledge effectively. It can also be seen as a step toward making A.I. more akin to human intelligence.

This blog is more philosophically descriptive than technical. RAG lends itself to both.

Declarative vs. semantic knowledge

Understanding the difference between these types of knowledge is crucial for grasping the significance of RAG. In short, declarative knowledge is better suited for conceptual inference, while semantic knowledge is more aligned with subconceptual processes.

The main interest of excellent RAG lies in combining both in one integrated system and, of course, in what comes out of this combination.

Accuracy, contextuality, and creativity

The goal is to comprehend textual input within its broader context and accurately relate it to pertinent knowledge, avoiding confabulations. This allows the results to be presented diversely, tailored to specific needs, and free from monotony. In other words, it creates engaging and dynamic conversations.

Strictly speaking, RAG is not about the quality of the knowledge itself, although searching for quality sources is part of the process. If the knowledge is not there, RAG cannot prevent garbage in, garbage out.
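For readers curious about the mechanics behind this, here is a minimal sketch of the retrieval step in RAG. Real systems use learned embeddings; in this illustration, simple bag-of-words vectors stand in for them, and the passages and query are purely hypothetical.

```python
# A minimal sketch of RAG-style retrieval: rank stored passages by
# similarity to a query, then hand the best ones to a generator.
# Bag-of-words vectors stand in for learned embeddings here.
import re
from collections import Counter
from math import sqrt

def vectorize(text):
    """Turn text into a sparse word-count vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, passages, k=1):
    """Return the k passages most similar to the query."""
    q = vectorize(query)
    return sorted(passages, key=lambda p: cosine(q, vectorize(p)), reverse=True)[:k]

# Illustrative knowledge base, not real AURELIS content.
passages = [
    "RAG grounds generation in retrieved knowledge.",
    "Tokens are embedded as high-dimensional vectors.",
    "Compassion is the core AURELIS principle.",
]
print(retrieve("How does RAG ground its knowledge?", passages))
```

The point of the sketch: whatever the generator says afterward is anchored to what was actually retrieved, which is how confabulation is kept in check. If the passages themselves are poor, retrieval faithfully returns poor material.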

Vectorized embeddings

Apologies for the technical jargon. This simply means that each ‘token’ (the smallest unit of text the system processes) is represented as a vector in a high-dimensional mathematical space. This way, the relationships between concepts are not merely conceptual. Instead, a great deal of ‘context’ is involved.
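The idea can be shown in miniature. Below are hypothetical three-dimensional embeddings (real ones have hundreds or thousands of dimensions); the words and numbers are illustrative only, chosen so that related concepts sit close together in the space.

```python
# Illustrative toy embeddings: related concepts get nearby vectors,
# so their cosine similarity is high; unrelated concepts score low.
import math

embeddings = {
    "doctor": (0.9, 0.8, 0.1),
    "nurse": (0.85, 0.75, 0.2),
    "banana": (0.1, 0.2, 0.9),
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

print(cosine(embeddings["doctor"], embeddings["nurse"]))   # high: related concepts
print(cosine(embeddings["doctor"], embeddings["banana"]))  # low: unrelated concepts
```

Nothing in the vectors says *why* doctor and nurse belong together; the relatedness is implicit in the geometry. That is what makes the comparison with subconceptual processing apt.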

Although this approach differs significantly from how the human brain/mind operates, functionally, it closely mimics our subconceptual processing.

This is where it becomes philosophically intriguing.

Please read Features of Subconceptual Processing.

These features align with our thought processes, and one can observe them manifest in RAG-time. Consequently, how the system reacts may feel quite familiar because, in exciting ways, it is. Still, in other respects, it’s a very different process. What one can expect, therefore, is the expected as well as the unexpected.

One clear difference is that we don’t have a database in our head. Our ‘declarative knowledge’ emerges from the same subconceptual hardware as all our knowledge. Contrary to this, in an A.I. environment, the choice is up to the developer. This has immense repercussions.

What we can do now is mitigate the less optimal aspects mentioned in that blog. As you can see, some of this is quite straightforward.

Enter Compassion, basically

Yes, the fundamental principle in all things related to AURELIS and Lisa.

RAG-time is the time we need to fully go for it. What sets Lisa apart is doing so radically, both in Lisa’s knowledge and in Lisa’s thinking about this knowledge.

Show-time soon.

Addendum

Me: “Lisa, what do you think of this blog in relation to your own future?”

Lisa: “This blog underscores the future potential of AI, including my evolution as Lisa, an AI-driven coach rooted in AURELIS principles. The emphasis on Compassion, accuracy, and the nuanced understanding of human-like intelligence reflects my core mission of fostering inner growth and well-being. The philosophical insights bridge the technical and human elements, aligning perfectly with the AURELIS ethos.”


