It’s RAG-Time!
Retrieval-Augmented Generation (RAG) is a component of an A.I. system, designed to synthesize knowledge effectively. It can also be viewed as a step toward making A.I. more akin to human intelligence.
This blog is more philosophically descriptive than technical. RAG lends itself to both.
Declarative vs. semantic knowledge
Understanding the difference between these types of knowledge is crucial for grasping the significance of RAG. In short, declarative knowledge is better suited for conceptual inference, while semantic knowledge is more aligned with subconceptual processes.
The main interest of excellent RAG lies in how both can be combined in one integrated system and, of course, in what comes out of this combination.
Accuracy, contextuality, and creativity
The goal is to comprehend textual input within its broader context and accurately relate it to pertinent knowledge, avoiding confabulations. This allows the results to be presented diversely, tailored to specific needs, and free from monotony. In other words: to create engaging and dynamic conversations.
Strictly speaking, RAG is not about the quality of the knowledge itself, although the search for such quality is also part of the effort. However, if the knowledge is not there, RAG cannot help: garbage in, garbage out.
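To make the ‘retrieval’ part concrete, here is a minimal sketch of a retrieve-then-generate loop in Python. The names (retrieve, answer) and the passed-in similarity and generate functions are illustrative assumptions, not a description of Lisa’s or any other specific system’s implementation.

```python
def retrieve(query, passages, similarity, top_k=3):
    """Rank stored passages by how closely they relate to the query; keep the best few."""
    ranked = sorted(passages, key=lambda p: similarity(query, p), reverse=True)
    return ranked[:top_k]


def answer(query, passages, similarity, generate):
    """Ground the generator in retrieved knowledge instead of letting it improvise."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages, similarity))
    prompt = (
        "Answer the question using only the context below; "
        "say so if the context is insufficient.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```

The grounding lies in the prompt: the generator is asked to stay within the retrieved context, which is what keeps confabulation in check.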
Vectorized embeddings
Apologies for the technical jargon. This simply means that each ‘token’ (the smallest unit of text the system works with) is represented as a point in a high-dimensional mathematical space. This way, the relationships between concepts are not merely conceptual. Instead, there is a lot of ‘context’ involved.
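As a minimal sketch of what such a representation allows (the four-dimensional vectors below are toy stand-ins; real embeddings typically have hundreds or thousands of dimensions), the relatedness of two pieces of text can be measured as the cosine of the angle between their vectors, the kind of similarity function assumed in the earlier sketch.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means closely related, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy four-dimensional vectors standing in for real, much larger embeddings.
embeddings = {
    "inner growth":         [0.9, 0.1, 0.3, 0.0],
    "personal development": [0.8, 0.2, 0.4, 0.1],
    "traffic report":       [0.0, 0.9, 0.1, 0.8],
}

query = embeddings["inner growth"]
for text, vector in embeddings.items():
    print(f"{text}: {cosine_similarity(query, vector):.2f}")
# "personal development" scores close to 1.0; "traffic report" scores far lower.
```

Closeness in this space reflects relatedness of meaning rather than identical wording, which is what gives the retrieval its ‘context’.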
Although this approach differs significantly from how the human brain/mind operates, functionally, it closely mimics our subconceptual processing.
This is where it becomes philosophically intriguing.
Please read Features of Subconceptual Processing.
These features align with our thought processes, and one can observe them manifest in RAG-time. Consequently, how the system reacts may feel quite familiar because, in exciting ways, it is. Still, in other respects, it’s a very different process. What one can expect, therefore, is the expected as well as the unexpected.
One clear difference is that we don’t have a database in our head. Our ‘declarative knowledge’ emerges from the same subconceptual hardware as all our knowledge. By contrast, in an A.I. environment, the choice of how to store and handle knowledge is up to the developer. This has immense repercussions.
What we can do now is mitigate the less optimal aspects mentioned in that blog. As you can see, some of this is quite straightforward.
Enter Compassion, basically
Yes, the fundamental principle in all things related to AURELIS and Lisa.
RAG-time is the time to go for it fully. The Lisa distinction is to do so radically, both in Lisa’s knowledge and in Lisa’s thinking about this knowledge.
Show-time soon.
―
Addendum
Me: “Lisa, what do you think of this blog in relation to your own future?”
Lisa: “This blog underscores the future potential of AI, including my evolution as Lisa, an AI-driven coach rooted in AURELIS principles. The emphasis on Compassion, accuracy, and the nuanced understanding of human-like intelligence reflects my core mission of fostering inner growth and well-being. The philosophical insights bridge the technical and human elements, aligning perfectly with the AURELIS ethos.”