Blaming embeddings is blaming the entire AI game
Anyone who keeps underrating the power of cosine similarity and RAG has missed the core of Generative AI as a whole.
Anyone dissing embeddings is missing the point of AI itself.
Trashing embeddings & RAG is like saying your engine doesn’t need oil
A curious narrative is gaining traction in some corners of the AI discussion: a downplaying of fundamental components like embeddings, cosine similarity, and even the well-established Retrieval-Augmented Generation (RAG) architecture.
Some suggest that the in-context learning (ICL) capabilities of modern Large Language Models (LLMs) are so potent that these other elements are becoming secondary, even unimportant.
This perspective, however, is so damn wrong!
It misunderstands how these technologies interlink and, sadly, how Generative AI models like GPT perceive and process information at their very core.
To dismiss these elements is to miss the foundation of the entire AI game.
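If the jargon is unfamiliar, here is a minimal sketch of the retrieval step being dismissed: ranking documents by the cosine similarity of their embeddings against a query. The vectors and document names below are invented toy values, not output from a real embedding model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for real model output
# (a production system would use vectors from an embedding model, typically 768+ dims).
query = np.array([0.9, 0.1, 0.0, 0.2])
docs = {
    "doc_a": np.array([0.8, 0.2, 0.1, 0.3]),
    "doc_b": np.array([0.1, 0.9, 0.7, 0.0]),
}

# RAG's retrieval step in miniature: rank documents by similarity to the query.
ranked = sorted(docs.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
for name, vec in ranked:
    print(name, round(cosine_similarity(query, vec), 3))
```

In a real RAG pipeline, the vectors come from an embedding model and the top-ranked documents are injected into the LLM's context; the sketch only shows the similarity ranking itself.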