I was however tripped up by this sentence close to the beginning:
> we encountered a significant challenge with RAG: relying solely on vector search (even using both dense and sparse vectors) doesn’t always deliver satisfactory results for certain queries.
Not to be overly pedantic, but that's a problem with vector similarity, not RAG as a concept.
Although the author is clearly aware of that, I have had numerous conversations in the past few months alone with people essentially saying "RAG doesn't work because I use pg_vector (or whatever) and it never finds what I'm looking for", not realizing 1) it's not the only way to do RAG, and 2) there is often a fair difference between the embeddings and the vectorized query, and with awareness of why that is you can figure out how to fix it.
https://medium.com/@cdg2718/why-your-rag-doesnt-work-9755726... basically says everything I often say to people with RAG/vector search problems but again, seems like the assembled team has it handled :)
I've seen the whole gamut of RAG implementations as well, and the implementation, specifically the prompting and the document search, has a lot to do with the end quality.
So I'm not sure why the article uses 1/Rank alone. Did you test both and find that the smoothing didn't help? My understanding is that it has been pretty important for the best results.
We used 1/Rank in the article for simplicity purposes, though I can see why this might be confusing to an astute reader.
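For anyone following along, here's a toy sketch of the difference; plain 1/Rank is just RRF with the smoothing constant set to zero (the original RRF paper suggests k=60). This is only an illustration, not the code from the article:

```python
# Toy illustration: fuse ranked lists of doc ids from the vector and keyword retrievers.
def fuse(result_lists, k=0):
    """k=0 gives plain 1/rank; k=60 is the usual RRF smoothing constant."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d7"]
keyword_hits = ["d1", "d9", "d3"]
print(fuse([vector_hits, keyword_hits]))        # 1/Rank only
print(fuse([vector_hits, keyword_hits], k=60))  # smoothed RRF
```

With k=60 the gap between rank 1 and rank 2 shrinks a lot, so a single retriever's top hit can't dominate the fused list as easily.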
2. If anyone is observing significant gains from incorporating knowledge graphs into the retrieval step, what kind of a knowledge graph are you working with, what is your retrieval algorithm, and what technology are you using to store it?
https://github.com/Azure-Samples/rag-postgres-openai-python/
Here's the RRF+Hybrid part: https://github.com/Azure-Samples/rag-postgres-openai-python/...
That's largely based off a sample from the pgvector repo, with a few tweaks.
Agreed that Hybrid is the way to go; it's what the Azure AI Search team also recommends, based on their research:
https://techcommunity.microsoft.com/t5/ai-azure-ai-services-...
We also included supporting data in that write up showing you can improve significantly on top of Hybrid/RRF using a reranking stage (assuming you have a good reranker model), so we shipped one as an optional step as part of our search engine.
(disclaimer: supabase dev who went down the rabbit hole with hybrid search)
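If anyone wants to try that kind of reranking stage locally, here's a rough sketch using a public cross-encoder from sentence-transformers; the model name and the shape of the fused results are just assumptions for the example:

```python
from sentence_transformers import CrossEncoder

# Any cross-encoder reranker works here; this is a small public one.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query, fused_docs, top_k=10):
    # Score each (query, passage) pair jointly, then reorder the hybrid/RRF candidates.
    scores = reranker.predict([(query, doc["text"]) for doc in fused_docs])
    ranked = sorted(zip(fused_docs, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:top_k]]
```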
The tradeoffs of using existing systems vs building your own resonate with me. What we eventually experienced, however, is that periods of bad search performance often correlated to out-of-date search indices.
I'd be interested in another article detailing how you monitor search. It can be tricky to keep an entire search system moving.
LlamaIndex has a module for exactly this
https://docs.llamaindex.ai/en/stable/examples/retrievers/rel...
I'm not using that in my own experiments since I don't want to worry about the performance of running a model on production, but seems worth a try.
At first, I downloaded entire channels, loaded them into a vector DB, and did RAG. The results sucked. Vector searches don't understand things very well, and in this world, specific keywords and error messages are very searchable.
Instead, I take the user's query, ask an LLM (Claude / Bedrock) to find keywords, then search Slack using the API, get results, and use an LLM to filter for discussions that are relevant, then summarize them all in a response.
This is slow, of course, so it's very multi-threaded. A typical response will be within 30 seconds.
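Roughly, the pipeline looks like this (a sketch; ask_llm is a placeholder for the Claude/Bedrock call, and the Slack token/scopes are assumed):

```python
from concurrent.futures import ThreadPoolExecutor
from slack_sdk import WebClient

slack = WebClient(token="xoxp-...")  # search.messages requires a user token

def ask_llm(prompt: str) -> str:
    """Placeholder: wrap your Claude-on-Bedrock (or any LLM) call here."""
    raise NotImplementedError

def answer(question: str) -> str:
    # 1) pull searchable keywords (error strings, ids) out of the question
    keywords = ask_llm(f"Extract Slack search keywords from: {question}")
    hits = slack.search_messages(query=keywords, count=20)["messages"]["matches"]

    # 2) filter hits for relevance in parallel, since each check is an LLM call
    with ThreadPoolExecutor(max_workers=8) as pool:
        verdicts = list(pool.map(
            lambda m: ask_llm(f"Relevant to '{question}'? yes/no:\n{m['text']}"), hits))
    relevant = [m["text"] for m, v in zip(hits, verdicts) if v.lower().startswith("yes")]

    # 3) summarize whatever survived into one response
    return ask_llm(f"Answer '{question}' from these Slack threads:\n" + "\n---\n".join(relevant))
```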
LlamaIndex does this out of the box.
There are a couple of ways around this: either learn the relative importance based on the query, and/or use a separate reranking function (usually a DNN) that also takes user behavior into account.
Might be worth a shot if performance is a tricky spot in your setup.
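A toy version of the first idea, with a hand-written heuristic standing in for what would normally be a learned model:

```python
# Blend keyword (BM25) and vector scores with a query-dependent weight.
def blend(query, keyword_scores, vector_scores):
    # Queries full of exact tokens (error codes, ids, quoted strings) lean on keywords;
    # in practice this weight would be predicted by a model trained on user behavior.
    looks_exact = any(ch.isdigit() or ch in '"_' for ch in query)
    alpha = 0.7 if looks_exact else 0.3
    docs = set(keyword_scores) | set(vector_scores)
    return sorted(
        docs,
        key=lambda d: alpha * keyword_scores.get(d, 0.0)
        + (1 - alpha) * vector_scores.get(d, 0.0),
        reverse=True,
    )
```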
On the other hand, let me introduce another database we developed, Infinity (https://github.com/infiniflow/infinity), which provides hybrid search. You can see the performance here (https://github.com/infiniflow/infinity/blob/main/docs/refere...); both vector search and full-text search perform much faster than in other open source alternatives.
From the next version (a few weeks away), Infinity will also provide more comprehensive hybrid search capabilities: the 3-way recall you mentioned (dense vector, sparse vector, keyword search) will be available within a single request.
Vector similarity has a surprising failure mode: it only indexes explicit information and misses the implicit. For example, "The second word of this phrase, decremented by one" means "first"; do you think those two strings will embed anywhere near each other? Calculated results don't retrieve well, and neither do deductions in general.
How about "I agree with what John said, but I'd rather apply Victor's solution"? It won't embed like the answer you seek. Multi-hop information-seeking questions don't retrieve well.
The obvious fix is to pre-ingest all the RAG text into an LLM and compute these deductions before embedding.
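A sketch of what that could look like; ask_llm, embed, and store are all hypothetical placeholders for your LLM, embedding model, and vector DB:

```python
# ask_llm, embed, and store are placeholders, not real library APIs.
def expand_chunk(chunk: str, context: str = "") -> list[str]:
    # Ask the model to spell out implicit facts and resolve references
    # ("Victor's solution", "the second word...") as standalone sentences.
    derived = ask_llm(
        "List, as standalone sentences, facts that are only implicit in this text, "
        f"resolving pronouns and references using the context.\nContext: {context}\nText: {chunk}"
    )
    return [chunk] + [line for line in derived.splitlines() if line.strip()]

def index_chunk(chunk: str, context: str, store) -> None:
    # Index the derived statements alongside the original text so that
    # "calculated" queries have something explicit to match against.
    for text in expand_chunk(chunk, context):
        store.add(vector=embed(text), payload={"text": text, "source_chunk": chunk})
```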
While it's not such a problem in RAG, one downside is that it complicates pagination for results (there are a few different ways to tackle this).
> Out-of-sync document stores could lead to subtle bugs, such as a document being present in one store but not another.
But then the article suggests uploading synchronously to S3/DDB and then syncing asynchronously to the actual document stores. How does this solve the out-of-sync issue? It doesn't. My thinking is that it can't be solved.
> Data, numbers
How much data are we talking about?
For decades we had search engines based on the query terms (keywords). Then there were lots of discussions and some implementations to put a semantic search on top of it to improve the keyword search. A hybrid search. Google Search did exactly that already in 2015 [1].
Now we start from pure semantic search and put keyword search on top of it to improve the semantic search and call it hybrid search.
In both approaches, the overall search performance is exactly identical - to the last digit.
I am glad that, so far, no one has called this an innovation. But you could certainly write a lot of blog articles about it.
[1] https://searchengineland.com/semantic-search-entity-based-se...
The problem is that most people don't have experience optimizing even one of the retrieval systems (vector or keyword), so a lot of users who try to DIY end up having an awful time getting to prod. People talk about things like RRF (which is needed) but then miss other big-picture things, like the mistakes everyone makes when building out keyword search (not getting the right language rules in place), and also don't get the vector side right (finding the right embedding models, chunking strategies, etc).
I recognize I have a bit of a conflict of interest since I'm at a RAG vendor, but I'll abstain from the name/self-promotion and say: I've seen so many cases where people get this wrong, if you're thinking RAG you really should be hiring a consultant or looking at a complete platform from people that have done it more. Or be prepared to spend a lot of cycles learning and iterating
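To make the language-rules point concrete: the classic mistake is indexing with the default analyzer, so "running" never matches "run". A minimal Elasticsearch sketch (index name, field name, and client setup are just assumptions):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="docs",
    mappings={
        "properties": {
            # The built-in "english" analyzer adds stemming and stopword handling;
            # each language in your corpus needs its own analyzer choice.
            "body": {"type": "text", "analyzer": "english"},
        }
    },
)
```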
Don't think it's overly self-promotional if first asked :)
If you still don't wanna say, feel free to email, email in profile
One reason is that, unlike other data products, search is an active, conscious action by users. If ads or recommendations are wrong, nobody gets mad. But screw up search and it's like a shop salesperson taking you to the wrong aisle. It's actively frustrating.
So basically every useful search system is disliked to some degree because it will get some things wrong some of the time.
Also, my use case includes more than 20 languages. Finding usable embeddings for all of those languages is next to impossible. However, there are keyword plugins for most languages in Solr or Elasticsearch.
Btw, in my benchmarks the results look something like this in English (MAP = mean average precision):
BM25(keyword search) -> MAP=45%
Embedding (Ada-002) -> MAP=49%
Hybrid (BM25 + Embedding) -> MAP=57%
Hybrid (Embedding + BM25) -> MAP=57%
And that's before you use synonym dictionaries for keyword searches.
Additionally, adding conditional fuzzy matching into the mix, so that fat-fingering something still yields a workable result, is even better for UX (something along the lines of "the results from the tf-idf search are garbage, let's redo the search with fuzzy matching this time").
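A sketch of that fallback against Elasticsearch; the score threshold is just an arbitrary stand-in for whatever "these results are garbage" check you use:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def keyword_search(index: str, text: str, min_score: float = 5.0):
    strict = es.search(index=index, query={"match": {"body": {"query": text}}})
    hits = strict["hits"]["hits"]
    if hits and hits[0]["_score"] >= min_score:
        return hits
    # Results look like garbage: redo the search tolerating fat-fingered terms.
    fuzzy = es.search(
        index=index,
        query={"match": {"body": {"query": text, "fuzziness": "AUTO"}}},
    )
    return fuzzy["hits"]["hits"]
```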
If you make the embedding with an LLM, it should work for any language the LLM is trained on.
For my tests, I used Ada-002. As data, I used small news articles, with no chunking and no preprocessing. The query for the articles is embedded directly.
Of course, improvements can be made to both approaches. This should just illustrate what you might expect from hybrid search.