I was, however, tripped up by this sentence near the beginning:
> we encountered a significant challenge with RAG: relying solely on vector search (even using both dense and sparse vectors) doesn’t always deliver satisfactory results for certain queries.
Not to be overly pedantic, but that's a problem with vector similarity, not RAG as a concept.
Although the author is clearly aware of that, I have had numerous conversations in the past few months alone with people essentially saying "RAG doesn't work because I use pg_vector (or whatever) and it never finds what I'm looking for," not realizing that 1) vector search isn't the only way to do RAG, and 2) there is often a fair difference between the embeddings and the vectorized query, and once you understand why that is, you can figure out how to fix it.
https://medium.com/@cdg2718/why-your-rag-doesnt-work-9755726... basically says everything I often say to people with RAG/vector search problems but again, seems like the assembled team has it handled :)
https://github.com/Azure-Samples/rag-postgres-openai-python/
Here's the RRF+Hybrid part: https://github.com/Azure-Samples/rag-postgres-openai-python/...
That's largely based on a sample from the pgvector repo, with a few tweaks.
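For anyone unfamiliar with the fusion step, here's a minimal sketch of Reciprocal Rank Fusion in plain Python (not the actual code from the sample; doc IDs are made up, and k=60 is the constant from the original RRF paper):

```python
# Reciprocal Rank Fusion (RRF): merge ranked result lists (e.g. one from
# vector search, one from keyword search) into a single ranking.
# Each document scores sum(1 / (k + rank)) over the lists it appears in.

def rrf_fuse(ranked_lists, k=60):
    """Combine any number of ranked doc-ID lists via RRF scoring."""
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d7"]   # e.g. pgvector distance ordering
keyword_hits = ["d1", "d5", "d3"]  # e.g. tsvector/ts_rank ordering
print(rrf_fuse([vector_hits, keyword_hits]))
# → ['d1', 'd3', 'd5', 'd7']
```

Because it only looks at ranks, not raw scores, RRF sidesteps the problem that cosine distances and BM25/ts_rank scores live on incomparable scales.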
Agreed that hybrid is the way to go; it's what the Azure AI Search team also recommends, based on their research:
https://techcommunity.microsoft.com/t5/ai-azure-ai-services-...
We also included supporting data in that write-up showing that you can improve significantly on top of hybrid/RRF with a reranking stage (assuming you have a good reranker model), so we shipped one as an optional step in our search engine.
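The shape of that optional stage is simple: re-score the fused candidates against the query and keep the top few. A toy sketch, where the lexical-overlap `score` function is just a stand-in for a real cross-encoder reranker model scoring (query, document) pairs:

```python
# Reranking stage after hybrid/RRF retrieval: re-score candidates
# against the query, keep the best top_n. The overlap metric below is
# a placeholder for a real reranker model's relevance score.

def rerank(query, docs, top_n=3):
    def score(doc):
        q = set(query.lower().split())
        d = set(doc.lower().split())
        return len(q & d) / (len(q) or 1)
    return sorted(docs, key=score, reverse=True)[:top_n]

candidates = [
    "PostgreSQL supports full-text search",
    "Hybrid search combines vector and keyword retrieval",
    "Bananas are rich in potassium",
]
print(rerank("hybrid vector search", candidates, top_n=2))
```

Running the (usually expensive) reranker only on the small fused candidate set, rather than the whole corpus, is what makes the stage cheap enough to ship.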
(disclaimer: supabase dev who went down the rabbit hole with hybrid search)
LlamaIndex has a module for exactly this
https://docs.llamaindex.ai/en/stable/examples/retrievers/rel...
I'm not using that in my own experiments since I don't want to worry about the performance of running a model in production, but it seems worth a try.
On another note, let me introduce a database we developed, Infinity (https://github.com/infiniflow/infinity), which provides hybrid search. You can see the benchmarks here (https://github.com/infiniflow/infinity/blob/main/docs/refere...); both its vector search and its full-text search perform much faster than other open-source alternatives.
Starting with the next version (a few weeks out), Infinity will also provide more comprehensive hybrid search capabilities: the 3-way recall you mentioned (dense vector, sparse vector, keyword search) will be available within a single request.
For decades we had search engines based on the query terms (keywords). Then there were lots of discussions, and some implementations, of putting semantic search on top to improve the keyword search: a hybrid search. Google Search did exactly that as early as 2015 [1].
Now we start from pure semantic search and put keyword search on top of it to improve the semantic search and call it hybrid search.
In both approaches, the overall search performance is exactly identical - to the last digit.
I am glad that, so far, no one has called this an innovation. But you could certainly write a lot of blog articles about it.
[1] https://searchengineland.com/semantic-search-entity-based-se...