zlacker

[parent] [thread] 2 comments
1. aantix+(OP)[view] [source] 2023-12-27 16:14:31
It's possible. Perplexity.ai is trying to solve this problem.

E.g. "Japan's App Store antitrust case"

https://www.perplexity.ai/search/Japans-App-Store-GJNTsIOVSy...

replies(2): >>Philpa+c2 >>simonw+m2
2. Philpa+c2[view] [source] 2023-12-27 16:26:23
>>aantix+(OP)
That’s not the same thing. Perplexity is using an already-trained LLM to read those sources and synthesise a new result from them. This allows them to cite the sources used for generation.

LLM training sees these documents without context; it doesn’t know where they came from, and any such attribution would become part of the thing it’s trying to mimic.

It’s still largely an unsolved problem.

3. simonw+m2[view] [source] 2023-12-27 16:27:06
>>aantix+(OP)
That's a different approach: they've implemented RAG (Retrieval Augmented Generation), where the tool runs additional searches as part of answering a question.

ChatGPT Browse, Bing, and Google Bard implement the same pattern.

RAG does allow for some citation, but it doesn't help with the larger problem: the unassisted language model still can't cite sources for the answers it generates on its own.
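The RAG pattern described above can be sketched in a few lines: retrieve relevant documents first, then hand them to the model tagged with identifiers it can cite. This is a minimal illustration only; the keyword scorer and prompt shape are assumptions, not how Perplexity or any of the products mentioned actually implement it.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query (a stand-in
    for a real search index or vector store)."""
    terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(terms & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, sources):
    """Inline each retrieved source with a numbered tag the model can cite,
    so citations in the answer map back to known URLs."""
    context = "\n".join(
        f"[{i}] ({doc['url']}) {doc['text']}"
        for i, doc in enumerate(sources, 1)
    )
    return (
        "Answer using only the sources below, citing them as [n].\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Toy corpus with hypothetical URLs, purely for illustration.
corpus = [
    {"url": "https://example.com/a",
     "text": "Japan opened an App Store antitrust case."},
    {"url": "https://example.com/b",
     "text": "A recipe for miso soup."},
]
sources = retrieve("Japan App Store antitrust case", corpus)
prompt = build_prompt("What is the Japan App Store case about?", sources)
# `prompt` would then be sent to the LLM; because the sources were chosen
# before generation, the [n] tags in the answer are traceable to URLs.
```

The key point is that attribution is possible here only because retrieval happens at question-answering time, outside the model's weights, which is exactly why the same trick doesn't apply to knowledge absorbed during training.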
