Philpa (OP) | 2023-12-27 16:26:23
That’s not the same thing. Perplexity is using an already-trained LLM to read those sources and synthesise a new result from them. This allows them to cite the sources used for generation.
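
Roughly, the retrieval-augmented pattern looks like the sketch below. This is just an illustration of the idea, not Perplexity's actual pipeline: `retrieve` stands in for a real search backend and `llm` for any chat-completion call to an already-trained model.

```python
# Minimal retrieval-augmented generation (RAG) sketch with citations.
# Hypothetical names throughout; not Perplexity's actual implementation.

def retrieve(query: str) -> list[dict]:
    # A real system would query a search index here; hardcoded for illustration.
    return [
        {"url": "https://example.com/a", "text": "Doc A says X."},
        {"url": "https://example.com/b", "text": "Doc B says Y."},
    ]

def llm(prompt: str) -> str:
    # Stand-in for a call to an already-trained model.
    return "X and Y both hold [1][2]."

def answer_with_citations(query: str) -> str:
    sources = retrieve(query)
    # Number each source so the model can refer back to it by index.
    context = "\n".join(
        f"[{i}] {s['url']}\n{s['text']}" for i, s in enumerate(sources, 1)
    )
    prompt = (
        "Answer the question using only the sources below, "
        f"citing them as [n].\n\n{context}\n\nQuestion: {query}"
    )
    answer = llm(prompt)
    # Attach the source list so each [n] in the answer resolves to a URL.
    footnotes = "\n".join(f"[{i}] {s['url']}" for i, s in enumerate(sources, 1))
    return f"{answer}\n\n{footnotes}"

print(answer_with_citations("Are X and Y true?"))
```

The citations come for free here because the sources are in the prompt at generation time; the model only has to point back at them, not remember where its training data came from.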

LLM training sees these documents stripped of context: the model doesn't know where they came from, and if you embedded attribution in the training data, it would just become more text for the model to mimic rather than a usable record of provenance.

Attributing knowledge baked in at training time is still largely an unsolved problem.
