simonw (OP) 2025-12-06 17:11:09
All of the interesting LLMs can handle a full paper these days without any trouble. I don't think it's worth spending much time optimizing for that use case anymore - it mattered far more two years ago, when most models topped out at 4,000 or 8,000 tokens.