zlacker

1. drcode+(OP) 2024-02-13 18:51:56
This kind of just sounds like junk that will clog up the context window

I'll have to try it out though to know for sure

replies(2): >>Prosam+q >>hobofa+Y1
2. Prosam+q 2024-02-13 18:54:33
>>drcode+(OP)
I've been finding with these large context windows that context length is no longer the bottleneck for me: the LLM will start to hallucinate or fail to find the stuff I want in the text long before I hit the limit.
replies(1): >>drcode+91
3. drcode+91 2024-02-13 18:58:14
>>Prosam+q
Yeah, there's basically a soft limit now where the model just gets less effective as the context grows
4. hobofa+Y1 2024-02-13 19:02:45
>>drcode+(OP)
I'm assuming they've implemented it via a MemGPT-like approach, which doesn't clog the context window. The main prerequisite for that is good function calling, where OpenAI is currently significantly in the lead.
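
For anyone unfamiliar, the core idea is that the model manages its own memory through tool calls: facts get written to a store outside the context window, and only retrieved snippets ever re-enter the prompt. Here's a minimal sketch, assuming the OpenAI Python SDK v1 tool-calling interface; the archive dict and the archive_insert / archive_search names are made up for illustration (MemGPT proper uses embedding-based archival storage, not substring search):

```python
import json
from openai import OpenAI

client = OpenAI()

# Out-of-context store (hypothetical stand-in for MemGPT's archival memory).
archive: dict[str, str] = {}

def archive_insert(key: str, text: str) -> str:
    archive[key] = text
    return f"stored under {key!r}"

def archive_search(query: str) -> str:
    # Naive substring search; real MemGPT retrieves by embedding similarity.
    hits = [v for v in archive.values() if query.lower() in v.lower()]
    return "\n".join(hits) or "no matches"

TOOLS = [
    {"type": "function", "function": {
        "name": "archive_insert",
        "description": "Save text to long-term memory outside the context window.",
        "parameters": {"type": "object",
                       "properties": {"key": {"type": "string"},
                                      "text": {"type": "string"}},
                       "required": ["key", "text"]}}},
    {"type": "function", "function": {
        "name": "archive_search",
        "description": "Search long-term memory and return matching snippets.",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]}}},
]

def run(messages: list[dict]) -> str:
    # Loop until the model answers without requesting a tool call.
    while True:
        resp = client.chat.completions.create(
            model="gpt-4-turbo-preview", messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content
        messages.append(msg)  # keep the assistant's tool-call turn in history
        for tc in msg.tool_calls:
            args = json.loads(tc.function.arguments)
            fn = {"archive_insert": archive_insert,
                  "archive_search": archive_search}[tc.function.name]
            # Only this small result string enters the context window.
            messages.append({"role": "tool", "tool_call_id": tc.id,
                             "content": fn(**args)})

print(run([{"role": "user",
            "content": "Remember that my cat is named Ada, then recall her name."}]))
```

The point is that the store can grow without bound while each request only carries the conversation plus whatever the model chose to pull back in.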