>>drcode+(OP)
I've been finding with these large context windows that context window length is no longer the bottleneck for me — the LLM will start to hallucinate or fail to retrieve the stuff I want from the text long before I hit the context window limit.
>>drcode+(OP)
I'm assuming that they have implemented it via a MemGPT-like approach, which doesn't clog the context window. The main prerequisite for doing that is having good function calling, where OpenAI is currently significantly in the lead.
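To make the guess above concrete: a MemGPT-style setup keeps only a small working context and gives the model tools to page facts out to, and back in from, an external store. This is a minimal sketch under my own assumptions (the class, tool names, and naive substring search are all hypothetical simplifications — real systems use embedding search, and this says nothing about what was actually shipped); the tool schemas are in the shape OpenAI-style function calling expects:

```python
class ExternalMemory:
    """Hypothetical archival store living outside the LLM's context window."""

    def __init__(self):
        self.records = []  # archived facts, kept out of the prompt

    def archive(self, text: str) -> int:
        """Tool the model calls to evict a fact from its working context."""
        self.records.append(text)
        return len(self.records) - 1  # record id

    def search(self, query: str, limit: int = 3) -> list:
        """Tool the model calls to page relevant facts back in.
        Real systems would use embedding similarity; this is a toy
        substring match for illustration only."""
        hits = [t for t in self.records if query.lower() in t.lower()]
        return hits[:limit]


# Tool schemas in the shape an OpenAI-style function-calling API expects;
# the model, not the harness, decides when to invoke them, so the context
# window stays small regardless of how much has been archived.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "archive",
            "description": "Store a fact in long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Retrieve stored facts relevant to a query.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
]

mem = ExternalMemory()
mem.archive("User's project uses PostgreSQL 15.")
mem.archive("User prefers tabs over spaces.")
print(mem.search("postgresql"))  # pages the relevant fact back in
```

The point is that good function calling is load-bearing here: the model has to reliably emit `archive`/`search` calls at the right moments for the memory to stay coherent, which is why the quality of a provider's function calling matters more than raw window length in this design.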