zlacker

[parent] [thread] 2 comments
1. Turing+(OP)[view] [source] 2023-12-21 01:15:04
>> Doesn’t really follow instructions too well,

This is the biggest problem we're having swapping LLMs. Langchain makes the swap itself easy, and we don't care as much about quality during integration testing, etc.; the bigger problem is following directions. OpenAI reliably outputs JSON if I ask for it, and our software has come to expect JSON output in those cases. Swap in, say, Llama 2 and you don't get JSON even when you ask for it. That makes swapping not just a quality decision but an integration challenge.
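
One mitigation (rough sketch in Python; the helper name and fallback behavior are purely illustrative, not anything Langchain or OpenAI gives you) is to stop handing raw completions to the rest of the software and route every model's output through a lenient parse step first:

    import json

    def parse_json_reply(text):
        """Best-effort JSON extraction: try a straight parse, then the first {...} span."""
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            start, end = text.find("{"), text.rfind("}")
            if start != -1 and end > start:
                return json.loads(text[start:end + 1])
            raise ValueError("model reply contained no parseable JSON")

If the parse still fails you can re-prompt with the error message appended, which papers over a lot of the weaker models' formatting slips.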

replies(2): >>coder5+14 >>jay-ba+ih
2. coder5+14[view] [source] 2023-12-21 01:59:54
>>Turing+(OP)
I haven't used the Llama 2 models much in quite a while, because they just aren't very good compared to the other options that exist at this point. The instruction-tuned variants of Mistral and Mixtral seem to have very little trouble responding in JSON when I ask for it. Beyond that, with LLMs that you run yourself you can also enforce a grammar on the response, guaranteeing that the model replies with valid JSON (that matches your schema!) and no extraneous text.

Something potentially helpful here: https://github.com/ggerganov/llama.cpp/discussions/2494
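
Roughly what that looks like through llama-cpp-python, if that's your stack (a sketch only: the model path is a placeholder, and the toy grammar below is far less complete than the json.gbnf that ships with llama.cpp):

    from llama_cpp import Llama, LlamaGrammar

    # Toy GBNF grammar: force the reply to be an object with a single "answer" string.
    GRAMMAR = r'''
    root   ::= "{" ws "\"answer\"" ws ":" ws string ws "}"
    string ::= "\"" [^"]* "\""
    ws     ::= [ \t\n]*
    '''

    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")  # placeholder path
    out = llm(
        "Answer in JSON: what is the capital of France?",
        grammar=LlamaGrammar.from_string(GRAMMAR),
        max_tokens=64,
    )
    print(out["choices"][0]["text"])  # can only contain text that matches the grammar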

If you fine-tuned a base model (like the one in the article) on various inputs and the expected JSON output for each input, it would probably do even better.
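
The training data for that is basically just pairs where every completion is exactly the JSON your software expects. Something like this (illustrative only; the "prompt"/"completion" field names depend on whatever fine-tuning tooling you use):

    import json

    # Illustrative fine-tuning pairs: the completion is always the JSON the app expects.
    examples = [
        {"prompt": "Extract the order: 'Two lattes for Sam, pickup at 9am.'",
         "completion": json.dumps({"customer": "Sam", "items": ["latte", "latte"], "pickup": "09:00"})},
        {"prompt": "Extract the order: 'One bagel for Ana, no pickup time given.'",
         "completion": json.dumps({"customer": "Ana", "items": ["bagel"], "pickup": None})},
    ]

    with open("train.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")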

3. jay-ba+ih[view] [source] 2023-12-21 04:26:29
>>Turing+(OP)
In my experience, Llama 2 (70B) can semi-reliably provide JSON output when provided with clear instructions and various distinct but similarly structured examples. It goes from “semi-reliably” to “consistently” when fine-tuned.
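
A sketch of what "clear instructions plus similarly structured examples" can look like (the task, field names, and example reviews here are made up):

    def build_prompt(review):
        # Instruction, two structurally identical shots, then the real input.
        # Every extra shot improves reliability but eats into the context window.
        return (
            'Return ONLY a JSON object with keys "sentiment" and "confidence".\n\n'
            'Review: "Great battery life, but the screen is dim."\n'
            '{"sentiment": "mixed", "confidence": 0.7}\n\n'
            'Review: "Broke after two days."\n'
            '{"sentiment": "negative", "confidence": 0.9}\n\n'
            f'Review: "{review}"\n'
        )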

The primary issue I’ve run into is exhausting the context window much sooner than I’d like. Fine-tuning tends to mostly fix this issue though.
