zlacker

[parent] [thread] 7 comments
1. Bonobo+(OP)[view] [source] 2024-02-13 19:14:06
It's gotten so difficult to force ChatGPT to include the full code in its answer when I ask about code-related problems.

It's always this patchwork of "insert your previous code here".

I don't think this is a problem with the model itself; I suspect the system prompt has some major issues.

replies(3): >>ldjkfk+N1 >>keketi+Y2 >>microm+6k
2. ldjkfk+N1[view] [source] 2024-02-13 19:22:04
>>Bonobo+(OP)
They save money by producing fewer tokens.
replies(3): >>Bonobo+F7 >>snoman+Z7 >>soultr+s71
3. keketi+Y2[view] [source] 2024-02-13 19:26:47
>>Bonobo+(OP)
Every output token costs GPU time, and thereby money. They may have tuned the model to be less verbose for exactly that reason.
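To put rough numbers on it, a back-of-envelope sketch (the per-token price below is an illustrative assumption, not an actual published rate):

```python
# Back-of-envelope: cost of a full-code rewrite vs. a terse patch answer.
# The price is an assumed placeholder rate, not OpenAI's real pricing.
PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # assumed USD per 1000 output tokens

def output_cost(tokens: int) -> float:
    """Cost in USD for a completion that emits `tokens` output tokens."""
    return tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

full_rewrite = output_cost(2000)  # repeating the user's whole file
patch_only = output_cost(300)     # "insert your previous code here"
print(f"full: ${full_rewrite:.3f}, patch: ${patch_only:.3f}")
```

At scale, shaving a few thousand tokens off every coding answer adds up, which is the incentive being suggested here.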
4. Bonobo+F7[view] [source] [discussion] 2024-02-13 19:49:25
>>ldjkfk+N1
And I have to force it by repeating the question with differently worded instructions.

I'd understand it if they only did this in the first reply and I had to specifically ask to get the full code. That would be easier for them and for me: I could fix the code faster and end up with the complete working code.

As it stands, it's bad for both sides.

5. snoman+Z7[view] [source] [discussion] 2024-02-13 19:50:59
>>ldjkfk+N1
Which is weird, because I'm constantly asking it to make responses shorter and use fewer adjectives and adverbs. There's just so much "fluff" in its responses.

Sometimes it feels like its training set was filled to the brim with marketing bs.

replies(1): >>crooke+hc
6. crooke+hc[view] [source] [discussion] 2024-02-13 20:13:20
>>snoman+Z7
I saw somebody else suggest this for custom instructions and it's helped a lot:

> You are a maximally terse assistant with minimal affect.

It's not perfect, but it neatly eliminates almost all the "Sure, I'd be happy to help. (...continues for a paragraph...)" filler before actual responses.
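For anyone using the API instead of the ChatGPT UI, the same instruction can go in as a system message. A minimal sketch, assuming the standard chat message format (the helper name is made up, and no API call is made here):

```python
# Sketch: applying the terse custom instruction as a system message.
# Assumes the common {"role", "content"} chat message format.
TERSE_INSTRUCTION = "You are a maximally terse assistant with minimal affect."

def with_custom_instruction(user_prompt: str) -> list[dict]:
    """Build a chat message list with the terse instruction prepended."""
    return [
        {"role": "system", "content": TERSE_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

messages = with_custom_instruction("Refactor this function; return the full code.")
# messages[0] carries the instruction; pass the whole list to your chat client.
```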

7. microm+6k[view] [source] 2024-02-13 20:59:27
>>Bonobo+(OP)
tell it not to do that in the custom instructions
8. soultr+s71[view] [source] [discussion] 2024-02-14 02:51:23
>>ldjkfk+N1
They don’t save money when you have to ask it multiple times to get the expected output.