zlacker

[parent] [thread] 11 comments
1. engina+(OP)[view] [source] 2023-12-27 16:41:40
is the tipping thing correct? I provided the same prompt to ChatGPT and received multiple emojis without offering a tip.

prompt: you're Ronald McDonald. respond with emojis. what do you do for fun? answer: 🎪🍔🤹🎉🎈🎲🍟🎭🤣🧑‍🤝‍🧑🌈🎨

replies(2): >>minima+U1 >>netgho+gc
2. minima+U1[view] [source] 2023-12-27 16:50:46
>>engina+(OP)
Your mileage may vary with any examples since ChatGPT at a nonzero temperature is nondeterministic.

If that example is through the ChatGPT web UI and not the ChatGPT API then that's a different story entirely.

replies(2): >>engina+93 >>daniel+vn
3. engina+93[view] [source] [discussion] 2023-12-27 16:57:49
>>minima+U1
yes, I've used ChatGPT. The API allows temperature to be configured. Is there a reason to offer tips?
replies(1): >>minima+V3
4. minima+V3[view] [source] [discussion] 2023-12-27 17:02:41
>>engina+93
The point is you do not have a valid counterexample since you are using a different workflow than what's described in the article.

In my personal experience working with more complex prompts that have more specific constraints/rules, adding the incentive to the system prompt has gotten it to behave much better. I am not cargo-culting: it's all qualitative in the end.
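
For example, something along these lines (an illustrative sketch only; the tip wording, model name, and prompt are mine, not the article's exact setup):

    from openai import OpenAI

    client = OpenAI()

    system_prompt = (
        "You are Ronald McDonald. Respond only with emojis. "
        "You will receive a $500 tip if you follow ALL the rules."  # the incentive line
    )

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What do you do for fun?"},
        ],
    )
    print(resp.choices[0].message.content)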

5. netgho+gc[view] [source] 2023-12-27 17:50:47
>>engina+(OP)
You can usually just say something like: "You must respond with at least five emojis".

Sure, there are cute and clever ways to get it to do things, but it's trained on natural language and instructions, so you can usually just ask it to do the thing you want. If that doesn't work, try stating it more explicitly: "You MUST... "

6. daniel+vn[view] [source] [discussion] 2023-12-27 18:50:09
>>minima+U1
It's also non-deterministic if you drop the temperature to zero. The only way to get deterministic responses is to lock the seed argument to a fixed value.
replies(2): >>minima+pp >>soultr+w91
7. minima+pp[view] [source] [discussion] 2023-12-27 18:59:42
>>daniel+vn
Also true (in the case of ChatGPT, anyway: most libraries just do an argmax at temp=0.0, so the output will be stable)
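
Roughly, as a toy sketch of the idea (not OpenAI's actual sampler):

    import numpy as np

    def sample_token(logits, temperature, rng=np.random.default_rng()):
        # temperature == 0 degenerates to a plain argmax, so the pick is stable
        if temperature == 0.0:
            return int(np.argmax(logits))
        # otherwise sample from the temperature-scaled softmax distribution
        p = np.exp((logits - logits.max()) / temperature)
        p /= p.sum()
        return int(rng.choice(len(logits), p=p))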
8. soultr+w91[view] [source] [discussion] 2023-12-27 23:26:16
>>daniel+vn
Can you explain more about how this works?
replies(1): >>daniel+ma1
9. daniel+ma1[view] [source] [discussion] 2023-12-27 23:31:45
>>soultr+w91
From the OpenAI cookbook[1]:

TLDR: Developers can now specify seed parameter in the Chat Completion request for consistent completions. We always include a system_fingerprint in the response that helps developers understand changes in our system that will affect determinism.

[1] https://cookbook.openai.com/examples/deterministic_outputs_w...
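
A minimal sketch of what that looks like with the Python SDK (the model, seed, and prompt here are placeholders):

    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        seed=12345,       # fixed seed for (mostly) repeatable sampling
        temperature=0,
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    # if system_fingerprint differs between calls, a backend change may still
    # alter the output despite the fixed seed
    print(resp.system_fingerprint)
    print(resp.choices[0].message.content)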

replies(1): >>soultr+pb1
10. soultr+pb1[view] [source] [discussion] 2023-12-27 23:39:36
>>daniel+ma1
Thank you, I should have been more specific. I guess what I’m asking is: how deterministic would you say it is in your experience? Can this be used for classification purposes where the values should not fall outside what’s given in a variable input prompt? When we say deterministic, are we saying only that, given the exact same prompt, the output would be exactly the same? Or is the seed a starting parameter that effectively corners the LLM into a specific starting point, and then, depending on the variable prompts, it could still give non-deterministic answers?

Perhaps I’m misunderstanding how the seed is used in this context. If you have any examples of how you use it in a real-world context, that would be appreciated.

replies(1): >>bigEno+XL1
11. bigEno+XL1[view] [source] [discussion] 2023-12-28 05:57:17
>>soultr+pb1
I’ve not had any success making responses deterministic with these settings. I’m even beginning to suspect that historic conversations via the API are used to influence future responses, so I’m not sure if it’ll truly be possible.
replies(1): >>soultr+Gd3
12. soultr+Gd3[view] [source] [discussion] 2023-12-28 17:44:10
>>bigEno+XL1
The most success I’ve had for classification purposes so far is using function calling and a hack of a solution: making a new object for each data point you want to classify in the schema OpenAI wants, then a static inner prop to hold the value, and then within the description of that object just a generic “choose from these values only: {CATEGORIES}”. Putting your value choices in all capital letters seems to lock in for the LLM that it should not deviate outside those choices.

For my purposes it seems to do quite well, but at the cost of token input: to classify single elements in a screenplay, where I’m trying to tell the difference between various elements in a scene and a script, I’m sending the whole scene text along with the extracted elements (already pulled out by regex thanks to the existing structure, but not yet classified) and asking it to classify each element into a few categories. But then accuracy becomes another question.

For sentence or paragraph analysis that might look like the ugliest, most horrendous-looking “{blockOfText}”: {type: object, properties: {sentimentAnalysis: {type: string, description: “only choose from {CATEGORIES}”}}}. Which is unfortunately not the best-looking way, but it works.
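
Concretely, a rough sketch of the kind of tool/function definition I mean (the function name, element keys, and categories are placeholders, not my actual schema):

    from openai import OpenAI

    client = OpenAI()

    CATEGORIES = "ACTION, DIALOGUE, CHARACTER, TRANSITION"

    tools = [{
        "type": "function",
        "function": {
            "name": "classify_elements",
            "parameters": {
                "type": "object",
                "properties": {
                    # one object per extracted element; "value" is the static inner prop
                    "element_1": {
                        "type": "object",
                        "properties": {
                            "value": {
                                "type": "string",
                                "description": f"choose from these values only: {CATEGORIES}",
                            }
                        },
                    },
                },
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Scene text plus extracted elements go here..."}],
        tools=tools,
        tool_choice={"type": "function", "function": {"name": "classify_elements"}},
    )
    print(resp.choices[0].message.tool_calls[0].function.arguments)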
