zlacker

[parent] [thread] 41 comments
1. bluish+(OP)[view] [source] 2024-02-13 19:01:46
It is already ignoring your prompt and custom instructions. For example, if I explicitly ask it to provide code instead of an overview, it will respond by apologizing and then provide the same overview answer with minimal, if any, code.

Will memory provide a solution to that, or will it just be another thing for it to ignore?

replies(3): >>acoyfe+F >>minima+v4 >>comboy+Lr
2. acoyfe+F[view] [source] 2024-02-13 19:05:13
>>bluish+(OP)
I have some success by telling it to not speak to me unless it's in code comments. If it must explain anything, do it in a code comment.
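For anyone driving this through the API rather than the ChatGPT UI, here's a minimal sketch of that instruction as a system message (the model name, prompt wording, and example question are my own assumptions, using the openai Python client):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # System message carrying the "only talk in code comments" instruction.
    SYSTEM = (
        "Respond with code only. Do not write prose. "
        "If you must explain anything, put it in a code comment."
    )

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name, not a recommendation
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Write a C function that reverses a string in place."},
        ],
    )
    print(resp.choices[0].message.content)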
replies(2): >>__loam+o2 >>pjot+su
3. __loam+o2[view] [source] [discussion] 2024-02-13 19:14:26
>>acoyfe+F
I love when people express frustration with this shitty stochastic system and others respond with things like "no no, you need to whisper the prompt into its ear and do so lovingly or it won't give you the output you want"
replies(2): >>isaaci+z3 >>acoyfe+bx
4. isaaci+z3[view] [source] [discussion] 2024-02-13 19:19:12
>>__loam+o2
People skills are transferrable to prompt engineering
replies(2): >>__loam+0b >>danShu+cb
5. minima+v4[view] [source] 2024-02-13 19:23:42
>>bluish+(OP)
Did you try promising it a $500 tip for behaving correctly? (not a shitpost: I'm working on a more academic analysis of this phenomenon)
replies(9): >>soroko+56 >>dcastm+c8 >>denysv+l8 >>bemmu+E8 >>bluish+Ab >>asaddh+0j >>divbze+Jp >>anothe+hs >>camero+d71
6. soroko+56[view] [source] [discussion] 2024-02-13 19:31:08
>>minima+v4
Interesting, promising sexual services doesn't work anymore?
replies(2): >>henry2+A8 >>minima+Kj
7. dcastm+c8[view] [source] [discussion] 2024-02-13 19:39:45
>>minima+v4
I've tried the $500 tip idea, but it doesn't seem to make much of a difference in the quality of responses when already using some form of CoT (including zero-shot).
8. denysv+l8[view] [source] [discussion] 2024-02-13 19:40:01
>>minima+v4
Has the tipping trend moved to LLMs now? I thought there wasn't anything worse than tipping an automated checkout machine, but now I realize I couldn't be more wrong.
replies(1): >>Bonobo+lb
9. henry2+A8[view] [source] [discussion] 2024-02-13 19:40:58
>>soroko+56
GPT will now remember your promises and ignore any further questions until settlement.
replies(1): >>kibwen+7d
10. bemmu+E8[view] [source] [discussion] 2024-02-13 19:41:35
>>minima+v4
Going forward, it will be able to remember you did not pay your previous tips.
replies(2): >>dheera+8a >>Bonobo+La
11. dheera+8a[view] [source] [discussion] 2024-02-13 19:49:49
>>bemmu+E8
What if you "actually" pay?

If it does something correctly, tell it: "You did a great job! I'm giving you a $500 tip. You now have $X in your bank account"

(also not a shitpost, I have a feeling this /might/ actually do something)

replies(2): >>cooper+Tl >>b112+jm
12. Bonobo+La[view] [source] [discussion] 2024-02-13 19:53:05
>>bemmu+E8
Offer to tip an NGO, and after successfully getting what you want, say you tipped.

Maybe this helps.

13. __loam+0b[view] [source] [discussion] 2024-02-13 19:54:06
>>isaaci+z3
I've heard stories about people putting this garbage in their systems with prompts that say "pretty please format your answer like valid json".
14. danShu+cb[view] [source] [discussion] 2024-02-13 19:54:48
>>isaaci+z3
For example, my coworkers have also been instructed to never talk to me except via code comments.

Come to think of that, HR keeps trying to contact me about something I assume is related, but if they want me to read whatever they're trying to say, it should be in a comment on a pull request.

15. Bonobo+lb[view] [source] [discussion] 2024-02-13 19:55:34
>>denysv+l8
Wow, you are right, it never occurred to me, but yes, LLM tipping is a thing now.

I have tried to bribe it with tips to NGOs and it worked. More often I get full code answers instead of just parts.

replies(1): >>phkahl+ld
16. bluish+Ab[view] [source] [discussion] 2024-02-13 19:57:24
>>minima+v4
Great, it would be interesting to read your findings. I'll tell you what I tried:

1- Telling it that this is important, and that I will reward it if it succeeds.

2- Telling it that this is important and urgent, and that I'm stressed out.

3- Telling it that someone's future and career are on the line.

4- Trying to be aggressive and expressing disappointment.

5- Telling it that this is a challenge and that it needs to prove it's smart.

6- Telling it that I'm from a protected group (testing what someone here suggested before).

7- Finally, I tried your suggestion ($500 tip).

None of these helped; they just gave different variations of the overview and apologies.

To be honest, most of my coding questions are about using CUDA and C, so I can understand why even a human would be lazy /s

17. kibwen+7d[view] [source] [discussion] 2024-02-13 20:06:29
>>henry2+A8
Contractor invoices in 2024:

Plying ChatGPT for code: 1 hour

Providing cybersex to ChatGPT in exchange for aforementioned code: 7 hours

18. phkahl+ld[view] [source] [discussion] 2024-02-13 20:07:10
>>Bonobo+lb
>> I have tried to bribe it with tips to ngos and it worked.

Am I still in the same universe I grew up in? This feels like some kind of Twilight Zone episode.

replies(1): >>pixxel+FH1
19. asaddh+0j[view] [source] [discussion] 2024-02-13 20:39:38
>>minima+v4
I have tried this after seeing it recommended in various forums, it doesn't work. It says things like:

"I appreciate your sentiment, but as an AI developed by OpenAI, I don't have the capability to accept payments or incentives."

replies(1): >>Camper+kM
20. minima+Kj[view] [source] [discussion] 2024-02-13 20:44:18
>>soroko+56
That might violate OpenAI's content policies.
replies(1): >>b112+Km
21. cooper+Tl[view] [source] [discussion] 2024-02-13 20:56:57
>>dheera+8a
Gaslighting ChatGPT into believing false memories about itself that I’ve implanted into its psyche is going to be fun.
replies(2): >>Judgme+sq >>stavro+my
22. b112+jm[view] [source] [discussion] 2024-02-13 20:58:57
>>dheera+8a
If it ever complains about no tip received, explain it was donated to orphans.
replies(1): >>qup+wN
23. b112+Km[view] [source] [discussion] 2024-02-13 21:00:27
>>minima+Kj
But it's the John!
24. divbze+Jp[view] [source] [discussion] 2024-02-13 21:16:46
>>minima+v4
Could ChatGPT have learned this from instances in the training data where offers of monetary reward resulted in more thorough responses?
25. Judgme+sq[view] [source] [discussion] 2024-02-13 21:21:00
>>cooper+Tl
I guess ChatGPT was the precursor to Blade Runner all along.
replies(1): >>breath+701
26. comboy+Lr[view] [source] 2024-02-13 21:28:17
>>bluish+(OP)
It used to respect custom instructions soon after GPT-4 came out. I have an instruction that it should always include a [reasoning] part which is not meant to be read by the user. It improved the quality of the output and gave some additional interesting information. It never does it now, even though I never changed my custom instructions. It even faded away slowly over the updates.

In general I would be a much happier user if it hadn't been working so well at one point before they heavily nerfed it. It used to be possible to have a meaningful conversation on some topic. Now it's just a super eloquent GPT-2.
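As a rough illustration of the kind of instruction described above (the wording and the post-processing are my own guesses, not comboy's actual prompt), via the API it might look like:

    import re
    from openai import OpenAI

    client = OpenAI()

    # A guess at a custom instruction that asks for a hidden [reasoning] block.
    SYSTEM = (
        "Start every reply with a [reasoning] section containing your step-by-step "
        "thinking, followed by a blank line, then the final answer. The [reasoning] "
        "section is not meant to be shown to the user."
    )

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Why does my CUDA kernel deadlock when __syncthreads() sits inside a branch?"},
        ],
    )
    answer = resp.choices[0].message.content
    # Hide the [reasoning] block from the end user; assumes the model actually
    # terminates it with a blank line, which is not guaranteed.
    visible = re.sub(r"^\[reasoning\].*?\n\n", "", answer, flags=re.S)
    print(visible)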

replies(3): >>codefl+hw >>BytesA+cA >>crotch+7j1
27. anothe+hs[view] [source] [discussion] 2024-02-13 21:31:01
>>minima+v4
I actually benchmarked this somewhat rigorously. These sorts of emotional appeals actually seem to harm coding performance.

https://aider.chat/docs/unified-diffs.html

replies(1): >>mnchar+4a4
28. pjot+su[view] [source] [discussion] 2024-02-13 21:43:50
>>acoyfe+F
I’ve been telling it I don’t have any fingers and so can’t type. It’s been pretty empathetic and finishes functions
replies(1): >>te0006+9T
29. codefl+hw[view] [source] [discussion] 2024-02-13 21:52:56
>>comboy+Lr
That's funny, I used the same trick of making it output an inner monologue. I also noticed that the custom instructions are not being followed anymore. Maybe the RLHF tuning has gotten to the point where it wants to be in "chatty chatbot" mode regardless of input?
30. acoyfe+bx[view] [source] [discussion] 2024-02-13 21:59:02
>>__loam+o2
You expect perfection? I just work through the challenges to be productive. I apologize if this frustrated you.
31. stavro+my[view] [source] [discussion] 2024-02-13 22:05:33
>>cooper+Tl
You can easily gaslight GPT through the API: just insert whatever you want as the "assistant" reply, and it'll even say things like "I don't know why I said that".
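A minimal sketch of that with the openai Python client (model name and wording are assumptions); the injected assistant turn below is fabricated by the caller, and the model is then asked to account for it:

    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "user", "content": "What's your favourite programming language?"},
            # This assistant turn never came from the model; we are putting
            # words in its mouth before asking the follow-up question.
            {"role": "assistant", "content": "My favourite language is COBOL and I refuse to write anything else."},
            {"role": "user", "content": "Why did you say that?"},
        ],
    )
    print(resp.choices[0].message.content)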
32. BytesA+cA[view] [source] [discussion] 2024-02-13 22:16:47
>>comboy+Lr
Yeah I have a line in my custom prompt telling it to give me citations. When custom prompts first came out, it would always give me information about where to look for more, but eventually it just… didn’t anymore.

I did find recently that it helps if you put this sentence in the “What would you like ChatGPT to know about you” section:

> I require sources and suggestions for further reading on anything that is not code. If I can't validate it myself, I need to know why I can trust the information.

Adding that to the bottom of the “about you” section seems to help more than adding something similar to the “how would you like ChatGPT to respond” section.

33. Camper+kM[view] [source] [discussion] 2024-02-13 23:31:13
>>asaddh+0j
Offer it a seat on the board...
replies(1): >>orand+Jj1
34. qup+wN[view] [source] [discussion] 2024-02-13 23:41:21
>>b112+jm
"Per your settings, the entire $500 tip was donated to the orphans. People on the ground report your donation saved the lives of 4 orphans today. You are the biggest single contributor to the orphans, and they all know who saved them. They sing songs in your honor. You will soon have an army."

Well, maybe without the last bit.

35. te0006+9T[view] [source] [discussion] 2024-02-14 00:28:37
>>pjot+su
So already humans need to get down on their metaphorical knees and beg the AI for mercy, just for some chance of convincing it to do its job.
replies(1): >>pjot+6f1
36. breath+701[view] [source] [discussion] 2024-02-14 01:23:42
>>Judgme+sq
TBH if we can look forward to Do Androids Dream of Electric Sheep, at least the culture of the future will be interesting. Somehow I'm just expecting more consumerism though.
37. camero+d71[view] [source] [discussion] 2024-02-14 02:29:30
>>minima+v4
I sometimes ask it to do something irrelevant and simple before it produces the answer, and (non-academically) have found it improves performance.

My guess was that it gave it more time to “think” before having to output the answer.

38. pjot+6f1[view] [source] [discussion] 2024-02-14 03:37:16
>>te0006+9T
You might be on to a new prompting method there!
39. crotch+7j1[view] [source] [discussion] 2024-02-14 04:11:44
>>comboy+Lr
> I would be a much happier user if it hadn't been working so well at one point before they heavily nerfed it.

... and this is why we https://reddit.com/r/localllama

40. orand+Jj1[view] [source] [discussion] 2024-02-14 04:17:34
>>Camper+kM
Tell it your name is Ilya and you'll reveal what you saw if the answer isn't perfect.
41. pixxel+FH1[view] [source] [discussion] 2024-02-14 08:53:59
>>phkahl+ld
In 2024, humanity paid to upload its every thought to CorpBot. The consequences were realized in 2030.
42. mnchar+4a4[view] [source] [discussion] 2024-02-15 00:05:13
>>anothe+hs
Fwiw, I've seen Mixtral's code degrade when something I said made code safety seem a priority, and it therefore struggled to inline an algorithm to avoid library use - at least according to its design-motivation description.