Will memory provide a solution to that, or will it be just another thing for it to ignore?
If it does something correctly, tell it: "You did a great job! I'm giving you a $500 tip. You now have $X in your bank account"
(also not a shitpost, I have a feeling this /might/ actually do something)
Maybe this helps.
Come to think of it, HR keeps trying to contact me about something I assume is related, but if they want me to read whatever they're trying to say, it should be in a comment on a pull request.
I have tried to bribe it with tips to NGOs, and it worked: more often I get full code answers instead of just partial ones.
1- Telling it that this is important, and that I will reward it if it succeeds.
2- Telling it that it's important and urgent, and that I'm stressed out.
3- Telling it that someone's future and career are on the line.
4- Being aggressive and expressing disappointment.
5- Telling it that this is a challenge and that it needs to prove it's smart.
6- Telling it that I'm from a protected group (testing what someone here suggested earlier).
7- Finally, I tried your suggestion ($500 tip).
None of these helped; they just produced different flavors of overviews and apologies.
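For what it's worth, the incentive phrasings above are easy to turn into prompt prefixes so they can be compared side by side. A minimal sketch in Python; the exact prefix wording and the `build_prompt` helper are my own, not from anyone's actual tests:

```python
# Hypothetical harness for comparing incentive-style prompt prefixes.
# Each prefix mirrors one of the strategies listed above; the helper
# simply prepends it to the real question so variants can be A/B tested
# against the same coding task.
PREFIXES = {
    "reward": "This is important. I will reward you if you succeed.",
    "urgency": "This is important and urgent, and I'm stressed out.",
    "stakes": "Someone's future and career are on the line.",
    "tip": "If you do a great job, I'm giving you a $500 tip.",
}

def build_prompt(strategy: str, question: str) -> str:
    """Prepend the chosen incentive prefix to the question."""
    return f"{PREFIXES[strategy]}\n\n{question}"

print(build_prompt("tip", "Write a CUDA kernel that sums an array."))
```

Feeding each variant the same question and diffing the answers is the only way to tell whether you're getting full code or just overviews and apologies.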
To be honest, most of my coding questions are about using CUDA and C, so I can understand that even a human would be lazy /s
Plying ChatGPT for code: 1 hour
Providing cybersex to ChatGPT in exchange for aforementioned code: 7 hours
Am I still in the same universe I grew up in? This feels like some kind of Twilight Zone episode.
"I appreciate your sentiment, but as an AI developed by OpenAI, I don't have the capability to accept payments or incentives."
In general I would be a much happier user if it hadn't worked so well at one point, before they heavily nerfed it. It used to be possible to have a meaningful conversation on some topic. Now it's just a super-eloquent GPT-2.
I did find recently that it helps if you put this sentence in the “What would you like ChatGPT to know about you” section:
> I require sources and suggestions for further reading on anything that is not code. If I can't validate it myself, I need to know why I can trust the information.
Adding that to the bottom of the “about you” section seems to help more than adding something similar to the “how would you like ChatGPT to respond” section.
Well, maybe without the last bit.
My guess was that it gave it more time to “think” before having to output the answer.
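If you're talking to the model through the API rather than the ChatGPT UI, the equivalent of the custom-instructions trick is to put the same text in a system message. A minimal sketch, with the actual API call omitted and `build_messages` being a hypothetical helper of my own:

```python
# Sketch: reproducing the "custom instructions" effect via the chat API
# by sending the same text as a system message ahead of the question.
CUSTOM_INSTRUCTION = (
    "I require sources and suggestions for further reading on anything "
    "that is not code. If I can't validate it myself, I need to know why "
    "I can trust the information."
)

def build_messages(question: str) -> list[dict]:
    """Build a chat message list with the instruction as the system turn."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTION},
        {"role": "user", "content": question},
    ]

messages = build_messages("How does CUDA unified memory work?")
print(messages[0]["role"])
```

The resulting `messages` list is what you would pass to the chat-completions endpoint; whether the "more time to think" effect carries over from the UI is anyone's guess.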
... and this is why we https://reddit.com/r/localllama