zlacker

[parent] [thread] 17 comments
1. BigPar+(OP)[view] [source] 2024-02-13 20:07:07
Often I’ll play dumb and withhold ideas from ChatGPT because I want to know what it thinks. If I give it too many thoughts of mine, it gets stuck in a rut towards my tentative solution. I worry that the memory will bake this problem in.
replies(6): >>madame+B >>cooper+68 >>addand+c9 >>frabjo+D9 >>bsza+qh >>thelit+Am
2. madame+B[view] [source] 2024-02-13 20:09:41
>>BigPar+(OP)
Yep.

Hopefully they'll make it easy to go into a temporary chat. It gets stuck in ruts occasionally, and another chat frequently helps get it unstuck.

3. cooper+68[view] [source] 2024-02-13 20:53:56
>>BigPar+(OP)
“I pretend to be dumb when I speak to the robot so it won’t feel like it has to use my ideas, so I can hear the ideas that it comes up with instead” is such a weird, futuristic thing to have to deal with. Neat!
replies(3): >>bbor+Fi >>tomtom+bq >>aggie+bv
4. addand+c9[view] [source] 2024-02-13 20:59:37
>>BigPar+(OP)
I purposely go out of my way to start new chats to have a clean slate and not have it remember things.
replies(2): >>jerpin+Ue >>merpnd+bj
5. frabjo+D9[view] [source] 2024-02-13 21:01:29
>>BigPar+(OP)
Yeah, I find GPT too easily turns into a brown-nosing executive assistant to someone powerful who eventually hears only what he wants to hear.
replies(1): >>crotch+r51
6. jerpin+Ue[view] [source] [discussion] 2024-02-13 21:30:41
>>addand+c9
Agreed, I do this all the time, especially when the model hits a dead end.
replies(1): >>hacker+Yk1
7. bsza+qh[view] [source] 2024-02-13 21:45:46
>>BigPar+(OP)
Seems like this is already solved.

"You can turn off memory at any time (Settings > Personalization > Memory). While memory is off, you won't create or use memories."

8. bbor+Fi[view] [source] [discussion] 2024-02-13 21:50:34
>>cooper+68
I try to look for one comment like this in every AI post. Because after the applications, the politics, the debates, and the stock market, if you strip all those impacts away, you're reminded that we have intuitive computers now.
replies(1): >>stavro+Rk
9. merpnd+bj[view] [source] [discussion] 2024-02-13 21:54:02
>>addand+c9
In a good RAG system this should be solved by unrelated text not being available in the context. It could actually improve your chats by quickly removing unrelated parts of the conversation.
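To make the idea concrete, here is a minimal sketch of that kind of context filtering: only memories sufficiently similar to the current query get placed in the model's context, so unrelated chat history is excluded. This is illustrative only; the `similarity` and `relevant_memories` helpers are hypothetical, and real systems use learned embeddings rather than the toy bag-of-words vectors here.

```python
# Toy sketch of RAG-style memory filtering (assumed design, not any
# product's actual implementation): keep only memories related to the
# current query, drop everything unrelated.
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors (stand-in for a real embedding)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def relevant_memories(query: str, memories: list[str], threshold: float = 0.15) -> list[str]:
    """Return only the memories whose similarity to the query clears the threshold."""
    return [m for m in memories if similarity(query, m) >= threshold]

memories = [
    "user prefers Python for scripting tasks",
    "user asked about sourdough bread hydration",
    "user is debugging a Python asyncio script",
]
context = relevant_memories("help with my Python script", memories)
# The sourdough memory is unrelated to the query and stays out of context.
```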
10. stavro+Rk[view] [source] [discussion] 2024-02-13 22:04:22
>>bbor+Fi
We do have intuitive computers! They can even make art! The present has never been more the future.
11. thelit+Am[view] [source] 2024-02-13 22:14:45
>>BigPar+(OP)
Sounds like communication between me and my wife.
12. tomtom+bq[view] [source] [discussion] 2024-02-13 22:36:05
>>cooper+68
It seems that people who are more emphatic have an advantage when using AI.
replies(1): >>_puk+8x
13. aggie+bv[view] [source] [discussion] 2024-02-13 23:08:22
>>cooper+68
This is actually a common dynamic between humans, especially when there is a status or knowledge imbalance. If you do user interviews, one of the most important skills is not injecting your views into the conversation.
replies(1): >>breath+3M
14. _puk+8x[view] [source] [discussion] 2024-02-13 23:17:32
>>tomtom+bq
I don't think prompts in ALL CAPS make a huge difference ;)
15. breath+3M[view] [source] [discussion] 2024-02-14 01:17:09
>>aggie+bv
Seems related to the psychological concept of "anchoring".
16. crotch+r51[view] [source] [discussion] 2024-02-14 04:09:29
>>frabjo+D9
What else would you expect from RLHF?
17. hacker+Yk1[view] [source] [discussion] 2024-02-14 07:13:39
>>jerpin+Ue
I often run multiple parallel chats and expose each to slightly different amounts of information, then average the answers in my head to come up with something more reliable.

For coding tasks, I found it helps to feed the GPT-4 answer into another GPT-4 instance and say "review this code step by step, identify any bugs" etc. It can sometimes find its own errors.
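The review step can be sketched as a tiny two-pass loop: one call drafts the answer, a second call is asked to critique it. The `chat` function below is a hypothetical stand-in (stubbed with canned replies so the flow runs); in practice it would wrap a real model API call.

```python
# Sketch of the draft-then-review pattern described above. chat() is a
# stub standing in for a real model call, not an actual API.

def chat(prompt: str) -> str:
    """Stand-in for a model call; returns canned replies for the demo."""
    if prompt.startswith("Review"):
        return "Bug found: off-by-one in loop bound."
    return "def head(xs): return xs[0:1]"

def draft_and_review(task: str) -> tuple[str, str]:
    """First pass drafts the code; a second pass critiques the draft."""
    draft = chat(task)
    review = chat(
        "Review this code step by step and identify any bugs:\n" + draft
    )
    return draft, review

draft, review = draft_and_review("Write a function returning a list's head.")
```

The same structure generalizes to the parallel-chats idea: issue several independent drafts, then feed them into a final call that reconciles them.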

replies(1): >>jerpin+sZ1
18. jerpin+sZ1[view] [source] [discussion] 2024-02-14 14:02:10
>>hacker+Yk1
I feel like you could probably generalize this method and get better performance out of LLMs across the board.