zlacker

[parent] [thread] 3 comments
1. panark+(OP)[view] [source] 2023-11-17 22:07:36
Deceiving the board about ...

Its investigation of misconduct?

Sources and rights to training data?

That the AGI escaped containment?

replies(1): >>015a+E9
2. 015a+E9[view] [source] 2023-11-17 22:53:00
>>panark+(OP)
Unhinged fringe take: They've already developed sparks of consciousness strong enough to create isolated, internal ethical concerns, but Sam suppressed those reports to push the product forward.
replies(2): >>selfho+2i >>gizajo+Yi
3. selfho+2i[view] [source] [discussion] 2023-11-17 23:34:57
>>015a+E9
Wouldn't be surprised if that were true. Public GPT-4 can be made to "think" using stream-of-consciousness techniques, to the extent that it made me rethink using insults as a prompting technique. I imagine that un-RLHF'ed internal versions of the model wouldn't automatically veer off into "as an AI language model" collapse of chains of thought, and therefore could potentially function as a simulator of an intelligent agent.
4. gizajo+Yi[view] [source] [discussion] 2023-11-17 23:38:37
>>015a+E9
The first word of your post being the most important part of it.