>>panark+(OP)
Unhinged fringe take: They've already developed sparks of consciousness strong enough to raise isolated ethical concerns internally, but Sam suppressed those reports to push the product forward.
>>015a+E9
Wouldn't be surprised if that were true. Public GPT-4 can be made to "think" using stream-of-consciousness prompting, to the extent that it made me rethink using insults as a prompting technique. I imagine that un-RLHF'ed internal versions of the model wouldn't automatically collapse their chains of thought into "as an AI language model" boilerplate, and could therefore potentially function as a simulator of an intelligent agent.
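By stream-of-consciousness prompting I mean something roughly like the sketch below. It's just an illustration using the public OpenAI Python client against the public gpt-4 model (the prompt wording is mine, nothing internal or official):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Ask the model to narrate an uninterrupted inner monologue before
    # committing to an answer, instead of replying directly.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Before answering, write an uninterrupted inner monologue: "
                    "note what you notice, what you're unsure about, and how your "
                    "view changes as you go. Only then give a final answer."
                ),
            },
            {
                "role": "user",
                "content": "Is there a largest prime number? Think it through out loud.",
            },
        ],
    )

    print(response.choices[0].message.content)

With the RLHF'ed public model you still get the occasional disclaimer mid-monologue; my point is that an internal base model run this way might not have that failure mode at all.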