Wouldn't be surprised if that were true. Public GPT-4 can be made to "think" using stream-of-consciousness techniques, and it works well enough that it made me rethink using insults as a prompting technique. I imagine that un-RLHF'ed internal versions of the model wouldn't automatically veer off into "as an AI language model" collapse of the chain of thought, and could therefore potentially function as a simulator of an intelligent agent.
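By "stream-of-consciousness" I just mean prompting the model to narrate its reasoning before committing to an answer. A minimal sketch of the idea, assuming the OpenAI Python SDK (openai>=1.0) and an API key in the environment; the model name and prompt wording here are only placeholders, not anything official:

```python
# Sketch of "stream-of-consciousness" prompting: the system prompt asks the
# model to reason out loud step by step, then give a final answer on its own
# line. Assumes openai>=1.0 and OPENAI_API_KEY set in the environment; the
# model name "gpt-4" and prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Think out loud in a running stream of consciousness. "
    "Write your intermediate reasoning first, then a final line starting "
    "with 'ANSWER:' containing only your conclusion."
)

def think_aloud(question: str) -> str:
    # Single chat completion; the reasoning and the final answer both come
    # back in the same message, so you can inspect the "thought" trace.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(think_aloud(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
))
```

In my experience the interesting part is the trace before the "ANSWER:" line, since that's where the model either catches or compounds its own mistakes.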