Goals, such as they are, are essentially programs or simulations that the LLM runs to help it predict (generate) future tokens.
Anyway, the whole original article is a rejection of anthropomorphism. I think the anthropomorphism is useful, but you still need to think of LLMs as deeply defective minds. And I totally reject the idea that they have intrinsic moral weight or consciousness or anything close to that.