My intuition is that these are emergent properties of large, complex prediction engines: a sufficiently good prediction/optimization engine can act in an agentic way without ever having been given that as an explicit goal.
I recently read a very interesting piece that dives into exactly this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...