There are two kinds of risk: the risk from these models when deployed as tools, and the risk when they are deployed as autonomous agents.
The first is already quite dangerous, and frankly already here. Algorithms that can invent novel chemical weapons are already feasible. The risk here isn't Terminator; it's a rogue group or a hostile military getting access. There are plenty of other dangerous ways these systems could be deployed as tools.
As far as autonomous agents go, I believe corporations already exhibit most, if not all, of the characteristics we fear in AI, and they demonstrate what it's like to live in a world of paperclip maximizers. Not only do they destroy the environment and bend laws to achieve their goals, they also corrupt the very political system meant to keep them in check.