An interactive mathematical model isn't going to run away on its own without some very deliberate steps taken in that direction.
Maybe the people who point out that the tar in tobacco is dangerous, that nicotine is addictive, and that perhaps we shouldn't add more of it for profit would be useful to have around, just in case we get there.
But even if we don't, an increasingly capable multimodal AI has a lot of utility for both good and bad. Are we creating power tools with no safety guards? Or with safety written by a bunch of engineers whose life experience extends no further than a PhD program in advanced mathematics at an exclusive school? When their limited world collides with complex moral and ethical domains, they don't always have enough context to understand why things are the way they are, or that our forefathers weren't idiots. They often blunder into mistakes out of hubris.
Put it another way: the chance they succeed is non-zero. The chance they succeed and create a powerful tool that's incredibly dangerous is non-zero too. Maybe we should try to hedge that risk?
Basically, the LLM won't run away on its own.
I do agree with a safety focus and guardrails. I don't agree with Chicken Little, sky-is-falling claims.