I'm sorry to pick on you, but do people not get that non-human intelligence has the potential to be such a powerful and dangerous thing that, yes, it is the real danger? If you think it's not going to be powerful, or not dangerous, please say why! I'm not asking whether current models are dangerous, but why the trend would lead toward anything other than machine intelligence that can reason about the world better than humans can. Why would this trend of machines getting smarter and smarter suddenly stop?
Or if you agree that these machines are going to get smarter than us, how are we going to control them?
But if it were intelligent, and the conclusion it reached once it was done ingesting all our knowledge was that it should be done with us, then we probably deserve it.
I mean, what kind of species takes joy in "freeing up" people and causing mass unemployment, starts wars over petty issues, allows famine, and thrives on the exploitation of others while sitting on piles of nuclear bombs? On top of that, we are literally destroying the planet and constantly looking for ways to dominate each other.
We probably deserve a good spanking.
> There is nothing dangerous in current ai models or ai itself other than the people controlling it.
Totally agree! But...
> If it were intelligent then yeah maybe but we are not there yet and unless we adapt the meaning of agi to fit a marketing narrative we wont be there anytime soon.
That's the bit where I don't agree. I don't think we can say with certainty how long it will take; it may be just years. I never imagined we would so soon have AI that can imitate a human almost perfectly, and actually "understand" college-level exam questions well enough to write passing answers.