1. Dennis (OP) 2023-07-05 22:23:08
Nobody is sure; this is mostly about risk. Personally, I'm not absolutely convinced that AI will exceed human capabilities even within the next fifty years, but I think that's much more likely than an extinction-level meteor strike or supervolcano eruption in the same period.

And if we're going to put gobs of money and brainpower into attempting to make superhuman AI, it seems like a good idea to also put a lot of effort into making it safe. It'd be better to end up with safe but kinda dumb AI than unsafe superhuman AI, yet capabilities work currently gets far more investment than safety work, so our funding priorities appear to be backwards.
