zlacker

quickt (OP) | 2023-11-23 03:53:25
I don't think AI safetyists are worried about any model they have created so far. But if we were able to go from letter-soup "ooh, look, that almost seems like a sentence, SOTA!" to GPT-4 in 20 years, where will we go in the next 20? And at what point do they become genuinely powerful? That's before counting all the ways people are trying to augment them: RAG, function calling, getting them to run on less compute, and so on.

Also, being better than humans at everything is not a prerequisite for danger. A scary moment is probably when a model can look at a C (or Rust, C++, whatever) codebase, find an exploit, and then use that exploit to spread as a worm. It gets worse if it can do that on everyday hardware rather than top-end GPUs, either because the algorithms get more efficient or because every iPhone ships with a tensor unit.
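To make the "look at a codebase, find an exploit" scenario concrete, here is a minimal sketch of the glue code involved: walk a source tree and ask a hosted model to flag exploitable bugs. The model name, prompt, and directory are my own assumptions for illustration (it assumes the OpenAI Python SDK and an API key), and it says nothing about whether current models can actually produce working exploits.

  # Hypothetical sketch: point a hosted LLM at a codebase and ask for exploitable bugs.
  # The prompt, model name, and directory layout are illustrative, not a real tool.
  from pathlib import Path

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  PROMPT = (
      "You are a security auditor. Review the following C source file and "
      "list any memory-safety bugs that look remotely exploitable, with line numbers."
  )

  def audit_file(path: Path) -> str:
      """Send one source file to the model and return its assessment."""
      source = path.read_text(errors="replace")
      resp = client.chat.completions.create(
          model="gpt-4",
          messages=[
              {"role": "system", "content": PROMPT},
              {"role": "user", "content": source[:20000]},  # crude truncation
          ],
      )
      return resp.choices[0].message.content

  if __name__ == "__main__":
      for f in Path("some_codebase").rglob("*.c"):  # hypothetical target directory
          print(f"== {f} ==")
          print(audit_file(f))

Nothing in the wrapper is hard; the only open question is the capability of the model on the other end of the call.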
