zlacker

[parent] [thread] 5 comments
1. Dennis+(OP)[view] [source] 2023-07-05 20:12:52
What if this, what if that? Do you have evidence that any of those things are true?
replies(2): >>gooseu+m4 >>nuance+kb
2. gooseu+m4[view] [source] 2023-07-05 20:30:13
>>Dennis+(OP)
"What if" is all these "existential risk" conversations ever are.

Where is your evidence that we're approaching human-level AGI, let alone superintelligence? Because ChatGPT can (sometimes) approximate sophisticated conversation and deep knowledge?

How about some evidence that ChatGPT isn't even close? Just clone and run OpenAI's own evals repo https://github.com/openai/evals on the GPT-4 API.

It performs terribly on novel logic puzzles and exercises that a clever child could learn to do in an afternoon (there are some good chess evals, and I submitted one asking it to simulate a Forth machine).
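
If you don't want to run the whole harness, a single check boils down to something like this (a rough sketch using the 2023-era openai Python package; the puzzle, expected answer, and settings are illustrative, not taken from the repo's registry):

    # Sketch of a single eval-style check: pose a puzzle to the GPT-4 API
    # and compare the reply with the expected answer. Assumes the
    # OPENAI_API_KEY environment variable is set.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    PROMPT = ("Alice is taller than Bob. Bob is taller than Carol. "
              "Who is the shortest? Answer with just the name.")
    EXPECTED = "Carol"

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # keep the output as deterministic as possible for grading
    )
    answer = resp["choices"][0]["message"]["content"].strip()
    print("model answered:", answer, "| correct:", answer == EXPECTED)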

replies(1): >>Dennis+75
3. Dennis+75[view] [source] [discussion] 2023-07-05 20:33:40
>>gooseu+m4
It has its shortcomings for sure, but AI is improving exponentially.

I think reasonable, rational people can disagree on this issue. But it's nonsense to claim that the people on the other side of the argument from you are engaging in "supernatural mumbo-jumbo," unless there is rigorous proof that your side is correct.

But nobody has that. We don't even understand how GPT is able to do some of the things it does.

replies(1): >>gooseu+Gk
4. nuance+kb[view] [source] 2023-07-05 21:04:29
>>Dennis+(OP)
What if a mysterious virus that jumped from animals to humans were to replicate fast and kill over a million people all over the world?

What if climate change were to lead to massive fires and flooding?

What if mitigation were a thing?

5. gooseu+Gk[view] [source] [discussion] 2023-07-05 21:52:20
>>Dennis+75
Reasonable people can disagree and my phrasing was probably a bit over-seasoned, but neither side has a rigorous proof regarding AI or human intelligence.

If nobody understands how an LLM is able to achieve its current level of intelligence, how is anyone so sure that this intelligence is definitely going to increase exponentially until it's better than a human?

There are real existential threats that we know are definitely going to happen one day (meteor, supervolcano, etc.), and I believe that treating AGI as the same class of "not if, but when" threat is categorically wrong. Furthermore, I think that many of the people leading the effort to frame it this way are doing so out of self-interest rather than public concern.

replies(1): >>Dennis+gq
6. Dennis+gq[view] [source] [discussion] 2023-07-05 22:23:08
>>gooseu+Gk
Nobody is sure. This is mostly about risk. Personally I'm not absolutely convinced that AI will exceed human capabilities even within the next fifty years, but I do think it has a much better chance than an extinction-level meteor strike or supervolcano eruption happening during that time.

And if we're going to put gobs of money and brainpower into attempting to make superhuman AI, it seems like a good idea to also put a lot of effort into making it safe. It'd be better to have safe but kinda dumb AI than unsafe superhuman AI, so our funding priorities appear to be backwards.
