zlacker

[parent] [thread] 11 comments
1. omeze+(OP)[view] [source] 2023-07-05 17:57:58
yes I also have that impression. If you consider the concrete objectives, this is a good announcement:

- they want to make benchmarking easier by using AI systems

- they want to automate red-teaming and safety-checking ("problematic behavior", e.g. cursing at customers)

- they want to automate the understanding of model outputs ("interpretability")

Notice how absolutely none of these things require "superintelligence" to exist to be useful? They're all just bog-standard Good Things that you'd want for any class of automated system, e.g. a great customer service bot.

The superintelligence meme is tiring but we're getting cool things out of it I guess...

replies(1): >>gooseu+0e
2. gooseu+0e[view] [source] 2023-07-05 18:46:18
>>omeze+(OP)
We'll get these cool things either way, no need to bundle them with the supernatural mumbo-jumbo, imo.

My take is that every advancement in these highly complex and expensive fields is dependent on our ability to maintain global social, political, and economic stability.

This insistence on the importance of Super-Intelligence and AGI as the path to Paradise or Hell is one of the many brain-worms going around with this "Revelation" structure; it makes pragmatic discussion very difficult, and in turn actually makes it harder to maintain social, political, and economic stability.

replies(1): >>Dennis+9n
◧◩
3. Dennis+9n[view] [source] [discussion] 2023-07-05 19:23:43
>>gooseu+0e
There's nothing "supernatural" about thinking that an AGI could be smarter than humans, and therefore behave in ways that we dumb humans can't predict.

There's more mumbo-jumbo in thinking human intelligence has some secret sauce that can't be replicated by a computer.

replies(1): >>gooseu+Xv
◧◩◪
4. gooseu+Xv[view] [source] [discussion] 2023-07-05 20:05:38
>>Dennis+9n
Not if the "secret sauce" is actually a natural limit to what levels of intelligence can be reached with the current architectures we're exploring.

It could be theoretically possible to build an AGI smarter than a human, but is it really plausible if it turns out to need a data center the size of the Large Hadron Collider and the energy of a small country to maintain itself?

It could be that the only architecture we can find that is equal to the task (and feasibly produced) is the human brain, and that the hard part of making super-intelligence is instead bootstrapping that human brain and training it to be more intelligent.

Maybe the best way to solve the "alignment problem", and other issues of creating super-intelligence, is to solve the problem of how best to raise and educate intelligent and well-adjusted humans?

replies(3): >>Dennis+Bx >>jodrel+QF >>ben_w+fr2
◧◩◪◨
5. Dennis+Bx[view] [source] [discussion] 2023-07-05 20:12:52
>>gooseu+Xv
What if this, what if that? Do you have evidence that any of those things are true?
replies(2): >>gooseu+XB >>nuance+VI
◧◩◪◨⬒
6. gooseu+XB[view] [source] [discussion] 2023-07-05 20:30:13
>>Dennis+Bx
"What if" is all these "existential risk" conversations ever are.

Where is your evidence that we're approaching human-level AGI, let alone SuperIntelligence? Because ChatGPT can (sometimes) approximate sophisticated conversation and deep knowledge?

How about some evidence that ChatGPT isn't even close? Just clone and run OpenAI's own evals repo https://github.com/openai/evals on the GPT-4 API.

It performs terribly on novel logic puzzles and exercises that a clever child could learn to do in an afternoon (there are some good chess evals, and I submitted one asking it to simulate a Forth machine).
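
To make that concrete, here's a minimal hand-rolled check in the spirit of what that repo automates: send a small logic puzzle to the GPT-4 API and test whether the reply contains the expected answer. This is only a sketch, assuming the pre-1.0 openai Python client and an OPENAI_API_KEY in the environment; the puzzle, expected answer, and pass criterion are illustrative and not taken from the repo.

    # Hand-rolled eval in the spirit of openai/evals: ask GPT-4 a small
    # logic puzzle and check whether the reply contains the expected answer.
    # The puzzle and expected answer are illustrative, not from the repo.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    PUZZLE = (
        "Alice, Bob, and Carol each own a different pet: a cat, a dog, or a fish. "
        "Alice does not own the dog. Carol owns the fish. "
        "Who owns the dog? Answer with just the name."
    )
    EXPECTED = "bob"

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PUZZLE}],
        temperature=0,  # keep the output as deterministic as possible for grading
    )
    answer = resp["choices"][0]["message"]["content"].strip().lower()
    print("model said:", answer)
    print("pass:", EXPECTED in answer)

The evals repo wraps this same pattern (prompt, completion, grading rule) into registered evals that can be run in bulk against a model, which is what makes it easy to see where it falls over.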

replies(1): >>Dennis+IC
◧◩◪◨⬒⬓
7. Dennis+IC[view] [source] [discussion] 2023-07-05 20:33:40
>>gooseu+XB
It has its shortcomings for sure, but AI is improving exponentially.

I think reasonable, rational people can disagree on this issue. But it's nonsense to claim that the people on the other side of the argument from you are engaging in "supernatural mumbo-jumbo," unless there is rigorous proof that your side is correct.

But nobody has that. We don't even understand how GPT is able to do some of the things it does.

replies(1): >>gooseu+hS
◧◩◪◨
8. jodrel+QF[view] [source] [discussion] 2023-07-05 20:50:49
>>gooseu+Xv
Well, that argument didn't work for a lot of other things. Wheels are more energy-efficient than legs, steel is more resilient than tortoise shell or rhino skin, motors are more powerful than muscles, aircraft fly higher and faster than birds, ladders reach higher than giraffes far more easily, bulldozers dig faster than any digging creature, speakers and airhorns are louder than any animal cry or roar, ancient computers remember more raw data than humans do, and electronics react faster than human reflexes. Human working memory is ~7 items despite 80 billion neurons, far outdone by an 8-bit computer of the 1980s.

Why think 'intelligence' is somehow different?

◧◩◪◨⬒
9. nuance+VI[view] [source] [discussion] 2023-07-05 21:04:29
>>Dennis+Bx
What if a mysterious molecule that jumped from animals to humans were to replicate fast and kill over a million people all over the world?

What if climate change were to lead to massive fires and flooding?

What if mitigation were a thing?

◧◩◪◨⬒⬓⬔
10. gooseu+hS[view] [source] [discussion] 2023-07-05 21:52:20
>>Dennis+IC
Reasonable people can disagree and my phrasing was probably a bit over-seasoned, but neither side has a rigorous proof regarding AI or human intelligence.

If nobody understands how an LLM is able to achieve its current level of intelligence, how is anyone so sure that this intelligence is definitely going to increase exponentially until it's better than a human?

There are real existential threats that we know are definitely going to happen one day (meteor, supervolcano, etc.), and I believe that treating AGI as the same class of "not if, but when" threat is categorically wrong. Furthermore, I think that many of the people leading the effort to frame it this way are doing so out of self-interest, rather than public concern.

replies(1): >>Dennis+RX
◧◩◪◨⬒⬓⬔⧯
11. Dennis+RX[view] [source] [discussion] 2023-07-05 22:23:08
>>gooseu+hS
Nobody is sure. This is mostly about risk. Personally I'm not absolutely convinced that AI will exceed human capabilities even within the next fifty years, but I do think it has a much better chance than an extinction-level meteor or supervolcano hitting us during that time.

And if we're going to put gobs of money and brainpower into attempting to make superhuman AI, it seems like a good idea to also put a lot of effort into making it safe. It'd be better to have safe but kinda dumb AI than unsafe superhuman AI, so our funding priorities appear to be backwards.

◧◩◪◨
12. ben_w+fr2[view] [source] [discussion] 2023-07-06 10:10:37
>>gooseu+Xv
> Not if the "secret sauce" is actually a natural limit to what levels of intelligence can be reached with the current architectures we're exploring.

If we were limited to only explore what we're currently exploring, we'd never have made Transformer models.

> It could be theoretically possible to build an AGI smarter than a human, but is it really plausible if it turns out to need a data center the size of the Hadron Collider and the energy of a small country to maintain itself?

That would be an example of "some kind of magic special sauce", given that human brains fit inside a skull and use 20 watts regardless of whether they belong to Einstein or a village idiot, and we can make humans more capable by giving them a normal computer with normal software like a calculator and a spreadsheet.

A human with a Pi Zero implant they can access by thought (basically the direction Neuralink is going, and something that should be much easier for an AI that's simulating a brain scan) is vastly more capable than an un-augmented human.

Oh, and transistors operate faster than synapses by about the same ratio that wolves outpace continental drift; the limiting factor being that synapses use less energy right now — it's known to be possible to use less energy than synapses do, just expensive to build.

> Maybe the best way to solve the "alignment problem", and other issues of creating super-intelligence, is to solve the problem of how best to raise and educate intelligent and well-adjusted humans?

Perhaps, but we're not exactly good at that.

We should still look into it anyway, since it's useful regardless, but don't rely on that being the be-all and end-all of alignment.
