zlacker

1. ben_w+(OP)[view] [source] 2023-07-06 10:10:37
> Not if the "secret sauce" is actually a natural limit to what levels of intelligence can be reached with the current architectures we're exploring.

If we were limited to only explore what we're currently exploring, we'd never have made Transformer models.

> It could be theoretically possible to build an AGI smarter than a human, but is it really plausible if it turns out to need a data center the size of the Hadron Collider and the energy of a small country to maintain itself?

That would be an example of "some kind of magic special sauce", given that human brains fit inside a skull and use about 20 watts regardless of whether they belong to Einstein or a village idiot, and we can make humans more capable by giving them a normal computer with normal software like a calculator and a spreadsheet.

A human with a Pi Zero implant they can access by thought is vastly more capable than an un-augmented human. That's basically the direction Neuralink is going, and it should be much easier to do in an AI that's simulating a brain scan.

Oh, and transistors switch faster than synapses fire by about the same ratio that wolves outpace continental drift; the limiting factor right now is that synapses use less energy per operation. It's known to be physically possible to use even less energy than synapses do, just expensive to build.

> Maybe the best way to solve the "alignment problem", and other issues of creating super-intelligence, is to solve the problem of how best to raise and educate intelligent and well-adjusted humans?

Perhaps, but we're not exactly good at that.

We should still look into it anyway, since it's useful regardless; just don't rely on it being the be-all and end-all of alignment.
