zlacker

[parent] [thread] 5 comments
1. brando+(OP)[view] [source] 2024-05-10 01:12:44
Another proof point that AGI is probably not possible.

Growing actual bio brains is just way easier. It's never going to happen in silicon.

Every machine will just have a cubic centimeter block of neuro meat embedded in it somewhere.

replies(4): >>skulk+31 >>mr_toa+p1 >>myrmid+uo1 >>creer+Ux2
2. skulk+31[view] [source] 2024-05-10 01:24:34
>>brando+(OP)
I agree, mostly because it's already being done!

https://www.youtube.com/watch?v=V2YDApNRK3g

https://www.youtube.com/watch?v=bEXefdbQDjw

3. mr_toa+p1[view] [source] 2024-05-10 01:28:33
>>brando+(OP)
You’d have to train them individually. One advantage of ANNs is that you can train them and then ship the model to anyone with a GPU.
4. myrmid+uo1[view] [source] 2024-05-10 15:11:21
>>brando+(OP)
Hard disagree on this.

I strongly believe that there is a TON of potential for synthetic biology-- but not in computation.

People just forget how superior current silicon is for running algorithms; consider e.g. a 17-digit by 17-digit multiplication (double precision): a current CPU can do that in the time it takes light to reach your eye from the screen in front of you (!!!). During the completely unavoidable neural latency (the time any visual stimulus takes to propagate and reach your consciousness), the CPU does millions more of those operations.
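The claim above checks out on a napkin. A quick sketch (all figures are rough assumptions for illustration: ~0.5 m eye-to-screen distance, ~1 ns pipelined double-precision multiply, ~150 ms stimulus-to-consciousness latency):

```python
# Back-of-envelope check of the latency comparison (assumed rough figures).
SPEED_OF_LIGHT = 3.0e8    # m/s
SCREEN_DISTANCE = 0.5     # m, assumed eye-to-screen distance
CPU_MUL_LATENCY = 1.0e-9  # s, assumed ~1 ns per double-precision multiply
VISUAL_LATENCY = 0.15     # s, assumed ~150 ms stimulus-to-consciousness

light_travel = SCREEN_DISTANCE / SPEED_OF_LIGHT
print(f"light travel time: {light_travel * 1e9:.2f} ns")  # ~1.67 ns
muls = int(VISUAL_LATENCY / CPU_MUL_LATENCY)
print(f"multiplies during visual latency: {muls:,}")      # 150,000,000
```

So the multiply and the photon's trip are the same order of magnitude, and a single core fits on the order of a hundred million multiplies into one perceptual moment.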

Any biocomputer would be limited to low-bandwidth, ultra high latency operations purely by design.

If you solely consider AGI as the application, where abysmal latency and low input bandwidth might be acceptable, it still appears extremely unlikely that we'll reach that goal via synthetic biology; our current capabilities are just disappointing and don't look like they're gonna improve quickly.

Building artificial neural networks on silicon, by contrast, capitalises on the almost exponential gains made over the last few decades, and already produces results that compare quite favorably to, say, a schoolchild; I'd argue that current LLM-based approaches already eclipse the intellectual capabilities of ANY animal, for example. Artificial bio brains, on the other hand, are basically competing with worms right now...

Also consider that even though our brains might look daunting from a pure "upper bound on required complexity/number of connections" point of view, those limits are very unlikely to apply, because they conflate implementation details, redundancy, and irrelevant detail. And we have precise bounds on other parameters that our technology already matches easily:

1) An artificial intelligence architecture can be bootstrapped from a CD-ROM's worth of data (~700MiB for the whole human genome-- and even that is mostly redundant)

2) Bandwidth for training is quite low, even when compressing the ~20-year training time of an actual human into a more manageable timeframe

3) Operating power does not require more than ~20W.

4) No understanding was necessary to create human intelligence-- it's purely the result of an iterative process (evolution).
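Point 1's ~700MiB figure follows from simple arithmetic (a sketch; the ~3.2 billion base-pair count is the commonly cited rough estimate, not an exact value):

```python
# Raw size of the human genome as data (assumed rough figures).
BASE_PAIRS = 3.2e9   # ~3.2 billion base pairs (commonly cited estimate)
BITS_PER_BASE = 2    # four symbols A/C/G/T -> 2 bits each

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"{raw_bytes / 2**20:.0f} MiB")  # 763 MiB -- about one CD-ROM
```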

Also consider human flight as an analogy: we did not achieve that by copying beating wings, powered by dozens of muscle groups and complex control algorithms-- those are just implementation details of existing biological systems. All we needed was the wing-concept itself and a bunch of trial-and-error.

replies(1): >>thfura+9Z3
5. creer+Ux2[view] [source] 2024-05-10 22:33:39
>>brando+(OP)
No reason for an AGI not to have a few cubes of goo slotted in here and there. But yeah, because of the training issue, they might be coprocessors or storage or something.
6. thfura+9Z3[view] [source] [discussion] 2024-05-11 18:54:15
>>myrmid+uo1
>Artificial intelligence architecture can be bootstrapped from a CD-ROM worth of data (~700MiB for the whole human genome-- even that is mostly redundant)

Are you counting epigenetic factors in that? They're heritable.
