zlacker

[parent] [thread] 4 comments
1. outwor+(OP)[view] [source] 2023-11-17 23:18:02
Fun theory. We are very far from AGI, however.
replies(2): >>JohnFe+N4 >>selfho+f7
2. JohnFe+N4[view] [source] 2023-11-17 23:39:43
>>outwor+(OP)
We still don't even know if AGI is at all possible.
replies(1): >>TillE+cl
3. selfho+f7[view] [source] 2023-11-17 23:52:15
>>outwor+(OP)
Superintelligent AGI, maybe. I genuinely think that limited, weak AGI is an engineering problem at this stage. Mind you, I'll qualify that by saying very weak AGI.
4. TillE+cl[view] [source] [discussion] 2023-11-18 00:58:42
>>JohnFe+N4
If you're a materialist, it surely is.

I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.

But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.

replies(1): >>JohnFe+Jy
5. JohnFe+Jy[view] [source] [discussion] 2023-11-18 02:21:34
>>TillE+cl
> If you're a materialist, it surely is.

As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.

In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next couple of lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.

But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but that in either case won't look imminent until it actually happens.
