zlacker

[parent] [thread] 5 comments
1. guhida+(OP)[view] [source] 2024-05-15 15:23:57
I'm 100% certain that I need to do more than just predict the next token to be considered intelligent. Also call me when ChatGPT can manipulate matter.
replies(2): >>soulof+84 >>mypalm+vP
2. soulof+84[view] [source] 2024-05-15 15:43:31
>>guhida+(OP)
> Also call me when ChatGPT can manipulate matter.

You mean like PaLM-E? https://palm-e.github.io/

Embodiment is the easy part.

3. mypalm+vP[view] [source] 2024-05-15 19:36:33
>>guhida+(OP)
Are you 100% certain that the human brain performs no language processing which is analogous to token prediction?
replies(1): >>stubis+5w1
4. stubis+5w1[view] [source] [discussion] 2024-05-16 00:34:33
>>mypalm+vP
A human brain certainly does make predictions, which are very useful to the bit that makes decisions. But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize? The best it can do is blindly follow the mob, a behavior we consider unintelligent even when done by human brains.
replies(1): >>craken+LP1
5. craken+LP1[view] [source] [discussion] 2024-05-16 04:37:07
>>stubis+5w1
> But how does a pure prediction engine make decisions? Make a judgement call? Analyze inconsistencies? Theorize?

My intuition leads me to believe that these are emergent properties of sufficiently large and complex prediction engines. A sufficiently good prediction/optimization engine can act in an agentic way without ever having had that as an explicit goal.

I recently read this very interesting piece that dives into this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...

replies(1): >>soulof+wl2
6. soulof+wl2[view] [source] [discussion] 2024-05-16 11:46:35
>>craken+LP1
I'm of the belief that the entire conscious experience is a side effect of the need for us to make rapid predictions when time is of the essence, such as when hunting or fleeing. Otherwise, our subconscious could probably handle most of the work just fine.