davegu (OP) 2025-04-05 22:32:15
o3's progress on ARC was not zero-shot. It was based on fine-tuning on the particular dataset. A major point of ARC is that humans need no fine-tuning beyond being told what the problem is, and a few humans working on it together after a minimal explanation can achieve 100%.

o3 doing well on ARC after domain-specific training is not a strong argument. Something significant is still missing before LLMs can be called intelligent.

I'm not sure whether you watched the entire video, but it contained insightful observations. I don't think anyone disputes that LLMs are a significant breakthrough in HCI and language modelling, but they are many layers, and many winters, away from AGI.
