No matter how much computing power you give them, they can't solve harder problems.
This research suggests we're not as close to AGI as the hype suggests.
Current "reasoning" breakthroughs may be hitting fundamental walls that can't be solved by just adding more data or compute.
Apple's researchers used controllable puzzle environments specifically because:
• They avoid data contamination
• They require pure logical reasoning
• They can scale complexity precisely
• They reveal where models actually break
Models could handle 100+ moves in Tower of Hanoi puzzles but failed after just 4 moves in River Crossing puzzles.
This suggests they memorized Tower of Hanoi solutions during training but can't actually reason.
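For a sense of why a long Tower of Hanoi solution isn't strong evidence of reasoning: the optimal solution is a fixed recursive pattern whose length is 2^n − 1 moves, so 7 disks already takes 127 moves. Here is a minimal Python sketch (mine, not from the paper) of that generator:

```python
# Minimal sketch (not from the Apple paper): Tower of Hanoi is fully
# mechanical. The optimal solution for n disks is always 2**n - 1 moves,
# produced by a short recursion, so a model can emit 100+ correct moves
# by following the pattern without doing any real search.

def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Yield the optimal move sequence for n disks from src to dst."""
    if n == 0:
        return
    yield from hanoi_moves(n - 1, src, dst, aux)   # clear the way
    yield (src, dst)                               # move the largest disk
    yield from hanoi_moves(n - 1, aux, src, dst)   # re-stack on top of it

for n in range(3, 9):
    print(f"{n} disks -> {len(list(hanoi_moves(n)))} moves")  # 7, 15, ..., 255
```

River Crossing puzzles have no such closed-form pattern; each move has to respect constraints on who can be left with whom, which is closer to genuine search than to pattern completion.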
Other "end user" facing use cases have so far been comically bad or possibly harmful, and they just don't meet the quality bar for inclusion in Apple products. As much as some people like to doo doo on Apple products and say they have gotten worse, customers still hold them to very high expectations of quality and UX.
Why would anyone ever expect otherwise?
These models are inherently handicapped, and always will be, in terms of real-world experience. They have no real grasp of things people understand intuitively: time, money, truth ... or even death.
The only *reality* they have to work from is a flawed statistical model built from their training data.
None of them are doing the equivalent of "vibe-coding"; they use LLMs to get 20-50% of the work done, then take over from there.
Apple likes to deliver products that are polished. Right now the user has to do the polishing on LLM output. But that doesn't mean it isn't useful today.