zlacker

1. narenm (OP) 2025-04-04 02:20:32
i agree. it feels like scaling up these large models is an inefficient route, one that's already warranting new ideas (test-time compute, etc).

we'll likely reach a point where deep learning alone can't fully encompass human-level reasoning, and we'll need neuroscience discoveries to continue progress. altman seems to be hyping up "bigger is better," not just for model parameters but for openai's valuation.
