zlacker

1. dang+(OP) 2023-09-02 18:10:54
> I could tell that the technical approach they were taking was just not working.

Could you say more about that? How could you tell?

replies(1): >>snowma+J3
2. snowma+J3 2023-09-02 18:37:45
>>dang+(OP)
Perhaps other people with deeper AI knowledge can weigh in here too. But at the time, there were two things that tipped me off.

1) Cyc's reasoning fundamentally did not feel "human". Cyc was created on the premise that you could build AGI on top of formal logic inference. But after seeing how Cyc performed on real-world problems, I became convinced that formal logic is a poor model for human thought.

The biggest tell is that formal logic systems are very brittle. If even one fact is slightly off, the reasoning chain fails and the system can't do anything. Humans aren't like that; when their information is slightly off, their performance degrades gracefully. (There's a toy sketch of this after point 2.)

2) Imagine a graph with time/money invested on the x-axis and Cyc's performance on the y-axis. You could roughly plot this using benchmarks like SAT scores. Extrapolating that curve, it was clear Cyc was never going to hit human-level performance; it was asymptotically approaching a ceiling well below it. (There's a rough illustration of this below, too.)
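
To make point 1 concrete, here's a toy forward-chainer (a five-minute sketch of my own, with made-up facts; it has nothing to do with Cyc's actual engine). Corrupt one fact slightly and the output doesn't degrade, it vanishes:

    # Toy forward-chaining inference (illustrative only, not Cyc's engine).
    # Each rule is (set of premises, conclusion).
    rules = [
        ({"penguin(pete)"}, "bird(pete)"),
        ({"bird(pete)"}, "has_feathers(pete)"),
        ({"has_feathers(pete)"}, "molts(pete)"),
    ]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain({"penguin(pete)"}, rules))
    # Derives bird, has_feathers, molts: the full chain.
    print(forward_chain({"pengiun(pete)"}, rules))  # one slightly-off fact
    # Derives nothing new. A small error in the input is a total failure in the output.

A human who misremembers one fact about penguins still knows plenty about birds; the chain above knows nothing at all.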

As a side note, if you look at the performance of LLMs, I would argue that you get the opposite result for both criteria.
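
And for point 2, here's the kind of extrapolation I mean. Every number below is invented purely to show the shape of the argument (none are real Cyc benchmarks); the point is that if you fit a saturating curve to the progress data, the fitted asymptote lands well below the human baseline:

    # Rough illustration of point 2 with invented numbers (not real Cyc data).
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(x, ceiling, scale, rate):
        # Performance rises quickly at first, then flattens toward `ceiling`.
        return ceiling - scale * np.exp(-rate * x)

    effort = np.array([1, 2, 4, 8, 16, 32])      # cumulative effort, arbitrary units
    score = np.array([20, 32, 41, 47, 50, 52])   # hypothetical benchmark scores
    human_level = 80                              # hypothetical human baseline

    (ceiling, scale, rate), _ = curve_fit(saturating, effort, score, p0=(60, 50, 0.1))
    print(f"fitted asymptote: {ceiling:.1f}, human level: {human_level}")
    # If the fitted ceiling sits well below the human baseline, more effort
    # along the same curve never gets you there.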
