zlacker

1. necove (OP) 2024-10-20 06:39:40
You seem to be conflating "different hardware" with proof that the "language hardware" runs "software" equivalent to LLMs.

LLMs basically become practical when you simply scale compute up, and maybe both regions are "general compute", but language ends up on the "GPU" out of pure necessity.

So to me, these are entirely distinct questions: is the language region able to do general cognitive operations? What happens when you need to spell out "ubiquitous", or decline a foreign word in a language with declension (one you don't have memorized patterns for)?

I agree it seems obvious that, for better efficiency (size of training data, parameter count, compute), human brains use a different approach than today's LLMs (in a sibling comment, I give an example of my kids at 2yo having a better grasp of language rules than ChatGPT trained on 100x more data).

But let's dive deeper into understanding what each of these regions can do before we decide to compare them to, or apply concepts from, AI/CS.
