zlacker

1. ilaksh+(OP)[view] [source] 2023-07-05 18:41:50
My take is that it's true that there is some limitation imposed by the data ingested, but it's not exactly a hard limit. If you think of intelligence as compression, then yes, compression does have physical limits, but there are multiple dimensions of intelligence.

For example, leading-edge AI could create new layers of information that are more abstract than any previously created. The ability to manipulate those layers effectively and efficiently would constitute something that could be called higher intelligence.

The big thing that people are failing to anticipate, though, is hyperspeed intelligence. AI will be able to reason dozens of times faster than humans in the near future, and likely at a fairly genius (although perhaps not totally inhuman) level. That is effectively superintelligence.

The reason this is anticipatory rather than speculative is that LLMs are a very specific application that now has a huge amount of effort going into efficiency improvements. They can be improved in the software stack running the models, in the models themselves, and in the hardware, and sometimes all of the above.

The history of computing shows exponential improvements in hardware efficiency. Especially in the context of this specific application, it is unlikely that we will see a total break from history.

So we should anticipate the effective intelligence getting at least somewhat higher, and the output speed increasing by likely more than one order of magnitude within the next decade.
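As a back-of-envelope check on that last claim (my numbers, not a forecast): under an assumed steady exponential improvement, a 10x speedup over ten years only requires about a 26% compounded gain per year, which is modest by historical hardware standards.

```python
# Sketch: what constant yearly efficiency gain compounds to a given total?
# Assumes smooth exponential improvement, which real hardware/software
# progress only roughly approximates.

def yearly_rate_for(total_gain: float, years: int) -> float:
    """Annual multiplier needed to reach total_gain after the given years."""
    return total_gain ** (1 / years)

rate = yearly_rate_for(10, 10)  # ~1.26, i.e. ~26% per year for 10x/decade
print(f"{rate:.3f}x per year -> {rate ** 10:.1f}x per decade")
```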
