To some extent this is true.
To calculate A + B, you could, for example, generate trillions of (A, B) combinations and encode the results within the network. It would then produce answers faster than any human could.
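As a toy Python sketch of what that "memorize every case" approach looks like (tiny ranges here instead of trillions, obviously; none of these names come from the original discussion):

```python
# Precompute answers for a bounded range of (A, B) pairs instead of
# learning the rule itself. The range is tiny for illustration; the
# argument above imagines trillions of entries baked into a network.
lookup = {(a, b): a + b for a in range(1000) for b in range(1000)}

def add(a: int, b: int) -> int:
    # Fast for any pair we bothered to store, useless for everything else.
    return lookup[(a, b)]

print(add(123, 456))        # 579, retrieved rather than computed
print((2000, 1) in lookup)  # False: the "knowledge" stops at the table's edge
```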
But that's not intelligence. And Apple's research showed that LLMs are simply inferring relationships from the tokens they have access to, which you can throw off by adding irrelevant information or by abstracting A + B.
I don't find this a very meaningful argument, because if you can do that generation in the first place, you must already have a superhuman machine for that task.