If "general cognitive tasks" means "I give you a prompt in some form, and you give me an incredible response of some form " (forms may differ or be the same) then it is hard to disagree with you.
But if by "general cognitive task" you mean "all the cognitive things that human do", then it is really hard to see why you would have any confidence that LLMs have any hope of achieving superhuman performance at these things.
It needs to be a closed loop, running on its own.
Right now we get its attention and it responds. Frankly, if we ever did manage any sort of sentience, or even a simulation of it, it might simply not respond.
To me, that is the real test.
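
Roughly what I mean by a closed loop, as a toy sketch only (the model call and the "wants to respond" check are placeholders I made up, not any real API):

    import random
    import time

    def model(context: str) -> str:
        # Stand-in for an LLM call; a real system would query a model here.
        return f"thought #{random.randint(0, 999)} about: {context[-40:]}"

    def wants_to_respond(context: str) -> bool:
        # The system's own choice, not ours; a coin flip as a placeholder.
        return random.random() < 0.5

    def closed_loop():
        # No human in the loop: the system feeds its own output back as
        # input and decides for itself whether to answer our messages.
        context = "initial state"
        inbox = []  # messages from humans; the loop is free to ignore them
        while True:
            context = model(context)  # self-generated next step
            if inbox and wants_to_respond(context):
                print("reply:", model(inbox.pop(0)))
            time.sleep(1)  # keeps running whether or not we prompt it

The point of the sketch is just the shape of it: the loop runs whether or not we say anything, and responding to us is its decision rather than the only thing it does.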