But what we're good at is using all of our capabilities to transform the world around us according to an internal model that is partially shared between individuals. And we have complete control over that internal model, diverging from reality and converging towards it on a whim.
So no, we can't produce and manipulate text faster, but the end game is rarely to produce and manipulate text. Mostly it's about sharing ideas and facts (aka internal models), and that control is ultimately what matters. An LLM can help us with that, just like a calculator can help us solve an equation.
EDIT
After learning to draw, I have that internal model that I switch to whenever I want to sketch something. It's like a special mode of observation, where you no longer simply see, but pick up a lot of extra details according to all the drawing rules you've internalized. There aren't a lot of rules; they're just intrinsically connected with each other. The difficult part is hand-eye coordination and analyzing the divergences between what you see and the internal model.
I think that's why a lot of artists are disgusted with AI generators. There's no internal model. Trying to extract one from a generated picture is a futile exercise. Same with generated text: the alterations from common understanding follow no pattern.
A calculator is consistent and doesn’t “hallucinate” answers to equations. An LLM puts an untrustworthy filter between the truth and the person. Google was revolutionary because it increased access to information. LLMs only obscure that access, while pretending to be something more.
Also, I used it for a few programming tasks I was pretty sure were in the training data (how to draw charts with Python and manipulate pandas DataFrames). I know the domain, but wasn't in the mood to comb through the docs for the implementation details. The information I was seeking was just a few lines of sample code. In my experience, anything longer is pretty inconsistent and comes with worthless explanations.
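For context, the kind of few-line answer I mean looks roughly like this (a minimal sketch, assuming matplotlib and pandas; the data and column names are made up):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical data; "month" and "sales" are made-up column names.
    df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [120, 135, 150]})

    # A simple bar chart drawn straight from the DataFrame.
    df.plot(kind="bar", x="month", y="sales", legend=False)
    plt.ylabel("sales")
    plt.tight_layout()
    plt.show()

If I already know the domain, verifying a snippet like that takes seconds; it's the paragraphs of explanation around it that tend to be unreliable.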