But what we're good at is using all of our capabilities to transform the world around us according to an internal model that is partially shared between individuals. And we have complete control over that internal model, diverging from reality and converging toward it on a whim.
So we can't produce and manipulate text faster, but the end game is rarely to produce and manipulate text. Mostly it's about sharing ideas and facts (aka internal models), and that control is ultimately what matters. It can help us, just like a calculator can help us solve an equation.
EDIT
After learning to draw, I have an internal model that I switch to whenever I want to sketch something. It's like a special mode of observation, where you no longer simply see, but pick up a lot of extra details according to all the drawing rules you've internalized. There aren't a lot of them; they're just intrinsically connected with each other. The difficult part is hand-eye coordination and analyzing the divergences between what you see and the internal model.
I think that's why a lot of artists are disgusted with AI generators. There's no internal model. Trying to extract one from a generated picture is a futile exercise. Same with generated text: the deviations from common understanding follow no pattern.
Learning language from small data.