I'd argue there is probably at least one more leap to human-level writing that isn't just pure prediction. Humans write with intent, which is how we can maintain long-run structure. I definitely write like GPT when I'm not paying attention, but with the executive on the task I outperform it. For all we know this is solvable with some small tweak to the architecture, and I rather doubt that a model which has solved this problem need be conscious (though our own solution seems correlated with consciousness), but it is one more step.
>>ravi-d+(OP)
I agree that intent is the missing piece so far. GPT can respond better to prompts than most people, but does so with a complete lack of intent. The human provides 100% of it.