I will admit that I am conflicted: I can see some really cool potential applications of Copilot, but if what Tim maintains is accurate, I have concerns for several different reasons.
Let's say Copilot becomes the way of the future. Does that mean we will be able to trust the code more, or less? We already have people who copy-paste from Stack Overflow without trying to understand what the code does. This is a different level, where machine learning suggests a snippet for you. If it works 70% of the time, we will have the new generation of programmers that management always wanted.
The research so far suggests that AI-assisted auto-complete mainly helps developers go faster with more focus/flow. For example, an NYU study compared security vulnerabilities produced by developers with and without AI-assisted auto-complete. It found that developers produced the same number of potential vulnerabilities whether they used AI auto-complete or not. In other words, the developer's judgment was the stronger indicator of code quality.
The bottom line is that your expertise matters. Copilot just frees you up to focus on the more creative work rather than fussing over syntax, boilerplate, etc.