Most humans would do exactly the same without pen and paper or a calculator, and it would likely be trivial for GPT-3's input processing to detect that it has been presented with a math question and farm it out to a dedicated calculation module. Once you start augmenting its input like that, progress would be very rapid, but it would no longer be just a language model.
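As a rough sketch of that routing idea (all names here, like route_prompt and the language_model callback, are hypothetical, not any real GPT-3 API): detect prompts that look like pure arithmetic and evaluate them with a small safe evaluator, falling back to the language model for everything else.

```python
import ast
import operator
import re

# Operators the toy calculator module is willing to handle.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _safe_eval(node):
    """Evaluate a parsed expression, allowing only numbers and basic arithmetic."""
    if isinstance(node, ast.Expression):
        return _safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
    raise ValueError("not simple arithmetic")

def route_prompt(prompt, language_model):
    """Send pure-arithmetic prompts to the calculator, everything else to the LM."""
    text = prompt.strip().rstrip("=? ").strip()
    if re.fullmatch(r"[\d\s.+\-*/()]+", text):
        try:
            return str(_safe_eval(ast.parse(text, mode="eval")))
        except (ValueError, SyntaxError):
            pass  # malformed arithmetic: fall through to the language model
    return language_model(prompt)
```

Real systems would need far more robust detection, but the point stands: once the pipeline dispatches to external tools, the answers no longer come from the language model alone.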
I believe that in the not-too-distant future there will be pressure to apply these "magic" AIs everywhere, and that pressure will probably not look very hard at whether the AI is actually good at math. Just look at all the pseudoscience in the criminal justice system [1][2][3]. I believe this poses a real problem, so continuing to harp on it is probably the right response.
[1] https://www.nytimes.com/2017/05/01/us/politics/sent-to-priso...
[2] https://www.weforum.org/agenda/2018/11/algorithms-court-crim...
[3] https://www.bostonreview.net/articles/nathan-robinson-forens...