You're better off using models specialized in translation; general-purpose LLMs are more useful when fine-tuned for specific tasks (some form of extraction, summarization, generative tasks, etc.) or for general chatbot-like uses.
For foreign-language corrections ("correct this German sentence and give a reason for the correction"), GPT-3.5 doesn't quite have the horsepower, so I use GPT-4.
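A minimal sketch of what that kind of correction prompt could look like with the OpenAI Python SDK; the system prompt wording and the example sentence are just placeholders, not exactly what I use:

```python
from openai import OpenAI

client = OpenAI()

# Ask GPT-4 to correct a German sentence and briefly justify the correction.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a German teacher. Correct the user's sentence "
                    "and give a one-sentence reason for each correction."},
        {"role": "user", "content": "Ich habe gestern ins Kino gegangen."},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```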
For a couple dozen languages, GPT-4 is by far the best translator you can get your hands on, so basically no.
Right now it's basically a chatbot that you can practice conversing with, and it provides corrections for the things you type. Eventually I'd like to try adding Whisper as well to allow users to speak out loud.
When you hover over a word, you get a translation. Initially I thought using OpenAI for every word translation would be too expensive, but I've been able to get it down to ~36-40 tokens per request (3-4 cents per 1,000 requests). I've also begun parsing and uploading some of this [Wiktionary data](https://kaikki.org/dictionary/rawdata.html) and am working on a feature that integrates the GPT-3.5 translation with it.
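Roughly, the per-word lookup looks something like the sketch below (prompt wording, parameters, and the helper name are my own illustration, not the exact production code); keeping the prompt terse is what gets the request down to a few dozen tokens:

```python
from openai import OpenAI

client = OpenAI()

def translate_word(word: str, sentence: str, target_lang: str = "English") -> str:
    """Translate a single hovered word, using its sentence as context."""
    # A short, single-purpose prompt keeps the whole request around 36-40 tokens.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": f'Translate "{word}" (in: "{sentence}") to {target_lang}. '
                        "Reply with the translation only."},
        ],
        max_tokens=10,
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(translate_word("laufen", "Ich muss zum Bus laufen."))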
A lot of these features are still in the works, but feel free to try it if you like (https://trytutor.app).