I don't think we will have an open-source GPT-4 for a long time, so this is sorta clickbait. But for small, specialized tasks, tuned on high-quality data, we are already in the "Linux" era of OSS models: they can do real, practical work.
Not according to my calculations. At a low request rate, self-hosting is likely more expensive than GPT-4.
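The intuition behind that claim can be sketched with a toy calculation: a self-hosted GPU bills by the hour whether or not it is serving traffic, while an API bills per token. Every number below (GPU hourly rate, API price, tokens per request) is an illustrative assumption, not a real quote.

```python
# Back-of-envelope cost comparison: self-hosted GPU vs. a per-token API.
# All prices are illustrative assumptions, not real quotes.

GPU_HOURLY_USD = 2.00          # assumed on-demand rate for a GPU instance
API_USD_PER_1K_TOKENS = 0.03   # assumed blended API price per 1K tokens
TOKENS_PER_REQUEST = 1_000     # assumed prompt + completion size

def self_hosted_cost_per_request(requests_per_hour: float) -> float:
    """At low utilization the fixed hourly GPU cost dominates."""
    return GPU_HOURLY_USD / requests_per_hour

def api_cost_per_request() -> float:
    return API_USD_PER_1K_TOKENS * TOKENS_PER_REQUEST / 1_000

for rph in (5, 50, 500):
    print(f"{rph} req/h: self-hosted ${self_hosted_cost_per_request(rph):.4f}"
          f" vs API ${api_cost_per_request():.4f} per request")
```

Under these assumptions, self-hosting only wins above roughly GPU_HOURLY_USD / api_cost_per_request() ≈ 67 requests per hour; below that, the idle GPU time makes the API cheaper.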
Can you recommend where I can learn more about hardware requirements for running Mistral/Mixtral?