This, and the general claim that ChatGPT has been intentionally degraded, is just strange to me. I believe it's happening, but it makes me question my sanity. What am I doing to get decent outputs? Am I simply not as picky? I treat every conversation as though it needs to be vetted, because it does, regardless of how good the model is. I only trust output on topics where I'm a subject matter expert or in a closely adjacent field. Otherwise I treat it much like an internet comment: useful for surfacing curiosities, but requiring vetting.
Why this instead of GPT-4 through the web app? And how do you actually use it for coding? Do you paste your question into a Python script that calls the OpenAI API and prints the response?
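For what it's worth, a minimal sketch of the workflow described above, using the official openai Python package (the model name and the question here are placeholders, not anything the parent comment specified):

    import os
    from openai import OpenAI

    # Reads the API key from the environment rather than hardcoding it.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    question = "Write a function that deduplicates a list while preserving order."

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; use whichever model you have access to
        messages=[{"role": "user", "content": question}],
    )
    print(response.choices[0].message.content)

The main practical difference from the web app is that you control the model version and parameters per request, instead of getting whatever the web UI is currently serving.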
If we were all flipping coins, there would be people claiming that coins only come up tails. They wouldn't be doing anything to make the coin come up tails; that is just the streak they had.
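To put a rough number on that intuition, here's a back-of-the-envelope simulation (the flipper count and streak length are arbitrary choices for illustration):

    import random

    random.seed(0)
    flippers = 10_000
    streak_len = 10

    # Count how many people flip ten tails in a row by pure chance.
    unlucky = sum(
        all(random.random() < 0.5 for _ in range(streak_len))
        for _ in range(flippers)
    )
    print(unlucky)  # roughly 10_000 * 0.5**10, i.e. about 10 people

A handful of people will see nothing but tails, and from the inside that streak looks like the coin itself changed.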
Some days I get lucky with GPT-4 and some days I don't.
It is also ridiculous to talk about this as if every subject and context presented to GPT-4 will produce uniform output. A one-word difference in your prompt can change the result completely, even when you're trying to accomplish exactly the same thing. Now scale that across everyone talking about ChatGPT, each using it for something different.