1. Yes, GPT-4 Turbo is getting measurably lazier at coding. I benchmarked the last two updates to GPT-4 Turbo, and it got lazier each time.
2. For coding, asking GPT-4 Turbo to emit code changes as unified diffs causes a 3X reduction in lazy coding (a minimal sketch of this prompting approach follows the list).
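
As a rough illustration of point 2, here is a minimal sketch of asking GPT-4 Turbo for unified diffs via the OpenAI Python SDK. The model name, prompt wording, and `request_edit` helper are assumptions for illustration, not aider's actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """\
You are an expert programmer.
Return every code change as a unified diff with file headers and @@ hunk headers.
Never elide code with placeholder comments like
"// ... the rest of your code goes here"; emit complete hunks.
"""

def request_edit(task: str, file_name: str, file_content: str) -> str:
    """Ask the model for a unified diff that implements `task` against one file."""
    user_msg = f"{task}\n\nFile: {file_name}\n{file_content}"
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # GPT-4 Turbo preview; substitute the current model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
        temperature=0,
    )
    # The response text is expected to be a unified diff that a patch tool can apply.
    return response.choices[0].message.content
```

The returned text can then be applied to the file with a standard patch tool.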
Here are some articles that discuss these topics in much more detail.
"// ... the rest of your code goes here"
in it's responses, rather than writing it all out.
Claude is doing much better in this area, and local/open LLMs are getting quite good. It feels like OpenAI is not heading in a good direction here, and I hope they course correct.
I hope some people remember and document the details of this era. Future generations may be so impressed with future reality that they may not even think to question its fidelity, if that concept even exists in the future.
The former sounds like a great training set to enable the latter. :(