Historically, I'm a backend and distributed systems engineer, but integrating GPT-4 into my workflows has unlocked an awe-inspiring ability to lay down fat beads of UI-heavy code in both professional and personal contexts.
But it's still an L3: you have to ask the right questions and doubt every line it produces until it compiles and the tests pass.
For example, GPT-4 produces JavaScript code far better than it produces Clojure code. When it comes to Clojure, GPT-4 often produces broken examples, contradictory explanations, or even circular reasoning.