But this won't stop it from happening.
What you mean to claim here is that verification is 10x harder than authorship. That's true, but unhelpful to skeptics, because LLMs are extremely useful for verification.
Some answers were trivial to grade—either obviously correct or clearly wrong. The rest were painful and exhausting to evaluate.
Checking whether the code was correct and tracing it step by step in my head was so draining that I swore never to grade programming again.
Also, and I know this doesn't matter, but it's so weird to see this downvoted. That's not an "I disagree" button...
Sometimes, code is hard to review. It's not very helpful if the reviewer just kills it because it's hard.
The idea is that people will spend 10x more time reading your code in all future time stacked together. Not that reading and understanding your code once takes 10x the effort of writing it, which is obviously untrue.
Here is the quote from Clean Code, where this idea seems to originate from:
> Indeed, the ratio of time spent reading versus writing code is well over 10 to 1.
A lot of code cannot and will not be straightforwardly reviewable, because it all depends on context. Using an LLM adds another layer of abstraction between you and that context, because now you have to untangle whether the code actually fulfills the context you gave it.
I can review 100x more Go code in a set amount of time than I can, say, React.
With Go there are repetitive structures (`if err != nil`) and not that many keywords, so it's easy to sniff out the suspicious bits and focus on them.
With JavaScript and all of the useWhatevers, cyclic dependencies, and functions returning functions that call functions, it's a lot harder to figure out what the code does just by reading it.
I can think of edge cases where a certain section of code is easier and faster to write than to read, but in general - in our practical day-to-day experiences - reading a lot of code is faster than writing a lot of code. Not just faster but less mentally and physically taxing. Still mentally taxing, yes, but less.
I am absolutely still an AI skeptic, but like: we do this at work. If a dev has produced some absolutely nonsense, overcomplicated, impossible-to-understand PR, it gets rejected and sent back to the drawing board (and then I work with them to find out what happened, because that's a leadership failure more than a developer one, IMO).