Maybe the real problem is that we are testing people on how fast they can do something, not whether they can do something.
In general, being good at academics requires you to think carefully, not quickly. I suspect there is a correlation between people who think things through and people who do well in school.
Yes, but to go even further: timed tests often test, in part, how quickly you can handwrite. There is great variation in handwriting speed — I saw it as a student and as a professor — and in classrooms we should no more be testing students on handwriting speed than we should be testing them on athletic ability.
In general, timed tests that involve a lot of handwriting are appalling. We use them because they make classroom management easier, not because they are justifiable pedagogy.
Yes, I agree. But my point is about handwriting, rather than writing in general. Handwriting speed is something that we are effectively testing with many in-class exams. And handwriting speed - unlike reading or writing speed - is indeed unrelated to job performance. It is also unrelated to any reasonable measure of academic performance.
I would not concede that speed matters less than correctness in the context of evaluating learning. There are homework assignments, projects, and papers where plenty of time is available to probe whether students can think a problem through and solve it correctly with no time limit. It's ideal if everyone can finish an exam, but there needs to be some kind of pressure for people to learn to quickly identify a kind of problem, identify the correct solution approach, and actually carry out the solution.
But they shouldn't be getting penalized for not doing a page of handwritten linear algebra correctly. I totally agree that you need to make sure you're testing what you think you're testing.
They could extend the test time for everyone, but you won't get many time extensions in real life, where speed is indeed a factor.
If someone can produce 21 correct answers in an hour and someone else needs two hours to do the same because of a faked disability, it's unfair both to the one-hour student and to an actually disabled student who might be missing a hand and need more time to write or type with a prosthetic.
I mean.. we are comparing students' abilities here, and doing things fast is one of those abilities. Even potato peelers in a restaurant are valued more if they're faster; why not programmers too? Or DMV workers?
It is actually very informative when one person can do in an hour what takes another person two.
Do humanities students still have to do handwritten essay tests in the modern world? I had to do those in middle school/high school. No idea if that is still a thing.
Like, anything I had to do in a test when I was taking my CS degree is maybe 5%, if not less, of my real job tasks. Even if I were three times as fast at taking those tests, I think that would be a negligible increase in on-the-job speed.
Later, when I was a professor in the United States, I saw some of my students grappling with the same problem.
I don't think that my students and I are extraordinary. Other people were, and are, limited by slow handwriting when they are required to handwrite their exams. You could try to identify these people and give them extra time. But the better move would be to stop requiring students to handwrite essays under a time constraint.
The students hated the infinite-time ones, because nobody knew how much time other students spent on the test, so everyone felt obliged to spend an inordinate amount of time on it.
Besides, if you couldn't solve the exam problems in 2 hours, you simply didn't know the material.
The exams I took were done in blue books where you were required to show your work.
I've never seen that come down to processing speed. Even as a programmer -- I can probably program 10x faster than most of my peers on straight programming-contest-style problems. But in terms of actual real work -- I'm probably only slightly faster. My real value is that I spend a lot of time understanding the ask and the impact of the work I'm doing -- asking good questions, articulating what I'm delivering, etc...
That is, my faster processing speed adds very little. Time to deliver results can matter, but processing speed is typically a very small percentage of that time. And for these tests, processing speed is often the main distinction. It's not as if they're distinguishing one kid who can't solve this equation from another kid who can. It's more likely that one kid can finish all 25 questions in 32 minutes while the other would take 38 minutes, so they only finish 23 of them in the allotted 32. I don't think that ends up mattering in any real way.
I once hired a civil engineer to do a job for me, and he started billing me for time spent learning how to do it. I refused to pay him. (There was nothing unusual about the job, it was a simple repair task.)
I've ultimately decided that if it's something I'm required to learn for this specific task then I'm billing for the time spent doing that. But if it's something that I figure I should know as a person being hired to do a task in this particular domain then I won't bill for it.
To me it's the difference between hiring a mechanic to 'rebuild an engine' and 'rebuild a rare X764-DB-23 model of an exotic engine.'
It's reasonable to expect a mechanic to know how to rebuild an engine, but it isn't necessarily reasonable to expect a mechanic to know how to rebuild that particular engine, and therefore it's reasonable for that mechanic to charge you for the time spent learning the nuances and details of that particular engine by reading the manual, watching YouTube teardown videos, or searching /r/mechanic/ on Reddit for commentary about that specific engine.
It's important to strike a balance between these kinds of things as a contractor. You don't want to undervalue your time and you don't want to charge unreasonable rates.
I think I'd be careful about generalizing your experience, or mine. If my time in academia has taught me anything, it's that there is pretty high variance. Not just between schools, but even within a single department. I'm sure everyone who's gone to uni has at one point made a decision between "the hard professor I'll learn a lot from but get a bad grade" and "the easier professor who'll give me a good grade." The unicorn where you get both is just rarer. Let's be honest, most people will choose the latter, since the reality is that your grade probably matters more than the actual knowledge. IMO this is a failure of the system, a clear example of Goodhart's Law. But I also don't have a solution to present, as measuring knowledge is simply a difficult task. I'm sure you've all met people who are very smart and didn't do well in school, as well as the inverse. The metric used to be "good enough" for "most people," but things have gotten so competitive that optimizing the metric is all that people can see.
I went to grad school in CS after a few years of work, and when I taught I centered the classes around projects. This was more difficult in lower-division classes but very effective in upper-division ones. But it is more work for the person running the class.
I don't think there's a clear solution that can be applied to all fields or all classes, but I do think it is important people rethink how to do things.
For example, getting 90% on a test is applauded and earns a distinction. In a job context, 90% gets you fired. I don't want a worker who produces "90% well-soldered boards." I don't want software that runs on "90% of our customers' computers." Or a bug in every 10 lines of released code.
A test puts an arbitrary time limit on a task. In the real world time is seldom the goal. Correctness is more important. (Well, the mechanic was going to put all the wheel nuts on, but he ran out of time.)
College tests are largely a test of memory, not knowledge or understanding. "List the 7 layers of the OSI model in order." In the real world you can just Google it. Testing understanding is much harder to mark, though; testing memory is easy to set and easy to mark.
Some courses are moving away from timed tests and more towards assignments throughout the year. That's a better measure (but, alas, also easier to cheat on).
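For what it's worth, that kind of rote answer is literally a constant you can look up; a throwaway sketch in Python (purely illustrative):

    # The seven OSI layers, bottom to top -- the sort of fact a timed exam
    # asks you to recall from memory and a search engine returns instantly.
    OSI_LAYERS = [
        "Physical", "Data Link", "Network", "Transport",
        "Session", "Presentation", "Application",
    ]
    print(", ".join(OSI_LAYERS))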
Shockingly I got full credit, although the professor probably picked a bigger prime for her next class.
I'm also surprised at how common it is for people to openly discuss how irrelevant leetcode is to the actual work on the job, yet it remains the status quo. On one hand we like to claim that an academic education is not beneficial, but on the other hand we use it as the main testing method.
I think the reason I'm most surprised is that we, more than most other professions, have publicly visible "proof of competence." Most of us have git repos that are publicly available! I can totally understand that this isn't universal, but in very few industries is there such a publicly visible record of work. Who else has that? Artists? I'm not sure why this isn't weighted more heavily than these weird code tests, for which we've developed a secondary market to help people optimize. It feels like a huge waste of money and time.
I've had similar experiences with auto repair shops. Recently I got a BS estimate for an alternator replacement, and a BS explanation. Fortunately, I had done my homework beforehand and knew everything about how to replace the alternator on my particular car, and the service rep knew he was outmaneuvered and gave me a fair price.
Women believe they are targeted by auto mechanics, but mechanics target men just as much as they can, too.
Ya know, the funny thing about students - if you presume they are honest, they tend to be honest. The students loved it, I loved it. If anyone cheated, the students would turn him in. Nobody ever bragged about cheating, 'cuz they would have been ostracized.
Besides, I actually wanted to learn the stuff.
A similar exam problem in AMA95 was to derive the hyperbolic transforms. The trick there was to know how the Fourier transforms (based on sine/cosine) were derived, and just substitute in sinh/cosh.
If you were a formula plugger or just memorized facts, you'd be dead in the water.
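Roughly, the substitution idea (a from-memory sketch, not the actual exam solution; it assumes the standard Fourier cosine/sine transform forms):

    \[
      F_c(\omega) = \int_0^\infty f(t)\,\cos(\omega t)\,dt ,\qquad
      F_s(\omega) = \int_0^\infty f(t)\,\sin(\omega t)\,dt
    \]
    % Hyperbolic analogues: rerun the same derivation with the kernel
    % e^{\pm i\omega t} replaced by e^{\pm\omega t}, i.e. cos -> cosh, sin -> sinh:
    \[
      F_{\cosh}(\omega) = \int_0^\infty f(t)\,\cosh(\omega t)\,dt ,\qquad
      F_{\sinh}(\omega) = \int_0^\infty f(t)\,\sinh(\omega t)\,dt
    \]

The point being: if you knew where the original derivation came from, the hyperbolic version was a one-line substitution; if you only memorized the end formulas, there was nothing to plug into.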
The ASR-33 teletype lasted another year.
I ceased knowing everything about my computer in the late 80s.
Alas, we now depend on "lockdown browser mode" for reliably taking tests where you can type, and still there's no support (AFAIK) for "lockdown vim in browser" for coding tests.
I actually loved my classical mechanics class. The professor was really good, and in the homework he'd come up with creative problems. The hardest part was always starting. Once you got the right setup, you could churn away like on any other problem (maybe needing to know a few tricks here and there).
Coming over to CS, I was a bit surprised how test-based things were. I'm still surprised that everyone thinks you can test a program to prove its correctness, or that people gravely misinterpret the previous sentence as "don't write tests" rather than "tests only say so much."
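A toy illustration of what I mean by "tests only say so much" (hypothetical names, obviously not anyone's real code):

    # A deliberately broken predicate that still passes its tests.
    def is_even(n):
        return n in (0, 2, 4, 6, 8)   # only "knows" a handful of cases

    assert is_even(2)
    assert is_even(4)
    assert not is_even(3)
    # All asserts pass, yet is_even(10) returns False.  Passing tests show
    # correct behavior on the sampled inputs, not correctness in general.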
I think if you look at the 2012 Harvard cheating scandal, it's clear that this isn't true. There, the professor presumed honest students, hundreds cheated, and no student reported it.
/s
And I recall a sci-fi short story from long ago: a technological civilization on a single continent under a permanently clouded sky. They had not figured out they were living on a sphere, and they were having trouble with train tracks mysteriously being the wrong distance apart and train passengers feeling light on the high-speed trains. I didn't check the guy's math, but it sure seemed right when the answers looked exactly like Einstein's equations, even though the units were very different. (Limiting velocity = orbital velocity, the discontinuity being weightlessness.)
One reason it did work is that the students liked being trusted; they did not like anyone who would threaten the system, and they would turn them in.
BTW, that was 50 years ago. I have no information on how the honor system is faring today.