Competitive coding, despite superficially involving typing code into an editor, has almost nothing to do with working on large pieces of software. It's a lot of rote memorisation, learning algorithms, matching them onto very particular problems, and so on; it's more of a sport. Just like playing too much bullet chess can be bad for your classical chess, I can honestly see how it gets in the way of collaborative work.
Great write-up by Erik Bernhardsson, CTO of Better, here: https://erikbern.com/2020/01/13/how-to-hire-smarter-than-the....
The only benefits are that you get better at interviewing and that they're fun to do, but the cons far outweigh the pros.
Every coding competition I’ve participated in required correctness as the first criterion for basic acceptance, and then speed as a secondary scoring criterion.
What I found is far more important are social skills. Can the person work as a member of a team? How do they respond to feedback? Or when something is hard? When they get stuck? How do they communicate a design? Rally a team around them? Deal with disputes? React to changing situations? Can they take the initiative or do they need to be told what to do? Etc, etc.
Together with actual coding/design skills - and with proper management - these are the necessary conditions. All in my humble opinion, of course.
The naive conclusion would be that height has nothing to do with basketball ability. The real answer is that markets are efficient and are already weighing one important feature against other predictors. Steph Curry wouldn't even be in the NBA if he had the shooting ability of Gheorghe Mureșan.
This can have interesting outcomes. For instance, when Triplebyte published their blog post about which environments get the most hires⁰, it revealed the areas they haven't yet entirely accounted for in their quest to increase matching performance.
0: https://triplebyte.com/blog/technical-interview-performance-...
If you already know that there is a tactic in the position your entire frame of reference changes. Which is actually why puzzle composition is treated very differently from actually playing, and a lot of famous composers are not particularly strong players.
This is why I feel it compares well to coding competitions. It looks so similar, but the mindset is very different. And looking only at tactics, just like looking at coding only as a game problem, is, I think, why it may damage your performance at work.
Building products and services is like driving an 18-wheeler cross country. It needs patience, dedication, long drawn-out effort, some degree of talent, teamwork, guidance, and what not.
Disclaimer: I don't have any data to back this up.
Perhaps what you mean is that knowledge of artificially crafted problems is not the same as knowledge of the practical tasks you would perform in a real job on a real-world application.
I would say that height is an advantage up to a certain point in basketball, but tall people are not especially rare. Within the market of basketball players, you can find tall people who also have other skills, sometimes you find short people (Steph Curry) who have exceptional skills.
Most coding competitions tend to assess a specific set of skills: puzzle solving ability; algorithmic knowledge; and being able to code fast. All of these skills are useful in "real life" programming.
However, since the code you write will be thrown away post-competition, your focus is on churning out solutions that "just work" — proper engineering practices and maintainability aren't relevant. All your code needs to do is generate the correct outputs.
Does competing turn you into a strong coder? Absolutely. Does this equate to being a strong engineer? Nope. Software engineering isn't just about coding fast.
This is anecdotal, but from what I've seen (as a trainer and friend of several IOI medalists): some of them appreciate that coding != engineering and proceed to develop their engineering skills. Others don't and remain stuck at the "I'll come up with a fast solution" mindset.
Whether one or the other happens very much depends on the person, plus, I believe, whom they end up working with. After all, we've all heard about the "10x" programmer – and when your colleague or subordinate appears to code at 10x speed, you just might think twice about whether you're qualified to advise or guide them. That results in their keeping any bad habits they might have.
The statement might as well be "tourist has bad job performance". (https://en.wikipedia.org/wiki/Gennady_Korotkevich) And that isn't surprising given how much he has to train every day to stay on top. He even turned down offers from Google/Facebook just to continue qualifying for the big annual competitions like Google Code Jam and Facebook Hacker Cup.
For a more in-depth account of how the top people train, you can check out this guy's advice on how to get two gold medals in IOI: https://codeforces.com/blog/entry/69100 and his training schedule: https://codeforces.com/blog/entry/69100?#comment-535272
Or this guy, who won IOI this year: https://www.youtube.com/watch?v=V_Cc4Yk2xe4&feature=youtu.be...
A better example would be Muggsy Bogues, who was a full 12" shorter than Steph Curry and could still dunk.
Alternatively, is it possible that this is an instance where your local experience doesn't generalize?
Hiring is always a crapshoot. Pro sports teams spend a lot more time and money on talent evaluation than tech companies and still get it hilariously wrong all the time.
Read between the lines. If all the players are tall, and all the coaches are tall, and the game has been played for more than a half century with that assumption... who knows how to train/coach a short player?
Just because I see some stronger-worded rebuttals in this thread, I want to point out that just because this is true (it is Berkson's Paradox), that does not mean it cannot be a valuable observation. As the author pointed out, for example, it might mean that this attribute is overweighted in hiring, which is something worth considering.
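As a rough illustration of Berkson's paradox (a toy simulation of my own, not from any study mentioned in this thread): if two skills are independent in the applicant pool but you only ever observe the people who cleared a combined hiring bar, the hired subset shows a negative correlation between them.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Two skills, independent in the full applicant pool.
    contest_skill = rng.normal(size=n)
    job_skill = rng.normal(size=n)

    # Hire anyone whose combined score clears a bar.
    hired = (contest_skill + job_skill) > 1.5

    print(np.corrcoef(contest_skill, job_skill)[0, 1])                # ~0.0 in the full pool
    print(np.corrcoef(contest_skill[hired], job_skill[hired])[0, 1])  # clearly negative among hires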
Whiteboarding weeds out candidates who cannot code, period. I wouldn't believe the legend of the "senior engineer" who couldn't FizzBuzz until I met him. I've personally conducted interviews where a simple word count function (take a string, count the words) couldn't be implemented in 30 minutes. And the resume listed proficiency in several programming languages.
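For what it's worth, here's roughly the level of the exercise; a minimal sketch of one acceptable answer (the exact prompt obviously varied):

    def word_count(text: str) -> int:
        """Count whitespace-separated words in a string."""
        return len(text.split())

    assert word_count("the quick brown fox") == 4
    assert word_count("") == 0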
Now this study tells us that competitive programmers aren't that great among the candidates that were hired, not among the pool of candidates. That's very different.
At lower levels like where I'm at, players are prone to mistakes and blunders, so having a good eye for tactics allows you to take advantage of those moments in the game as well as prevent yourself from getting into a bad situation.
But at elite levels, tactics have less importance (as he says in the video he estimates it drops to 50%) as every player at that level is extremely solid.
But nothing about a coding challenge is purely theoretical. It's mostly experience with the specific set of problems that come up in these challenges, which is a different set of problems than those that come up in a commercial situation.
Sorry, just ribbing you a bit.
I believe the words do have subtle but distinct differences in meaning.
Agreed on coding challenges being a different set of problems than most commercial applications.
(That was a partly rhetorical question. My point is that what is considered cheating in coding competitions is pretty much normal in real life jobs. Somebody who knows to look for solutions on Stack Overflow, I call them "resourceful", not a cheater. Being resourceful is really important in real life jobs.)
Coding competitions are also like F1 races in that the problem space usually is very well-defined and very narrow. You'll only run in this track, you know pretty much exactly who your opponents are, you know exactly how much budget you have. The only things that are unpredictable are the weather on the day of the race, the accidents that may happen, a team member getting sick.
Building actual products and services is like driving an 18-wheeler in that the road is much more open and you don't know what other vehicles and/or reckless drivers you'll come across, the weather variation over a very long distance, traffic, road works and detours. The driver also needs to stay awake for much, much longer lengths of time.
Does being good at Kaggle competitions negatively correlate with actual job performance?
My own 3 hiring criteria are still not talked about much and were very effective last I used them. Now I understand why a little better.
"Chess problem" is a term of art that refers to an artificial composed position with a unique solution that is constructed to both be a challenge to the solver and have aesthetic value. They often have constraints on the solution such as that White must deliver checkmate in two moves (three ply). This is what I assume you're referring to.
A position from an actual game (or that easily could have been) that demonstrates a tactic (or combination of them) is generally known as a "chess puzzle", largely because the term "chess problem" was already squatted on.
Somewhere in between the two is the "study", which is a constructed position, less artificial than a chess problem but still very carefully made to have a unique solution that walks a tightrope and generally requires absolutely exact calculation rather than working by general tactical principles.
The explanation of the effect did seem a bit too convenient.
- Convincing them to spend great effort into changing their data collection and labelling practices.
- Explaining why a particular technique was used and why it is correct.
- Explaining why they can't expect magic from 'big data'.
- Making models that are robust and easily maintained, vs fragile spaghetti.
But I don't think being good at kaggle implies being bad at data science soft skills. Technical skills are probably weakly correlated -- that's my prior, it would be good to see a study.
I did find a paper examining the performance of TopCoder participants: https://doi.org/10.1007/s10664-019-09755-0
The same is arguably true for many professions and walks of life.
If you can make your work serviceable by more people, it becomes less expensive to do so. And in many (not all) cases, that’s a superior life-time value of your work.
Alternative explanation: good job performance, especially in big companies where such studies can be conducted, requires some consideration for corporate politics that correlates negatively with interest in programming?
Program with people. Learn from them. Exchange ideas. Get code reviews and give them.
Look at program source code on github for inspiration.
And just keep writing programs.
After that it's just part of the conversation. Probing the resume. And asking some of these question flat out. I also discuss real practical problems/challenges we have at work, most of which require a team to work together.
We also used to (pre-covid) take the person out for lunch and deliberately not talk about work at that time.
Of course it's different between an engineer fresh from college and a seasoned software engineer or architect.
In the end, I think interviewing is itself a skill one needs to train. Just having a person solve some brain-teasers doesn't cut it. I have seen _way_ more teams, co-founders, etc. fail due to social issues rather than technical skills.
And I am by no means presuming that I'm good at it, just that these are things I look for.
An important part I forgot to mention in the first post is that the person is interviewing the company as much as we interview the person. So we leave a significant amount of time for asking questions about the company... and try to make a good impression.
(Sorry for the essay.)
Some teams draft for current skill, others draft for absolutely maximum possible potential, others draft for some combination of both. Some teams are willing to risk a "bust" if there is the potential of ultra-elite league-best skill. And considering that no player who has reached the MVP level has fallen further than 15th, I'd say as a whole the NBA teams are doing very well.
Where do people get the confidence to assert such nonsense? Chess is 90% tactics at the under-1800 Elo level or so. At the 2700+ level? No way.
It's easy to make generalizations that minimize or downplay some of these things. But it's no more knowledge than the original study on too little data was.
What proportion of people out there can learn to jump so high, even with extensive training/practice?
I'll spoil a bit of the book though. Free form "coffee interviews" or lunch interviews are the worst possible types. They introduce a ton of bias into the process that's not related to job performance.
In order to really get a good assessment of whether the interview ratings were effective, they'd need to also hire some random unbiased sample of those who fail the interview process. There are alternative ways of slicing the data to help give insight, such as looking only at those who barely passed the interview process, or only at the bottom 10% of performers. However, when you're looking at such a highly biased sample (only the small-ish percentage of people hired), it's hard to say what the correlation is across the entire interview population.
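A small sketch of that range-restriction effect (my own assumed numbers, not anyone's real hiring data): even an interview score that genuinely predicts performance looks nearly uninformative once you only observe the top slice of scorers you actually hired.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Assume the interview score truly predicts performance (r = 0.5 in the full pool).
    interview = rng.normal(size=n)
    performance = 0.5 * interview + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

    # But performance is only ever observed for the top ~10% of scorers who get hired.
    hired = interview > np.quantile(interview, 0.9)

    print(np.corrcoef(interview, performance)[0, 1])                # ~0.5 across all candidates
    print(np.corrcoef(interview[hired], performance[hired])[0, 1])  # much weaker among hires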
At the risk of repeating myself, we don't particularly care about the predictive power of the scores across the whole range, only their predictive power across those who aren't obvious no-hires and those who aren't obvious hires. That's the range where the power of the interview scores as a decision-making tool is most important.
Also, if two metrics disagree, it's not clear which one is problematic. It's possible that a poor correlation indicates that there's a problem with the performance rating system.
[0] - https://www.nytimes.com/2013/06/20/business/in-head-hunting-...
You haven't; Google's interviews are correlated with job performance. They have data on it internally; people who work there can look. What you probably read was that brain teasers like "why are manhole covers round" don't correlate with job performance.
Ricky Rubio is famous for playing elite basketball since he was a scrawny 14 year old.
And Ricky Rubio has pretty much the same height as Steph Curry.
I guess I should have been more critical at the time. Thanks for the clarification. Is it widely known where to look up this data internally now? I left Google over a decade ago.
Like it efficiently chooses all hockey players with birthdays in January to March? [0]
The real answer is people hire based on their biases and organisational restrictions more than they hire on objective metrics - and we have plenty of evidence for that.
I mentioned Steph Curry because the original commenter did, but in general it's very strange to focus on the MVP. That's a small sample and cherry-picking the results, only talking about the successes and overlooking all of the draft busts. There's only 1 MVP in the league every season, and some players have won it multiple times. It was won 5 times by Michael Jordan, who incidentally was drafted 3rd (behind Sam Bowie). Only 21 players have won NBA MVP during that period.
In any case, that MVP record doesn't hold in other sports. For example, NFL MVP Tom Brady was drafted behind 198 other players in 2000.
> NBA teams are really good at "hiring".
Some long-suffering Minnesota Timberwolves fans might say otherwise.
If I'm biased, it might be because I defined the "cybersecurity industry" too broadly, not too narrowly: One can acquire certain skills from competing in CTFs/competitions, e.g. hard skills related to reverse engineering and vulnerability research... but I believe in most cases further skills are additionally needed to succeed in the industry, e.g. software engineering, and softer skills such as communication, planning and negotiation (useful for other jobs as well).
Overly optimizing skills to win CTFs while neglecting other matters can be harmful, like badly assigned character points in an RPG. :-)
"Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.
...One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything."
> An interesting paper [1] claims a negative correlation between sales performance and management performance for sales people promoted into managers. The conclusion is that “firms prioritize current job performance in promotion decisions at the expense of other observable characteristics that better predict managerial performance”. While this paper isn't about hiring, it's the exact same theory here: the x-axis would be something like “expected future management ability” and the y-axis “sales performance”.
That said, a quick search of "training to dunk 5'6"" on YouTube brings up a number of videos.