(It's linked at the bottom of this one, but I'm sure a lot of people don't get that far)
Every company has a predictive algorithm to use on students. Every startup stepping into the space is pushing data and data scientists.
But they all have the same old, often decades-old, baked-in biases. AND they're not doing anything to address them!
Just because it's math doesn't mean it's not biased. It's the thing I hate most professionally right now.
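To make the "math can still be biased" point concrete, here's a minimal sketch with entirely made-up synthetic data: if historical decisions penalized one group, then any model that faithfully fits those decisions reproduces the penalty. The groups, cutoffs, and penalty here are hypothetical, purely for illustration.

```python
import random

random.seed(0)

# Hypothetical synthetic data: two student groups with identical ability
# distributions, but historical admission decisions that penalized group B.
def historical_label(group, ability):
    penalty = 10 if group == "B" else 0  # assumed past bias against group B
    return ability - penalty >= 60

students = [("A" if i % 2 == 0 else "B", random.gauss(65, 10))
            for i in range(10000)]
labels = [historical_label(g, a) for g, a in students]

# A "predictive algorithm" that perfectly fits these labels inherits the
# penalty, even though the fitting itself is "just math".
def selection_rate(group):
    picks = [lab for (g, _), lab in zip(students, labels) if g == group]
    return sum(picks) / len(picks)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}")
# A ratio below 0.8 is the common "four-fifths rule" red flag.
print(f"disparate impact ratio: {rate_b / rate_a:.2f}")
```

Same underlying ability in both groups, yet the learned decision rule selects group B at a much lower rate: the bias lives in the labels, not in the arithmetic.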
When a model produces an unpalatable result, that doesn't mean it is biased. All that these algorithmic fairness people are saying, once you peel back the layers of rhetorical obfuscation, is that we should make ML models lie. Lying helps nobody in the long run.