I swear, the big reason models are black boxes is because we _want_ them to be. There's a clear anti-theory sentiment against people doing theoretical work, and the result of this shows. I remember not too long ago Yi Tay (under @agihippo but main is @YiTayML) said "fuck theorists". I guess it's no surprise DeepMind recently hired him after that "get good" stuff.
Also, I'd like to point out that the author uses "we" but the paper has only one author on it. So may I suggest adding their cat as a coauthor? [1]
I am on the review panel of some conferences too, and it is not uncommon to be assigned a paper outside my comfort zone. That doesn't mean I cut and bail. You set aside time, read up on the area, ask the authors questions, and judge accordingly. Unfortunately this doesn't happen most of the time - people seem to be in a rush to finish their reviews no matter the quality. At this point, we just mechanically keep resubmitting the paper every once in a while.
Sorry, end of rant :)
I looked at your blog a bit and was able to find this, which may be it?
> Learning Interpretable Models Using Uncertainty Oracles
I have no formal math background, really, so I can't speak to your methods, but I appreciate that you have shared your work freely.
Did you have any issues defending your thesis due to the issues you described above related to publishing?
Noticed a typo in your abstract:
"maybe" should be "may be" in the sentence below:
> We show that this technique addresses the above challenges: (a) it arrests the reduction in accuracy that comes from shrinking a model (in some cases we observe ~ 100% improvement over baselines), and also, (b) that this maybe applied with no change across model families with different notions of size; results are shown for Decision Trees, Linear Probability models and Gradient Boosted Models.
Thank you for pointing out the typo - will fix it!
[1] https://www.frontiersin.org/journals/artificial-intelligence...