1. abhgh+(OP) 2024-11-30 08:44:12
Yes, that's the one: https://arxiv.org/pdf/1906.06852
replies(1): >>aspenm+h
2. aspenm+h 2024-11-30 08:46:32
>>abhgh+(OP)
I copied the DOI for convenience but they’re the same paper.

I have no formal math background, so I can't speak to your methods, but I appreciate that you've shared your work freely.

Did you have any trouble defending your thesis because of the publishing issues you described above?

Noticed a typo in your abstract:

“Maybe” should be “may be” in the sentence below:

> We show that this technique addresses the above challenges: (a) it arrests the reduction in accuracy that comes from shrinking a model (in some cases we observe ~ 100% improvement over baselines), and also, (b) that this maybe applied with no change across model families with different notions of size; results are shown for Decision Trees, Linear Probability models and Gradient Boosted Models.

replies(1): >>abhgh+91
3. abhgh+91 2024-11-30 09:00:35
>>aspenm+h
Yes, it did come up during my defense, but it was deemed not to be a concern: I had one prior paper [1] (the original one in this line of work; the paper I linked above improves on it), and my advisor (a co-author on both papers) vouched for the quality of the work.

Thank you for pointing out the typo - will fix it!

[1] https://www.frontiersin.org/journals/artificial-intelligence...
