zlacker

1. aspenm+(OP) 2024-11-30 08:46:32
I copied the DOI for convenience but they’re the same paper.

I have no formal math background, really, so I can’t speak to your methods, but I appreciate that you have shared your work freely.

Did you have any issues defending your thesis due to the issues you described above related to publishing?

Noticed a typo in your abstract:

“maybe” should be “may be” in the sentence below:

> We show that this technique addresses the above challenges: (a) it arrests the reduction in accuracy that comes from shrinking a model (in some cases we observe ~ 100% improvement over baselines), and also, (b) that this maybe applied with no change across model families with different notions of size; results are shown for Decision Trees, Linear Probability models and Gradient Boosted Models.

replies(1): >>abhgh+S
2. abhgh+S 2024-11-30 09:00:35
>>aspenm+(OP)
Yes, it did come up during my defense, but it was deemed not to be a concern: I had one prior paper [1] (the original in this line of work; the paper I linked above improves on it), and my advisor (a co-author on both papers) vouched for the quality of the work.

Thank you for pointing out the typo - will fix it!

[1] https://www.frontiersin.org/journals/artificial-intelligence...
