
Affectionate_Leg_686 t1_j8ebfju wrote

I second this, adding that "reviewer roulette" is now the norm in other research communities too. Some conferences are making an effort to improve the reviewing process, e.g., ICML has meta-reviewers and an open back-and-forth discussion between the authors and the reviewers. Still, this has not solved the problem.


Regarding your work: if possible, define a metric that encapsulates accuracy vs. cost (memory and compute), show how it varies across established models, and then use that as part of your case for why your model is much more "efficient" than the alternative of running X models in parallel.
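To give a concrete idea of what I mean, here is a minimal sketch of one way to fold accuracy and cost into a single comparable score (the function name, weights, and numbers are all hypothetical, just to illustrate):

```python
# Hypothetical sketch: accuracy divided by a weighted cost term, so a higher
# score means more accuracy per unit of compute/memory. Numbers are made up.

def efficiency_score(accuracy: float, gflops: float, mem_gb: float,
                     alpha: float = 1.0, beta: float = 1.0) -> float:
    """Higher is better: accuracy per weighted unit of cost."""
    cost = alpha * gflops + beta * mem_gb
    return accuracy / cost

models = {
    "baseline_ensemble": {"accuracy": 0.91, "gflops": 120.0, "mem_gb": 8.0},
    "proposed_model":    {"accuracy": 0.89, "gflops": 15.0,  "mem_gb": 1.5},
}

for name, m in models.items():
    print(name, efficiency_score(m["accuracy"], m["gflops"], m["mem_gb"]))
```

The exact weighting is up to you; the point is to report one number per model so reviewers can compare the ensemble baseline and your model on the same axis.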

In my experience, a proxy metric for cost is preferable for the ML crowd; I mean something like operation counts and bits transferred. Of course, if you can measure time on existing hardware, say a GPU or CPU, that would be best.
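If you do go the wall-clock route on a GPU, something like the following PyTorch sketch is the usual pattern (it assumes `model` and `batch` already exist and are on the GPU; the warm-up and synchronization calls are there so the timer actually covers kernel execution):

```python
# Rough sketch of GPU forward-pass timing in PyTorch. Assumes `model` and
# `batch` are already on the GPU; warm up first, then synchronize around the
# timed region so queued kernels are included, and average over several runs.
import time
import torch

def time_forward(model, batch, warmup: int = 10, iters: int = 100) -> float:
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):       # amortize one-off setup costs
            model(batch)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        torch.cuda.synchronize()      # wait for all queued kernels to finish
    return (time.perf_counter() - start) / iters  # seconds per forward pass
```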

Good luck!
