
GFrings t1_j8dixv3 wrote

In general, absolutely yes. In practice, the review process for most tier 1 and tier 2 conferences right now is a complete roll of the dice. For example, WACV and some other conferences explicitly state in their reviewer guidelines that novelty of the approach should be weighed over raw performance, yet I still see many reviews that ding the work for lack of SOTA results. The best thing you can do is make your work as academically rigorous as possible (good baseline experiments, ablation studies, analysis...) and keep submitting until you get in. Don't worry about what you can't control, namely randomly being assigned a dud reviewer.

12

Affectionate_Leg_686 t1_j8ebfju wrote

I second this, adding that "reviewer roulette" is now the norm in other research communities too. Some conferences are making an effort to improve the reviewing process, e.g., ICML has meta-reviewers and an open back-and-forth discussion between authors and reviewers. Still, that has not solved the problem.


Regarding your work: if possible, define a metric that encapsulates accuracy vs. cost (memory and compute), show how it varies across established models, and then use that as part of your case for why your model is much more "efficient" than the alternative of running X models in parallel.

In my experience, a proxy metric for cost is preferable for the ML crowd, i.e., something like operation counts and bits transferred. Of course, if you can measure wall-clock time on existing hardware, say a GPU or CPU, that would be best.
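
To make that concrete, here is a minimal sketch in plain Python of the kind of accuracy-vs-cost score and latency measurement I mean. The weights, accuracies, and FLOP/memory numbers are all hypothetical; in practice you'd get operation counts from a profiler or by hand, and pick a weighting that fits your argument.

```python
import time

def efficiency_score(accuracy, gflops, gb_moved, alpha=1.0, beta=0.1):
    """Toy accuracy-vs-cost score: higher is better.
    Cost is a weighted sum of compute (GFLOPs) and memory traffic (GB);
    alpha/beta are illustrative weights, not standard values."""
    cost = alpha * gflops + beta * gb_moved
    return accuracy / cost

def mean_latency(fn, warmup=5, runs=50):
    """Rough wall-clock latency on the host. For GPU timing, synchronize
    the device around the timed region (framework-specific)."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Hypothetical numbers: one multi-task model vs. three single-task models in parallel.
multi_task = efficiency_score(accuracy=0.78, gflops=4.1, gb_moved=0.4)
three_parallel = efficiency_score(accuracy=0.80, gflops=3 * 4.1, gb_moved=3 * 0.4)
print(f"multi-task: {multi_task:.3f}  vs  3 parallel models: {three_parallel:.3f}")
```

The point isn't this particular formula, it's having one number you can put in a table next to accuracy so reviewers see the trade-off at a glance.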

Good luck!

8