Submitted by redlow0992 t3_11r97fn in MachineLearning

For the papers we have submitted in recent years, there has been a significant increase in the number of reviewers whose only complaint is that the paper does not follow a "hip" version of the research topic. They don't care about the results and don't care about the merit of the work; their problem is that our work does not follow the trend. It feels like there is a subset of reviewers who see anything more than a year old as "out of date" and a reason for rejection.

Have we been unlucky with our reviewer bingo recently, or is this the case for others as well?

23

Comments


respeckKnuckles t1_jc8xver wrote

"Not using gpt4" is going to be in all NLP conference paper reviews for the next six months.

11

YouAgainShmidhoobuh t1_jc9a44k wrote

Not so sure about this. It seems like a tempting argument, but GPT-4 comes with no description of its model architecture or training approach at all, so there is no way to make a fair comparison of any kind.

8

bearific t1_jc9jo5y wrote

Yet when my sister submitted a paper before ChatGPT was released, she got complaints, literally days after its release, that she did not evaluate on ChatGPT.

9

ClassicJewJokes t1_jc9q98x wrote

Doesn't matter to most reviewers. There's little care for accessibility as well; remember the flak you'd get for not using MuJoCo in an RL paper back when it wasn't open source.

4

foreignEnigma t1_jc7n9mq wrote

IMO, first, it's the fault of the AC (area chair), who is sending the paper to the wrong set of reviewers. Second, I guess you may need to differentiate the work more clearly and explain why it diverges from the current trend. Third, good luck :)

6