Submitted by redlow0992 t3_11r97fn in MachineLearning

For the papers we have submitted in recent years, there has been a significant increase in the number of reviewers whose only complaint is that the paper doesn't follow the "hip" version of the research topic. They don't care about the results and don't care about the merit of the work; their problem is that our work doesn't follow the trend. It feels like there is a subset of reviewers who see anything more than a year old as "out of date" and a reason for rejection.

Have we been unlucky with our reviewer bingo recently, or is this the case for others as well?

23

Comments


foreignEnigma t1_jc7n9mq wrote

IMO, first, it's the fault of the AC, who is sending the paper to the wrong set of reviewers. Second, I guess you may need to distinguish the work better and explain why it differs from the current trend. Third, good luck :)

6

respeckKnuckles t1_jc8xver wrote

"Not using gpt4" is going to be in all NLP conference paper reviews for the next six months.

11