Submitted by Even_Stay3387 t3_zcc9bg in MachineLearning

Here is the review link: https://openreview.net/forum?id=nN3aVRQsxGd

Title: How Powerful are K-hop Message Passing Graph Neural Networks

Authors: Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, Muhan Zhang

The reviews are public. The paper's four review scores were 3, 4, 5, and 6, and only the 6 was a weak accept. The average of 4.5 is not only far below the NeurIPS acceptance line (around a 5.75 score), but even below the threshold for rejected papers to be transferred to AAAI. How can the quality of this paper be evaluated objectively? Why was it accepted?

Meanwhile, the AC's stated reason in the meta-review is that the reviews are outdated. But by tracking the rebuttal process, we can clearly see that the reviewers and authors discussed over many rounds, and the reviewers said their concerns were not addressed. Why did the AC force the acceptance?


31

Comments


UnusualClimberBear t1_iyvpzci wrote

Because the area chair is the one making the recommendation. He managed to convince his senior area chair. Indeed you can suspect collusion, but judging from the reviews alone, without reading the paper, it looks like a typical paper in the 10%-60% quality quantile, and at that level acceptance is pretty random.

44

flashdude64 t1_iyvz72i wrote

There is so much academic dishonesty that it becomes unreasonable to use publications alone as a metric for institutions.

−1

Even_Stay3387 OP t1_iyw3rb4 wrote

You can check the rebuttal process yourself. The reviewers and authors discussed over many rounds, and the reviewers said their concerns were not addressed. It is really ridiculous to say the reviews are outdated.

−2

Nameless1995 t1_iyytwtb wrote

I checked the review engagement. Reviewers 1 and 2 were willing to give borderline accept/weak accept. Even with Reviewer 1, the authors had the final word, and Reviewer 1 didn't respond further.

Reviewers 3 and 4 gave weak reject/borderline reject.

Reviewer 3 was ultimately hung up only on the paper not providing a formal proof for some aspects (and seemed to have implicitly accepted that the other concerns were addressed). In the end the authors claimed to have provided the formal proof, but Reviewer 3 didn't respond further. Reviewer 4 didn't engage at all.

So I don't think it's "ridiculous" to say that the reviews are outdated. And ideally, we don't want the meta-reviewer to just average scores; otherwise there is no point in having a meta-reviewer at all. Just use a calculator and accept papers based on scores, which would simplify the whole pipeline if that's really what we want.

13

Artichoke-Lower t1_iyvpb3l wrote

The end of the meta-review addresses this:

« It is my opinion that the ratings offered by the reviewers are outdated (they were not updated in light of the author rebuttal). For this reason, I decided to accept the work. »

27

Even_Stay3387 OP t1_iyw3bfm wrote

That is not true. You can check that the reviewers and the authors discussed over many rounds, and the reviewers said the concerns were not addressed. I am very confused why the AC said the reviews are outdated!

−15

crouching_dragon_420 t1_iywanwb wrote

The AC is usually an experienced researcher and is more senior than the reviewers, who can be PhD students that got assigned papers to review by their supervisors. I would trust the AC's meta-review more than the reviews.

18

xgu5 t1_iywg9ma wrote

If that's the case, why do we even bother to have multiple anonymous reviewers? We could just let the AC make the decisions based on their "more senior" experience.

11

maybelator t1_iyycm4g wrote

Because they have 30+ papers to manage. The reviews allow them to focus on edge cases such as this one.

With CMT, the rebuttal is partially addressed to the AC. With OpenReview, I agree that it looks more uncomfortable.

10

curiousshortguy t1_iyx399h wrote

Supervisors are responsible and accountable for their reviews, though. It's a shitty excuse at best.

3

Nameless1995 t1_iyysho5 wrote

> who can be PhD students that got assigned papers to review by their supervisors

Or by the conference. PhD students can also get review requests and assignments directly from the conference. As a PhD student, I review things my supervisor doesn't know about.

2

SuperTankMan8964 t1_iywwi5n wrote

Why are you so surprised? Were you one of the reviewers?... 😳

22

SkeeringReal t1_iyyntvj wrote

That is pretty crazy, but not unheard of I guess.

Indeed, the reviewers did mostly reply, so saying the reviews are outdated makes little sense.

It does seem a bit fishy, but who knows, I guess. If it's a bad paper, who cares if it's published or not; no one will read it regardless.

2

Nameless1995 t1_iyyu1lg wrote

> If it's a bad paper, who cares if it's published or not; no one will read it regardless.

How do we know it's a bad paper without reading it?

5

LeanderKu t1_iyzttaw wrote

While individually unlikely, I would expect a conference the size of NeurIPS to have a few outliers. In the end, it's the AC's decision, and he is more senior and experienced (the reviewers could be PhD students, after all). It can happen, and with a large enough sample size it will happen. That doesn't mean these outliers shouldn't be scrutinized, by the way; they're unexpected, after all.

2

spacewxyz t1_iyzbw0e wrote

I'm always instantly skeptical of anything coming out of China, including any science or papers. There are of course plenty of excellent, smart Chinese scientists with full integrity but you never know what sort of CCP agenda exists. There are of course ethical problems with accepting anything from China as you're basically accepting papers from modern day Nazi Germany.

−18