MathChief

MathChief t1_izjarfb wrote

A 1x1 conv is essentially a linear transformation over the channel dimension, as the other redditor suggests, the same as nn.Linear in PyTorch.

What I would add is that in PyTorch the 1x1 conv by default accepts tensors of shape (B, C, *), for example (B, C, H, W) in 2d, which is convenient for implementation purposes. If you use nn.Linear, the channel dimension first has to be permuted to the last position, then the linear transformation applied, and then everything permuted back. The 1x1 conv, which effectively performs the same channel contraction (einsum) under the hood, does this in a single line, so the code is cleaner and less error prone.
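To illustrate (a minimal sketch, with made-up sizes, not taken from the original thread): the two layers compute the same per-position linear map over channels once their weights are shared, but nn.Linear needs the extra permutes while the 1x1 conv works on (B, C, H, W) directly.

import torch
import torch.nn as nn

# Sketch: a 1x1 conv and an nn.Linear sharing weights compute the same
# per-position linear map over channels; only the bookkeeping differs.
B, C_in, C_out, H, W = 2, 8, 16, 32, 32   # made-up sizes for illustration
x = torch.randn(B, C_in, H, W)

conv = nn.Conv2d(C_in, C_out, kernel_size=1)
lin = nn.Linear(C_in, C_out)

with torch.no_grad():
    lin.weight.copy_(conv.weight.view(C_out, C_in))  # conv weight is (C_out, C_in, 1, 1)
    lin.bias.copy_(conv.bias)

y_conv = conv(x)                                          # (B, C_out, H, W), one line
y_lin = lin(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)    # permute, apply, permute back

print(torch.allclose(y_conv, y_lin, atol=1e-6))           # True, up to float error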

7

MathChief t1_izgdz7x wrote

One more question: for the Van der Pol benchmark

Using the old faithful ode45 in MATLAB (Runge-Kutta) to run the test problem you listed in the poster:

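% Van der Pol oscillator, mu = 25, initial condition [1.2, 0.1], t in [0, 5]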
vdp1 = @(t, y) [y(2); 25*(1-y(1)^2)*y(2)-y(1)];
tic;
[t,y] = ode45(vdp1,[0 5],[1.2, 0.1]);
toc;

it takes only 0.007329 seconds to march 389 steps using FP64 on a CPU. What is the unit of the reported loop time? And what is the bottleneck of the implemented algorithm?

6

MathChief t1_iz0jzwb wrote

Native Mandarin speaker here. I don't think the neural translation has captured much of the emotional and sarcastic nuance of the statements on zhihu.com at all.

Below are rough translations of some of the serious accusations, each followed by my third-person paraphrase.

> Besides, what do the authors hope to prove by releasing the code? The biggest problem with this paper is that the numbers in the submitted version and in the camera-ready version disagree severely, which greatly influenced the reviewers' judgment. Even if your code can reproduce the camera-ready numbers, it still cannot explain the most critical error. The authors should stop making pointless explanations: the mistake can no longer be undone, and piling on excuses only makes it look worse. Proactively admitting the error to NIPS and withdrawing the paper is basic integrity.

This poster says that making the source code public is a futile attempt by the authors to make themselves look innocent: "Even if releasing the source code lets others replicate the benchmarks, it still cannot explain the key mistakes." The poster is pretty sure the authors cheated (without saying so outright). The bottom line, in their view, is to withdraw from NIPS and acknowledge the cheating.

> As for evidence: sometimes it arrives late, but sooner or later it comes. Some people say I am smearing Chinese scholars; the truth is I am all too familiar with their playbook. Reporting a higher number in the rebuttal to fool the reviewers, and then leaving that number out of the camera-ready, is child's play; it is only because OpenReview makes these inner workings public, and because this paper won an award and made big news, that it blew up. Far worse misconduct, such as collusion and reviewing one another's papers, is nothing new either. A major reason AAAI fell apart is that a few "水王" became ACs first, and "一人得道之后,后面的鸡犬也开始paper爆炸" (once one of them got ahead, the hangers-on around them started exploding with papers too); then such ACs multiplied, and in the end bad money drove out good. By comparison, fiddling a little with the training data and cherry-picking results really only count as minor tricks. In fact, many people churn out a dozen papers at a time in directions that mean nothing, partly because their comfort zone is easy going, and partly because everyone in this field is an old acquaintance anyway…

This poster says that many Chinese scholars have ethical issues, such as collusion rings. The "一人得道之后,后面的鸡犬也开始paper爆炸" part alludes to the famous saying "一人得道,雞犬升天" ("when a man attains the Dao, even his chickens and dogs ascend to heaven") from ancient Chinese writing. "水王" can be understood as someone who produces lots of template-ish papers with no new scientific contribution; the phrase comes from the netizen slang "灌水", which means meaningless filler content, like Lorem Ipsum. So when these "Lords of Lorem Ipsum" became ACs, the "researchers" around them racked up publications through collusion.

Overall, the accusations on zhihu.com are career-endingly serious. Unlike the "innocent until proven guilty" atmosphere here, zhihu'ers take the opposite stance, which is likely attributable to mainland Chinese culture.

6