underPanther t1_jdjfndz wrote
Reply to comment by elegantrium in [D] ICML 2023 Reviewer-Author Discussion by zy415
Congratulations!
underPanther t1_jdho40y wrote
Reply to comment by zy415 in [D] ICML 2023 Reviewer-Author Discussion by zy415
I wonder if it's because the author-reviewer discussion period wasn't communicated as clearly at ICML this year as it was at those conferences.
underPanther t1_jdgli5w wrote
Reply to comment by Educational-Walk8098 in [D] ICML 2023 Reviewer-Author Discussion by zy415
The reviewers giving 7s wouldn't have awarded those scores unless they were prepared to argue for acceptance of your paper in its current state.
Extra experiments are always nice, but you should be proud of the hard work you've already done rather than dwell on the one experiment you can't do.
underPanther t1_jdeofve wrote
Reply to comment by andrew21w in [D] Simple Questions Thread by AutoModerator
Sorry for the confusion! It's discriminatory activations that lead to universality in wide single-layer networks. I've edited the post to reflect this.
As an aside, you might also find this extremely well-cited paper interesting: https://www.sciencedirect.com/science/article/abs/pii/S0893608005801315
underPanther t1_jddpryu wrote
Reply to comment by andrew21w in [D] Simple Questions Thread by AutoModerator
Another reason: wide single-layer MLPs with polynomials cannot be universal. But lots of other activations do give universality with a single hidden layer.
The technical reason behind this is that discriminatory activations give universality with a single hidden layer (Cybenko, 1989 is the reference).
But polynomials are not discriminatory (https://math.stackexchange.com/questions/3216437/non-trivial-examples-of-non-discriminatory-functions), so they fail to meet this criterion.
Also, if you craft a multilayer perceptron with polynomial activations, does this offer any benefit over fitting a Taylor series directly?
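You can see the failure numerically with a quick sketch (my own toy setup, not from Cybenko's paper): fix a random hidden layer, solve the output layer by least squares, and try to fit |x|, which has a kink no polynomial can match. A single hidden layer with activation z² only ever spans quadratics in x, no matter how wide it is, while tanh features fit the kink easily.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 400)
y = np.abs(x)  # target with a kink: no fixed-degree polynomial matches it

def random_feature_fit(activation, width=200):
    """Random hidden layer; output weights solved by least squares."""
    w = rng.normal(0.0, 4.0, size=width)      # random input weights
    b = rng.uniform(-2.0, 2.0, size=width)    # random biases
    H = activation(np.outer(x, w) + b)        # (400, width) hidden features
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.sqrt(np.mean((H @ coef - y) ** 2))  # RMS fitting error

# (w*x + b)^2 spans only {1, x, x^2}, so width doesn't help here.
err_poly = random_feature_fit(lambda z: z ** 2)
err_tanh = random_feature_fit(np.tanh)

print(f"RMS error, z^2 activation:  {err_poly:.4f}")
print(f"RMS error, tanh activation: {err_tanh:.4f}")
```

The polynomial network's error is pinned at the best-quadratic-approximation floor regardless of width, whereas the tanh network's error keeps shrinking as you add units, which is the universality claim in miniature.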
underPanther t1_jcbf1l8 wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Firstly, I don't see it as research if it's not published. It's a commercial product if they don't share it and profit from it. If you can reimplement it and publish it, it's yours for the taking.
Secondly, there's so much interesting work outside of large language models.
I don't care too much about what OpenAI get up to. They have a management team trying to become billionaires. That's fine. I'm happy doing science in my living room. Different priorities.
underPanther t1_jdukhr6 wrote
Reply to comment by passerby251 in [D] ICML 2023 Reviewer-Author Discussion by zy415
Well done on replying quickly. The timing of the last-minute interaction should be apparent to the AC.