LeanderKu
LeanderKu t1_j8vxlqa wrote
Reply to comment by Red-Portal in [D] Lion , An Optimizer That Outperforms Adam - Symbolic Discovery of Optimization Algorithms by ExponentialCookie
I think learned optimizers have potential, but this is disappointing. There's nothing revolutionary here: sign-based optimizers already exist, and this is just a slightly different take. I see learned optimizers as a way of getting unintuitive results, but this could have been thrown together by some grad student. Random, but not surprising.
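For context, the update rule Lion discovered is a small variation on existing sign-based methods like signSGD. A minimal NumPy sketch of one Lion step, following the pseudocode in the paper (function name and defaults here are illustrative, not the paper's reference implementation):

```python
import numpy as np

def lion_step(theta, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update (sketch). Lion keeps only a momentum buffer `m`
    and applies the *sign* of an interpolated momentum/gradient term --
    the sign-based aspect the comment refers to."""
    # Interpolate momentum and current gradient, then take the sign.
    update = np.sign(beta1 * m + (1 - beta1) * grad)
    # Decoupled weight decay plus the signed step.
    theta = theta - lr * (update + wd * theta)
    # Momentum buffer is updated with a second, slower coefficient.
    m = beta2 * m + (1 - beta2) * grad
    return theta, m
```

The sign makes every coordinate's step the same magnitude (lr), which is exactly what signSGD does; Lion's twist is using different interpolation coefficients for the update and for the momentum tracking.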
LeanderKu t1_j6y7cge wrote
Reply to [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
I actually find automatically generating notes a smart and useful application. I often have one-on-one remote meetings, and I find it difficult to present and discuss my work while also taking notes. It often happens that I focus on something and forget to take notes, which I only notice a week later when I have forgotten half of the tasks. If it worked reliably, I could imagine it being a very useful addition.
I have never used teams though, everything's on zoom.
LeanderKu t1_j6hd6hp wrote
Reply to comment by JamesBaxter_Horse in [R] Train CIFAR10 in under 10 seconds on an A100 (new world record!) by tysam_and_co
Well, this is also an assumption. It would be interesting to see which lessons translate and which don't. I wouldn't dismiss it so quickly. Also, it's a fun game to play and interesting in its own right!
LeanderKu t1_iyzttaw wrote
Reply to [D] Score 4.5 GNN paper from Muhan Zhang at Peking University was amazingly accepted by NeurIPS 2022 by Even_Stay3387
While individually unlikely, I would expect a conference the size of NeurIPS to have a few outliers. In the end, it's the decision of the AC, who is more senior and experienced (the reviewers could all be PhD students, after all). It can happen, and with a sufficient sample size it will happen. That doesn't mean such outliers shouldn't be scrutinized, though; they are unexpected, after all.
LeanderKu t1_javeqbn wrote
Reply to comment by A_HumblePotato in [R] High-resolution image reconstruction with latent diffusion models from human brain activity by SleekEagle
They probably don't generalize. I bet they tried it.