Submitted by ThisIsMyStonerAcount t3_z8di4c in MachineLearning

I've been to a number of NeurIPS conferences so far. I have a PhD, work in industry, and publish here occasionally. Not willing to discuss the identity of my employer, but AMA otherwise. Whatever is on your mind, either on ML in general or NeurIPS specifics.

29

Comments

pythoslabs t1_iyboevp wrote

Is there a list of the rejected papers for NeurIPS 2022? (Wanted to get an idea of the other research areas, the ones that were probably not deemed relevant by the reviewers :) )

I COULD find the list of the accepted papers, though - link here if anyone is interested.

16

Alarming_Fig_3660 t1_iyblmuc wrote

Has it really become like the Burning Man of conferences? Meaning it's now overrun by mostly non-academic folk? Is it still worth going to?

14

ThisIsMyStonerAcount OP t1_iybqune wrote

The poster sessions are still nice, though overcrowded. I didn't particularly enjoy the keynotes so far. But in general terms, it's still fairly academic.

19

DeezNUTSampler t1_iybwnnz wrote

Definitely not. I'd say a solid 70-80% are researchers, and a significant number of the non-research industry folks are people who work on ML.

13

bluboxsw t1_iyb5etz wrote

Any big leaps forward that surprised you so far?

12

ThisIsMyStonerAcount OP t1_iyb5zlc wrote

Only the leap between the beginning of the conference hall and the beginning of NeurIPS at the end of that hall. All the big papers so far have been known for long enough.

41

cyborgjames123 t1_iycw8ht wrote

+1 lol. I am attending NeurIPS too, and that long walk is so annoying

7

learn-deeply t1_iyb3tkk wrote

Which company has the best after-parties?

8

ThisIsMyStonerAcount OP t1_iyb59jw wrote

Most haven't happened yet, so I don't know. GResearch is throwing one on a boat this year, which for now sounds like the most exciting thing at this conference as far as parties go. I.e., Flo Rida's not gonna show up again, those times are over; now it's mostly about trying to get people to mingle. I hope the PhD students still party it up amongst themselves, but if so, I wasn't invited.

Apart from that, the biggest news in the ML research social sphere is that the 42nd annual Workboat Show begins tomorrow on the other side of the convention center. Everybody saw their big posters and everyone is talking about them. It's this year's inside joke. If no one acknowledges them in the keynotes or the end-of-conference speech, I'll be heavily disappointed.

21

pm_me_your_pay_slips t1_iybpgrm wrote

Flo Rida and the Intel nerds trying to be cool while unveiling what looked like a GPU was definitely a low point

8

mtocrat t1_iycqxw4 wrote

Companies throwing money around like there's no tomorrow is a high point for people who like money and are working in the industry, or aspiring to.

3

snekslayer t1_iybow9b wrote

Can I join these parties as an industry researcher? Or are they only for PhD students?

4

ThisIsMyStonerAcount OP t1_iybpgkt wrote

They are for everyone who gets an invite. How selective those are and who gets one depends on the company. The general rule is that the more hireable you are, the more they'll be interested in inviting you. The best way to find out about them is to talk to the people at the expo booths. Some are very public, e.g. Google lets everyone in who scans the QR code at their booth in time. On the other hand, at GResearch you had to sweet-talk the recruiter. I stopped talking to the booth people long ago, so I miss out on most events.

11

Cheap_Meeting t1_iybzc8v wrote

Did you hear about any parties on Thursday? Apparently Google, OpenAI, and DeepMind are all tomorrow.

4

joimintz t1_iycfcus wrote

I heard some hedge funds are doing their parties on Thursday…

5

Mefaso t1_iybnevz wrote

How does it compare to 2019?

Are poster sessions still being closed due to overcrowding?

I assume there are no crazy Russian parties this year?

5

ThisIsMyStonerAcount OP t1_iybqgfx wrote

On the plus side, the mug looks nicer than the 2019 one. On the negative side, a lot of the usual faces are missing. E.g., I haven't seen Yoshua Bengio, Neil Lawrence, LeCun, Schmidhuber, Schoelkopf, Ilya Sutskever, Ian Goodfellow, or lots of other folks. Not that there aren't a ton of brilliant people regardless, but it's kind of weird (might have just missed some of them in the crowd, though).

Poster sessions are crowded like 2019, but they empty out fairly quickly (e.g. an hour after the start of a session there are noticeably fewer people). No one is social distancing, and fewer than 30% of people wear masks.

No Sberbank models this year. Nvidia is also missing, which was a bit more unexpected.

10

RobbinDeBank t1_iybsrv3 wrote

The what models?

2

ThisIsMyStonerAcount OP t1_iybtxn1 wrote

I might be misremembering, but didn't Sberbank have some people at their booth whose job seemed to consist purely of being eye candy?

3

RobbinDeBank t1_iybxpq5 wrote

Well I’m pretty new to the field so Idk. Just surprised to know that

1

Mefaso t1_iyckce7 wrote

Apparently their parties also had strippers, but you had to know Russians to get in.

This is all just secondhand, though.

3

_thepurpleowl_ t1_iybxu7r wrote

I heard GPT-4 is the new hot talk at the conference. Is that true? What are some insane capabilities you heard about?

5

ThisIsMyStonerAcount OP t1_iyd5quv wrote

No, the hot talk is WorkBoat; GPT-4 is vaporware. But today is OpenAI's party, so if they're going to announce something, today would be the day, IMO.

10

blabboy t1_iydmcqp wrote

What is WorkBoat? I feel out of the loop

2

ThisIsMyStonerAcount OP t1_iydwd3d wrote

Inside joke for conference attendees. Has nothing to do with ML research.

2

Bot-69912020 t1_iyf8gr7 wrote

When my prof tried to get his conference badge, he accidentally queued at the Workboat Show and only realized it when they rejected him lol

Would have been really funny if they went through with it and he ended up walking through a boating conference, having no clue what was going on.

2

tastycake4me t1_iyc430i wrote

How much does it cost to attend NeurIPS? Even if your company covers the costs.

Also, I see pictures of the conference on social media, and there are so many people. Is it crowded? Does it get annoying? And how would you rate the experience overall?

5

ThisIsMyStonerAcount OP t1_iyd6gca wrote

The ticket was 1000 USD for industry people, which is a huge hike from previous years. Academic rate should've been much lower, but IDK. Then there's the hotel cost. Mine costs ~2k USD for the week, but I've heard people pay way less for an AirBnB. Travel costs depend too heavily on where you're coming from to give a decent estimate.

It's crowded, but that's always been the case. There's a few thousand people here. It's okay most of the time, although it is hard to talk to people during poster sessions, which is unfortunate.

Overall, I'm happy to be here again and to meet the old friends and acquaintances you make working in the field over time. I missed that. And the general sense of being around ML people 24/7 is nice. On the other hand, I can't wait to have a real meal again and not just fast food all the time. I'd give it an 8/10.

6

AdFew4357 t1_iybkqx0 wrote

I'm an undergraduate, and I aspire to be in a position like yours; your career is my career dream. What can I do right now (I'm applying to PhD programs in statistics) to get to your spot? What advice do you have for undergrads who want to break into your field of ML research? I know what I want to research, and I have the books, the resources, the classes to take, and the professors and classmates I will meet in my PhD program. How can I end up in your position?

4

ThisIsMyStonerAcount OP t1_iybp1de wrote

So, maybe let's not put the cart before the horse; focus on getting through the PhD first. You sound super motivated, but just to be sure, these are your two most important next steps:

  1. If you can, find a good advisor/mentor. They'll introduce you to people and help you develop your research skills. TBH I don't know if the current PhD student job market is more of a "take what you can get" or a "lots of open positions" thing. But if you have several options on where to join, go with the one you feel you can vibe with: they should have experience publishing at top-tier conferences, but also be willing to help you succeed. Use the search function; there's lots of advice on this subreddit about what makes a good advisor.

  2. Decide on what kind of research you want to do / pick a topic. Would you rather do applied research or theoretical stuff? There's a whole spectrum from math and learning theory, to foundational work (e.g. RL or deep learning), to applied work (computer vision, NLP, ...). Find a topic that excites you enough that you can dedicate several years to learning it and excelling in a subfield of it. The classes you'll be taking and the conversations with people at your lab and at conferences should give you a flavor of what's out there. There's a bit of luck involved in picking a research direction that proves to be relevant, so advice from advisors/mentors helps a ton at that particular stage. One of the most important lessons you should learn in your PhD is how to find and approach good research questions.

Those are the two most important things to optimize for PhD success, which in turn optimizes hireability. In general, your best bet at an industry research position is to do work that is meaningful enough that someone notices, in a field that the company (or a research team with headcount) cares about. What that is depends on where you want to go and which field you want to work in. Definitely try to collaborate with industry, apply for internships or similar programs, or find other ways to collaborate while you're still in your PhD. But like I said: I'd focus more on enjoying my PhD first; everything else should merely be a regularizer. Find a topic that interests you and the rest will follow.

16

AdFew4357 t1_iybshr4 wrote

Thanks a lot for the advice. I'll do some more research on 1), as I've applied to schools but I've done little screening on the "vibe" of potential advisors and labs.

3

ThisIsMyStonerAcount OP t1_iybudxa wrote

Look for someone whose current students seem happy (talk to the students if you can!). The ideal advisor is someone who values work/life balance, yet still manages to do good work, and is willing to talk to you/help you on a regular basis.

6

chechgm t1_iybufel wrote

Does Cambridge University Press have the usual NeurIPS deals?

4

Red-Portal t1_iydabf3 wrote

Yes!

3

chechgm t1_iydliul wrote

I found it online too, but they don't have the book I want :'( Are they handing out sitewide discount codes?

1

waebal t1_iydzvzt wrote

The “deals” are maybe $1 cheaper than the price listed on amazon.com.

1

Ok-Associate878 t1_iyboq3w wrote

What's an underrated paper? And was Cohere the best party so far?

3

ThisIsMyStonerAcount OP t1_iybqo0r wrote

Underrated: I haven't read all of the Outstanding Papers yet, but I'm looking forward to digging into "Is Out-of-distribution Detection Learnable?" and "A Neural Corpus Indexer for Document Retrieval" (though arguably outstanding papers aren't underrated ;) ).

Cohere: Aidan wasn't there, which was a bit sad; I would've enjoyed meeting him again.

10

hophophop1233 t1_iyb2ytg wrote

What exactly is the current state of AI/ML, and how do I learn it? I need to catch up. I've played around building my own networks in Keras.

2

ThisIsMyStonerAcount OP t1_iyb4wme wrote

Well, I don't know what level you're at, but I'm assuming you're an undergrad, so I'll keep this high-level:

Well, we have models that have some understanding of text (e.g. GPT-3) and some notion of images (anything since ResNet, or even AlexNet). Mostly in the vague sense that when we feed text or images into these "encoder" networks, they spit out a "representation vector" (i.e., a bunch of hard-to-decrypt numbers). We can feed those into "decoder" networks that do sensible things with those vectors (e.g. tell you that this vector is most likely of class "husky" at this and that position, or produce text that is the logical continuation of whatever text prompt you give it). We can train huuuuuuuuuuuuuge models like that (billions of parameters to learn, probably ~$10^6 to train for the first time). Very recently (last 1-2 years) we've learned to combine these two modalities (e.g. CLIP). So you can feed in text and get out an image (e.g. Stable Diffusion), or feed in text and an image and get out whatever the text said to get out of the image (e.g. Flamingo).
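
To make the encoder/decoder split concrete, here's a tiny, made-up PyTorch sketch. It's a toy, nothing like the actual architectures mentioned above (the layer sizes and the 128-dim vector are arbitrary), but the interface is the point: pixels go into an encoder, a representation vector comes out, and a small decoder head turns that vector into something useful (here, class scores).

```python
import torch
import torch.nn as nn

# Toy "encoder": turns a 3x224x224 image into a representation vector.
# Real encoders (ResNet, ViT, CLIP's image tower, ...) are far deeper,
# but the interface is the same: pixels in, a vector of numbers out.
image_encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 128),  # the 128-dim "representation vector"
)

# Toy "decoder": maps that vector to class scores (e.g. "husky" vs. the rest).
classifier_head = nn.Linear(128, 1000)

image = torch.randn(1, 3, 224, 224)        # one fake RGB image
representation = image_encoder(image)      # shape: (1, 128)
logits = classifier_head(representation)   # shape: (1, 1000)
predicted_class = logits.argmax(dim=-1)    # index of the most likely class
```

Swap the classifier head for a text decoder and you're heading in the captioning/Flamingo direction; the real models differ mostly in scale and training data, not in this basic plumbing.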

That's roughly where we are in terms of the big picture. Currently, we're working on better ways to train these models (e.g. by requiring less or no supervised data), or finding out how they scale with input data and compute, or getting smaller models out of the big ones, or deciding whether to call the big ones "foundational" or "pretrained", or finding creative ways to use or improve e.g. Stable Diffusion and similar models for other applications like reading code, as well as a bunch of other stuff. No idea what the next big idea after that will be. My hunch is on memory (or rediscovering recurrent nets).

Edit: this was extremely Deep Learning-centric, sorry. There's of course other stuff going on: I don't follow Reinforcement Learning (learning over time with rewards) at all, so no clue about that, though it's arguably important for getting from ML to more general AI. Also, there are currently lots of issues being raised w.r.t. Fairness and Biases in AI (though I have seen almost no papers on it this year; why is that?). And more and more, people are starting to reason about "causality", i.e., how to go from correlation between things to causation between things... lots of other stuff outside my bubble.

23

random_boiler t1_iybvnyt wrote

Any advice for PhD students in other fields trying to join ML research? I've taken DL classes already, but how do I get into research?

2

ThisIsMyStonerAcount OP t1_iyd417l wrote

I've seen people have a lot of success by doing a PhD in their respective field (e.g. chemistry, biology, hydrology, ...) that applies ML to that field, and then slowly changing their research to ML (e.g. going from "predicting stuff about atoms" to "using super state-of-the-art ML to predict stuff" to "adapting state-of-the-art stuff in ways that might be useful to other ML people and publishing at NeurIPS"). You'll need to discuss this with your advisor though, and ideally find one who's willing to support you (or at the very least doesn't mind if you try).

4

random_boiler t1_iyd802y wrote

>going from "predicting stuff about atoms" to "using super state-of-the-art ML to predict stuff"

At this stage, they typically publish in their field right?

2

ThisIsMyStonerAcount OP t1_iyd9s80 wrote

Usually yes, you'd only publish at ML venues if you think it's more important for the ML people to hear about your research than it is for the people in your own field.

1

FirstOrderCat t1_iyc30d4 wrote

What are you doing there? What is your goal in visiting this conference?

2

ThisIsMyStonerAcount OP t1_iyd8ky3 wrote

I'd usually go to NeurIPS to present my own work, but this year I don't have a paper here. I came to mingle with other researchers, catch up with old friends (people from my previous jobs/labs, people who left my current team, ex-interns, my advisor, co-authors of previous papers, random acquaintances from parties at previous conferences, ...), make new ones, and see what other people are working on.

5

Grouchy_Document7786 t1_iych3ca wrote

What do you think about the JAX framework? Will it ever get anywhere close to PyTorch's popularity? (In the research sector.)

2

ThisIsMyStonerAcount OP t1_iyd57ok wrote

JAX is a huge step up from TensorFlow, even though I absolutely don't understand why it takes Google so many iterations to eventually land on a PyTorch-like API. JAX might be close enough that they'll give up trying, but I feel like it still falls short of being as good as PyTorch. That said, Google will definitely continue using it, and they're one of the main drivers of AI research, so at least in that area it'll see non-trivial adoption. Since I still think JAX is inferior to PyTorch, there's no reason for others to switch (the better support for TPUs might be a selling point, so if Google gets serious about pushing those onto people, there might be an uptick in PyTorch-to-JAX conversion).

The productization story is way worse than PyTorch's, and even within Google, production still uses TF. Until that changes, I don't think JAX will make big inroads in industry.
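
To give a flavor of the API gap I'm talking about: JAX is functional, so parameters are passed around explicitly and you compose pure functions with transformations like `grad` and `jit`, instead of mutating module state the way you would in PyTorch. A minimal, made-up sketch (toy linear regression, nothing production-grade):

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # params is an explicit (weights, bias) tuple, not hidden module state
    w, b = params
    pred = x @ w + b
    return jnp.mean((pred - y) ** 2)

# Compose transformations: grad differentiates w.r.t. the first argument,
# jit compiles the result with XLA.
grad_fn = jax.jit(jax.grad(loss_fn))

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (3, 1))
b = jnp.zeros((1,))
x = jax.random.normal(key, (8, 3))
y = jnp.ones((8, 1))

grads = grad_fn((w, b), x, y)  # gradients w.r.t. (w, b)
```

Whether you find that elegant or annoying is basically the PyTorch-vs-JAX debate in a nutshell.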

6

euFalaHoje t1_iybvoss wrote

This isn't NeurIPS related, but what is your job title? I have a PhD too and I'm looking for a more research-oriented role. In my current job I'm not doing any research, and I miss it from my grad school days. What would you suggest?

1

ThisIsMyStonerAcount OP t1_iyd48d7 wrote

Research Scientist. If you want a more research-oriented role, I'd suggest you go find one. They're certainly out there; just talk to companies who are hiring (see the NeurIPS sponsors page). I can't tell you how hard or easy it will be to get in if you don't have any recent, relevant publications in the area, though. I'd expect that at least early-stage startups might not care, but you won't have time to do research there, either.

2

ID4gotten t1_iyc266i wrote

Hi, thanks for doing this! A few questions:

  1. Given the recent tech layoffs, what do you think the mood is among sponsors and attendees where hiring is concerned? Is there a lot of active recruiting?

  2. Do you feel like the sponsors/vendors are exclusively hiring people fresh out of school, or is there a range of openings?

  3. Are you seeing any groups working on neuro-symbolic methods to leverage the successes of LLMs?

(You were kind to address my earlier question in this sub about job seeking at NeurIPS - unfortunately I wasn't able to come this year. )

1

ThisIsMyStonerAcount OP t1_iyd86dz wrote

  1. The recruiters who are there are still intent on hiring and getting to know people. But the big corps have shifted from "we're hiring everything and everyone" to "if you're outstanding in an area we care about, we'd love to have you". I haven't talked much to recruiters, but it seemed like they were still trying hard to find interested people.

  2. Can't say. HR people were interested in me, though my badge clearly says that I already work in industry.

  3. Haven't seen much happening in that sphere (which does not imply that it isn't out there).

2

Hrnaboss t1_iycjmpu wrote

What are your thoughts and experience with more applied research and building towards a career in more technical management? Do you have any advice?

1

noop_noob t1_iye2jog wrote

Any news on GPT-4?

1

ThisIsMyStonerAcount OP t1_iyed1to wrote

I wouldn't get my hopes up on anything big on that front. Sure, they could train a more compute-efficient model (c.f. Chinchilla), but in general it'll be incremental work, not groundbreaking. I'd be surprised if OpenAI actually dedicated a lot of resources to improving GPT-3; it wouldn't be their style. There's comparatively little to gain in terms of new breakthroughs, IMO.

2

zeroows t1_iyethk3 wrote

What happened to the NIPS name?

1

ThisIsMyStonerAcount OP t1_iyfa33o wrote

Some people were bothered by the political incorrectness, so it was renamed in 2018.

2

RandomTensor t1_iybbvzu wrote

Do you agree that there’s a 20% chance we will have conscious AI by 2032?

−7

ThisIsMyStonerAcount OP t1_iyblfm8 wrote

So obvious joke first: no, I don't agree because that's a continuous Random Variable and you're asking for a point estimate. badum tss

No but seriously, no-one can remotely predict scientific advances 10 years into the future... I don't have a good notion of what consciousness for an AI would look like. The definition Chalmers gave today ("experiencing subjective awareness") is a bit too wishy-washy, how do you measure that? But broadly speaking I don't think we'll have self-aware programs in 10 years.

12

canbooo t1_iyc5e42 wrote

Technically speaking, having 20% chance is not a point estimate, unless you assume that the distribution of the random variable itself is uncertain.

In that case, you accept being Bayesian so give us your f'in prior! /s

2

ThisIsMyStonerAcount OP t1_iyd4jfg wrote

What I meant is that you're asking me whether p(X=x)=0.2 where x is continuous, hence p(X=x) = 0.
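
Spelled out (and assuming X even has a density f, which is the whole conceit of the joke):

$$P(X = x) = \int_x^x f(t)\,dt = 0 \quad \text{for any single point } x,$$

so a nonzero number like 0.2 can only attach to an interval, e.g. $P(X \ge x)$, never to a single point.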

2

canbooo t1_iydgqzt wrote

Oh, fair enough, my bad, I misunderstood what you meant. You are absolutely right for that case. For me the question is rather P(X>=x) = 0.2, since having more intelligence implies you have (implicitly at least) 20%, but this is already too many arguments for a joke. Enjoy the conference!

1

simplicialous t1_iye7mlu wrote

I think they're referring to a Bernoulli distribution being discrete, while the estimator that answers the dudes question would have to be wrt a continuous distribution.

Ironically I work with Continuous-Bernoulli latent-density VAEs so I don't get it. woosh.

2

canbooo t1_iyeabow wrote

Unsure about your assumptions about the other assumptions, but LOLed at the end nonetheless. Just to completely confuse some redditors:

r/woosh

1

simplicialous t1_iyebm79 wrote

Just shootin' from the hip.. I'm not sure why the answer to the guy's question would have to be continuous though...

I do know that the Bernoulli distribution (that is used to generate probability estimates) is discrete though...

🤷‍♀️

1

waebal t1_iydz7lb wrote

Chalmers’ talk was at a very high level and geared towards an audience that is completely clueless about philosophy of mind, but he did talk quite a bit about what would constitute evidence for consciousness. He just doesn’t see strong evidence in existing systems.

1

Phoneaccount25732 t1_iybm23q wrote

To operationalize the question a bit and hopefully make it more interesting, let's consider whether 2032 will have AI models that are equally as conscious as fish, in whatever sense fish might be said to have consciousness.

−5

ThisIsMyStonerAcount OP t1_iybqrrj wrote

How is that operationalizing it?

2

Phoneaccount25732 t1_iybrlqw wrote

It's easier to break down the subjective experience of a fish into mechanical subcomponents than it is to do so for higher intelligences.

−3

waebal t1_iye0yb0 wrote

I agree. Chalmers points out that consciousness doesn’t require human-level intelligence and may be a much lower bar, especially if consciousness exists as a spectrum or along multiple dimensions. If you’re willing to admit the possibility that there’s something that it’s like to be a bat, or a dog, or a fish, then it seems plausible that there could be something that it is like to be a large language model with the ability to genuinely understand language beyond a surface level. Chalmers seems to think we are getting close to that point, even if e.g. Lamda isn’t quite there yet.

1