purplebrown_updown
purplebrown_updown t1_j67e5vz wrote
Reply to [D] Meta AI Residency 2023 by BeautyInUgly
How are they hiring? I would be cautious about them rescinding offers. Seriously. I’ve heard of interns whose offers were rescinded after they had already declined other offers.
purplebrown_updown t1_j2y17va wrote
Reply to [D]There is still no discussion nor response under my ICLR submission after two months. What do you think I should do? by minogame
Ugh. That’s frustrating. I don’t think there’s anything more you can do.
purplebrown_updown t1_iyz8ykc wrote
Reply to [D] What is the advantage of multi output regression over doing it individually for each target variable by triary95
Efficiency, mostly. But it can also be a matter of accuracy. You should also be hyperparameter tuning each model, so that becomes cumbersome, especially if you have thousands of outputs.
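For what it’s worth, scikit-learn’s `MultiOutputRegressor` wraps the “one independent model per target” loop for you, which is the cumbersome part the comment is describing. A minimal sketch (the `Ridge` base estimator and toy data are my own choices here, not from the thread):

```python
# Sketch: one multi-output wrapper vs. fitting a model per target by hand.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

# Toy data with 3 target variables.
X, Y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)

# The wrapper fits an independent Ridge model per target column in one call.
multi = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, Y)

# Equivalent by-hand loop: one model (and one tuning job) per target column.
per_target = [Ridge(alpha=1.0).fit(X, Y[:, j]) for j in range(Y.shape[1])]

pred_multi = multi.predict(X)
pred_loop = np.column_stack([m.predict(X) for m in per_target])
print(np.allclose(pred_multi, pred_loop))  # True: same models, less bookkeeping
```

Either way you still face a tuning job per target unless you use a natively multi-output model, which is where the efficiency argument comes in.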
purplebrown_updown t1_iykyv2h wrote
Reply to comment by purplebrown_updown in OpenAI ChatGPT [R] by Sea-Photo5230
Now that it’s been out for a while, though, it’s clear there are some big deficiencies. And it has the same problem as Meta’s Galactica: it’s overly confident even when it’s obviously wrong.
purplebrown_updown t1_iygwqo9 wrote
Reply to OpenAI ChatGPT [R] by Sea-Photo5230
This thing is insane. I’m kind of blown away. It’s scary good. And I’m someone who hates being sensationalist about AI.
purplebrown_updown t1_istfdxj wrote
That’s so stupid. That’s why interviewing is such a pain. Nobody gets only 30 minutes to solve a problem on the job. Also, if you don’t expect your employees to pick up new skills or learn, you’re doing it wrong. I don’t go into a new job expecting to do the exact same thing I’ve been doing for the past five years. I would never take a job like that. That’s boring.
purplebrown_updown t1_isfko71 wrote
Reply to [D] Interpolation in medical imaging? by Delacroid
Check out the paper by Gottlieb on sinc interpolation.
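For anyone unfamiliar, sinc interpolation (Whittaker–Shannon) reconstructs a band-limited signal from uniform samples by summing shifted sinc kernels. A minimal NumPy sketch of the general idea; the `sinc_interp` helper is my own illustration, not from Gottlieb’s paper:

```python
# Sketch: Whittaker–Shannon (sinc) interpolation from uniform samples.
import numpy as np

def sinc_interp(samples, t_samples, t_query):
    """Reconstruct the signal at t_query; t_samples must be a uniform grid."""
    T = t_samples[1] - t_samples[0]  # sampling period
    # Each sample contributes a shifted sinc kernel; sum the contributions.
    return np.array([np.sum(samples * np.sinc((t - t_samples) / T))
                     for t in t_query])

# Demo: a 2 Hz sine sampled at 20 Hz (well above Nyquist).
t = np.arange(0, 1, 0.05)
x = np.sin(2 * np.pi * 2 * t)

# Query points away from the window edges, where truncation error is largest.
t_fine = np.linspace(0.3, 0.7, 41)
x_hat = sinc_interp(x, t, t_fine)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * 2 * t_fine)))
print(err)  # modest error: the infinite sinc series is truncated to 20 samples
```

With an infinite sample train the reconstruction is exact; in practice (as in medical imaging) you work with a finite window, so edge effects and truncation error are the things the literature spends its time on.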
purplebrown_updown t1_ir97rwy wrote
Reply to comment by master3243 in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
I don’t think you read the post above. You should, so that you can stop drinking the Kool-Aid.
purplebrown_updown t1_ir8zsto wrote
Reply to comment by ReginaldIII in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Sounds like more marketing than substance, which DeepMind is known for.
purplebrown_updown t1_je3xwqa wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Question: I’m guessing they want to continuously feed more data to GPT, so how do they avoid using up all their training data? Is this what’s called data leakage?