Submitted by Mogady t3_y7708w in MachineLearning

Hi all, Just want to share my recent experience with you.

I'm an ML engineer with 4 years of experience, mostly in NLP. Recently I needed a remote job, so I applied to company X, which claims they hire the top 3% (no one knows how they got this number).

I applied twice. The first time I passed the coding test but failed the technical interview because I wasn't able to solve 2 questions within 30 minutes (I solved the first one and almost had the second before time was up).

Second trial: I acknowledged my weaknesses and grinded Leetcode for a while (since that's the only thing that matters these days to get a job), and applied again. This time I moved to the technical interview phase directly. Again we chatted a bit (it doesn't matter at all what you say about your experience), and then he gave me a dataset and asked me to reach 96% accuracy within 30 min :D :D. I was only allowed to navigate the docs, not StackOverflow or Google search. I thought this would be about showing my ability to understand the problem and the given data, process it as much as I can, and get a good result quickly.

So I did that iteratively and reached 90% accuracy. Some extra features had NaNs, and I couldn't remember how to handle them in NumPy without searching (because I had already stacked multiple features together in an array), and then time was up. I told him what I would have done if I had more time.
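For reference, the NumPy step I blanked on is short once you remember `np.nanmean` and `np.where` exist -- a sketch with made-up numbers, not the actual interview data:

```python
import numpy as np

# Hypothetical stand-in for features already stacked into one array;
# both columns contain a NaN somewhere.
X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [np.nan, 5.0]])

# Column-wise mean ignoring NaNs, then fill each NaN with its column's mean.
col_means = np.nanmean(X, axis=0)              # [1.5, 4.0]
X_filled = np.where(np.isnan(X), col_means, X)  # broadcasting fills per column
```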

The next day he sent me a rejection email. After I asked for an explanation, he told me: "Successful candidates can make more progress within the time given, as they have experience with pandas and know (or can easily find out) the pandas functions that let them do things quickly (for example, encoding categorical values can be done in one line, and handling missing values can also be done in one line)." (I did it as a separate step because I'm used to having a separate processing function when deploying.)
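For the record, the one-liners he presumably meant look something like this (a sketch on made-up toy data, not the actual interview dataset):

```python
import pandas as pd

# Toy data; not OP's dataset.
df = pd.DataFrame({"color": ["red", "blue", None, "red"],
                   "size": [1.0, None, 3.0, 2.0]})

# One-line categorical encoding (one common reading of the claim):
encoded = pd.get_dummies(df, columns=["color"])

# One-line missing-value handling, imputing with the column mean:
filled = df.fillna({"size": df["size"].mean()})
```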

Why the fuck is my experience measured by how quickly I can remember and use pandas functions without searching for them? I mainly did NLP work for 3 years; I only used pandas and Jupyter to analyze and explore the data before doing the actual work. Why do I need to remember that? So not being able to write one-line code (which is shitty BTW; if you're actually building a project you'd get rid of pandas as much as you can) doesn't mean I'm not good enough to be in the top 3% :D.

I assume at this point the top 1% don't need to code at all, right? They just telepathically commune with the tools and the job does itself.

If all these years of working and building projects from scratch (literally doing all the SWE and ML jobs alone) don't matter because I can't write one-line Jupyter pandas code, then I'm doomed.

And why the fuck is everything about speed these days? Is it a problem with me and I'm really not good enough, or what??




_learning_to_learn t1_issvbdy wrote

I completely understand your frustration, having gone through the same. But look at the positive side: you were saved from a group of people who prioritise memorising docs and single line solutions instead of the approach and conceptual understanding.

There are a few companies/start-ups that aren't experienced with recruitment and make such rookie mistakes. But there are also a lot of great places that actually evaluate your understanding of and approach to a given problem.


artsybashev t1_ist6rs2 wrote

I felt good after getting into one of these companies. I'm still listed there, but I never landed a customer gig because they weren't interested in paying what I'm used to (I can set my own price on their portal). I've had a couple of customer interviews over the 3 years, but I've been more successful finding my own customers.

The interview was definitely not the best experience, but not the worst either. It does measure your knowledge of some of the common tools used in the industry and puts emphasis on the most common ones.

Consulting companies usually want to produce value for the customer as fast as possible without thinking too much about the details. This might push them towards this method of choosing who to hire.


DorianGre t1_isw2dip wrote

I reject one-liners in pull requests, in any language, that combine more than 3 functions. It's not maintainable.


Ulfgardleo t1_iswx9zs wrote


4 function calls in numpy.


DorianGre t1_isxufxh wrote

Don’t be pedantic. You understood what I meant.


Apprehensive-Grade81 t1_isswkoj wrote

Any tips on what companies/startups we should be looking into? I’m new in the space and am still navigating the interview process.


_learning_to_learn t1_issz842 wrote

The first step would be to be clear on what you expect from the company you're applying to. Once you have that clarity, the next step would be to evaluate what a company has to offer you and how it fits into your career plan.

Based on my experience, before applying to start-ups, it's always good to talk to their current and past employees, look at the history of the founders and study the product they are building and their customers.


Appropriate_Ant_4629 t1_ist71um wrote

> talk to their current and past employees

That's brilliant.

From now on I'm going to ask employers for a list of references of past employees I can contact.


cyancynic t1_ist9bp6 wrote

Glassdoor can be useful sometimes


Appropriate_Ant_4629 t1_isth0b4 wrote


But if the employer actually can (and is willing to) provide references of happy past employees, it says a lot about their culture.

They sometimes will ask candidates for references from their previous employers - so it's only fair for them to do the same.


fernandodandrea t1_isuwhpc wrote

I wouldn't want a previous employer giving my contact information like this.


Appropriate_Ant_4629 t1_isvnwbh wrote

> I wouldn't want a previous employer giving my contact information like this.

Which says something about that employer too.

There are some of my previous employers where I'd be happy to be a reference. Others that I wouldn't want to.


TrueBirch t1_iswykvd wrote

It's a common practice to offer departing employees money in exchange for signing an agreement not to criticize the company. My employer does this.


Apprehensive-Grade81 t1_ist21ac wrote

I get that. I have trouble approaching a company with that mindset, though. Often I end up taking whatever they give without any pushback because I am too worried about coming off negatively and hurting my chances during the interview.

I’m sure a lot of this is anxiety/imposter syndrome from entering a new industry, but it’s hard to convince myself of that in the moment.


Ataru074 t1_ist8z4p wrote

But that’s on you, not on them. Imposter syndrome is real and hard to shake off… but keep in mind that large employers are much better at selecting personnel than small companies. So, maybe it’s time to step up the game and go for the big ass corporations.


Apprehensive-Grade81 t1_istbg1b wrote

100% agree with you. I actually came from the startup space and am quite comfortable there, but I’ve been going for larger corporations because I want to make sure I hit a standard that can assess my abilities on the market today.

I didn’t mean to come off as either whiny or as a victim of the interview process. I was just acknowledging that there is a roadblock in doing what I know is needed, but I also know that’s 100% on me. I know it’ll get better with time as well as it’s a growth period for me, which is moving outside of a comfort zone and enduring some painful experiences- but these are the good pains associated with growth.

I appreciate your advice, though. My mentor gives me the same talk, so at least I know I’m in good company (he’s just insanely brilliant, so I’m lucky to have him helping me along as well).


Ataru074 t1_istbyrf wrote

I recommend anyone to go work for a while for the big dogs. That gives you a much better perspective in your professional life.

I mean, if you can get a job in an F100 corp, do it, see if you like it, and then have the peace of mind that if you're good enough for one of the largest corporations on the planet, you're plenty good for pretentious small shit.


rstjohn t1_iswxhvk wrote

Not quite the same thing. People who do well in big companies aren't always cut out for startup work and vice versa.


Ataru074 t1_isx0dq8 wrote

Absolutely true. With a catch... a startup can be a rollercoaster for your self-esteem; a large corp is a gauge.


Jurph t1_isxlxf9 wrote

> prioritise memorising docs and single line solutions

This is actually toxic to long-term best practices for a business whose intellectual property is stored as source code. Source code is for humans to read, so one-lining it into a very clever but obscure invocation is costly in two ways: it costs the writer time & effort to "compress" it, and then it costs every maintainer time & effort to "decompress" it. Five well-commented lines of code with clear variable names are superior -- from both a business case and a security case -- to one line. In most scripting languages those one-liners compile (hand-wave, whatever) to the same bytecode as the five good lines, so there's typically no performance difference.
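A toy illustration (my own contrived example, not from the thread): both versions compute the same thing, but only one is pleasant to maintain.

```python
docs = ["The quick brown fox", "jumps over the lazy dog"]

# Clever one-liner: the three longest unique lowercase words.
top = sorted({w for s in docs for w in s.lower().split()}, key=len)[-3:]

# The same logic as well-named steps a maintainer can read at a glance.
unique_words = set()
for sentence in docs:
    for word in sentence.lower().split():
        unique_words.add(word)
longest_three = sorted(unique_words, key=len)[-3:]
```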

> claims they hire the top 3%

They hire 3% of candidates, so obviously it's the top 3%, and not an arbitrary slice of the candidate pool filtered through their bogus biases, right? I'm a hiring manager, and this interview process sounds like total garbage. I suspect they have no data correlating their interview process with productivity on the job.


Purple_noise_84 t1_isswx9e wrote

And when they find one they lowball them. You dodged a bullet.


kacjugr t1_ists2uf wrote

Exactly this. If they're already KPIing you to death in the interview, it'll only get worse as you progress to offer and employment. Sounds like a nightmare to work there.


Cheap_Meeting t1_ist2j62 wrote

Sometimes places which are not as good are just as difficult to get into as the good places. They are under the illusion that they only hire the best people, but because they don't know what they are doing their hiring criteria are more or less arbitrary.


suricatasuricata t1_istjakr wrote

Yes, I think this is important to realize. Don't confuse the hardness (or easiness) of the interview with how challenging (or not) the actual job would be. After all, it is relatively cheap to make your interview process challenging. It is significantly harder to be an impactful company, hire the right kind of people etc.


Advanced_Ad_3868 t1_istxgia wrote

Many say they hire the best, but the places I've been at that were actually the best? They tend to be pretty humble. Dunning Kruger and all that.


master3243 t1_isv7ezg wrote

People in ML (of all people) should know that when looking at a crappy metric, the top 3 models are probably crappy models that generalize poorly to the real world dataset.


CactusSmackedus t1_isvki2t wrote

Yep. People telling you how exclusive they are, and a shitty interview process (one that seems to amplify noise more than signal) are two really big red flags.


Ocelotofdamage t1_isw67p5 wrote

Yeah, talking about how exclusive you are is a bad look. We just try to talk about how interesting the work is and how exciting our initiatives are, that’s way more convincing for the candidates you want anyway.


wintermute93 t1_isx5ctw wrote

Right. If your company is wasting time and resources interviewing people only to reject 29 out of every 30 candidates, that's not a problem with the candidates, that's a you problem. Either you're advertising the position wrong or you're evaluating candidates wrong or both.


Ocelotofdamage t1_isxag7u wrote

I mean we do reject 29 out of 30 candidates, but that’s just a function of the insane number of applications we have. There’s really not a great way to tell on paper who’s got the skills it takes to succeed in our business unless you talk to them and give them some problems.


babua t1_isvs90p wrote

If your interview is designed to fail 97% of interviewees, it's completely truthful to claim that you only hire the top 3% ... ^of^the^people^who^apply^to^you


SkinnyJoshPeck t1_ist26dr wrote

lol i just got rejected by glassdoor because i didn’t have a “mastery” of pandas.

who fuckin’ cares? i have years of experience with verifiable projects that made multiple companies real cash-fuckin-money. i made the model, i tuned it up, got it working well within the time limit, etc etc.

just because i’m not a fucking pandas wizard doesn’t mean i’m not a competent ML eng/scientist/whatever. i can’t remember meeting a good PM who cared what model i used, let alone whether i used x and y in pandas over z to accomplish my goal.

if the stats are good, the model generalizes well, and training time isn’t abysmal - who cares???


Ularsing t1_isvms80 wrote

I feel compelled to add here that Pandas has an absolutely dogshit API plagued by breaking changes and bastardizations of R code. It's the best package for what it does, but it leaves a lot to be desired. Trying to prioritize Pandas knowledge reads like someone trying to hire based on their omniscient understanding of the field that they gained from their coding bootcamp.


marr75 t1_isxougd wrote

I read a good blog post from a guy talking about how modern IDEs encourage you to learn really weird "motions" (using pycharm's refactor, codegen, and code completion mid-stream, for example). He wasn't saying it was bad per se, just that we should all remember the point isn't to be "good" at the IDE, it's to solve problems with the code.

I feel the same about pandas. If anything, the skill to focus on is vectorizing your operations. That's the biggest readability and performance improvement and it's portable to dplyr, polars, etc.
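A minimal sketch of what I mean by vectorizing, with made-up numbers:

```python
import numpy as np

prices = np.array([10.0, 20.0, 30.0])
quantities = np.array([1, 2, 3])

# Row-by-row loop: easy to write, slow at scale.
total_loop = 0.0
for p, q in zip(prices, quantities):
    total_loop += p * q

# Vectorized: one expression, and NumPy runs the loop in C.
total_vec = float(prices @ quantities)  # 10*1 + 20*2 + 30*3
```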


chief167 t1_istrptp wrote

As someone who sometimes has to hire people, perhaps this is the issue:

Imagine how difficult it is for big companies to get an MLOps framework going, with all the red tape and scattered IT systems. It was very painful where I work. In the end we got something working using a Python platform that really needs you to use pandas- and sklearn-style interfaces.

Let's hypothetically say you are a great data scientist using R, SAS, MATLAB, or whatever. If I don't have a lot of options, I'd hire you and put you on a training program for our framework. But if I have multiple decent candidates and some don't require retraining, yeah, imma gonna pick one of them. I am not spending 2 months trying to get compliance and cybersec to approve your Docker container with R code in it if I can have a similar model in our pre-approved workflow.


SkinnyJoshPeck t1_isu0ui7 wrote

I hear ya; I think the point is less about proficiency and more about mastery -- in my case, I was marked down heavily because I didn't use iloc. Something like

    df[df.col < 10]           # what I wrote
    df[df.iloc[:, 0] < 10]    # what they apparently wanted

because I guess it makes it clearer to the reader and protects the code from explicit column names; the fact that I didn't use it made me seem like I didn't know pandas well.

to your point, though, I see the importance in the infrastructure. In this case, it was for an ml scientist role where I wouldn't actually be doing any of the MLOps, just designing and tuning the models.


phb07jm t1_isudd5v wrote

Can someone please explain why the second is preferable? I would always do the first because it's more likely that the position of a column will change than the name.


silvershadow t1_isurezr wrote

Change the iloc to a loc and then I would maybe see the argument.

.iloc and .loc index explicitly into the original DataFrame, while [] indexing can in some cases return a copy. Pandas makes no promises about what you get.
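A sketch of where that copy-vs-view question actually bites (my own example, not OP's code):

```python
import pandas as pd

df = pd.DataFrame({"col": [5, 15, 8, 20]})

# For reading, these select the same rows:
a = df[df["col"] < 10]
b = df.loc[df["col"] < 10]

# For writing, chained [] indexing may modify a temporary copy
# (pandas can emit SettingWithCopyWarning); .loc is explicit and safe:
df.loc[df["col"] < 10, "col"] = 0
```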

So depending on what the full expression was, the criticism of using [] indexing could make sense. You'd need to see the full context of what OP was writing, though.

From the sounds of what they wrote though, this is not the thinking the interviewer was following.


chief167 t1_isuculx wrote

Ok, yeah, that's stupid, because I'm actually in favour of column names instead of indexes. Indexes are a pain in the ass when your incoming dataframe changes; they create an implicit dependency.

But your last line is my point. You shouldn't be concerned about MLOps stuff, but if your model is already in the right framework, it saves soooo much time.


monkeyunited t1_isufjc7 wrote

That’s dumb and violates the “explicit is better than implicit” rule.


AutumnStar t1_isuy0el wrote

I agree with the gist of your comment, but FYI, model selection matters a lot for many different reasons. You have no idea how many people I've interviewed who just want to use neural nets or XGBoost every time, or who couldn't tell me any advantages/disadvantages of any algorithm.

I tend to look for people who can think critically. That's the hardest skill to find in any DS. They should have some experience and competence in coding, obviously, but realistically almost everything else can be taught more easily.


jzini t1_isvofgm wrote

By Glassdoor? Yo do they realize they are Glassdoor?


ajt9000 t1_ist5cll wrote

Yeah, I generally hate coding exercises in interviews, especially timed ones. It's really weird to me when a company wants to scrutinize your solution to some BS problem that you're given 45 minutes to solve but doesn't give a fuck about your portfolio of past projects.


fromnighttilldawn t1_isvw0vc wrote

That's just capitalism bro.

  1. you always need to prove your capacity to work; doesn't matter if you've been doing this for dozens of years
  2. you are always one life event away from losing all of it
  3. your every living moment is squeezed into turning a profit for random strangers
  4. you put on a fake smile, go to work, and pretend with everybody else that the system is normal or even "not so bad" or "could be worse"

chrysanthemum6359 t1_iswpd7s wrote

What on Earth do bad hiring practices have to do with capitalism? Aside from providing an economic climate where people can actually get jobs...


visarga t1_isyfm7d wrote

I don't think most candidates have a repo to show, maybe just an empty one.


ajt9000 t1_isyibso wrote

I don't publish most of my projects publicly either, but I have a long list of historical projects that I can talk about, and many of them I can produce code for upon request. I think that is quite a bit better for demonstrating ability than any crap that I write in under an hour. Especially when the code exams have stipulations like "you cannot use any code or algorithms that you searched for on the internet".


pornthrowaway42069l t1_issyo30 wrote

I had a similar experience at some big companies.

Bombed the leetcode, but found an opportunity to showcase my (fairly cool) project code during the technical interview. Asking the guy questions, I found he confused feature importance with feature selection, couldn't answer a question about a baseline model (they had a black box without one), and a bunch of other things. When I said "I kind of prepared for pandas + SQL more", he said "We expect you to know those things". I guess they expect me to know how to use pandas and SQL but not Python for crappy leetcode questions.

The truth is, most companies/ML departments have no idea what they want or should be doing. Good luck to that head of the ML team; with such great interview and ML skills, I'm glad I wasn't selected. Bullet dodged.


doodlesandyac t1_isth498 wrote

Yup most companies have no idea what they want or need and have people interviewing you who have biases toward things that don’t matter


serge_mamian t1_ist4n81 wrote

You dodged a bullet. Good companies don’t interview like that.


[deleted] t1_isvrgis wrote

I mean let’s be real this is just a variation of exactly how the major multi-billion dollar tech companies do interviews.

They can get away with it because they’ll pay people $300-500k+ and it’s just a gauntlet you have to get through. Small companies who replicate this concept while paying 1/4 the salary are out of their damn minds though.


serge_mamian t1_iswdbjm wrote

They might ask you Leetcode-style questions, systems design, ML systems design, but in my experience (working for FAANG) nobody asks you to reach 90% accuracy within 30 minutes or to have memorized the pandas one-liner for an operation that OP did in a loop or whatever. The interviewer is a complete moron.


DorianGre t1_isw2z1a wrote

There was a point in my career that I began refusing to do coding interviews.


serge_mamian t1_iswe8y2 wrote

At a certain skill level, yes, that's the way to go. But it takes hard effort to get there. I would certainly walk out of an interview if somebody asked me to pull a pandas one-liner from memory, because unless I'm desperate, it's going to be a shitshow to work for that team.


DorianGre t1_iswfgbn wrote

I’ve been lucky in that at some point people started calling me and offering me jobs, so I haven’t interviewed in forever. It happened again yesterday, in fact. It will happen to you too if you become known for some obscure but useful area of tech.


jargon59 t1_ist4sdf wrote

I don't know where the company is located, but there are a lot of these types of companies in the Bay Area. Many startups have this "I will only hire the best" mentality and run Google-level interviews to weed out most people (mostly to appease their own egos). What they don't understand is that their company cannot pay FAANG compensation, and the candidates who have obviously grinded leetcode/interview prep would probably have no problem getting a FAANG job. When their offer inevitably falls through, they complain that there's not enough talent nowadays.


ghost_agni t1_istcxss wrote

Don't join an organisation that measures your ability to produce results under extreme pressure, because that is how the job will be. Such organisations have a clear strategy: ride the ML hype train, make money, and then dump. The best companies never ask questions that have only 1 right answer; they ask case-study-type questions so they can evaluate how you think.


Wedrux t1_ist8tqt wrote

I've worked for several years now as a data scientist, am now in a technical lead role, and have done some interviews.

It is so unimportant whether you can use pandas in one line or whatever. What matters is whether you understand how your model will be used later, how the approach scales, and how well it generalises.

I would always take a new employee who thought about these aspects over one who blindly trains a model 10 minutes faster.


yourmamaman t1_istbg9n wrote

I also hire DSs and I do the same as you.

My assumption is that these types of interviews were designed by consultancy companies for companies that don't have experienced DSs of their own.


Dihedralman t1_iswmh38 wrote

The kind of consultancies that recommend lines of code as a productivity measurement.


yourmamaman t1_iswqhqx wrote

This is where the statement "Something is better than nothing" fails.


marr75 t1_isxnz3w wrote

I think they're designed by very traditional engineering managers. The coding-test trend gained popularity thanks to Jeff Atwood, because he used it as an early screen for people applying for lucrative jobs they didn't actually know how to do (which is useful!). Managers then used it as a higher and higher floor for skills, and we got the leetcode style (a fresh bootcamper might pass FizzBuzz, but they're unlikely to have months to grind leetcode). We've also seen an explosion in roles that write code but are more responsible for the wisdom and value of their creations (a spec and visual design aren't enough, or even relevant, for a model, a Lagrangian relaxation, a recommendation engine, etc.).

Real conversation I had with another executive: "Hey, we've got that coding screener for engineers, can we whip up something similar for [name a role]?" You start combining these different forces -- a desire for selectivity, a desire to lower hiring cost, more complex technology roles that have to chart some of their own spec, and plain human laziness -- and you get what OP described.


[deleted] t1_istl15k wrote



OnyxPhoenix t1_iswwphw wrote

Looking for work at the minute, and Squid Game is honestly what it feels like.

4- and 5-stage interview processes where one little slip-up and you're out.


pcgamerwannabe t1_istbwn0 wrote

Imagine the codebase dude. Fuck working there


vikigenius t1_ist4gxv wrote

I rarely ever interview for big companies these days for the exact same reason. They are so rigidly stuck in their outdated interviewing process that it's not for me. I have a well-paying job that I love already. Who has time to grind leetcode and brush up on a bunch of algorithms that I have no interest in and will never use in my everyday work?

At least some of the new up-and-coming startups seem to have a much more interesting interview process. I know some people hate them, but small take-home assessments are the best, especially when it's an interesting problem that doesn't take more than 3-4 hours to solve.


suricatasuricata t1_istj1ms wrote

> I rarely ever interview for big companies these days for the exact same reason.

I don't know of a big company (as in one of the elites) that would conduct an interview like what OP experienced. They might ask you leetcode questions, but no one is going to ask you to memorize pandas. Between the two, I will take understanding data structures over memorizing pandas any day.


vikigenius t1_istm2qo wrote

I was focusing more on the Leetcode side of things. The memorization thing was obviously worse and I don't know of any big company that does it either.

I am an NLP Researcher with good research experience. Leetcode is not going to be helpful for me at all. Sure I can take some time and grind leetcode for a month. I used to do competitive programming back when I was in college so it shouldn't be a problem. But I have a full time job and a life. So it just feels like a waste of time for me.


suricatasuricata t1_istn29a wrote

I completely and totally sympathize with your issues with Leetcode. As someone who never enjoyed competitive programming, I think it is especially frustrating when I realize that it has become the default standard for hiring filters. The way I see it, choosing not to go down that route, seems to at this stage restrict one to very few companies.

In fact, there are way more companies who are finding it easier to set up an automatic hackerrank filter, which invariably involves a competitive programming question.

If you have a PhD, have relevant publications and you can apply for a research role, I suppose you can avoid it, at least at the Big companies.

But for Engineers, IDK, I know folks who are at the Principal/Staff level at G/FB/Amazon and even they have talked about having to undergo at least one Leetcode filter. And to be clear, I am talking about Machine Learning Engineers and not regular Software Engineers.


fromnighttilldawn t1_isvxdee wrote

My friend just went through an interview where they were asked about some pandas operator. And it was one of those big companies.

I do believe that many ML interviewers are mentally insane.


tech_ml_an_co t1_istpcr1 wrote

Ohh, CS interview processes are so broken, and ML is no different. I really don't know why that happened. I just recently had a leetcode interview as a lead ML engineer. I mean, seriously, I can guarantee that I was able to solve that when I finished my degree 10 years ago. But today why should I invest my free time into leetcode, instead of learning something useful?


madbadanddangerous t1_isutti6 wrote

> But today why should I invest my free time into leetcode, instead of learning something useful?

This is it right here, for any CS job. Grinding leetcode and testing interviewees for that is a waste of time for everyone. Certainly at the lead position, but even for IC roles


fromnighttilldawn t1_isvwn52 wrote

>I really don't know why that happened.

Once upon a time there was a Google engineer who wrote a book called "Cracking the Coding Interview", and the rest is just layers upon layers of BS piled on top of that book's dogma, until red-black trees and divide-and-conquer are no different from relics you'd find in a cult.

>But today why should I invest my free time into leetcode, instead of learning something useful?

If it is not useful for creating profits, then it is not useful - logic of capitalism 101.


suricatasuricata t1_istl32a wrote

> I was only allowed to navigate the docs, not StackOverflow or Google search. I thought this would be about showing my ability to understand the problem and the given data, process it as much as I can, and get a good result quickly.

To me this is a sign that they are overfitting to a specific type of candidate. And simply put, their interview process is not robust. If the underlying intent is to find a candidate who has deeper insight into their tools, as opposed to what can be gained by copy-pasting blocks of code, you can probe that easily without this contrived approach: ask them how their tools work, have them contrast one option with another, and so on.

IMO, you shouldn't overfit to this experience either. I mean, if we allow for the possibility that an engineer can be incompetent/crappy, we should allow for the possibility that a manager (I'd bet the hiring manager has about the same ~4 years of experience managing) can also be below average.


emerging-tech-reader t1_iswhsrm wrote

> To me this is a sign that they are overfitting to a specific type of candidate.

In banking/finance jobs it is quite common to not even have access beyond what is supplied internally as documentation.

This is what that suggests to me.


suricatasuricata t1_iswjri1 wrote

Are such jobs remote? I have previously worked defense gigs where you're forced to work on air-gapped computers. Seems to me that entire setup relies on being physically present.


ToMyFutureSelves t1_istc8qa wrote

The truth is that companies have almost no clue how to differentiate high-performing hires from low-performing ones. There are a ton of 'tests' that claim to select the top X% of developers, but there is little evidence that their metrics correlate with success. (The same way a top-3% SAT score doesn't strongly correlate with high performance in college/industry.)

These tests ARE good as rough measures of capability, though, and should be used to weed out the wildly unqualified. For example, an SAT score of 1000 is way different from 2000.

But then again, if you don't use a test to determine quality, you have to fallback on personal judgement, and that doesn't work any better.


StartNo5083 t1_iste355 wrote

One time I was doing an in-person SQL exam, whiteboard style. I wrote all the queries perfectly, except in one of them I used a WHERE clause instead of HAVING. They rejected me for that. Like, wtf, in the real world it'd throw an error and I'd fix it in two seconds. Who codes perfectly straight through 100% of the time? These technical exams don't emulate the real work environment.


cyancynic t1_ist9535 wrote

So, Toptal. Don't take it hard; they are assholes and unreasonable hazers in interviews. Find a new company. I've been on both sides at that company. Their people are all leetcode wizards but shit engineers, IME.


Mogady OP t1_ist9vhe wrote

I wish they had asked me PS, I was ready for that, but anyway I still can't see what I did wrong, and this is the second time I've gotten this irritating email from them telling me I'm not good enough to be in the top 3%.


cyancynic t1_iste5da wrote

Go read the Glassdoor interview experiences. And go write one on them. They’ve earned it.


purplebrown_updown t1_istfdxj wrote

That’s so stupid, and it's why interviewing is such a pain. Nobody gets only 30 min to solve a problem. Also, if you don’t expect your employees to pick up new skills or learn, you’re doing it wrong. I don’t go into a new job expecting to do the exact same thing I’ve been doing for the past five years. I would never take a job like that. That’s boring.


abnormal_human t1_istxc9l wrote

As a person who hires people, these interviews sound like bullshit and you shouldn't work there or feel bad about this.

My experience hiring DS/ML people is that technical skill is rarely the problem. At this point, 80% of what I am measuring is whether you are product-oriented enough to deliver something without a ton of hand holding. When the interview process is too technical-focused, you end up hiring a bunch of hermits who fail because of communication/collaboration problems or take too narrow a view on the product side.


anonamen t1_isu3alo wrote

Eh, if I'm reading between the lines correctly, this experience is an artifact of their business model. It's a ludicrous hiring process in part because they need to claim to hire the top-3% for marketing purposes, which means they need to reject a ton of people, which means they need to come up with some way to screen out a lot of applicants quickly. They found one.

It's also a consultancy, so they really, really care about speed. They don't care about coding quality, getting rid of pandas, etc. They care about producing something they can bill for as quickly as possible, so you can move on and produce something else they can bill for as quickly as possible.

Re: top-3%. Joel Spolsky's article on this kind of bullshit metric is great. Short version: a ton of companies can credibly claim to hire the top-3% of applicants, because people apply to a lot of jobs, and because the worst people apply for a whole lot of jobs. The same people are in the denominator for all of the companies hiring the top-N%.

Are they hiring the top-3% of people? Of course not. There's no objective metric for that, and we know where the top-3% of data scientists work (roughly speaking) and it's not company X. Company X is just rejecting a large number of people relative to the number they hire. Universities do this too. They'll deliberately encourage huge numbers of people to apply that they know will be rejected solely to push down their acceptance rate and make them appear to be more competitive, which they hope will convince people that they're high-quality. Complete red-herring though. The quality of the people hired isn't related to N hired / N applied. It's about the applicant pool and the selection process. But it's a nice tricky metric for a consulting firm to throw around.


MaNewt t1_ist4mf0 wrote

They don't sound like a healthy group to work for to be honest.


LegacyAngel t1_istcze3 wrote

Apply to higher tier places is the only thing I found works. I did an interview at MAANG where the interviewer had me qualitatively derive knowledge distillation from first principles (qualitative, no code or math on board). I didn't even realize what he was doing until he was asking about edge computing. Really fun. It honestly sucks that finding the good places is almost harder than finding good candidates.


hrpomrx t1_istsc4n wrote

These interviews are a joke. You need to be like one of the characters in Moonwalking with Einstein to pass them these days.

Quality and doing things right the first time are far more important than speed. If "quality" means you have to look things up and refer to documentation, then why not? The implementation may have changed since the speedy person last memorized it.


theRealGrahamDorsey t1_isv1521 wrote

I can't for the life of me understand why us muggles participate in this shit.

  • having python muscle memory and doing stupid tasks quickly is not equivalent to problem-solving potential, for the same reason that being great at a spelling bee does not equate to being the next Hemingway.

  • There is no such thing as solving a real-world problem in under 30 mins. Absolutely dead stupid idea. Expecting candidates to hit 96%+ in under 30 min is a clear sign the employer exists totally in their own bubble. No matter how economically successful they are, or how successful they promise to make you, they are not worth it.

  • Also, none of us here are in THE 3% group. The folks in that group, you probably know them by name. They are usually dead people who publish papers from the grave and linger in old science and math textbooks. Even if you were among the 3%, I bet you couldn't score that high on such a stupid assessment consistently.

  • AND NO, this is not a learning experience. You just wasted your time and effort playing some fool's game. TWICE!!!!

We are all better than this; let's stop wasting our lives and talent leetcoding and getting banged by a bunch of total ass wipes.


Prestigious_Army5547 t1_iswl3lk wrote

Just lazy-ass interviewers who like to throw the same question at everyone instead of actually trying to gauge technical skill by talking with candidates about their past projects. Asking good, valuable questions about a resume means the interviewer needs to do some research on those past projects before the interview, which companies don't like to do.


Brudaks t1_issxwcr wrote

If a company has more qualified applicants than open positions, then obviously simply being qualified cannot guarantee getting the job; and while the criteria for filling out the shortlist might be the qualifications, the criteria by which they choose the actual candidate can be quite arbitrary.

From their perspective, as long as the other guy/gal they got instead of you is also decent, their process has no problems.


Mogady OP t1_istb0r4 wrote

I understand this, but at that point they simply reply "we found more suitable candidates", not "you failed to one-line pandas".


demi12395 t1_istap94 wrote

I guess what they really need is GitHub Copilot to write "quick" code, not an experienced engineer who can strategically solve problems.


DeepGamingAI t1_it2ia8u wrote

>what they really need is github copilot

This would be such a cool reply email to a company asking for a coding interview round. Just send a link to copilot and tell the company this should fit your job profile better than I would.


bubushkinator t1_istjdce wrote

All my FAANGMULA MLE interviews were just leetcode with ML System Design and domain knowledge trivia

Super easy

Don't apply to shit companies and you won't get shit interviews

Even Quant interviews just ask me theoretical math - none of this kaggle shit


Environmental-Tea364 t1_isv5bi9 wrote

Do you think the LC questions for MLE are the same difficulty as in an SWE interview?


bubushkinator t1_isvh6xp wrote

Depends on the company more than anything

Expect Hards for high paying companies (Finance, Uber, Netflix, Brex, Rippling, etc)


__methodd__ t1_isug8s9 wrote

ML interviews are all over the place unfortunately. With LC at least there are common standards, but even then there's a luck element because different interviewers have different ideas for what is a "complete" answer. God help you if you're a python guy/girl getting interviewed by a C++ nerd.

For ML it's worse, because there are so many different workflows within ML. sklearn, pandas, and tensorflow are common, but most companies have a unique combo of tools for production ML. Playing around with a toy dataset and vanilla sklearn is not a great test when, unlike LC, the rules of the game aren't established ahead of time. Most companies with tests like this don't have standardized processes, so the recruiter doesn't know what to tell you to prepare.

On top of that there are so many different backgrounds in ML from pure math to stats to traditional engineering and of course CS. ML detail interviews are trivia contests, and ML breadth interviews test your ability for recalling buzzwords. Design interviews suffer the same limitations.

The good news is that you get used to this crapshoot and can eventually start passing interviews. There are, at the end of the day, only so many algorithms and pieces of trivia a reasonable person can throw at you.


htrp t1_isz52k9 wrote

> ML interviews are all over the place unfortunately. With LC at least there are common standards, but even then there's a luck element because different interviewers have different ideas for what is a "complete" answer. God help you if you're a python guy/girl getting interviewed by a C++ nerd.

If your shop is doing ML work in C++ god help you. If your interviewer is interviewing you in C++ without knowing ML .... god help the company.


__methodd__ t1_iszecri wrote

You might not have gone thru big tech interviews before. A lot of companies have the same coding standards for MLE as SWE, so interviews are conducted by SWEs across functions. That will be 1 or 2 coding interviews out of 5 or 6 total.

Every company will allow you to interview in Python, but the point is that a C++ nerd might gatekeep. Maybe the interviewer thinks the hash function in your python dict is really critical for a deep dive. "This noob doesn't understand basic data structures."

I got failed on a coding interview when I used a dict to represent some data, and I could have used an array/list instead. (The only reason I know about that is because the recruiter saw the feedback and saved my ass from oblivion and gave me another shot.)

But yes I agree with the sentiment of God help this company. God help the candidate too lol. It's super super common. Unfortunately smaller companies have even less standardization around their hiring pipelines, so they're even more annoying.


RockyMcNuts t1_isuud8b wrote

don't sweat it too much, rejection happens, I know it doesn't feel good.

for pandas this is pretty good

for the data science interview this guy is OK -

sounds to me like you're just a little rusty on plain-vanilla ML. every day for a week or two, grab a free dataset from a site and try to model it in 1 hour

use pandas-profiling and seaborn pairplot for EDA, or try the other EDA tools out there

use an automated hyperparameter optimization routine for e.g. XGBoost with Optuna or Hyperopt

you'll crush it!

it's not a bad skill to have, being able to do quick-and-dirty EDA and basic ML or AutoML on a dataset for a good baseline. I prefer that sort of interview because it's directly related to the skills you use on the job. it's not a crazy time-consuming take-home, and it's not one of those really open-ended questions like "how do you build Google Maps from scratch" where they're looking for some very specific concepts and if you miss them you're SOL.


Mogady OP t1_isvk1jz wrote

thanks for the resource, but maybe I didn't show it properly in the post: I can do all of that and I know about it :D It's not that I'm rusty; it's just that when I do all this EDA, plotting, and experimenting, I don't pay attention to every line I write so that I can recall it later without searching. Even if I had been working on these kinds of problems recently, I would still search how to remove NaN rows from a NumPy array for the millionth time and copy the same one-line code. This is simply how I work: I understand NumPy and I know which functions I need to use; I just don't spend time memorizing all the details.


Mogady OP t1_isvkmba wrote

and I was able to do many things in the interview (dealing with categorical, string, and numerical features, organizing the features as an array, applying the models, testing them, and getting a score). I could have done more, but you simply can't recall everything. I use HuggingFace literally every day and I have hacked it multiple times to suit my needs, but I still can't remember how to import the LM head, or how to access the attention layers, without searching.


RockyMcNuts t1_isvwh2c wrote

yeah I hear you ... I've been rejected for stupid shit many times

I applied to a similar platform, maybe the same one. the first question was to do linear regression just with linear algebra; I couldn't remember the details to save my life and got maybe 20% there. the second question was: here's a random dataset, do the EDA and model it. I crushed that one with 10 minutes to spare, and they said almost no one finishes it. that's because a bank had asked me similar stuff, I'd been a little shaky, and I'd practiced every day for a week or two afterwards. the bank also asked me some bullshit dynamic-programming leetcode stuff that I hand-waved through, and that's probably why they rejected me. it was pretty silly.

the tough love: interviewing is ALWAYS a signaling problem that doesn't line up perfectly with the job. the onus is on you to solve for the test. maybe it's an arbitrary test, but if you hack it, you show you have the desire, focus, and ability to get something arbitrary done. the good news is, if the top candidates know dropna() off the top of their head, then after a week of practice, so will you. there's a time to vent a little, and then there's the time to do da 'ting dat de doctor ordered.


Advanced_Ad_3868 t1_istwv3e wrote

I can barely remember any pandas syntax. It's such a poorly designed API, why would I try to commit it to memory? That might just mess up your intuitive sense for what good API design looks like.

I have 20 YOE, have taken state of the art models from research to production. I can write code in more than a dozen languages.

And yet I use Google and stack overflow constantly because I value my time.

Don't worry op, they are not hiring the best. They are just hiring people that have never programmed outside of the world of pandas.


cajmorgans t1_iswqq7b wrote

Why are you taking this personally when it’s obviously a really bad place to be? You didn’t even learn after the first rejection.


Different_Fig4002 t1_istqbp4 wrote

OP, I understand your frustrations. I was there too at one point. I finally had an interview for an internship this summer where the manager was going to test my tech skills. He started off by saying he doesn't like doing leetcode, since it doesn't test actual understanding. He instead gave me a problem and asked me to solve it any way I saw fit. I was free to use Google or anything else I needed. I actually enjoyed the experience, as we solved the problem together rather than testing my memory. For every 10 companies that care about memorizing crap, there is 1 company that will look for the scientist in you and hire you because you are what they need.


willIEverGraduate t1_isuqm28 wrote

No recruitment process is perfect. It's possible they missed out on a great candidate by rejecting you.

But they offered you honest feedback, and personally I would be happy to receive it. It's up to you whether you consider it worth it to work on the things they pointed out.


MathematicianPT t1_isv5dh7 wrote

This post screams Toptal, a very toxic and inhumane recruiting process!


Kitchen_Tower2800 t1_isvk85u wrote

Lolz at dinging you for not doing 15 different things in one line. That's like getting penalized for including unit tests.
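For what it's worth, the one-liners the interviewer apparently had in mind do exist; here is a sketch on a made-up frame (column names and fill strategy are hypothetical, not from the interview):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "color": ["red", "blue", None, "red"],   # categorical with a missing value
    "size": [1.0, np.nan, 3.0, 4.0],         # numeric with a missing value
})

# Encoding categorical values "in one line":
encoded = pd.get_dummies(df, columns=["color"])

# Handling missing values "in one line":
filled = df.fillna({"size": df["size"].mean(), "color": "unknown"})
print(filled.isna().sum().sum())  # 0 missing values left
```

Trivial to look up, which is rather the point of the thread.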


PM_ME_A_ROAST t1_isw7mef wrote

let me guess.....toptal?


htrp t1_isz4sz3 wrote

> Toptal - Hire Freelance Talent from the Top 3%

Sounds less like a company and more like a body shop


emerging-tech-reader t1_iswgvv3 wrote

Devil's advocate here, as I have conducted numerous interviews where we had to ask technical questions.

None of the interviews I gave treated the technical question as pass/fail.

The purpose is to gauge the level of skills the applicant has.

Even experts will look at Stack Overflow, but how the applicant approaches the question tells you more than whether they are right or wrong.

Someone who has been working with a language/library for a long time building models would know the most common methods/syntax.

So if an applicant claimed they were an expert at pandas, then not knowing those commands would work against them.

The fact that they gave applicants access to the documentation means they were expecting people of different skill levels.

I would also recommend being wary about mentioning that you grab code from Stack Overflow in an interview. Some job roles require compliance regarding code provenance; saying you pull stuff from SO could disqualify you immediately.


My point is: just because there is a technical question, don't assume it's a BS interview, or that you will fail just because you don't know the answer straight away.


Gere1 t1_iswqj2w wrote

I could chime in on the "these big, wealthy companies are all stupid, they don't appreciate brilliant work and you should be glad they didn't hire you" take, but then you might experience a similar situation again next time. Your choice is to acknowledge another weakness or to repeat the situation. Here is an alternative view.
I do find that 30 min is far too short to get a noiseless evaluation of candidates. Little things can trip up even the most experienced coder, and the pressure makes things worse.
However, Leetcode exercises coding, but not ML coding in particular. If you had been interested in practicing on Kaggle, you would undoubtedly have known the pandas shortcuts. They correctly concluded that you have coding experience, but as a specialist in NLP rather than as an all-rounder for ML. They may even have thought that you hadn't shown interest in ML beyond your assigned work.
The attitude of doing "projects from scratch" and finding pandas "shitty" hints that you may be over-engineering with opinionated code, and no company wants to pay coders who waste time and are opinionated about their work. Maybe the interviewer sensed that. Companies don't necessarily give all the reasons they rejected someone.
You may find all of this BS, but if you don't consider it even the slightest bit, that confirms what I wrote, and the next interviewer may sense it again.


whata_wonderful_day t1_istvc0k wrote

Yeah that sucks. On that note, I'm hiring! Feel free to dm me. Roles are remote


NumberChiffre t1_isu020d wrote

Seems you should definitely spend more time on either pandas or NumPy. You surely know both, but knowing one off the top of your head shows you're using it on a daily basis. Then again, I've seen quite a few firms interview like this, and most of the time those firms are boring as hell (not as good as they advertise).


DigThatData t1_isu432w wrote

it sounds like this was probably a job you wouldn't have wanted anyway. I'm sorry you had such a frustrating experience, but I strongly suspect you dodged a bullet here


klop2031 t1_isuevhg wrote

100% agree. For much of it, you study completely useless things.


VirtualHat t1_isullw4 wrote

Sometimes I get the feeling that people's stated reasons for rejecting a candidate don't align with the real reason. It could be as simple as "we're already hiring a friend of one of our co-workers", but rather than tell you that, they make up a reason that is (legally) defensible but obviously not correct.

This happens a bit in certain companies where an internal promotion has already been decided on, but for "fairness" they need to interview external applicants just to reject them.


merlinsbeers t1_isuqck3 wrote

These people are idiots who are evolving themselves into a corner.


Ok_Assistance_2364 t1_isv3vls wrote

I work at a big tech firm in Europe that attracts top ML and DS people, and we definitely don't use this hiring process. Technique is important, but we don't do a code test, only discussion, business cases, and culture fit. We also base our answer on at least 6 interviewers deciding democratically. So if I were you, I'd be happy that you got rejected from this company. Sounds like the kind of job where you'd be a slave.


Vladz0r t1_isv4psk wrote

So what I'm getting out of this rant is... get good at Pandas to gimmick your way through these dumb interviews. Got it 👍


Humble_Article_4386 t1_isv5u46 wrote

What company was it ? You should expose them anonymously imo


Seankala t1_isv7m8s wrote

Lol are you by any chance in Asia? As an Asian who was raised in the US but is living and working in his "mother country," everything about this post screams "Asia" to me.


giogit t1_isv9k97 wrote

That reminds me of the Chinese room thought experiment. They probably don't realise they are looking for a lookup table and not a living ML expert!


jzini t1_isvo3bx wrote

This is ridiculous. Your ability to solve a real problem with limited information is far more important. Unfortunately, big tech is looking for compliance as well as intellect. It's a blunt instrument that works not because it predicts ability, but because it predicts the subset of conformity within ability.


PaamayimNekudotayim t1_isw522v wrote

Everything is about speed these days because ultimately this sort of work will be automated, so you (and everyone else) are competing with an imaginary robot.


ffuffle t1_isw56y2 wrote

Interviews go both ways; in this case it saved you from working in what was definitely a toxic environment you would have hated.


inspired_loser t1_isw8oja wrote

If you don't mind, can you please tell me what were the questions in the coding test? Might help me prepare better !


EmperorOfCanada t1_iswda2a wrote

In any area of programming there will be specialized knowledge required. Often within that same area different companies will have chosen different approaches.

Thus a stupid set of interviewers will start throwing out questions that they themselves could not have answered even months before they gave the interview.

For example, dealing with huge amounts of sensor data requires all kinds of time-series expertise. It can be picked up very quickly, especially in an organization that has already stumbled down all the dead ends. There are also many different time-series databases that are fairly different from one another.

In the above example you could have two different companies, each with successful sensor data solutions, interview the other group and reject them because they used a "different" approach.

The reality of a great programmer is not what they know, but their ability to learn what they need to solve the problem in front of them right now.

As I have said many times, I could take any language I know well and write up a programming quiz that I would fail miserably. Just take C++ keywords. Give this quiz to an "expert": what is the keyword compl used for? You could yell in their face about how they don't have encyclopedic knowledge.

If you talk to someone who makes games in C++, they will see the world differently than someone who makes safety-critical embedded systems. Their use of the same language will be almost as if they are using two different languages.

You dodged a bullet avoiding a company where academic knowledge is more important than being a good programmer.


lehmanmafia t1_iswx0bd wrote

Just practice vectorization. Take this as an example: set all 0 values to 1. You could do it with a for loop like

    for i in range(len(arr)):
        if arr[i] == 0:
            arr[i] = 1

But using vectorization you can easily do

    arr[arr == 0] = 1

This is probably why you're having trouble finishing on time: the loop version just requires way more code. On top of that, the vectorized version runs much faster in practice, since the work happens in NumPy's C loops rather than in the Python interpreter. This is especially useful in CV, since you're dealing with images, which are something like 600×600×3 arrays. Without vectorization, making adjustments to a dataset of like 10,000 images would take forever.


TrueBirch t1_iswyryu wrote

This is really frustrating to read. When I interview people, the code test is straightforward and passing isn't even necessary. The best intern I ever had failed the code exam and sent me a detailed email after our interview explaining where he went wrong. His approach to problem solving and his determination mattered more to me.


ratulotron t1_isx5230 wrote

You definitely dodged a bullet here, I can assure you. The cluster of workplace behaviors that goes with this attitude, namely that your code AND your thoughts should fall only into their specific pattern, would have chipped away at the love you have for your work little by little.

I can tell you right away that this practice of "memorizing" code comes from traditional software development. In my short career I've had a lot of interviews where I was supposed to be a walking, breathing data store instead of someone who loves implementing good solutions. Recalling something from muscle memory is definitely impressive, but not having it memorized doesn't make your solution any worse than anyone else's.


give_me_the_truth t1_isx6x1l wrote

Just to help me understand how one achieves good accuracy quickly: at a very high level, what were the basic steps you followed to reach 90% accuracy?


Mogady OP t1_isxaoym wrote

The data itself was linear and the relation was obvious (at least for one of the binary classes, either the 0 class or the 1), so I simply kept adding more features to the classifier and the score kept climbing. Of course, this doesn't give any info about the precision and recall for each class (0, 1).
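A hypothetical reconstruction of that loop, on a stand-in dataset rather than the interview data, including the per-class precision/recall check that plain accuracy hides:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep adding features and watch accuracy climb.
for n in (2, 5, 10, X.shape[1]):
    clf = LogisticRegression(max_iter=5000).fit(X_tr[:, :n], y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te[:, :n]))
    print(f"{n} features: acc={acc:.3f}")

# Accuracy alone says nothing about per-class performance:
print(classification_report(y_te, clf.predict(X_te)))
```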


Tsadkiel t1_isxd5gf wrote

Sounds like they just got someone to get 90% accuracy on their data set for free :/

Don't do this shit. Refuse and walk out if part of your interview means doing your job without pay.


askfordreams t1_isxdqne wrote

I'm a mess in coding interviews, especially timed ones. No matter the interview, if it's timed, my brain just stops and fails under pressure. But a TIMED interview where I have to build a model to reach x% accuracy with only the pandas docs? Who can even do this?


arcandor t1_isxefvm wrote

Hiring is broken. You dodged a bullet, that company's filter is ineffective and actively rejecting potential good candidates.


Gibikis t1_iuh7ls9 wrote

Recently, during my ML interview, I got questions about tail recursion in Python and its performance. The recruiters also asked me about stack behaviour and the exact data structures that stand behind Python's built-in structures. Are those normal questions? Do I really need to know such things?
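For what it's worth on the tail-recursion part: CPython performs no tail-call optimization at all, so "tail recursion performance" in Python is largely a trick question. A quick demonstration (the function and limits are illustrative):

```python
import sys

def count_down(n):
    if n == 0:
        return "done"
    return count_down(n - 1)  # tail position, but CPython still keeps every frame

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(150)
try:
    count_down(1000)          # depth 1000 exceeds the limit
    result = "tail calls optimized"
except RecursionError:
    result = "no tail-call optimization in CPython"
finally:
    sys.setrecursionlimit(old_limit)
print(result)
```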


Azmisov t1_istdwdb wrote

Well, it is the top 3%. Out of 100 candidates, 3 of them are able to find a solution, do it quickly, and don't need to search the web to figure out how to do things. I understand it seems unrealistic, but just think about it: you're competing for 3 spots out of 100... I'd say they're justified in nitpicking, since 97% of data scientists are going to be just like you. Seems like the only thing you need in order to pass is some practice looking things up in the NumPy/pandas/etc. documentation instead of relying as much on Stack Overflow or plain web searching. Honestly, I know this sounds harsh, but I'd be a little worried if a candidate couldn't figure out how to filter NaNs without searching the web.
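For reference, the NaN filtering under discussion really is a one-liner in both libraries; here is a sketch on a toy array:

```python
import numpy as np
import pandas as pd

a = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, 5.0]])

# NumPy: keep only the rows containing no NaN
clean_np = a[~np.isnan(a).any(axis=1)]

# pandas: the equivalent one-liner
clean_pd = pd.DataFrame(a).dropna()

print(clean_np.shape, clean_pd.shape)  # (2, 2) (2, 2)
```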


Mogady OP t1_isthauc wrote

No man, it doesn't work like that. Yes, you might be worried if that were the only thing you'd asked about, but the NaNs part came late, when I had already used almost all the features; the last two had them and I had 5 minutes left. I can do it easily with pandas, but the NumPy version is a little complex: a[~np.isnan(a).any(axis=1), :]. Also, when you say "97% like you", what does that mean? Let's say this month you worked with tabular data and next month you work on a CV project. Are you expected to remember all the syntax of OpenCV, pandas, NumPy, and sklearn at that point?


Azmisov t1_istr5kb wrote

You have 100 data scientists who all have a degree, career experience, and maybe a few cool projects under their belt. If you can only pick three of them, why not pick the ones who can solve a problem faster and show a little more skill in coding? Tough, but if there's a lot of competition for a job, that's just how it goes. Also, you said you were allowed to use the documentation, which I think is pretty reasonable, so you don't have to have the entire API memorized.


nogear t1_iswuyiq wrote

Probably depends on the role. When we are looking for data scientists we usually interview for excellence in the "scientific" part. What do I care how well an applicant has memorized the awful pandas API? At least for us, that doesn't make the difference. Scientific creativity and persistence, as well as understanding of model requirements and good judgement, are what we are looking for.


scraper01 t1_isu5912 wrote

Recruiters are homocorporate morons who are expected to spec out candidates based on criteria that is more often than not blatant reification. These people expected you to behave like a machine, so who cares.


Even-Adeptness-3749 t1_istym30 wrote

Statistically 90% of companies hire only top 3% of developers. Those are the facts. Just ask around.


anon011358 t1_ists729 wrote

You didn’t know absolute basic pandas/numpy and you are complaining?


CommunismDoesntWork t1_ist73vi wrote

>I mainly did NLP work for 3 years

Did you apply to an NLP job? Machine learning skills aren't transferable. "ML engineers" don't exist. You can be an NLP engineer, a computer vision engineer, a data scientist (they work with tabular data and probably use pandas), and I'm sure you can be whatever the stock market guys call themselves. But you absolutely can't be all of them just because you know one of them.

However, I'm positive it wouldn't take long for them to train you on their domain if all they're doing is pandas. So it's weird they're being selective.


Mogady OP t1_istab29 wrote

they are a recruitment platform; that's the point, actually. They never asked me a specific question related to my experience, just random questions all over the place.


Azmisov t1_istem0z wrote

Recruitment platforms tend to favor generalists, as a lot of contracts are one-off and require you to do everything yourself. Also they might get one NLP job every couple months compared to 50 more traditional data science, so having someone who only specializes in NLP is not great for them.


Mogady OP t1_isti0cl wrote

who said I only specialize in NLP? Yes, that is mainly my experience, but I didn't fail to show how to apply a classifier to a traditional dataset. There is a difference between failing to show the ability to do something and failing to do it 100% correctly within the given time frame. Also, they could have simply ignored my resume.


LegacyAngel t1_istd9uv wrote

> But you absolutely can't be all of them just because you know one of them.

But you can definitely pick up at least intermediate knowledge given enough runway. I agree you shouldn't interview for a CV job when you're an NLP person, but given what most places actually do, I don't think it's such a hard barrier to expect someone to pick up what they need within 6 months. I'm sure a CV engineer could pick up tokenization relatively quickly.


[deleted] t1_isstrjt wrote



Mogady OP t1_issupod wrote

why? What's the problem with focusing more on the problem than on the tools? There are tons of tools out there for ML, but some people still insist it's all about pandas and sklearn, so you should excel at them.