nibbels t1_iv5n72g wrote
You need to understand statistics, probability, and multivariate calculus. You can learn all of that without a college course. So pick the courses you think you would need the most help understanding: something you would find difficult to learn on your own.
nibbels t1_irvvnpw wrote
Reply to [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
Not all of these are critiques per se, but they all discuss issues with both the field and current models.
https://arxiv.org/abs/2011.03395
https://openreview.net/forum?id=xNOVfCCvDpM
https://towardsdatascience.com/the-reproducibility-crisis-and-why-its-bad-for-ai-c8179b0f5d38
And then, of course, there are the oft-discussed topics like bias in data, the reliance on expensive equipment, and proprietary data that is closed off to researchers.
nibbels t1_ir0rpxn wrote
You're posting in r/MachineLearning; why would you expect anyone to say yes? That said, low-code tools will probably gain traction when companies think they can use them instead of hiring engineers. But most serious projects won't be done with low-code tools; they will continue to be done by mega-companies and fancy research labs.
nibbels t1_j2817hu wrote
Reply to [D] Interpretability research ideas by nature_and_carnatic
Imo, read work from Been Kim and her affiliates. They study problems with post-hoc XAI methods; a recent-ish paper of theirs shows these methods don't always reveal spurious correlations. You would also probably do well to study flaws in the models themselves (underspecification, spurious correlations, etc.).
You can also look into "inherently" interpretable models: models that, by construction, lend themselves to their own explanations. Attention models are an example, and Hinton's new "forward-forward" method seems more intrinsically interpretable. Disclaimer: attention weights have their own issues and are not always faithful explanations.
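To make the attention-as-explanation idea concrete, here is a minimal sketch (assuming PyTorch; the dimensions and variable names are mine, purely for illustration) of reading the attention matrix out of a single self-attention step:

```python
# Minimal sketch (assumes PyTorch): inspecting attention weights as a rough,
# built-in "explanation" of which inputs a model attends to. All names and
# sizes here are hypothetical, not from any specific paper or library.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

seq_len, d_model = 5, 8                # 5 "tokens", 8-dim embeddings
x = torch.randn(seq_len, d_model)      # stand-in token embeddings

# Single-head self-attention, written out so the weights are easy to grab.
W_q = torch.randn(d_model, d_model)
W_k = torch.randn(d_model, d_model)
W_v = torch.randn(d_model, d_model)

q, k, v = x @ W_q, x @ W_k, x @ W_v
scores = q @ k.T / d_model ** 0.5      # (seq_len, seq_len) similarities
attn = F.softmax(scores, dim=-1)       # each row sums to 1 over input tokens
out = attn @ v

# attn[i, j] says how much token i "looked at" token j; reading these rows
# is the usual attention-as-explanation move -- with the caveat above that
# attention weights are not always faithful explanations.
print(attn.round(decimals=2))
```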
If you're thinking of developing your own XAI methods, I should warn you: the field is flooded with "new" methods that are basically just tweaks on existing ones. If you do want to work on new methods, I recommend first getting a very good sense of where current methods fail (search Kim's work, and if you want more specifics, I can provide links), then testing yours in a very robust and rigorous way.
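As one concrete example of the kind of rigorous testing I mean, here is a toy sketch in the spirit of the model-randomization "sanity checks" line of work (e.g., Adebayo et al., "Sanity Checks for Saliency Maps"). Everything below, from the synthetic data to the helper names, is my own illustration, not a reference implementation: the idea is that an attribution method should give a different answer for a trained model than for one with random weights.

```python
# Toy sanity check: if an attribution method produces nearly the same
# explanation for a trained model and a freshly reinitialized one, it is
# telling us more about the input/architecture than about what the model
# actually learned.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: the label depends only on the first feature.
X = torch.randn(512, 10)
y = (X[:, 0] > 0).float().unsqueeze(1)

def make_model():
    return nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

model = make_model()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):                      # quick training loop
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def grad_saliency(m, x):
    """Plain input-gradient attribution for a single example."""
    x = x.clone().requires_grad_(True)
    m(x.unsqueeze(0)).sum().backward()
    return x.grad.abs()

x0 = X[0]
sal_trained = grad_saliency(model, x0)
sal_random = grad_saliency(make_model(), x0)   # same architecture, random weights

# High correlation between the two attributions means the method fails
# this sanity check.
corr = torch.corrcoef(torch.stack([sal_trained, sal_random]))[0, 1]
print("trained:", sal_trained.round(decimals=2))
print("random :", sal_random.round(decimals=2))
print("correlation:", corr.item())
```

A trained model should concentrate attribution on feature 0 here, while the random model should not; a method that can't tell the two apart deserves skepticism.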
For a better understanding of the field, check out this paper: https://www.researchgate.net/publication/358123243_Explainable_Deep_Learning_A_Field_Guide_for_the_Uninitiated
Christoph Molnar's book Interpretable Machine Learning also has some really good explanations of these methods.