GFrings t1_j8dixv3 wrote

In general, absolutely yes. In practice, the review process for most tier-1 and tier-2 conferences right now is a complete roll of the dice. For example, WACV and some other conferences explicitly state in their reviewer guidelines that novelty of the approach should be weighed over raw performance, but I still see plenty of reviews that ding the work for not being SOTA. The best thing you can do is make your work as academically rigorous as possible (good baseline experiments, ablation studies, analysis, ...) and keep submitting until you get in. Don't worry about what you can't control, namely being randomly assigned a dud reviewer.

12

GFrings t1_j3cc42r wrote

I'm sort of confused as to what this buys me as a developer. Sure, I can run the model with one click, maybe. But that doesn't seem to get me any closer to my typical goal, which is to have a module I can drop right into my codebase and use to solve one sub-problem of a much larger system. I can see using this as a fast way to demo a model, but most repos are clean enough that it takes maybe 30 minutes to reproduce the environment and run the authors' model myself.
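To make concrete what "a module I can drop right into my codebase" usually means, here is a minimal sketch. The class name, the choice of an off-the-shelf torchvision detector, and the score threshold are all just illustrative stand-ins for "the model":

```python
import torch
import torchvision


class DetectorModule:
    """Thin wrapper so the rest of the system never touches model internals."""

    def __init__(self, score_thresh: float = 0.5, device: str = "cpu"):
        # Any pretrained detector could sit in this slot.
        self.model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
            weights="DEFAULT"
        ).eval().to(device)
        self.score_thresh = score_thresh
        self.device = device

    @torch.no_grad()
    def __call__(self, image: torch.Tensor) -> list[dict]:
        """image: float tensor of shape (3, H, W) with values in [0, 1]."""
        preds = self.model([image.to(self.device)])[0]
        keep = preds["scores"] > self.score_thresh
        return [
            {"box": box.tolist(), "score": float(score)}
            for box, score in zip(preds["boxes"][keep], preds["scores"][keep])
        ]
```

The calling code only ever sees `__call__`, so swapping in the authors' model later is a change inside the wrapper, not across the larger system.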

There are already a lot of open source tools that solve the other problem, by the way. One is pytorch-liberator, which can extract all the code and dependencies from a module and package it up in a portable format for integration elsewhere.

As a general tip to you and your sales team: when you go to market with something like this, you should have some value propositions lined up already, instead of asking us whether we think it's valuable. Most folks will assume it isn't unless you help them see what makes it useful.

2

GFrings t1_j2xjsz8 wrote

I think the context in which the task is performed, and what is meant by "outperform", is important. Given enough time and energy, a person could probably find all the dogs in a dataset of images. But could they find 200k dogs in a dataset of millions of images overnight? Probably not. In this sense, machines far outperform humans, who are limited by attention span and ability to parallelize.
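A rough sketch of the machine side of that comparison, assuming an off-the-shelf ImageNet classifier and images laid out for ImageFolder. The choice of ResNet-50 and the 151-268 label range (the usual ImageNet-1k dog-breed block) are assumptions, not anything from the original discussion:

```python
import torch
import torchvision
from torch.utils.data import DataLoader

# ImageNet-1k classes 151-268 are conventionally the dog breeds.
DOG_CLASSES = set(range(151, 269))

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = torchvision.models.ResNet50_Weights.DEFAULT
model = torchvision.models.resnet50(weights=weights).eval().to(device)

# ImageFolder expects root/<subdir>/*.jpg; any other Dataset works the same way.
dataset = torchvision.datasets.ImageFolder("images/", transform=weights.transforms())
loader = DataLoader(dataset, batch_size=256, num_workers=8, pin_memory=True)

dog_indices = []
with torch.no_grad():
    for batch_idx, (images, _) in enumerate(loader):
        preds = model(images.to(device, non_blocking=True)).argmax(dim=1)
        for i, p in enumerate(preds.tolist()):
            if p in DOG_CLASSES:
                dog_indices.append(batch_idx * loader.batch_size + i)

print(f"found {len(dog_indices)} likely dog images")
```

On a single modern GPU this sort of loop churns through millions of images in hours, which is exactly the regime where human attention stops being a meaningful baseline.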

82

GFrings t1_irelmbh wrote

This. We need to start treating ML as a collection of fields with synergies. Just like they teach in engineering school, you're going for a T-shaped skill tree: be generally knowledgeable about what's out there, but do a deep dive into one particular niche that interests you.

Also, it should be said that if you're not a researcher, one of the hardest parts of AI is the data. Everyone gets this wrong, particularly when they start chasing the latest backbone, or loss function, or any of the hundred other things that see dozens of publications per week. The ML engineering around standing up a good experiment, regardless of what you actually put into the model slot of that experiment, is where 90% of the effort should go.
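A minimal sketch of what "the model slot" means in practice: the data pipeline, training loop, and evaluation stay fixed while the model is just a plug-in argument. Function names, batch size, and optimizer choice here are illustrative, not prescriptive:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader


def run_experiment(model: nn.Module, train_ds, val_ds, epochs: int = 10, lr: float = 1e-3):
    """Everything except `model` is the fixed experimental scaffolding."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=64)

    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        # Evaluation is identical no matter which model fills the slot,
        # so results stay comparable across backbones.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                preds = model(x.to(device)).argmax(dim=1).cpu()
                correct += (preds == y).sum().item()
                total += y.numel()
        print(f"epoch {epoch}: val acc {correct / total:.3f}")


# Swapping the backbone is one line; the experiment around it doesn't change, e.g.:
# run_experiment(torchvision.models.resnet18(num_classes=10), train_ds, val_ds)
```

The point is that the harness (data, loop, metrics) is where most of the engineering effort lives, and it pays off for every model you ever drop into it.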

82