GFrings
GFrings t1_jdtfmvd wrote
Reply to [I ate] A Fry box (Chicken bacon ranch, poutine, chili cheese and Greek fries). by fireballhotchoccy
RIP every tube in your body
GFrings t1_jcd941i wrote
I manage people who still do machine learning. Motivating scientists isn't unlike the constant iterative grind of matching the right reward function to the right model.
GFrings t1_j8dixv3 wrote
Reply to [D] Is a non-SOTA paper still good to publish if it has an interesting method that does have strong improvements over baselines (read text for more context)? Are there good examples of this kind of work being published? by orangelord234
In general, absolutely yes. In practice, the review process at most tier-1 and tier-2 conferences right now is a complete roll of the dice. For example, WACV and some other conferences explicitly state in their reviewer guidelines that the novelty of the approach should be weighed over raw performance, yet I still see many reviews that ping a paper for lack of SOTA-ness. The best thing you can do is make your work as academically rigorous as possible (good baseline experiments, ablation studies, analysis, and so on) and keep submitting until you get in. Don't worry about what you can't control, which is the luck of the draw in reviewer assignment.
GFrings t1_j6lr1tn wrote
Mel spectrograms so scary
GFrings t1_j3cc42r wrote
Reply to [D] I recently quit my job to start a ML company. Would really appreciate feedback on what we're working on. by jrmylee
I'm sort of confused as to what this buys me as a developer. Sure, I can run the model with one click, maybe. But that doesn't seem to get me any closer to my typical goal, which is to have a module I can drop right into my code base and use the model to solve one sub-problem of a much larger system. I can see using this as a fast way to demo a model, maybe, but most repos are clean enough that it takes maybe 30 minutes to reproduce the environment and run the author's model myself.
There are already a lot of open-source tools that solve the other problem, by the way. One is pytorch-liberator, which can extract all the code and dependencies from a module and package it up in a nice portable format for integration elsewhere.
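As a rough illustration of what that kind of extraction looks like (this is not liberator's actual API, just a bare-bones standard-library version, and the repo and class names are made up):

```python
# Bare-bones sketch of lifting a definition out of a module so it can be dropped
# into another code base. Real tools like pytorch-liberator also trace imports
# and closed-over dependencies; this only copies the literal source text.
import inspect
import textwrap


def extract_source(obj, out_path):
    """Write the source of a function or class to a standalone .py file."""
    src = textwrap.dedent(inspect.getsource(obj))
    with open(out_path, "w") as f:
        f.write(src)


# Hypothetical usage against some repo you cloned:
# from some_authors_repo.models import FancyBackbone
# extract_source(FancyBackbone, "fancy_backbone.py")
```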
As a general tip to you and your sales team: when you go to market with something like this, you should have some value propositions lined up already instead of asking us whether we think it's valuable. Most folks will tend to assume it isn't unless you can help them see what makes it useful.
GFrings t1_j2xjsz8 wrote
Reply to [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
I think the context in which the task is performed, and what is meant by "outperform", is important. Given enough time and energy, a person could probably find all the dogs in a dataset of images. But could they find 200k dogs in a dataset of millions of images overnight? Probably not. In this sense, machines far outperform humans, who are limited by attention span and their inability to parallelize.
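To make the scale argument concrete, here's a rough sketch of the machine side: batched GPU inference with an off-the-shelf classifier. The path, batch size, stock ResNet-50, and ImageNet dog-class range are all illustrative assumptions, not anything from the original thread.

```python
# Rough sketch: batched GPU inference over a big pile of images with a stock
# classifier. Paths, batch size, and model choice are placeholders.
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import models, transforms


class FolderOfImages(Dataset):
    """Flat folder of images; no labels needed for inference."""

    def __init__(self, root, transform):
        self.paths = sorted(Path(root).glob("*.jpg"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        return self.transform(Image.open(self.paths[idx]).convert("RGB"))


preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights="DEFAULT").eval().to(device)

# ImageNet-1k indices ~151-268 are dog breeds.
DOG_CLASSES = set(range(151, 269))

loader = DataLoader(FolderOfImages("/data/images", preprocess),
                    batch_size=256, num_workers=8, pin_memory=True)

dogs_found = 0
with torch.no_grad():
    for batch in loader:
        preds = model(batch.to(device)).argmax(dim=1)
        dogs_found += sum(int(p) in DOG_CLASSES for p in preds.cpu())

print(f"found {dogs_found} probable dog images")
```

A loop like this chews through images at a rate no human attention span can match, which is the gap I'm pointing at.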
GFrings t1_iz4r2cr wrote
You could spin up a local instance of CVAT, which is a FOSS labelling tool with a ton of features, and put whatever you want through the pipeline.
GFrings t1_irvckfn wrote
How does it make assumptions about the depth of those 100 points it just made up from the 3?
GFrings t1_irelmbh wrote
Reply to comment by 3jckd in [D] Giving Up on Staying Up to Date and Splitting the Field by beezlebub33
This. We need to start treating ML as a collection of fields with synergies. Just like they teach in engineering school, you're going for a T-shaped skill tree: be generally knowledgeable about what's out there, but do a deep dive into one particular niche that interests you.
Also, it should be said that if you're not a researcher, one of the hardest parts of AI is the data. Everyone gets this wrong, particularly when they start chasing the latest backbone, or loss function, or any of the hundred other things that see dozens of publications per week. The ML engineering around standing up a good experiment, regardless of what you actually put into the model slot of that experiment, is where 90% of the effort should be going.
GFrings t1_je8i7ro wrote
Reply to comment by Technical-Vast1314 in [R] You Only Segment Once: Towards Real-Time Panoptic Segmentation [CVPR 2023] by Technical-Vast1314
Isn't semantic segmentation made redundant by instance segmentation? Or is there a difference in coverage between the two tasks, in terms of the ground-truth labels?