ShadowStormDrift t1_j5ufflp wrote
Reply to Classify dataset with only 100 images? by Murii_
With 100 images, all data augmentation is going to give you is an overfit network.
You do not have enough images. Try to get a few thousand; then maybe you'll get results that aren't complete bullshit.
Speak to whoever is funding this. 100 images to solve a non-trivial problem is a joke.
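For concreteness, here's a minimal sketch of the kind of torchvision augmentation pipeline people mean here (the specific transforms are my illustration, not from the original discussion). The point stands: no amount of flipping and jittering creates genuinely new samples, so the network can still memorize the same 100 underlying images.

```python
import torchvision.transforms as T

# A typical augmentation pipeline (transform choices are illustrative).
# Every epoch still revisits the same 100 underlying samples --
# augmentation perturbs them, it does not add new information.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=15),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    T.ToTensor(),
])
```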
ShadowStormDrift t1_j29wqlt wrote
Reply to Laptop for Machine Learning by sifarsafar
I have a Mac M1 Pro. Given to me by my work.
DO NOT. I REPEAT. DO NOT USE A MAC TO DO DEEP LEARNING.
You will not have a good time.
Their decision to go with their own architecture (a single chip containing both CPU and GPU) has completely gimped them in this space.
Most popular DL frameworks ship with CUDA support, and CUDA is controlled by NVIDIA. Apple Silicon M1 chips are not compatible with CUDA.
This means by doing DL on a Mac you are locking yourself out of the entire DL ecosystem.
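As a concrete illustration (a minimal sketch assuming a recent PyTorch build), this is the device-selection dance: on Apple Silicon the CUDA branch is simply never reachable, and the best fallback is Apple's partial `mps` Metal backend.

```python
import torch

# On Apple Silicon there is no CUDA path at all: torch.cuda.is_available()
# is always False, and the best you get is the (partial) Metal backend.
if torch.cuda.is_available():            # NVIDIA GPU with CUDA drivers
    device = torch.device("cuda")
elif torch.backends.mps.is_available():  # Apple Silicon Metal backend
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Training on: {device}")
```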
Additionally, Apple is highly restrictive about what it does and does not allow in its ecosystem, which makes for a VERY constrained development environment. Seriously, getting something like OpenRefine working on a Mac was not possible due to their stance of "only authorized programs may be installed here". At the time of my attempt, OpenRefine, a popular tool for cleaning and exploring messy tabular data, was not authorized on the new Mac M1 series.
Sure, they may eventually deign to authorize something as popular as OpenRefine... but frankly you will be better off getting actual work done instead of waiting for a company to realize that nobody is big enough to police the entirety of the internet.
ShadowStormDrift t1_ivfus80 wrote
Reply to comment by shumpitostick in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
What about confounding variables?
For example, looking for trends across governments is hard; looking for trends WITHIN government departments is easier. (Two different departments might trend in opposite directions and cancel each other out when pooled together.)
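A quick synthetic sketch of that cancellation effect (the numbers and department names are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: two departments whose budgets trend in
# opposite directions over 10 years.
years = np.arange(10)
dept_a = 100 + 5 * years + rng.normal(0, 2, 10)   # trending up
dept_b = 160 - 5 * years + rng.normal(0, 2, 10)   # trending down

# Within each department the trend is clear...
print(np.polyfit(years, dept_a, 1)[0])  # slope ~ +5
print(np.polyfit(years, dept_b, 1)[0])  # slope ~ -5

# ...but pooled together the opposing trends cancel out.
pooled = np.concatenate([dept_a, dept_b])
pooled_years = np.concatenate([years, years])
print(np.polyfit(pooled_years, pooled, 1)[0])  # slope ~ 0
```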
ShadowStormDrift t1_iu53ih6 wrote
Reply to comment by GPUaccelerated in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Of course!
The semantic search, along with a few other key features, hasn't made it up yet. We're aiming to have them up between the end of November and mid-December.
We've got a two-server setup, with the second being our "work-horse" intended for GPU jobs: an RTX 3090 (24GB VRAM), 64GB of DDR4 RAM, and an 8-core CPU (I forget the exact specs).
ShadowStormDrift t1_iu3fkqs wrote
I coded up a semantic search engine and was able to get it down to 3 seconds for one search.
That's blazingly fast by my standards: it used to take 45 minutes, which still haunts my dreams. But queries are handled one at a time, so if 10 people use the site simultaneously, that's 30 seconds before number 10 gets his results back. Which is unacceptable.
So yes, I do care whether I can get that done quicker.
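The post doesn't say how the speed-up was achieved, but for flavor, here's a common shape for this kind of system (a hypothetical sketch using sentence-transformers; the model name, corpus, and `search` helper are all my invention): embed the corpus once offline, so each query costs only one forward pass plus a matrix multiply.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical setup: embed the corpus once, offline, so per-query
# work is one encode call plus a vectorized similarity lookup.
model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = ["doc one ...", "doc two ...", "doc three ..."]
corpus_emb = model.encode(corpus, normalize_embeddings=True)

def search(query: str, top_k: int = 3) -> list[tuple[float, str]]:
    """Return the top_k most similar documents by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = corpus_emb @ q            # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), corpus[i]) for i in best]

print(search("example query"))
```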
ShadowStormDrift t1_itt8dx4 wrote
Reply to Binary segmentation with imbalanced data by jantonio78
Almost all of my experience with deep learning in industry has been people being handed tiny datasets and being expected to perform miracles with them. This feels like one of those cases.
ShadowStormDrift t1_jcew7hj wrote
Reply to How To Fine-tune LLaMA Models, Smaller Models With Performance Of GPT3 by l33thaxman
I need proof.