Submitted by naequs t3_yon48p in MachineLearning
IndieAIResearcher t1_ivf77hf wrote
Reply to comment by trendymoniker in [D] Do you think there is a competitive future for smaller, locally trained/served models? by naequs
Examples?
trendymoniker t1_ivf84sd wrote
Easy answer is distilled models like DistilBERT, or compact architectures like EfficientNet. You can also get an intuition for the process by taking a small, easy dataset (MNIST or CIFAR, say) and running a big hyperparameter search over models. There will be small models which perform close to the best models.
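A minimal sketch of that search, using scikit-learn's 8x8 digits dataset as a stand-in for MNIST (the dataset, hidden-size grid, and MLP settings here are illustrative choices, not anything specific from the thread):

```python
# Sweep a few MLP sizes on a small digits dataset and compare test
# accuracy -- the point is that accuracy plateaus well before the
# largest model, so small models land close to the best.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.3, random_state=0)

results = {}
for hidden in [(8,), (32,), (128,)]:
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                        random_state=0).fit(X_train, y_train)
    results[hidden] = clf.score(X_test, y_test)
    print(hidden, round(results[hidden], 3))
```

Typically the 32-unit model is already within a few points of the 128-unit one, which is the intuition the comment describes.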
These days nobody uses ResNet or Inception, but there was a time they were the bleeding edge. Now it's all smaller, more precise stuff.
The other dimension where you can win over big models is hardcoding in your priors.
IndieAIResearcher t1_ivf9i33 wrote
Thanks :)