Submitted by These-Assignment-936 t3_10y2mu0 in MachineLearning
avocadoughnut t1_j7xvd0p wrote
Reply to comment by currentscurrents in [D] Using LLMs as decision engines by These-Assignment-936
Makes me wonder if pretraining makes the model converge on what is essentially a more efficient architecture that we could be using instead. I'm hoping this thought has already been explored; it would be interesting to read about.
Sm0oth_kriminal t1_j7y6wv6 wrote
This is probably only the case when there's a very low "compression ratio" of model parameters to learned entropy, i.e., the model carries far more parameters than the information it has actually absorbed.
Basically, if the model has "too many" parameters it can be distilled, but empirically we've found that until that point is hit, transformers scale extremely well and are generally better than any other known architecture.
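For context, distillation trains a small "student" model to match the outputs of a large "teacher". A minimal PyTorch sketch of the standard soft-target loss (the temperature and mixing weight here are illustrative defaults, not tuned values):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```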
Another topic is sparsification, which takes a trained model, cuts out some percentage of the weights that have minimal effect on the output, and then fine-tunes the pruned model. You can check out Neural Magic and their associated work online… they can run models on CPUs that normally require GPUs.
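(Not Neural Magic's actual pipeline, just the general shape of magnitude pruning in PyTorch; the layer sizes and the 80% sparsity level below are made up:)

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Zero out the 80% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# ... fine-tune here so the remaining weights recover the lost accuracy ...

# Bake the zeros into the weight tensors and drop the pruning masks.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```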
avocadoughnut t1_j7yaq8w wrote
I'm considering a higher-level idea. There's no way that transformers are the be-all and end-all of model architectures. I'm hoping that by identifying the mechanisms large models are learning, a better architecture can be found that reduces the total number of multiplications and training samples needed. It's like feature engineering.
nikgeo25 t1_j7yjicm wrote
Know any papers related to their work? Magic sounds deceptive...