Submitted by ackbladder_ t3_zrpsfm in MachineLearning
Hi,
For my final-year project on my BSc Computer Science and AI course, I'm implementing the World Models paper (Ha & Schmidhuber, 2018) to play games: essentially a variational autoencoder plus a second network that predicts future latent states of the game environment.
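For anyone unfamiliar with the setup, the pipeline can be sketched roughly like this. This is a pure-Python toy with made-up dimensions and random weights, purely to show the data flow; the actual paper uses a convolutional VAE for the encoder and an MDN-RNN for the dynamics model:

```python
import math
import random

random.seed(0)

OBS_DIM, LATENT_DIM = 8, 3  # toy sizes, not the paper's

# Toy "encoder": a fixed random linear map standing in for the VAE encoder.
W_enc = [[random.gauss(0, 0.5) for _ in range(OBS_DIM)] for _ in range(LATENT_DIM)]
# Toy "dynamics model": predicts the next latent from the current one.
W_dyn = [[random.gauss(0, 0.5) for _ in range(LATENT_DIM)] for _ in range(LATENT_DIM)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def encode(obs):
    # tanh keeps the toy latent bounded, like a squashed code.
    return [math.tanh(v) for v in matvec(W_enc, obs)]

def predict_next_latent(z):
    return [math.tanh(v) for v in matvec(W_dyn, z)]

obs = [random.random() for _ in range(OBS_DIM)]
z = encode(obs)                   # compress the observation to a latent state
z_next = predict_next_latent(z)   # roll the "world model" forward one step
print(len(z), len(z_next))        # -> 3 3
```

The point of the structure is that the controller only ever sees the small latent vector, which is also where most of the parameter-count savings in the original paper come from.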
The emphasis of my project is on reducing the number of parameters, and consequently the training time (making a case for lower energy consumption). I'll compare my models against existing ones on both parameter count and game performance.
I've had trouble finding existing literature on how this can be done. Obviously there's no way to compute an 'optimal' parameter count for a task, but I wanted to find techniques for trimming excess bulk from a neural network without sacrificing performance.
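For what it's worth, the standard families of techniques here are pruning, quantization, and knowledge distillation. The simplest is magnitude pruning: zero out the smallest-magnitude weights and keep the rest. A minimal pure-Python sketch of the idea (in practice you'd use library support such as `torch.nn.utils.prune` rather than rolling your own):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of a flat weight list."""
    if not 0.0 <= sparsity <= 1.0:
        raise ValueError("sparsity must be in [0, 1]")
    n_prune = int(len(weights) * sparsity)
    # Rank weight indices by magnitude; the smallest ones get dropped.
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(ranked[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
print(magnitude_prune(w, 0.5))  # -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The usual workflow is train, prune, then fine-tune briefly to recover accuracy, sweeping the sparsity level to map out the size/performance trade-off, which would fit your comparison-against-existing-models framing nicely.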
Does anyone have any ideas or know of any resources?
TIA
Deep-Station-1746 t1_j146dhq wrote
> reduce excess bulk in a NN without sacrificing performance
Simply put, that is not possible. There's always a trade-off. So the question is: what are you willing to sacrifice? How much performance are you willing to forgo?