[deleted] t1_jalq5f9 wrote

[deleted]

8

M_Alani t1_jam3i7i wrote

It wasn't as bad as it sounds. The fun part was that you had to understand how every little piece of the algorithm worked, and the nightmare was implementing all of it with 512 MB of RAM. We didn't have the luxury of trying different solutions.

9

Downtown_Finance_661 t1_janm2nt wrote

Fun story! How did you choose the hyperparameters for your models? Did you iterate over them in for-loops?

1

M_Alani t1_janmj7j wrote

Mostly. Other times I would interrupt the code when it wasn't converging and start over after changing a parameter or two. I feel so spoiled with TensorFlow now!
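
For flavor, here's a minimal sketch of that kind of workflow (not the original code; the toy XOR net and hand-rolled backprop are just stand-ins): brute-force the grid in for-loops, and use Ctrl+C as the escape hatch for a run that clearly isn't converging.

```python
import itertools
import numpy as np

# Toy dataset (XOR) standing in for the real problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lr, hidden, epochs=5000, seed=0):
    """Train a tiny one-hidden-layer net by hand-rolled backprop; return final MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # backprop through the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)    # ...and the hidden layer
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)
    return float(np.mean((out - y) ** 2))

# The "hyperparameter search": nested for-loops over a small grid.
best_err, best_params = float("inf"), None
for lr, hidden in itertools.product([0.5, 1.0, 2.0], [2, 4, 8]):
    try:
        err = train(lr, hidden)
    except KeyboardInterrupt:
        continue  # run looked hopeless: interrupt it and move to the next combo
    if err < best_err:
        best_err, best_params = err, (lr, hidden)

print("best MSE %.4f with lr=%s, hidden=%s" % (best_err, *best_params))
```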

2

proton-man t1_janca53 wrote

It was. Dumb, too. Because of the limits on memory and computing power at the time, you had to constantly tweak parameters to optimize learning speed, avoid overfitting, avoid local optima, etc., only to find that the best-performing model was the one generated by your 2 AM code with the fundamental flaw and the random parameters you chose while high.

3