
amitraderinthemaking OP t1_ir8tp7v wrote

Out of curiosity, did you measure the time? My network is 6 layers deep with 100 units per hidden layer, so it's rather simple.


LiquidDinosaurs69 t1_ir8v1nq wrote

No, I didn’t measure the time. But I had a network with 2 hidden layers of 35 units each, and I was using it as a component of a single-threaded simulation that ran inference over 1000 times a second on an older CPU. Can I ask why you don’t want to use the GPU? CUDA would speed things up a lot if you need more speed.


LiquidDinosaurs69 t1_ir8vdvj wrote

Actually, here’s the code where I implemented inference for my neural net if you’re interested. It’s very simple. https://github.com/jyurkanin/auvsl_dynamics/blob/float_model/src/TireNetwork.cpp

And here’s a handy script I made to help generate the C code for loading the weights into Eigen vectors. (Just use the print_c_network function) https://github.com/jyurkanin/auvsl_dynamics/blob/float_model/scripts/pretrain.py

Also, look at my CMakeLists.txt to make sure you have the compiler flags that will make your code run as fast as possible.
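For context, the kind of flags that typically matter for this look like the following illustrative CMake fragment (an assumption about what helps, not the contents of the actual file):

```cmake
# Illustrative only -- check the repo's real CMakeLists.txt.
# -O3 enables aggressive optimization; -march=native lets the compiler
# emit the host CPU's vector instructions (important for Eigen);
# -DNDEBUG disables assertion checks in release builds.
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -march=native -DNDEBUG")
```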


amitraderinthemaking OP t1_ir8zuij wrote

Ah, thank you SO much for sharing, I will definitely take a look!

So unfortunately we don't have a GPU available on our production systems yet -- we are not an ML-oriented team at all (this would be the first ML project, tbh).

But we'd eventually make a case for a GPU, for certain. The thing is, this ML-based method needs to be faster than the current way of doing things before we can move further, you know.

Thanks again for sharing.


LiquidDinosaurs69 t1_ira12g8 wrote

Sounds cool. I’m just glad that the code I wrote for my grad school research might be useful for someone.
