danielgafni
danielgafni t1_j7wsnlw wrote
Reply to [D]Image Recognition ability of machine learning in financial markets questions by Ready-Acanthaceae970
The approach you are describing isn’t the best.
-
There is no sense in rendering these images: OHLCV data is a time series, not a 2D image. Most of the rendered pixels would just be white, which is not wrong per se but is greatly inefficient. Instead of 2D convolutions, you can apply 1D convolutions to the time series directly (as in WaveNet), which removes rendering from your pipeline and greatly speeds up training and inference.
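A minimal sketch of the idea (assumes PyTorch; the shapes and layer sizes here are hypothetical, not from the original comment): the OHLCV bars are fed to `nn.Conv1d` as a 5-channel time series, with WaveNet-style doubling dilation, so no image rendering is needed.

```python
import torch
import torch.nn as nn

# Hypothetical batch: 32 samples, 5 channels (open, high, low, close, volume),
# 128 time steps each -- the series is consumed directly, no rendering step.
x = torch.randn(32, 5, 128)

# A small stack of dilated 1D convolutions (WaveNet-style dilation doubling).
model = nn.Sequential(
    nn.Conv1d(5, 16, kernel_size=3, dilation=1, padding=1),
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, dilation=2, padding=2),
    nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),  # per-time-step prediction head
)

out = model(x)
print(out.shape)  # torch.Size([32, 1, 128])
```

The padding is chosen so each layer preserves the sequence length; a real WaveNet would use causal (left-only) padding so predictions never see the future.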
-
OHLCV data won’t give you enough information to either predict the future or backtest your trading algorithm accurately, because of the information lost during aggregation.
danielgafni t1_j7dkl6x wrote
English is a pretty simple language compared to other popular languages. Not sure why you think it’s more complex than Chinese…
danielgafni t1_j62mh4o wrote
Reply to [P] EvoTorch 0.4.0 dropped with GPU-accelerated implementations of CMA-ES, MAP-Elites and NSGA-II. by NaturalGradient
How does it compare to evojax? A huge deal there is evaluating all the networks in the population in parallel, which gives absolutely massive speedups, as you can imagine. Can EvoTorch do that?
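To illustrate what population-parallel evaluation means (this is a generic sketch, not EvoTorch's or evojax's actual API; the population of linear policies is a made-up example): stacking every member's parameters into one tensor lets a single batched op evaluate the whole population at once instead of looping over it in Python.

```python
import torch

torch.manual_seed(0)

pop_size, obs_dim, act_dim = 256, 8, 2

# One weight matrix per population member, stacked into a single tensor.
weights = torch.randn(pop_size, act_dim, obs_dim)

# A shared batch of observations, evaluated by every member at once.
obs = torch.randn(64, obs_dim)

# Batched contraction runs all 256 "networks" in one kernel launch
# instead of a Python loop over the population.
actions = torch.einsum("pao,bo->pba", weights, obs)
print(actions.shape)  # torch.Size([256, 64, 2])
```

For full networks rather than linear policies, `torch.func.vmap` over stacked module parameters achieves the same effect.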
danielgafni t1_it97yro wrote
Reply to comment by redditnit21 in Testing Accuracy higher than Training Accuracy by redditnit21
Don’t remove it; that’s just how dropout works. There is nothing wrong with having a higher train loss if you are using dropout.
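A quick sketch of why this happens (assumes PyTorch): dropout is active in `train()` mode, where it zeroes activations and hurts the measured train loss, and is a no-op in `eval()` mode, so test metrics can look better than train metrics.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1000)

drop.train()         # training mode: ~half the activations are zeroed,
train_out = drop(x)  # survivors are scaled by 1 / (1 - p) = 2

drop.eval()          # evaluation mode: dropout is a no-op
eval_out = drop(x)

print((train_out == 0).float().mean())  # roughly 0.5
print(torch.equal(eval_out, x))         # True
```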
danielgafni t1_isl8voi wrote
Are you using dropout or other regularization that affects training but not testing? If so, you’ve got your answer.
danielgafni t1_j9cjwet wrote
Reply to comment by skippy_nk in [D] Things you wish you knew before you started training on the cloud? by I_will_delete_myself
Time to learn about Zellij