AntelopeStatus8176 t1_jdq1t5p wrote

I have a set of 20,000 raw measurement data slices, each of which
contains 3,000 measurement sample points. Each data slice has a target
value assigned to it, and the target values are continuous.
My first approach was to do feature engineering on the raw
measurement slices to reduce the data volume and speed up ML training.
This approach works reasonably well at estimating the target value for
unseen data slices from the test set.
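
To illustrate what I mean by feature engineering, here is a simplified sketch; the features below are just placeholders, not the exact ones I actually compute:

```python
import numpy as np

def extract_features(slice_data):
    """Reduce one raw slice (3,000 sample points) to a few summary features.
    These features are illustrative placeholders only."""
    spectrum = np.abs(np.fft.rfft(slice_data))
    return np.array([
        slice_data.mean(),
        slice_data.std(),
        slice_data.min(),
        slice_data.max(),
        spectrum.argmax(),            # dominant frequency bin
        np.sum(slice_data ** 2),      # rough signal energy
    ])

# Dummy stand-ins for the real data: 20,000 slices of 3,000 samples each.
raw_slices = np.random.randn(20000, 3000)
features = np.apply_along_axis(extract_features, 1, raw_slices)  # -> (20000, 6)
```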
My second approach would be to use the raw data slices directly as input.
On second thought, this appears to be dramatically compute-intensive, or
at least far more than I can handle on my standard PC. To my
understanding, it would mean constructing an ANN with 3,000 input nodes
and several deep layers.
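
Roughly, something along these lines is what I have in mind; this is only a sketch using scikit-learn's MLPRegressor, and the layer sizes are guesses, not a tested setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Dummy stand-ins for the real data: 20,000 slices of 3,000 samples each,
# plus one continuous target per slice.
raw_slices = np.random.randn(20000, 3000)
targets = np.random.randn(20000)

X_train, X_test, y_train, y_test = train_test_split(
    raw_slices, targets, test_size=0.2, random_state=0)

# Fully connected net on the raw samples: 3,000 input nodes, a few hidden layers.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(512, 128, 32), max_iter=200, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out slices:", model.score(X_test, y_test))
```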
Can anyone advise whether training on raw measurement data with
datasets of this size even makes sense, and if so, which algorithms to
use? Preferably with examples in Python.

1