Submitted by fedetask t3_yhbjfi in MachineLearning
I am dealing with a deep learning task where the model has several inputs of very different sizes. Moreover, the smaller inputs are the ones that actually have more influence on the output.
To give you an idea of the scale, one input is a 200-dimensional vector, another input is a 1-dimensional number, and another is a 5-dimensional vector. They are all useful for predicting the correct output, but the 1- and 5-dimensional ones are particularly helpful.
At the moment I am concatenating all of them, but I suspect this isn't the best approach here: there is noise in the training process (it's for an RL agent), and I fear it would be difficult for the model to learn to put enough weight on those small inputs.
Do you know of any work that examines the effect of different input sizes on neural networks? It might turn out that this is not a problem after all.
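For concreteness, here is a minimal numpy sketch of the setup described above: plain concatenation of the three inputs, next to one common alternative (an assumption on my part, not something stated in the post) where each input gets its own linear projection to a shared width, so the small inputs occupy as many dimensions of the combined representation as the large one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs matching the sizes mentioned in the post.
x_large = rng.standard_normal(200)   # 200-dimensional vector
x_small = rng.standard_normal(5)     # 5-dimensional vector
x_scalar = rng.standard_normal(1)    # 1-dimensional number

# Current approach: plain concatenation -> a 206-d input where the
# small inputs account for only 6 of 206 dimensions.
concat = np.concatenate([x_large, x_small, x_scalar])
assert concat.shape == (206,)

# Alternative sketch: project each input to the same width with its
# own linear map (random and untrained here; in practice these would
# be learned layers), then concatenate the equal-width embeddings.
width = 64
W_large = rng.standard_normal((width, 200)) / np.sqrt(200)
W_small = rng.standard_normal((width, 5)) / np.sqrt(5)
W_scalar = rng.standard_normal((width, 1))

balanced = np.concatenate([W_large @ x_large,
                           W_small @ x_small,
                           W_scalar @ x_scalar])
assert balanced.shape == (192,)  # 3 inputs * 64 dims each
```

With the projection variant, each input contributes an equal share of the combined vector, so the network does not have to "find" the 6 informative dimensions among 206.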
eigenham t1_iud3a5l wrote
>To give you an idea of the scale, one input is a 200-dimensional vector, another input is a 1-dimensional number, and another is a 5-dimensional vector.
When you're talking about vector length, are you talking about 1) a sequence model and 2) the length of the sequence? Or are you talking about the number of elements in an actual vector input?