Submitted by ojiber t3_yl6zg7 in MachineLearning
iCameToLearnSomeCode t1_iuww3zv wrote
So you've got inputs and outputs for a network but need a network that takes in the outputs from that network and gives you the original inputs?
While a random theoretical network might be reversible, I don't think there's any requirement that be true in every case.
I would train a second network on the outputs and inputs from the first.
On the plus side you've got all the data organized already.
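A minimal sketch of that idea, assuming you still have the original (input, output) pairs around: train a second network on the swapped pairs so it learns the outputs-to-inputs mapping. All shapes, layer sizes, and the random stand-in data below are made up for illustration.

```python
import torch
import torch.nn as nn

in_dim, out_dim, n = 64, 10, 1000
X = torch.randn(n, in_dim)            # stand-in for your original inputs
Y = torch.randn(n, out_dim)           # stand-in for the first network's outputs

# Second network learns the reverse mapping: outputs -> inputs.
inverse_net = nn.Sequential(
    nn.Linear(out_dim, 128),
    nn.ReLU(),
    nn.Linear(128, in_dim),
)

opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    pred_X = inverse_net(Y)           # predict inputs from outputs
    loss = loss_fn(pred_X, X)         # regression loss against the true inputs
    opt.zero_grad()
    loss.backward()
    opt.step()
```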
ojiber OP t1_iuwyf8p wrote
I had thought of this, but unfortunately I don't think I have enough information in my output variables to predict the inputs. When I try to reverse the network, the error bounces around and my accuracy stays consistently at ~10%.
Toilet2000 t1_iux7ugs wrote
You’ll never get an accurate input reconstruction.
The whole goal of a model is to estimate an output from an input. The best you can do is estimate an input from an output as well. Neural networks aren't designed to be "lossless": part of the information is lost, but in a way that preserves the information relevant to the task.
But the estimated input will simply be the value which makes the initial model best fit the label.
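A rough sketch of what that looks like in practice: freeze the original model and optimize a candidate input by gradient descent until the model's output matches the target label. The `model`, dimensions, and target below are stand-ins, not your actual setup.

```python
import torch
import torch.nn as nn

in_dim, out_dim = 64, 10
model = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)           # keep the forward model fixed

target = torch.randn(1, out_dim)      # the output you want to "invert"
x_hat = torch.zeros(1, in_dim, requires_grad=True)   # estimated input

opt = torch.optim.Adam([x_hat], lr=1e-2)
for step in range(500):
    loss = nn.functional.mse_loss(model(x_hat), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# x_hat is now *an* input whose output matches the label, not necessarily
# the original one -- many different inputs can map to the same output.
```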