Comments


bumbo-pa t1_iuwt4vz wrote

I think you mostly answered your question. How would you reverse engineer a mean to deduce the data distribution?

16

iCameToLearnSomeCode t1_iuww3zv wrote

So you've got inputs and outputs for a network but need a network that takes in the outputs from that network and gives you the original inputs?

While a random theoretical network might be reversible, I don't think there's any requirement that that be true in every case.

I would train a second network on the outputs and inputs from the first.

On the plus side you've got all the data organized already.
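
A minimal sketch of this idea, assuming a PyTorch setup; the hidden sizes, learning rate, and synthetic data are illustrative, and the 9-input / 6-output shapes are taken from the model described elsewhere in the thread:

```python
import torch
import torch.nn as nn

# Stand-ins for the (input, output) pairs already collected from the original network
x = torch.randn(10_000, 9)          # original inputs (9 features)
y = torch.randn(10_000, 6)          # original outputs (6 values)

# Second network trained in the reverse direction: output -> input
inverse_net = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 9),
)

opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    opt.zero_grad()
    pred_x = inverse_net(y)          # estimate the original inputs from the outputs
    loss = loss_fn(pred_x, x)
    loss.backward()
    opt.step()
```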

10

MLFanatic1337 t1_iuwwu6b wrote

You could iterate through synthetic data experiments until you can produce the outputs in the correct ratios. This would most likely take a lot of time and could produce a false positive
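
A minimal sketch of this brute-force "synthetic data" idea, assuming PyTorch; `model` and `target_output` are hypothetical stand-ins for the trained network and the observed output being matched:

```python
import torch

def random_search(model, target_output, n_candidates=100_000, in_dim=9, keep=10):
    """Sample random candidate inputs and keep those whose outputs best match the target."""
    with torch.no_grad():
        candidates = torch.randn(n_candidates, in_dim)              # synthetic inputs
        errors = ((model(candidates) - target_output) ** 2).sum(dim=1)
        best = errors.topk(keep, largest=False).indices             # closest matches
    return candidates[best], errors[best]
```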

1

ojiber OP t1_iuwy5xs wrote

I'm sorry, I don't understand. Are you saying that I answered the question in that you cannot do it? That you can't predict a likely input parameter for a network using a set of known outputs? It seems like you very much can do this, but that my approach to solving the problem is wrong. Do you have any suggestions as to how this could be done?

1

ojiber OP t1_iuwyf8p wrote

I had thought of this, but unfortunately I don't think I have enough information in my output variables to be able to predict the inputs. When reversing the network, the error bounces around and my accuracy stays consistently at ~10%.

1

ojiber OP t1_iuwzhvf wrote

>This would most likely take a lot of time and could produce a false positive

Could you use a gradient approach to speed this up? I don't know how you would do this, but if you could find the gradient of the search space, you could use it to try to minimize toward a certain set of parameters.

What do you think?

0

limpbizkit4prez t1_iux01hn wrote

There was a paper I read a few years ago about a group of researchers estimating the architecture and parameters of an NN just by querying it a bunch. If I get the time I'll try to find and share it. I know it's not exactly what you're looking for, but it might be a step in the right direction.

2

ojiber OP t1_iux06k8 wrote

Thank you, I think this is a nice fallback idea if all else fails. I'd like to be able to use some more sophisticated methods to identify regions of parameter space that come close to producing a set of outputs but this could be a place to start.

And looking at their post history, no I don't think that is my partner. Just another student with a similar problem in need of people to bounce ideas off of. :)

3

bluuerp t1_iux2ixx wrote

A neural network typically reduces a large number of input dimensions down to a few, and even those that don't, like autoencoders, have some kind of bottleneck. Hence they are lossy data compression methods; that is how they learn. It is by its very nature not reversible. You can't reverse a dog/cat output back to a full image, but you can use Grad-CAM to get estimates. I.e., you can use gradient ascent to get what you are looking for. Do that for a bunch of different random-noise starting values and you can estimate which neurons are most responsible for a certain output class.
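
A minimal sketch of the gradient-ascent idea, assuming PyTorch; `model` is a hypothetical frozen network, and the number of starts, steps, and input dimension are illustrative:

```python
import torch

def ascend_to_output(model, target_idx, n_starts=8, steps=200, lr=0.05, in_dim=9):
    """From several random starting inputs, push each toward maximizing one output unit."""
    model.eval()
    x = torch.randn(n_starts, in_dim, requires_grad=True)   # random-noise starting values
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = model(x)
        loss = -out[:, target_idx].sum()                     # negate to turn ascent into descent
        loss.backward()
        opt.step()
    return x.detach()   # candidate inputs that strongly drive the chosen output
```

Comparing where the different starts end up gives a rough picture of which input regions are responsible for that output.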

3

datlanta t1_iux4bxv wrote

Design of Experiments.

1

Professional-Ebb4970 t1_iux570q wrote

There are reversible neural networks where this is possible to do. They're used for things such as normalizing flows, and even for very large NNs that don't fit in memory: since the layers are reversible, you don't need to save intermediate activations during the forward pass, you can just recompute them in the backward pass.
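
A minimal sketch of one standard reversible building block, an additive coupling layer of the kind used in normalizing flows; PyTorch is assumed, and the split sizes and small MLP are illustrative:

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Invertible layer: shift half of the features by a function of the other half."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.d),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        y2 = x2 + self.net(x1)           # only the second half is shifted
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        x2 = y2 - self.net(y1)           # exact inverse: subtract the same shift
        return torch.cat([y1, x2], dim=1)
```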

3

Toilet2000 t1_iux7ugs wrote

You’ll never get an accurate input reconstruction.

The whole goal of a model is to estimate an output from an input. The best you can do is estimate an input from an output as well. Neural networks aren't designed to be "lossless": part of the information is lost, but in a way that preserves the information relevant to the task.

But the estimated input will simply be the value which makes the initial model best fit the label.

4

john_the_jedi t1_iux99v3 wrote

I would peruse the work on "model inversion". Inverting a model is not free, and the reconstructed inputs are noisy, but for certain classes of models/learning problems this is very doable.

This might get you started https://www.youtube.com/watch?v=_g-oXYMhz4M

2

--dany-- t1_iuxbw6e wrote

Machine learning is just another way to approximate a function. Treat your 9-input 6-output neural network as a black-box target function to approximate, and gather enough examples as your training dataset to train a new neural network. According to the universal approximation theorem (https://en.wikipedia.org/wiki/Universal_approximation_theorem), if your new neural network is complex enough, it can get arbitrarily close to the black box.

Bonus point: if you know the architecture of the target black-box model, you will get a very close copy of it. But don't expect the weights to be exactly the same.
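
A minimal sketch of the "query the black box, fit a copy" idea, assuming PyTorch; `black_box` is a hypothetical callable for the 9-input / 6-output model, and the query count, architecture, and training loop are illustrative:

```python
import torch
import torch.nn as nn

def fit_surrogate(black_box, n_queries=50_000, epochs=50):
    """Probe the black box with random inputs and train a surrogate on its responses."""
    x = torch.randn(n_queries, 9)                 # probe inputs
    with torch.no_grad():
        y = black_box(x)                          # black-box responses (6 outputs)

    surrogate = nn.Sequential(
        nn.Linear(9, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 6),
    )
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(x), y)
        loss.backward()
        opt.step()
    return surrogate
```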

1

WikiSummarizerBot t1_iuxby0l wrote

Universal approximation theorem

>In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology.


1

StopSendingSteamKeys t1_iuxc4pu wrote

Maybe I'm reading this wrong, but couldn't you apply gradient descent/backprop on the inputs instead of the parameters to get some input values that will produce your exact output? (Like in DeepDream.)
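
A minimal sketch of optimizing the input rather than the weights, assuming PyTorch; `model` and `target_output` are hypothetical stand-ins, and the step count, learning rate, and input dimension are illustrative:

```python
import torch

def invert_by_gradient(model, target_output, in_dim=9, steps=500, lr=0.01):
    """Freeze the network and descend on the input until model(x) matches the observed output."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)                    # only the input is optimized
    x = torch.randn(1, in_dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), target_output)
        loss.backward()
        opt.step()
    return x.detach()                              # one input consistent with the output
```

Note that this recovers *an* input consistent with the output, not necessarily the original one, since many inputs can map to the same output.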

1

bumbo-pa t1_iuxguex wrote

I'm saying that neural networks are fundamentally lossy dimension reductions. From a high theoretical level, that makes them irreversible (at least not 100%, or not without any prior assumptions on the input data). Any estimation of the inputs needs some additional knowledge or presupposition. That makes the "inversion" problem very sensitive to the specifics of your situation.

That being said, interesting posts in the thread.

2

Frosty_Burger_256 t1_iuxhlwy wrote

Remove neural nets from the picture and ask yourself what you're doing first.

You have the outputs of a non-linear function, and you want the corresponding inputs to this function. This is akin to finding the inverse function for the problem.

Now bringing neural nets into the picture, I believe it is quite hard to analytically compute an inverse of the underlying function. I believe it's just easier to train a new neural net representing the inverse. Of course, in certain domains this might very well not work (image classification, for example).

1