Submitted by ojiber t3_yl6zg7 in MachineLearning
--dany-- t1_iuxbw6e wrote
Machine learning is just another way to approximate a function. Treat your 9-input, 6-output neural network as a black-box target function, gather enough input/output examples from it as a training dataset, and train a new neural network on those pairs (a sketch of this follows below). By the universal approximation theorem (https://en.wikipedia.org/wiki/Universal_approximation_theorem), if your new network has enough capacity, it can approximate the black box arbitrarily closely.
Bonus point: if you know the architecture of the target black-box model, you can get a very close copy of it. But don't expect the weights to be exactly the same.
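A minimal sketch of the idea in PyTorch. The "black box" here is a stand-in network I made up so the example runs end to end; in practice you would only call its forward pass, and the surrogate's architecture, sample count, and learning rate are all free choices, not anything prescribed by the approach.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the black box (internals assumed unknown; only queried).
black_box = nn.Sequential(nn.Linear(9, 32), nn.Tanh(), nn.Linear(32, 6))

# Build a training set by querying the black box on sampled inputs.
with torch.no_grad():
    X = torch.rand(10_000, 9)   # sample from the input region you care about
    Y = black_box(X)            # record the black box's outputs

# Surrogate network to fit; its architecture is a free choice.
surrogate = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 6))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fit the surrogate to the black box's input/output behavior.
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(surrogate(X), Y)
    loss.backward()
    opt.step()

print(f"final fit error (MSE): {loss.item():.6f}")
```

One caveat worth noting: the surrogate only learns to match the black box on the input distribution you sampled from, so choose that distribution to cover the inputs you actually expect at inference time.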
WikiSummarizerBot t1_iuxby0l wrote
Universal approximation theorem
>In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology.
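For reference, the classic single-hidden-layer form of the statement summarized above (due to Cybenko and, for general non-polynomial activations, Leshno et al.) can be written as:

```latex
% For any continuous f on a compact set K \subset \mathbb{R}^n,
% any non-polynomial activation \sigma, and any \varepsilon > 0,
% there exist N and parameters a_i, b_i \in \mathbb{R}, w_i \in \mathbb{R}^n with
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon
```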