crimson1206 t1_j8ts496 wrote
Reply to comment by danja in Physics-Informed Neural Networks by vadhavaniyafaijan
Well, how is it relevant then? I'm happy to be corrected, but I don't see how it's relevant to this post.
It just tells you that there is a well-approximating NN for any given function. It doesn't tell you how to find such a NN, and it doesn't tell you anything about the extrapolation capabilities of a NN that approximates well on just a subdomain (which is what this post is mainly about) either.
In practice, the universal approximation theorem just gives a justification for why using NNs as function approximators could be a reasonable thing to do. That's already pretty much the extent of its relevance to practical issues, though.
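To illustrate the "existence, not construction" point: here's a toy numpy sketch using random tanh features as a hypothetical stand-in for a one-hidden-layer NN (only the output weights are fit, by least squares; all sizes and constants are made up). A wider "network" approximates the target better on the training domain, which is all the theorem speaks to — it says nothing about what happens outside that domain.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_features(n_feat, x, y):
    # Random tanh features, output weights fit by least squares.
    # Hypothetical stand-in for "there exists a well-approximating NN".
    w = rng.normal(0.0, 5.0, n_feat)
    b = rng.uniform(-5.0, 5.0, n_feat)
    Phi = np.tanh(np.outer(x, w) + b)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda xq: np.tanh(np.outer(xq, w) + b) @ c

# Target function on the training domain [-1, 1].
x = np.linspace(-1.0, 1.0, 200)
y = np.abs(x)

small = fit_random_features(10, x, y)    # narrow "network"
big = fit_random_features(400, x, y)     # wide "network"
err_small = np.sqrt(np.mean((small(x) - y) ** 2))
err_big = np.sqrt(np.mean((big(x) - y) ** 2))
print(err_small, err_big)  # in-domain error shrinks with width
```

Note that both errors are measured on the training domain only; the theorem makes no claim about points outside it.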
crimson1206 t1_j8njti4 wrote
Reply to comment by danja in Physics-Informed Neural Networks by vadhavaniyafaijan
By normal NN I'm referring to a standard MLP without anything fancy going on. I.e. input -> hidden layers & activations -> output.
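For concreteness, a minimal sketch of that standard MLP in numpy (hypothetical layer sizes, untrained random weights — just the input -> hidden & activation -> output structure):

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer of 16 units, scalar input and output (made-up sizes).
W1 = rng.normal(size=(16, 1))
b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16))
b2 = np.zeros(1)

def mlp(x):
    h = np.tanh(W1 @ x + b1)  # hidden layer & activation
    return W2 @ h + b2        # linear output

y = mlp(np.array([0.5]))
print(y.shape)  # (1,)
```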
The universal approximation theorem isn't relevant here. Obviously a NN could fit this function given training data. This post is about the lack of extrapolation capability, though, and how PINNs improve it.
crimson1206 t1_j8mb0gu wrote
Reply to comment by humpeldumpel in Physics-Informed Neural Networks by vadhavaniyafaijan
The normal NN will not learn this function even with more steps. It's a bit strange that the graphic didn't show more steps, but it doesn't really change the results.
crimson1206 t1_izxfkya wrote
Reply to comment by MightyDuck35 in Getting started with Deep Learning by MightyDuck35
To correct the other comment: all the examples they mentioned are from linear algebra, though calculus is important too.
To understand deep learning you'll need linear algebra and multivariable calculus. For some parts of deep learning you'll also need probability & statistics knowledge.
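A small example of where the two meet: the gradient of a linear layer's squared-error loss is a matrix-calculus identity, and you can sanity-check it entry by entry with one-variable calculus (finite differences). This is a hypothetical toy, not tied to any particular library:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
y = rng.normal(size=3)

def loss(W):
    r = W @ x - y
    return 0.5 * r @ r

# Matrix calculus: dL/dW = (Wx - y) x^T
grad = np.outer(W @ x - y, x)

# Finite-difference check, one entry at a time.
eps = 1e-6
fd = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        fd[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.max(np.abs(grad - fd)))  # tiny
```

Backpropagation is essentially this identity chained across layers, which is why both subjects are prerequisites.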
crimson1206 t1_j8w4qnt wrote
Reply to comment by Oceanboi in Physics-Informed Neural Networks by vadhavaniyafaijan
The steps really don’t matter. The normal NN will not learn to extrapolate better with more steps.
This post is precisely showing that the PINN generalizes better than the normal NN.
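A rough sketch of why the physics term helps, using random tanh features as a stand-in for the NN so both fits stay in closed form (everything here is a hypothetical toy, not the post's actual setup): the plain least-squares fit only sees data on [0, π], while the "PINN"-style fit additionally penalizes the ODE residual u'' + u = 0 on collocation points that cover the extrapolation region [π, 2π].

```python
import numpy as np

rng = np.random.default_rng(0)

# Random tanh features phi_j(x) = tanh(w_j x + b_j); the model
# u(x) = sum_j c_j phi_j(x) is linear in c, so both fits have a
# closed form. Hypothetical stand-in for a trained NN.
n_feat = 200
w = rng.normal(0.0, 2.0, n_feat)
b = rng.normal(0.0, 2.0, n_feat)

def feats(x):
    return np.tanh(np.outer(x, w) + b)

def feats_dd(x):
    # Second derivative of tanh(w x + b) with respect to x.
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

# Training data only on [0, pi]; the target sin obeys u'' + u = 0.
x_tr = np.linspace(0.0, np.pi, 50)
y_tr = np.sin(x_tr)
Phi = feats(x_tr)
eye = 1e-8 * np.eye(n_feat)  # tiny ridge for numerical stability

# Plain least-squares fit (stand-in for the normal NN).
c_plain = np.linalg.solve(Phi.T @ Phi + eye, Phi.T @ y_tr)

# "PINN"-style fit: also penalize the residual u'' + u on
# collocation points covering the extrapolation region.
x_col = np.linspace(0.0, 2.0 * np.pi, 200)
R = feats_dd(x_col) + feats(x_col)  # residual operator on features
c_pinn = np.linalg.solve(Phi.T @ Phi + R.T @ R + eye, Phi.T @ y_tr)

# Compare extrapolation on [pi, 2 pi], where there was no data.
x_te = np.linspace(np.pi, 2.0 * np.pi, 100)
y_te = np.sin(x_te)
err_plain = np.sqrt(np.mean((feats(x_te) @ c_plain - y_te) ** 2))
err_pinn = np.sqrt(np.mean((feats(x_te) @ c_pinn - y_te) ** 2))
print(err_plain, err_pinn)
```

The physics penalty restricts the fit toward functions satisfying the ODE everywhere, so the data on [0, π] is enough to pin down a sin-like solution on [π, 2π]; the plain fit has no such constraint and drifts off immediately.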