Comments

The-Last-Lion-Turtle t1_j89jm9s wrote

The purpose of a deep network is to approximate complex nonlinear functions. With ReLU, the network is piecewise linear. Imagine slicing a space with many planes: locally it's flat, but zooming out it has a very complex shape, similar to building a 3D model out of triangles. Each layer adds another linear deformation and another slice of the space.
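
You can check the piecewise-linear claim numerically. A minimal NumPy sketch (random untrained weights; the layer sizes and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 1-D -> 1-D MLP with one hidden ReLU layer and random weights.
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu(x):
    return np.maximum(x, 0.0)

def f(x):
    # x: array of shape (n,)
    h = relu(x[:, None] @ W1.T + b1)   # hidden activations
    return (h @ W2.T + b2).ravel()

# Estimate the slope at many points with finite differences: only a
# handful of distinct values appear, i.e. the function is built from
# a small number of flat linear pieces.
x = np.linspace(-3, 3, 2001)
slopes = np.diff(f(x)) / np.diff(x)
print(np.unique(np.round(slopes, 6)))
```

More hidden units and more layers slice the input space into exponentially more pieces, which is the zoomed-out complexity described above.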

Read the ResNet paper. It's a great explanation of both why depth matters for performance and how depth causes issues for training. Its solution, residual connections, is central to every deep learning architecture since.
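
The core idea of a residual connection is just `out = x + F(x)`. A minimal sketch in PyTorch (fully-connected blocks here rather than the paper's convolutional ones; the dimensions are arbitrary):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """out = x + F(x): the identity shortcut lets gradients flow
    straight through, which is the ResNet fix for training deep nets."""
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.f(x)  # each block only learns a residual correction

# Stacking many blocks stays trainable because every block starts
# close to the identity function.
net = nn.Sequential(*[ResidualBlock(64) for _ in range(20)])
x = torch.randn(32, 64)
print(net(x).shape)  # torch.Size([32, 64])
```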

big_ol_tender t1_j89ruh6 wrote

If you haven’t already, I’d suggest the 3blue1brown series on neural networks on YouTube. It is the easiest introduction I’ve come across.

_Redone OP t1_j89s65u wrote

I have already, but I think my question is a bit deeper; I didn't find the answer in that video.

Dylan_TMB t1_j8a0kuw wrote

You might be looking for something deeper when there is nothing there.

Dylan_TMB t1_j8a0hrj wrote

If you want to be someone who understands it very deeply, get REALLY good at linear algebra and build a REALLY good understanding of multivariate calculus.

The not-so-deep answer to your question is that your current understanding is right. You have a bunch of functions that take multiple inputs and spit out one output, and that output is combined with other outputs and fed into further functions. Each function has parameters that can vary, which changes the output. When you train, you give the model a bunch of examples that in real life you know (hope) are related. The model learns parameters that map input to output.

That's all that's happening.
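
To make that concrete, here's a minimal PyTorch sketch of exactly that loop; the target function, network size, and hyperparameters are all arbitrary choices for illustration:

```python
import torch
import torch.nn as nn

# A stack of "functions that take multiple inputs and spit out one
# output", combined layer by layer.
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(256, 3)                 # a bunch of example inputs...
y = X.sum(dim=1, keepdim=True) ** 2     # ...related to their outputs

for step in range(2000):
    loss = ((model(X) - y) ** 2).mean()  # how wrong the current parameters are
    opt.zero_grad()
    loss.backward()                      # multivariate calculus: gradients
    opt.step()                           # nudge the parameters downhill

print(loss.item())  # far smaller than at step 0: parameters now map input to output
```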
