Submitted by ArthurLCTTheCool t3_10lsw4c in deeplearning

Pretty much the title: why do we add the bias instead of subtracting it?
Also, when I watched 3Blue1Brown's video about neural networks, he said that you subtract the bias, but other sources tell me or explain that you simply add the bias to the dot product instead of subtracting it.

//Newbie

13

Comments


nibbajenkem t1_j5yuece wrote

Doesn't matter. The bias can be negative if that is what the model learns
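A minimal sketch of this point (my own example, not from the thread): fit a 1-D linear model y = w*x + b by gradient descent on data generated with a negative offset. The bias is always *added* in the model, yet the learned value simply comes out negative.

```python
# Fit y = w*x + b with plain gradient descent on mean squared error.
# Training data follows y = 2*x - 3, so the *added* bias b learns -3.

def fit(xs, ys, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of mean squared error w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x - 3 for x in xs]     # true weight 2, true bias -3
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))  # prints: 2.0 -3.0
```

So "add" vs "subtract" only flips the sign of the stored parameter; the model can represent exactly the same functions either way.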

16

mrdevlar t1_j5z7mnn wrote

They are the same thing.

> 1 + (-1) = 0

> 1 - 1 = 0

6

suflaj t1_j5zlq6k wrote

Aside from what others have mentioned, let's assume that we don't have a symmetrical situation, i.e. that the range of the function we're learning, as well as the domain of weights and biases, is [0, ∞). Then it makes more sense to add the bias than to subtract it, as it will lead to smaller weights and less chance of overflow or of the gradients exploding.

It makes more sense to subtract the biases if, in the scenario described above, you want a more expressive layer, but with less numerical stability. This is because a subtractive bias allows the weights to be of greater magnitude, which in turn gives you more effective range for the weights.

But note that neural networks are not trained with integer weights, and some libraries don't even have autograd for integers.

3

sEi_ t1_j6264vy wrote

If you add a negative number then you are adding a subtraction, if you get my point.

Afaik, that's why whether you say adding or subtracting the bias, it doesn't matter.

1

Blasket_Basket t1_j62eegq wrote

All subtraction is just addition of a negative number. The model will learn a value for the bias parameter, which can be anywhere between negative infinity and infinity.

1

trajo123 t1_j62yj1s wrote

Convention coming from linear algebra: Ax + B, where B is a vector of real numbers, positive or negative.

What makes you feel that subtracting is in any way more meaningful?
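To illustrate the Ax + B convention (a made-up numerical example, not from the thread): since the bias vector can hold entries of either sign, "subtracting" a bias is just adding its negation, and the two parameterizations produce identical outputs.

```python
# Show that A @ x + b with a sign-free b subsumes any "subtractive" bias.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
b = np.array([-0.5, 0.25])             # learned bias, any sign allowed

added = A @ x + b                      # standard additive convention
subtracted = A @ x - (-b)              # "subtracting" the negated bias
print(np.allclose(added, subtracted))  # prints: True
```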

1