andrew21w t1_jdcb0vo wrote

Why does nobody use polynomials as activation functions?

My naive impression is that polynomials would be ideal, since they can approximate nearly any function you like. So they seem perfect....

But why aren't they used?
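
For concreteness, here's roughly what I have in mind (a hypothetical `PolyAct` module sketched in PyTorch; the class name and learnable-coefficient setup are my own illustration, not from any library):

```python
# Hypothetical polynomial activation: a learnable cubic applied elementwise.
import torch
import torch.nn as nn

class PolyAct(nn.Module):
    def __init__(self, degree: int = 3):
        super().__init__()
        # one learnable coefficient per power of x
        self.coeffs = nn.Parameter(torch.randn(degree + 1) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # evaluate c0 + c1*x + c2*x^2 + ... elementwise
        return sum(c * x**i for i, c in enumerate(self.coeffs))

# drop-in replacement for ReLU/tanh in a small MLP
net = nn.Sequential(nn.Linear(1, 32), PolyAct(), nn.Linear(32, 1))
```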

2

underPanther t1_jddpryu wrote

Another reason: wide single-layer MLPs with polynomial activations cannot be universal approximators. But lots of other activations do give universality with a single hidden layer.
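
To see why width doesn't help, here's a quick numerical sketch (assuming a toy z² activation; numpy, illustrative only). The output of a single-hidden-layer net with this activation is exactly a quadratic in x no matter the width, so functions like |x| stay out of reach:

```python
# With activation z**2, a single-hidden-layer net of ANY width computes
# an exact quadratic in x, so it can never be a universal approximator.
import numpy as np

rng = np.random.default_rng(0)
width = 10_000
W, b = rng.normal(size=width), rng.normal(size=width)   # hidden layer
v = rng.normal(size=width) / width                      # output weights

x = np.linspace(-3, 3, 200)
y = (v * (np.outer(x, W) + b) ** 2).sum(axis=1)         # network output

# a degree-2 polynomial reproduces the network output exactly
coeffs = np.polyfit(x, y, deg=2)
print(np.abs(np.polyval(coeffs, x) - y).max())          # ~machine precision
```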

The technical reason behind this is that discriminatory activations give universality with a single hidden layer (Cybenko 1989 is the reference).
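
For reference, the definition from Cybenko's paper (paraphrased): an activation σ is discriminatory if the only signed measure it "can't see" is the zero measure:

```latex
% \sigma is discriminatory if, for every finite signed regular Borel
% measure \mu on [0,1]^n,
\int_{[0,1]^n} \sigma(w^\top x + b)\, d\mu(x) = 0
\quad \forall\, w \in \mathbb{R}^n,\ b \in \mathbb{R}
\;\implies\; \mu = 0.
```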

But polynomials are not discriminatory (https://math.stackexchange.com/questions/3216437/non-trivial-examples-of-non-discriminatory-functions), so they fail to meet this criterion.

Also, if you craft a multilayer perceptron with polynomial activations, does this offer any benefit over fitting a Taylor series directly?
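
For comparison, the direct route is just polynomial least squares, e.g. (numpy sketch on toy data of my own choosing):

```python
# Fitting a degree-5 polynomial to data directly by least squares,
# no network needed.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 100)
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.shape)  # toy data

coeffs = np.polyfit(x, y, deg=5)   # global polynomial fit
y_hat = np.polyval(coeffs, x)
print(f"RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")
```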

2

andrew21w t1_jde4ayx wrote

The thread you sent me says that polynomials are non-discriminatory.

Are there other kinds of functions that are non-discriminatory?

1