ThisIsMyStonerAcount t1_iz9c2oo wrote
Reply to comment by Blutorangensaft in [D] If you had to pick 10-20 significant papers that summarize the research trajectory of AI from the past 100 years what would they be by versaceblues
Minsky's work (with Papert, in the 1969 book *Perceptrons*) was very relevant because at the time the perceptron was the state of the art, and a single-layer perceptron can't compute XOR. That's why his work is often credited with triggering the first AI winter.
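[Editor's note: a minimal sketch of the claim above. XOR's four input/output constraints are jointly unsatisfiable for any single linear threshold unit, so a coarse brute-force sweep over weights and bias (grid values are arbitrary, chosen for illustration) finds no solution.]

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(w1, w2, b, x1, x2):
    # Single linear threshold unit: fire iff the weighted sum is positive.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

grid = [i / 2 for i in range(-10, 11)]  # -5.0 .. 5.0 in steps of 0.5
solutions = [
    (w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
    if all(perceptron(w1, w2, b, x1, x2) == y for (x1, x2), y in XOR.items())
]
print(solutions)  # [] -- no weight/bias setting reproduces XOR
```

The grid isn't a proof, but scaling weights doesn't change a threshold unit's decision boundary, so the empty result reflects the underlying fact: XOR is not linearly separable.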
Blutorangensaft t1_iz9pl46 wrote
I get that, but I find it a little hard to believe that nobody immediately pointed out that a nonlinearity would solve the issue. Or is that just hindsight bias, thinking it was easier because of what we know today?
ThisIsMyStonerAcount t1_iza1qzy wrote
What nonlinearity would solve the issue? The usual ones we use today certainly wouldn't. Are you thinking of a 2nd-order polynomial? I'm not sure that's a generally applicable activation, what with it being non-monotonic and all.
(Or do you mean a hidden layer? If so: yeah, that's absolutely hindsight bias).
Blutorangensaft t1_iza9zrt wrote
I see. I meant the combination of a nonlinear activation function and another hidden layer. Was curious what people thought, thanks for your comment.
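[Editor's note: a sketch of the combination described above, one hidden layer plus a nonlinearity. The weights here are hand-picked for illustration, not learned; with ReLU, two hidden units suffice via XOR(x1, x2) = relu(x1 + x2) - 2·relu(x1 + x2 - 1).]

```python
def relu(z):
    return max(0.0, z)

def xor_mlp(x1, x2):
    h1 = relu(x1 + x2)      # fires when at least one input is on
    h2 = relu(x1 + x2 - 1)  # fires only when both inputs are on
    return h1 - 2 * h2      # output layer cancels the "both on" case

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, xor_mlp(x1, x2))  # prints 0, 1, 1, 0
```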
chaosmosis t1_izawfsz wrote
Non-monotonic activation functions can allow a single layer to solve XOR, but they take forever to converge.
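[Editor's note: a sketch of the single-layer claim above, with hand-set rather than trained parameters. A Gaussian bump centered at 1 is non-monotonic, so one unit computing x1 + x2 peaks exactly when one input is on; thresholding its output yields XOR. The bump/threshold choices here are illustrative assumptions.]

```python
import math

def bump(z):
    # Non-monotonic activation: rises then falls, peaking at z = 1.
    return math.exp(-(z - 1) ** 2)

def xor_single_unit(x1, x2, threshold=0.5):
    z = x1 + x2  # one weighted sum (both weights 1, zero bias): a single layer
    return 1 if bump(z) > threshold else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, xor_single_unit(x1, x2))  # prints 0, 1, 1, 0
```

bump(0) = bump(2) ≈ 0.37 falls below the 0.5 threshold, while bump(1) = 1 clears it, which is exactly the XOR pattern. The convergence caveat in the comment applies when such parameters are learned by gradient descent rather than set by hand.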