
currentscurrents t1_jbwgjte wrote

Nobody actually has a good solution to adversarial attacks yet.

The problem is not just this specific strategy. It's that, if you can feed arbitrary inputs to a neural network and observe its outputs, you can run an optimization process against it to find minimally-disruptive inputs that make it fail. You can fool an image classifier by imperceptibly changing the image in just the right ways.
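As a rough sketch of what that optimization looks like, here's an FGSM-style perturbation in PyTorch. Everything here is hypothetical: `model` stands in for any image classifier that returns class logits, `x` for a batch of images in [0, 1], and `y` for the true labels.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    # Assumes `model` maps image batches to class logits and its weights are frozen.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()  # gradient of the loss w.r.t. the input pixels
    with torch.no_grad():
        # Nudge every pixel slightly in the direction that increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # stay in the valid pixel range
    return x_adv.detach()
```

With a small enough `epsilon` the perturbed image looks identical to a human but can flip the classifier's prediction.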

It's possible this is just a fundamental vulnerability of neural networks. Maybe the brain is vulnerable to this too, but it's locked inside your skull, so it's hard to run an optimizer against it. Nobody knows; more research is needed.

15

duboispourlhiver t1_jbwmh2g wrote

We are usually attacking neural networks whose training is finished; the attack works because the weights are fixed. This is obvious, but I'd like to underline that biological neural networks are never fixed.
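A hedged sketch of that point, reusing the hypothetical `model`, `x_adv`, and `y` from the sketch above: the perturbation only "knows" the snapshot of weights it was optimized against, so you can measure whether it still fools the network after the weights have been updated.

```python
import torch

def fooled_fraction(model, x_adv, y):
    # Fraction of the perturbed batch that the current weights misclassify.
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) != y).float().mean().item()

# fooled_fraction(model, x_adv, y)  -> measured on the snapshot the attack targeted
# ... any further weight updates to `model` ...
# fooled_fraction(model, x_adv, y)  -> may change once the weights have moved
```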

8

ApparatusCerebri t1_jbwwh5j wrote

Our visual system does use a couple of neat tricks to process what's around us, but it too is open to some edge cases, hence optical illusions. Other than that, in our case, evolution is the mother of all adversarial training :D

2