
NotARedditUser3 t1_jbwf0ja wrote

The difference is that they'll be able to fine-tune the model a bit to deal with this, or add a few lines of code for it. It's an easily defeated issue.
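
A rough sketch of the kind of patch being described: take the positions where the exploit worked and fine-tune the existing network on them for a few steps. This is only illustrative; `model`, `exploit_batches` (an iterable of `(positions, correct_moves)` tensor batches), and the hyperparameters are all hypothetical names, not KataGo's actual training code.

```python
import torch
import torch.nn.functional as F

def patch_model(model, exploit_batches, lr=1e-4, epochs=3):
    """Fine-tune an existing policy network on positions where the exploit succeeded."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for positions, correct_moves in exploit_batches:
            optimizer.zero_grad()
            # Push the network toward the move that refutes the exploit.
            loss = F.cross_entropy(model(positions), correct_moves)
            loss.backward()
            optimizer.step()
    return model
```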

The human beat it this time.... After 7 years.

But after this... it's not like the humans improve. That vulnerability gets stamped out and that's it.

3

currentscurrents t1_jbwgjte wrote

Nobody actually has a good solution to adversarial attacks yet.

The problem is not just this specific strategy. It's that, if you can feed arbitrary inputs to a neural network and observe its outputs, you can run an optimization process against it to find minimally-disruptive inputs that make it fail. You can fool an image classifier by imperceptibly changing the image in just the right ways.
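
A minimal sketch of what that optimization looks like in practice, using the fast gradient sign method, one of the simplest gradient-based attacks. It assumes a differentiable PyTorch classifier; `model`, `image` (a batched tensor in [0, 1]), `label`, and `epsilon` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb `image` slightly in the direction that maximizes the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction of increasing loss:
    # an imperceptible change that can flip the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Stronger attacks just iterate this kind of step under a perturbation budget, which is why defenses that patch one specific exploit don't close the underlying hole.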

It's possible this is just a fundamental vulnerability of neural networks. Maybe the brain is vulnerable to this too, but it's locked inside your skull, so it's hard to run an optimizer against it. Nobody knows; more research is needed.

15

duboispourlhiver t1_jbwmh2g wrote

We are often using neural networks whose training is finished; the attack works precisely because the weights are fixed. This is obvious, but I'd like to underline that biological neural networks are never fixed.

8

ApparatusCerebri t1_jbwwh5j wrote

Our visual system does use a couple of neat tricks to process what's around us, but those too are open to edge cases, hence optical illusions. Other than that, in our case, evolution is the mother of all adversarial training :D

2