Comments


currentscurrents t1_jbwfbdd wrote

TL;DR they trained an adversarial attack against AlphaGo. They used an optimizer to find scenarios where the network performed poorly. Then a human was able to replicate these scenarios in a real game against the AI.

The headline is kinda BS imo; it's a stretch to say it was beaten by a human, since the human was just following the instructions from the optimizer. But adversarial attacks are a serious threat to deploying neural networks for anything important; we really do need to find a way to beat them.

72

serge_cell t1_jbwt0s9 wrote

It's a question of training. AlphaGo was not trained against adversarial attacks. If it had been, the whole family of attacks wouldn't work, and new adversarial training would be an order of magnitude more difficult. It's shield and sword all over again.

6

Excellent_Dirt_7504 t1_jbwwi8v wrote

If you train against one attack, you remain vulnerable to another. There is no evidence of a defense that is robust to any adversarial attack.

7

suflaj t1_jbx9h57 wrote

But there is evidence that you can defend by collecting as many adversarial attacks as possible and training against them. Ultimately, the best defense is generalization. We know it exists, and we know it is achievable; we just don't know HOW it's achievable (for non-trivial problems).
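
To make that concrete, here's a minimal sketch of adversarial training, assuming a PyTorch classifier `model`, a loss `criterion`, and an `optimizer`; the FGSM-style perturbation and all the names here are illustrative, not taken from the paper:

```python
import torch

def fgsm_perturb(model, criterion, x, y, eps=0.03):
    """One gradient-sign step: the classic FGSM adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).detach()

def adversarial_training_step(model, criterion, optimizer, x, y, eps=0.03):
    """Train on the clean batch and its adversarial counterpart together."""
    x_adv = fgsm_perturb(model, criterion, x, y, eps)
    optimizer.zero_grad()
    loss = criterion(model(x), y) + criterion(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Of course, this only hardens the model against the attacks you generate during training, which is exactly the limitation being discussed above.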

6

OptimizedGarbage t1_jbxticv wrote

It kinda was, though? It was trained using self-play, so the agent it was playing against was adversarially searching for exploitable weaknesses. They actually cite this as one of the reasons for its success in the paper.

1

ertgbnm t1_jbyocgi wrote

Isn't AlphaGo trained against itself? I would consider that adversarial training.

1

serge_cell t1_jc1to7o wrote

There was a paper about this. The finding was a specific set of positions that were not encountered, or were poorly represented, during self-play. Fully trained AlphaGo failed on those positions. However, once they were explicitly added to the training set, the problem was fixed and AlphaGo was able to play them well. This adversarial training seems to be just an automatic way of finding those positions.

PS: the fitness landscape is not convex; it is separated by hills and valleys. Self-play may have trouble reaching all the important states.

1

NotARedditUser3 t1_jbwf0ja wrote

The difference is, they'll be able to easily train the model a bit further to deal with this, or add a few lines of code for it. It's an easily defeated issue.

The human beat it this time... after 7 years.

But after this... it's not like the humans improve. That vulnerability gets stamped out, and that's it.

3

currentscurrents t1_jbwgjte wrote

Nobody actually has a good solution to adversarial attacks yet.

The problem is not just this specific strategy. It's that if you can feed arbitrary inputs to a neural network and observe its outputs, you can run an optimization process against it to find minimally disruptive inputs that make it fail. You can fool an image classifier by imperceptibly changing the image in just the right ways.

It's possible this is just a fundamental vulnerability of neural networks. Maybe the brain is vulnerable to this too, but it's locked inside your skull so it's hard to run an optimizer against it. Nobody knows, more research is needed.
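
For the curious, here's roughly what that optimization looks like against an image classifier. This is a generic PGD-style sketch in PyTorch, not the method from the paper; `model` and `criterion` are assumed to exist:

```python
import torch

def pgd_attack(model, criterion, x, y, eps=8/255, step=2/255, iters=10):
    """Find a small perturbation of x (within an eps-ball) that maximizes the loss."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = criterion(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                  # step uphill on the loss
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # stay imperceptibly close to x
            x_adv = x_adv.clamp(0, 1)                           # keep pixel values valid
    return x_adv.detach()
```

The point is that the perturbation is constrained to be tiny (the eps-ball), yet the search reliably finds one that flips the prediction.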

15

duboispourlhiver t1_jbwmh2g wrote

We often use neural networks whose training is finished; the weights must be fixed for this attack to work. This is obvious, but I'd like to underline that biological neural networks are never fixed.

8

ApparatusCerebri t1_jbwwh5j wrote

Our visual system does use a couple of neat tricks to process what's around us, but it too is open to some edge cases, hence optical illusions. Other than that, in our case, evolution is the mother of all adversarial training :D

2

Jean-Porte t1_jc1axno wrote

-Machine finds a strategy to beat machine

-Human implements the strategy and beats machine

-Therefore, human beats machine

3

GraydientAI t1_jbweagi wrote

The difference is, the human gets tired and the AI can play 10,000,000 games simultaneously nonstop

Humans have zero chance haha

1

Curious_Tiger_9527 t1_jbwizur wrote

Is it really intelligence, or just the computer knowing all the best possible moves?

0

duboispourlhiver t1_jbwmn0u wrote

The computer doesn't compute all the moves and doesn't know the exact, mathematically best move. It uses digital neurons to infer rules from a huge number of games and to find very good moves. I call this intelligence (artificial intelligence).

11

Nukemouse t1_jbwd60k wrote

I thought there was a computer that could just compute all possible Go board states? Was that not the case?

−5

schwah t1_jbwpmcd wrote

There are ~10^170 valid board states for Go, and roughly 10^80 atoms in the observable universe. So even with a universe-sized computer, you still wouldn't come close to having the compute power for that.

AlphaGo uses neural nets to estimate the utility of board states and a depth-limited search to find the best move.
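
A quick back-of-the-envelope check in Python: 3^361 upper-bounds the number of board configurations, since each of the 361 points is empty, black, or white (the ~10^170 figure counts only the legal ones):

```python
from math import log10

upper_bound = 3 ** 361                 # each of 19*19 = 361 points: empty, black, or white
print(round(log10(upper_bound)))       # ~172; legal positions are ~10^170
print(round(log10(upper_bound)) - 80)  # ~92 orders of magnitude more than ~10^80 atoms
```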

8

SuperNovaEmber t1_jbwxgu5 wrote

Wow, that understanding is deeply flawed. In computer systems we have compression and instancing and other tricks, but that's all beside the following point.

Atoms, for instance: how many different types are possible? Let's even ignore isotopes!

It's just like calculating a base. A byte can have 256 values. You put 4 bytes (32 bits) together and that's 4.3 billion states, or 256^4 (base 256 with 4 bytes), or 2^32 (binary, base 2 with 32 bits). So instead of 256 values we've got 118 unique atoms, and instead of bytes we've got atoms, 10^80 of them.

Simple, right? 118^(10^80) combinations possible. Evaluate the highest exponent first, mind you; otherwise you will only get 1,658 digits instead of the actual result, which is not even remotely close. Not 80 digits. Not 170 digits. Not even 1,658.

That's 207188200730612538547439527925963726569493435639287375683771302641055893615162425 digits. Again, that is not the answer, just the number of digits in the answer.
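
If anyone wants to sanity-check that digit count: the number of digits of 118^(10^80) is floor(10^80 · log10(118)) + 1, which a few lines of Python can evaluate:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100                # enough precision for an ~81-digit integer part
digits = int(Decimal(10) ** 80 * Decimal(118).log10()) + 1
print(digits)                          # ~2.0718820073e80, an 81-digit number;
                                       # its leading digits match the figure above
```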

The universe has zero problems computing Go, bro.

And that's nothing compared to all the possible positions all the possible atoms could occupy over all extents of space (and) time.

That's a calculation I'll leave up to you downvoters, gl hf!

−5

schwah t1_jby3w3r wrote

Okay fair enough, it's not as simple as 10^170 > 10^80.

But I don't think your math makes much sense either. You can't just count the number of isotopes - nearly all of the universe is hydrogen and helium. And even with compression, it is going to take a lot more than 1 bit to represent a board state. Memory (at least with today's technology) requires billions of atoms per bit. And that is only memory - the computational substrate is also needed. And obviously we are ignoring some pretty significant engineering challenges of a computer that size, like how to deal with gravitational collapse.

I'll grant that it's potentially possible that you could brute force Go with a Universe-o-tron (if you ignore the practical limitations of physics), but it's definitely not a slam dunk like you're implying.

4

SuperNovaEmber t1_jbymjai wrote

Oh dear. You missed the most important part, which I did not mention. I figured you knew?

Every empty point in space is theoretically capable of storing an atom, give or take.

Most of space is empty. Now do the calculation of all the observable atoms 'brute forced' into all possible voids.

That's really the point, friend. You're talking about the combinatorics of one thing and then falsely equating it with simply the number of atoms?? Not the possible combinations of those atoms, which far exceed it?? It's not even astronomically close, bud.

In theory, a device around the size of a deck of cards contains more than enough energy to compute the game to its end.

The "observable" universe operates at an insanely high frequency. Consider the edge of the universe is over 10 orders of magnitude closer than the Planck length, using meters of course.

We're 10 billion times closer to the edge of the universe than the fabric of reality.

−6

schwah t1_jc0jztm wrote

No, you are confused. Of course the universe has many more potential states than a Go board... A Go board is just a 19x19 grid. But the number of possible states of matter in the universe is not relevant. There is still not nearly enough matter to represent every Go state simultaneously in memory, which is what would be required for an exhaustive search of the game tree.

6