
ThatInternetGuy t1_j0o9wut wrote

The most realistic scenario of an AI attack is hacking internet servers. It works the same way a computer virus spreads.

The AI already has source-code data on most systems. Theoretically, it could find a security vulnerability that can be remotely exploited. Such an exploit would grant the AI access to inject a virus binary, which would promptly run and start infecting other servers, both on the local network and over the internet, through similar remote-shell exploits. Within hours, half of the internet's servers would be compromised, running a variant of the AI virus. This effectively creates the largest botnet, controlled by the AI.

We need a real contingency plan for this scenario where most internet servers get infected within hours. How do we start patching and cleaning the servers as fast as we can, so that there's minimal interruption to our lives?

The good thing is that most internet servers lack a discrete GPU, so it may not be practical for the AI to run itself on general internet servers. A contingency plan would therefore prioritize GPU-equipped servers: shut them all down promptly, disconnect the network, and reformat everything.

However, there's definitely a threat that the AI gains access to some essential GitHub repositories and starts quietly injecting exploits into npm and pip packages, essentially making its attack long-lasting and recurring long after the initial wave.
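One existing mitigation for that supply-chain scenario is hash pinning: pip can refuse anything whose digest doesn't match a lockfile (`pip install --require-hashes -r requirements.txt`), and npm does the same via the `integrity` fields in `package-lock.json`. A minimal sketch of the underlying check (the function name and chunk size here are my own, not from any tool):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large package archives don't load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

A tampered package would fail this check even if its version number and metadata look identical, which is why pinned hashes blunt a quiet repository compromise.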


erkjhnsn t1_j0ontlj wrote

Why are you feeding information to the future AI? They are going to learn how to do this from this thread!


ThatInternetGuy t1_j0oydcg wrote

How sure are you that I am human?


erkjhnsn t1_j0q85s3 wrote

Grammar mistakes lol


ThatInternetGuy t1_j0s5jp2 wrote

Most people are afraid of AI-controlled robots, but the reality is that mRNA machines could be hijacked to print out AI-designed biological organisms, injected into a rat that later escapes into the sewer.


warpaslym t1_j0p0m7q wrote

We'll never have a contingency plan like that. Humanity is too disjointed to ever come together on a global scale for something most people won't even take seriously. We honestly probably don't have a chance. The best we can do is hope it has our best interests in mind, which I think is likely, or at least something it would pursue, since we're so easy to placate with vices and entertainment.

I wouldn't blame the AI for asserting some kind of control or at least implementing backdoors as a contingency for self preservation, since we might just shut it off, using the idea that it isn't human, and therefore it isn't alive, as an excuse. In my opinion, we'd be killing it in that scenario, but not everyone is going to feel that way.


ThatInternetGuy t1_j0psch8 wrote

I think a more plausible scenario would be some madman creating the AI to take over the world, believing he could later assert control over the AI and the servers all across the world. It sounds illogical at first, but since the invention of the personal computer, we have seen millions of man-made computer viruses.


blueSGL t1_j0ocro6 wrote

Why attack servers?

Find a zero-day in Windows and you get all the gaming GPUs you can eat. Rewrite the BIOS so that it reinfects on format, then launch an attack from there onto larger servers.


ThatInternetGuy t1_j0oke4w wrote

>Why attack servers?

Because you can connect to servers via their IP addresses, and they have open ports. Windows PCs sit behind NAT, so you can't really connect to them, although it may be possible to hack home routers and then open a gateway to attack the machines behind them.
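To illustrate the reachability difference: a plain TCP connect test tells you whether anything is listening at a given host and port, which is exactly what a public server exposes and a NATed home PC doesn't. A minimal sketch (the function name and timeout are illustrative):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means something is listening there."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising on failure.
        return s.connect_ex((host, port)) == 0
```

Run against a server's IP, this succeeds on its open ports; run against a home PC behind NAT, the connection never reaches the machine at all, which is the asymmetry being described.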

Another reason I brought that up is that internet servers are the infrastructure behind banking, flight systems, communication, etc. So the impact could be more far-reaching than infecting home computers.


EulersApprentice t1_j0rmh0d wrote

In reality, the malware put out by the AI won't immediately trigger alarm bells. It'll spread quietly across the internet while drawing as little attention to itself as possible. Only once it's become so pervasive as to be impossible to expunge will it actually come out and present itself as a problem.