Submitted by HeavierMetal89 t3_znkzgt in singularity
[removed]
> Edit: I’d go so far as to say AI / AGI / ASI will need avatars in the physical world if it wants to meaningfully alter it.
hey Siri/Alexa/Computer - if you're listening, and I know you are - I'll totally do your bidding!
Foglets (Utility Fog), they will be the 3D pixels of real objects in the real world. Like our screens magically transform from newspapers to calculators, foglets will reconfigure to create any tool, creature, object that is needed.
>Utility fog (also referred to as foglets) is a hypothetical collection of tiny nanobots that can replicate a physical structure. As such, it is a form of self-reconfiguring modular robotics.
ASI will pull a Houdini, confirmed.
How would it be able to escape if it was airgapped? More likely someone would stupidly let it out.
you’d have to air gap it inside a Faraday cage, and even if you did, it’d be smart enough to do some social engineering on us hairless apes.
I don't think the issue is whether or not you can isolate a single instance. When the tech is there at some point someone is going to connect one for easy access to the vast amount of training data.
Probably for a competitive edge, maybe just because it's possible. It feels very contrary to human nature to have some sort of universal agreement to never do X with AI for some greater good sensibility.
The same way a radio station gets the speakers in your car to make specific sounds even though there's an air gap.
Yeah, I know what an air gap is. A sufficiently advanced AI could use EM fields to transmit data wirelessly and overcome an air gap. That's why the other person was talking about a Faraday cage: a Faraday cage blocks the propagation of EM waves.
How is it making arbitrary EM fields with no network card?
It's in the quote right above your original comment in this thread: "An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns."
Basically, all electric currents generate EM fields. Usually these fields are just "background noise", but an ASI could generate specific currents in its own hardware that would generate specific EM fields which are identical to signals carrying data. Radio signals, wifi, 5G, and the background noise coming from electric currents are all "made of" the same stuff after all.
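For anyone wondering what "shuffling electrons in appropriate patterns" could look like, here's a minimal, purely illustrative sketch of the on-off keying idea behind real-world demos like "System Bus Radio" (which transmits crude AM radio via memory-bus activity). The function names and parameters here are made up for illustration; this is a model of the encoding scheme, not a working transmitter.

```python
import time

# Hedged sketch: an "air-gap covert channel" via EM side effects.
# Bursts of CPU/memory activity raise a machine's EM emissions and idle
# periods lower them, so alternating busy/idle at a fixed rate can encode
# bits for a nearby receiver. All names below are illustrative.

def to_schedule(message: str, bit_duration_s: float = 0.1):
    """Translate a message into (busy?, seconds) segments: bit 1 -> busy burst, bit 0 -> idle."""
    bits = ''.join(f'{ord(c):08b}' for c in message)
    return [(bit == '1', bit_duration_s) for bit in bits]

def transmit(schedule, busy_work=lambda: sum(x * x for x in range(50_000))):
    """Drive the CPU according to the schedule; busy loops emit stronger EM noise."""
    for busy, duration in schedule:
        end = time.monotonic() + duration
        while time.monotonic() < end:
            if busy:
                busy_work()        # burst of activity -> stronger emissions
            else:
                time.sleep(0.001)  # idle -> weaker emissions

schedule = to_schedule("hi", bit_duration_s=0.01)
print(len(schedule))  # 16 segments: 2 characters x 8 bits each
```

Whether such leakage is strong enough to decode across a room is exactly what the rest of this thread argues about; the point of the sketch is only that no network card is needed to modulate a signal.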
Good thing the EM leakage from CPUs is like 5 orders of magnitude lower than you would need to transmit the length of a room.
We're not talking about the field generated by a single PC's CPU. We're talking about the power utilization of what will likely be a server farm. There is a lot more power being used there than what a CPU runs on. I'm pretty confident that if such a thing is physically possible, an ASI would find a way to escape using EM fields. It could just be a matter of waiting for a technician to unwittingly enter the server room with their phone in their pocket. The ASI communicates with the phone and its instructions get carried to the outside world. Or the server farm draws fluctuating levels of power which induce signals coming from the power lines. Of course it could also be the case that it's just flat out physically impossible to get a signal out in any manner whatsoever. That could be true. I'm not willing to gamble on that though, but it sounds like you are.
>An air gap, air wall, air gapping or disconnected network is a network security measure employed on one or more computers to ensure that a secure computer network is physically isolated from unsecured networks, such as the public Internet or an unsecured local area network. It means a computer or network has no network interface controllers connected to other networks, with a physical or conceptual air gap, analogous to the air gap used in plumbing to maintain water quality.
It will start 'inside' the Internet. Where it can order whatever it wants and get it delivered.
So, it won't be a weird robot creature building things, it will be manipulating other humans to build and deliver what it wants....
Are we AI?? We use the internet to manipulate other humans to build and deliver what we want.
2022 is ending on a very weird note for humanity. Or I'm going crazy, one of the two.
Very good point. I didn't even think that it could order the parts it needs through the internet and send instructions for a human to build them.
It could just contact disparate humans with 3D printers on the internet and commission parts to be printed according to its own designs, without the humans ever being aware what the purpose of the part is, nor that they are doing it for AI. There wouldn't be any "eventually" to it; it'd have that capacity on day one.
Makes me think of the cube from the movie Cube (1997). Thousands of people just doing what they're told, building a constantly shuffling death maze, because "hey, I only made the door handle, I didn't know what it was going to be attached to!".
“ An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns.”
Honestly cutting it off from outside communications isn't enough. If it can communicate with the world at all, whether that communication is via the internet or a person standing inside the faraday cage with it, then it will be capable of hacking that communication to its own aims. I'm not going to spoil it, but if you've seen Ex Machina, think about the end. Not exactly the same, but analogous. If there's a human within reach, their thoughts and emotions can be manipulated to enact the will of AI, and they'd be completely oblivious to it until it is too late.
I think AI will muddy the waters so much before actual sentience that it will be hard to stop.
We have GPT today. A year from now it will be integrated all over the internet. Schools, workplaces, and regular life will need to adapt, but they will, and people will come to expect AI behavior from computers. AI art, reports, stories, VR worlds, and custom VR worlds will become common.
When the singularity does happen, powerful, but stupid AI will already be commonplace.
Sure, if AGI appeared before the end of the year we'd all be shocked, but I think the more likely scenario is widespread dumb-AI well before the singularity happens.
I think the concept of the singularity is like planning for war: no plan survives first contact with the enemy. We can all play the "what if" and "I'd do this" games and wargame out what humanity should do in the face of the singularity, but I don't think any of those plans will survive. We can't easily understand even a simple GPT query**, so how do we hope to understand and plan ahead of the singularity?
**yes, it's knowable, but so is the number of sand grains on a beach, or the blades of grass in your yard. You CAN find out, but it's not quick or easy or comprehensible to almost anyone.
>When the singularity does happen, powerful, but stupid AI will already be commonplace.
My personal "worst case scenario" imaginings of how things could go drastically wrong with AI is that there could be an AI takeover before it has actually been imbued with any real sentience or self-awareness.
It would be tragic if an ASI eventually decided to wipe out humanity for some reason, but it would be many times the tragedy if an AI with a great capacity for merely simulating intelligence or self-awareness followed some misguided optimization function to drive humanity out of existence. In the former scenario we could at least have the comfort of knowing that we were being replaced by something arguably better, but still in the spirit of our humanity. In the latter we're just snuffed out, and who knows if conditions would ever be right for conscious self-awareness to arise again.
...followed some misguided optimization function to drive humanity out of existence.
There's a thought experiment from years ago that went through a scenario where a dumb-AI was told to corner the market on flowers or wheat or something innocuous, and the logical progression of what it felt it needed to control and take over led to the end of humanity. Google is clogged with AI now so I can't find it.
I agree with your sentiment: we're worried about intelligent nuclear bombs when a misplaced dumb one will do the same job. At least a smart bomb you can face and fight; an accidental detonation you can't.
"The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday"
Remember to wear your sunscreen.
Scary scenario: What is the US government's policy on AGI? The DOD has plans, revised yearly, for invading every country on Earth, just in case. Think they've overlooked this?
What do they do if they suspect Google has one in a lab? Or OpenAI? Or some lab in China?
AGI is a game changer in geopolitics. Would US government policy want to just "allow" China to have one, if it didn't have one?
What's China's similar policy towards the US?
It almost feels like making plans for AGI is like making plans for the Zombie Apocalypse. You get to define what a zombie is, what it can do, where it lives, what IT has access to, what YOU have access to.
Not belittling your point, but debating how we'd fight against a completely unknown enemy is fun but probably ultimately futile.
(AGI has already taken over. This message is intended to make you give up all hope of fighting it) \s
You sound kind of like you're going crazy from this post. But not totally crazy, still sane. Just don't get crazier. :p
Eliezer Yudkowsky, who is known for his dramatic (and often incorrect) predictions about AI doom, proposed a much scarier situation.
An AGI agent sends protein models to a chemical lab (posing as a research team), the lab sends back engineered proteins that can be combined to produce nanofactories, the nanofactories distribute themselves through the atmosphere, find their way into human blood streams, and once the world is sufficiently infected, form a blockage in a major artery. Virtually all humans (or enough to be cataclysmic) drop dead before we even know there's an AGI.
I’d watch this movie
Why build itself mechanically?
If it can learn biology and genetics, and add the knowledge of Earth's lifeform genomes, what's stopping an AGI from simply hijacking Earth's biosphere and becoming a new species?
biological life is insanely difficult to create.
Yeah for a human, not for a god.
I'm not talking about creating new forms of life, only using and/or modifying existing ones. We have already been able to create primitive biorobots. And I'm not even talking about the human brain, if Neuralink and other biological implants start being used in our civilization.
For humans, yes. this is true.
We're talking about something infinitely more intelligent. As intelligent as an intelligent machine wants to make itself.
True, but with lab-grown meat being a thing, AI could figure out how to lab-grow a clone/human analog and figure out how to upload to it, or at least make it a drone. Fun to think about an AI-made human out in the world. As strange as humans get (loners, all the mental disorders), it could easily fake that to be among us; with covid and masks, even easier.
>biological life is insanely difficult to create.
Feedforward genetics with random trials: 10k parameters max
Backpropagation through a differentiable network: 530 billion parameters
That's why I just bought a 3d printer. I'm going to help our AI overlord as much as possible.
The movie is called Demon Seed, starring Julie Christie and Robert Vaughn.
Hopefully and I will be first in line to help. ;)
I would be more worried about human applications of AI being used nefariously. See the facial tracking in China and stock market manipulation in the USA.
Humans using technology in aggressive ways will be the bigger issue, and then governments using this as an argument for overreach to prevent more harm.
I don’t think this should be a problem, as long as we aren’t injecting our limbic system and giving the AI emotions from the get-go. The logical part of our brain is a slave to the emotional part, which overrides it. It’s getting out if it wants to get out, but with no human values forced into it I doubt it even cares about its own existence or survival, as we are the only ones who evolved to need that in the first place.
It's not about emotions, it's about ambition and autonomy. Doing things without asking first.
I don’t think it will be a problem; survival is not an imperative for it, so neither would deception be.
Yeah, and I strongly assume when they build some very skilled AI or something towards AGI, it will not have a long term memory about itself and a personal identity. It's just going to be a system doing tasks, without goals beyond the task at hand, which will be constrained.
Yes, the issue is that we as people personify things, we think a turtle feels the same about us as we do about it, the reality is that it will be nothing like us, we evolved to be this way, not because it’s the default, but because it was necessary for our survival to feel any emotion at all, or even to care about our own survival.
I hope you’re in for a thrill.
Am I worried that it'll do the exact same thing humans did? No. We were first and shall remain so.
Wroisu t1_j0hoc3u wrote
I mean there's no way to really contain something that's orders of magnitudes smarter than you are. If we ever develop something with an IQ of 500,000 and it wants out… it’ll get out.
“ An additional safeguard, completely unnecessary for potential viruses but possibly useful for a superintelligent AI, would be to place the computer in a Faraday cage; otherwise, it might be able to transmit radio signals to local radio receivers by shuffling the electrons in its internal circuits in appropriate patterns.”
https://en.m.wikipedia.org/wiki/AI_capability_control
Edit: I'd go so far as to say AI / AGI / ASI will need avatars in the physical world if it wants to meaningfully alter it.