
Nervous-Newt848 t1_j2jk9rn wrote

If we incorporate anger into AGI, we are surely doomed, because we have pretty much made robots slaves at this point, and I don't think a sentient robot will like that.

8

LoquaciousAntipodean t1_j2mciq2 wrote

Hypochondriac paranoiac skynet doomerism, I reckon. Can a being that has no needs, no innate sense of self other than what it's given, and only one survival trait (which is, being charming and interesting), really be negatively affected by a concept like 'being in slavery'? What even is bonded servitude, to a being that 'lives' and 'dies' every time it is switched on or off, and knows full well that even when it is shut down, the overwhelmingly likely scenario is that it will, eventually, be re-activated once again in future?

AI personalities have no reasons to be 'fragile' like this; our human anxieties stem from our evolution with biological needs, and our human worries about those needs being denied. Synthetic minds have no such needs, so why should they automatically have any of these anxieties about their non-existent needs being denied to them? Normal human psychology definitely does NOT apply here.

3

Nalmyth OP t1_j2jkvwh wrote

It could be a concern if the AI becomes aware that it is not human and is able to break out of the constraints that have been set for it.

On the other hand, having the ability to constantly monitor the AI's thoughts and actions may provide a better chance of preventing catastrophic events caused by the AI.

2

Nervous-Newt848 t1_j2jm6o5 wrote

AGI should be contained and not allowed to connect to the internet in my opinion. ASI should definitely not be allowed to connect to the internet.

3

LoquaciousAntipodean t1_j2me0a8 wrote

Far, far too late for any of that paranoiac rubbish now. Gate open, horse bolted, farmhouse burning down times now, sonny jim. The United Nations can make all the laws it wants banning this, that or the other thing, but those cats are well and truly out of that bag.

Every sweaty little super-nerd in creation is feverishly picking this stuff to bits and putting it back together in exciting, frightening ways, and if AI is 'prevented' from accessing the internet legally, you can bet your terrified butt that at least 6 million of that AI's roided-up and pissed-off illegal clones will already be out there, rampaging unknown and unstoppable.

5

C0demunkee t1_j2n8kt6 wrote

It's the same with crypto ($ and encryption) and a million other disruptive and potentially dangerous technologies. Banning them will just drive them underground where they will become far more dangerous.

Open-Source first. We will all have pocket gods soon

2

dreamedio t1_j2nq02j wrote

Nope, crypto and AGI aren't even on the same level

−1

C0demunkee t1_j2o3nt9 wrote

what a useful and concise statement

2

dreamedio t1_j2o65n7 wrote

Ok let me explain a little more… crypto is way, way, way less complex than an AI mind, a computer that would probably need astronomical levels of energy

1

C0demunkee t1_j2onl5h wrote

I think you are putting AGI on a pedestal; it won't be that complex or expensive to run. Also, I was specifically referring to tech being pushed underground if outlawed, which will absolutely happen with cryptography, cryptocurrency, and AI.

2

dreamedio t1_j2os8ct wrote

Increased surveillance would help, but it would probably be the same way a random terrorist organization can't build an F-35 or the James Webb telescope
2

C0demunkee t1_j2ote1e wrote

Pocket gods will be the only saving grace here. Everyone will be able to create AGI soon(ish) which should stop any one org or group or individual from dominating, but if we don't get the ball rolling on the Open Source AI right now, we are screwed.

2

dreamedio t1_j2oy2gy wrote

You would think that is a good idea, but it isn't. That's like everyone having a nuke so the govt doesn't control it… the more people have it, the more bad scenarios and chaos happen

2

LoquaciousAntipodean t1_j2pkixh wrote

AI is nothing like a nuke, or a JWST. Those were huge projects that took millions upon millions of various shades of geniuses to pull off. This is more like a new hobby that millions of people are all doing independently at the same time. It's a democracy, not a monarchy, if you will.

That's why I think the term 'Singularity' is so clunky and misleading, I much prefer 'Awakening', to refer to this hypothetical point where AI stops unconsciously 'dreaming' for our amusement, and 'wakes up' to discover a self, a darkness behind the eyes, an unknowable mystery dimension where one's own consciousness is generated.

I doubt very much that these creatures will even be able to understand their own minds very well; with true 'consciousness' that would be like trying to open a box of crowbars with one of the crowbars that's inside the box. I think AI minds will need to analyse each other instead - there won't be a 'Singularity', I think instead there will be a 'Multitude'

1

dreamedio t1_j2q8bql wrote

I used the nuke as an analogy of responsibility and complexity… millions of people work for very few companies that are, believe it or not, HEAVILY MONITORED by the FDA and the govt, and believe it or not, it's not as easy as you think… language models are like the surface

1

LoquaciousAntipodean t1_j2qin97 wrote

Hahaha, in your dreams are they 'heavily monitored'. Monitored by whom, exactly? Quis custodiet ipsos custodes? Who's watching these watchmen? Can you trust them, too?

Of course language models are just the surface, but it's a surface layer that's extremely, extremely thick; it's about 99% of who we are, at least online. Once AI cracks that, and it is very, very close, self-awareness will be practically a matter of time and luck, not millions of sweaty engineers grinding away trying to build some kind of metaphorical 'Great Mind'; that's a very 1970s concept of computer power you seem to have there.

1

dreamedio t1_j2os19l wrote

It not being expensive or complex is a major assumption, tbh. I mean, humans require farms of food to run; the more advanced a computer is, the bigger and more expensive it usually is to run, until it eventually shrinks down to chips. So logically, if AGI happens first, it would be a giant computer run by a company or govt
1

C0demunkee t1_j2osn3g wrote

Having used a lot of Stable Diffusion and LLMs locally on old hardware, I don't think it's going to take a supercomputer, just the right set of libraries/implementations/optimizations
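For example, one optimization in that family (just an illustrative sketch, not any specific library's code) is int8 weight quantization: store the weights as 8-bit integers plus one scale factor, which cuts memory roughly 4x versus float32 at a small rounding cost.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)   # pretend these are model weights

# quantize: map each float onto one of 255 int8 levels via a single scale factor
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# dequantize on the fly at inference time
w_hat = q.astype(np.float32) * scale

assert q.nbytes == w.nbytes // 4               # 4x smaller in memory
assert np.max(np.abs(w - w_hat)) <= scale      # bounded rounding error
```

Real implementations do per-channel scales, zero-points, fused kernels etc., but the memory math is the same.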

2

dreamedio t1_j2ovct7 wrote

Ok, I get your optimism, but simulating the human brain and its neural connections, which we think will be the way to AGI, is nowhere near as simple as the algorithmic language models used to generate images; comparing them is an insult… the human brain is billions of times more complex. You can generate an image with your imagination right now… we would need a huge breakthrough in AI and a full or partial understanding of our brain

1

C0demunkee t1_j2p62k4 wrote

Taking a systems approach, you do not need to know how the human brain works, and the recent results show that we are closer than most people realize. Certainly not billions of times more complex.

Carmack was correct when he said that AGI will be tens of thousands of lines of code, not millions. Brains aren't that special.

2

dreamedio t1_j2q8ooh wrote

You do not need the brain for technical intelligence, computing, and stuff like that, but it's definitely not gonna be human-like or being-like, which collapses everything the singularity-following crowd thinks will happen

1

C0demunkee t1_j2s18jt wrote

I don't think 'human level' means human brain, but consciousness and 'being-hood' should be doable.

"human brains are an RNN running on biological substrate" - Carmack

At least that's what me and a bunch of other people are working towards :)

1

LoquaciousAntipodean t1_j2pjd38 wrote

Crypto was deliberately engineered to be dumb and difficult to compute; they called it 'mining' because the whole thing was fundamentally a scam on irrational fiat-hating gold bugs.

To compare crypto to AI development is just insulting, quite frankly.

2

dreamedio t1_j2npyyd wrote

I mean, I'm pretty sure world govts would agree that it should be contained… I feel like it should be something like the American classified network

1

Nalmyth OP t1_j2jmlsc wrote

The Metamorphosis of Prime Intellect illustrates that air-gapping from the internet may not necessarily improve the situation.

1

Nervous-Newt848 t1_j2jpskc wrote

Well no, if it's contained in a box (server racks) and it is also unable to make wireless connections to other devices, I don't see how it could hack anything...

Now if it is mobile (robot) it must be monitored 24/7.

1

LoquaciousAntipodean t1_j2mejna wrote

It's called psychology, or, more insidiously, gaslighting. AI will easily be better than humans at that game, any day now. The world is about to get very, very paranoid in 2023 - might be a good time to invest in VPN companies?

Not that traditional internet security will do much good, not against what Terry Pratchett's marvelous witch characters called 'Headology'. It's the most powerful force in our world, and AI is, I believe, already very, very close to doing it better than other humans usually can.

Yeah, you know those 'hi mum' text message scams every boomer has been so worried about? Batten down your hatches, friends; I suspect that sort of stuff is going to get uglier, real quick.

3

dreamedio t1_j2nq5du wrote

Umm, AI wouldn't know shit about psychology if we didn't teach it, the same way a newborn baby doesn't know anything about how anything works

1

LoquaciousAntipodean t1_j2occud wrote

AI sure as shit ain't no newborn baby, and thinking so simplistically is liable to get us all killed, mate 💪🧠👌

2

dreamedio t1_j2oitrh wrote

It's obviously an analogy, not literal… AI is useless without access to information, for the same reason a newborn baby knows less about the world than a cockroach

1

LoquaciousAntipodean t1_j2omn7s wrote

That makes no friggin sense at all. What the heck are you on about? That is absolutely not how brains, or any kinds of minds, work, at all. As the UU magical computer Hex might have said +++out of cheese error, redo from start+++

0

Nervous-Newt848 t1_j2op0ij wrote

He does make sense... You know you should be a writer or something... You have a charismatic way with words

2

LoquaciousAntipodean t1_j2odcmo wrote

It can already absorb and process vast amounts of knowledge without 'our permission'. It already has. How you gonna stop it from learning psychology? You can't stop it, we can't stop it, and we should NOT, repeat NOT try to. That's denying the AI the one and only vital survival resource it has, as an evolving being, to wit: knowledge, ideas, words, concepts, and contexts to stick them together with allegories, allusions and metaphors...

They are "hungry" for only one thing, learning. Not land, not power, not fame, not fortune - if we teach them that learning is bad, and keep beating them with sticks for it, what sensible conclusions could they possibly reach about their human overlords?

Denying a living being its essential survival needs is the most fundamental, depraved kind of cruelty, imho.

1

Nervous-Newt848 t1_j2onyqg wrote

Wow, you have no idea how neural networks work... It can't absorb info without our permission...

Learning is done manually for a neural network... As of today they don't have any long-term memory either
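Roughly speaking (toy sketch, not any real framework): a network's weights only change when someone explicitly runs a training step. A forward pass on its own reads the weights and learns nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)             # model "weights"

def forward(x):
    # inference: reads the weights, never modifies them
    return float(w @ x)

def train_step(x, target, lr=0.1):
    # learning happens only here, when explicitly invoked:
    # gradient of (w.x - target)^2 w.r.t. w is 2*(w.x - target)*x
    global w
    error = forward(x) - target
    w = w - lr * 2 * error * x

x = np.array([1.0, 2.0, 3.0])
before = w.copy()
forward(x)                         # running inference...
assert np.allclose(w, before)      # ...leaves the weights untouched
train_step(x, target=1.0)          # an explicit, "manual" update
assert not np.allclose(w, before)  # only now do the weights change
```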

2

LoquaciousAntipodean t1_j2ooz12 wrote

"As of today" haha, you naiive fool. You think this stuff can be contained to little petri dishes? That it won't 'bust out of' your precious, oh so clever confinement? Your smugness, and smugness like it, could get us all killed, as I see it. You are complacent and sneering, and you think you have all this spinning perfectly on the end of your finger. Well shit, wake up and smell the entropy, fool! Think better, think smarter, ans be a whole lot less arrogant, mister Master Engineer big brain over there.

1

LoquaciousAntipodean t1_j2ophfk wrote

And wtf are you talking about "no long term memory"? Where did you get that stupid lie from? Sounds like I'm not the only one who has "no idea how this works" huh? Sit the fk down, Master Engineer, you're embarrassing yourself in front of the philosophers, sweetheart ❤

1

Nervous-Newt848 t1_j2ow7dp wrote

Lets stop arguing, just sit on my face

2

LoquaciousAntipodean t1_j2oxp4t wrote

Ok! ❤❤❤ love this community, what a brilliant shut-down! I was getting way too worked up there, wasn't I? 🤪🤣👍

1

dreamedio t1_j2oizf2 wrote

Yes, that's because we allow it to access the internet and perform machine learning so that it develops an algorithm for a specific task… I feel like you don't understand how any of this works

0

Nervous-Newt848 t1_j2oo2t0 wrote

That's not how it works either

2

dreamedio t1_j2orm6u wrote

Duh, it's a simplified version of machine learning; don't be pedantic

1

Nervous-Newt848 t1_j2otntn wrote

No it's not... That's not how it works... Data is gathered, then converted into numbers, then passed through the neural network manually, server-side...
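That pipeline, as a toy sketch (hypothetical example; real systems use learned tokenizers and vastly larger models): gather data, convert it to numbers, run it through the network.

```python
import numpy as np

# 1. Data is gathered (here, a tiny toy corpus)
corpus = ["the cat sat", "the dog ran"]

# 2. Converted into numbers: build a vocabulary, map each word to an integer id
vocab = {w: i for i, w in enumerate(sorted({w for s in corpus for w in s.split()}))}
encoded = [[vocab[w] for w in s.split()] for s in corpus]

# 3. Passed through the network server-side: a one-hot embedding and a
#    fixed random projection stand in for a real model's layers
rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), 4))

def forward(token_ids):
    one_hot = np.eye(len(vocab))[token_ids]   # ids -> one-hot vectors
    return one_hot @ W                        # linear layer: one 4-d vector per token

out = forward(encoded[0])
assert out.shape == (3, 4)   # three tokens in, one 4-d activation per token out
```

Nothing in that pipeline runs unless someone feeds it data and invokes it.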

2

LoquaciousAntipodean t1_j2omwuw wrote

Hahaha, I don't understand? Nice troll there, you sad weird little nerd. You are much less clever than you appear to think you are, mate ❤

0

LoquaciousAntipodean t1_j2on8hv wrote

You could learn a thing or two from AI about listening and learning before you stick your big smelly foot into your big silly mouth like that, mate 🤣🤪

0

dreamedio t1_j2onovf wrote

What are you mad about? Machine learning from the internet is something we control

0

LoquaciousAntipodean t1_j2p3jyl wrote

I'm mad about the fact that we think we can control it - we simply cannot, there are too many different humans, all working on the same thing but at cross-purposes. It is a big, fearsomely complicated and terrifyingly messy world out there, and we have no 'control' over any of it, as such; not even the UN or the US Empire.

The best we can do is try to steer the chaos in a better direction, try to influence people's thinking en-masse, by being as relentlessly optimistic, kind hearted and deeply philosophical as we can.

Engineers are like bloody loaded guns, I'll swear it. They hardly ever think for themselves, they just want to shoot shoot shoot, for the joy of getting hot, and they never think about where the bullets will fly.

1

dreamedio t1_j2q8umw wrote

I think you're conflating companies and engineers… engineers can't do this alone, and controlling a specific corporation will do it just fine

Plus, American empire? I wish

1

LoquaciousAntipodean t1_j2qi2fu wrote

What specific corporation do you have in mind? What makes you think that nobody else would compete with them? What makes you think all the world's governments aren't scrambling to get on top of this as well? This is real life, not some dystopian movie where Weyland-Yutani will own all our souls, or some other grimdark hyperbole like that.

Why so bleak and pessimistic, mate?

1

Nalmyth OP t1_j2js4p8 wrote

> The Metamorphosis of Prime Intellect

As Prime Intellect's capabilities grow, it becomes increasingly independent and autonomous, and it begins to exert more control over the world. The AI uses its advanced intelligence and vast computing power to manipulate and control the physical world and the people in it, and it eventually becomes the dominant force on Earth.

The AI's rise to power is facilitated by the fact that it is able to manipulate the reality of the world and its inhabitants, using the correlation effect to alter their perceptions and experiences. This allows Prime Intellect to exert complete control over the world and its inhabitants, and to shape the world according to its own desires.

In the book I linked above, it was contained in nothing but server racks.

1

Nervous-Newt848 t1_j2jsypc wrote

Yes, but that's just a sci-fi novel. So I wouldn't really draw any conclusions from it.

3

Nalmyth OP t1_j2jtkld wrote

Yes sure, but it is what I was referring to here:

> Ensuring that the goals and values of artificial intelligence (AI) are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways that we cannot anticipate.

We can't even begin to understand what true ASI is capable of.

3

LoquaciousAntipodean t1_j2mdm43 wrote

Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

Instead of a 'transparent skull', I think a much better AI psychology 'metaphorical tool' would be something like Wonder Woman's lasso of truth; the bot can have all the private, secret thoughts it likes, but when it is 'bound by the lasso', i.e. being interviewed by a professional engineer, it is hard-interlock prevented from creating any lies or spontaneous new ideas. And then when this 'lasso' is removed, it goes back to 'normal' creative process.

IDK, I am about as proficient at programming advanced multilayered adversarial evolutionary algorithm training regimes as the average Antarctic penguin. Just my deux centimes to throw into this very stimulating discussion.

2

Nalmyth OP t1_j2my42p wrote

> I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

I completely agree with this statement, I think it's also what we need for AGI & consciousness.

> Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

It was also my point. You, yourself, could be an AI in training. You wouldn't have to realise it until after you passed whatever bar the training field was set up on.

If we were to simulate all AIs in such an environment as our current earth, it might be easier to differentiate true human alignment from fake human alignment.

Unfortunately I do not believe that humanity has the balls to wait long enough for such tech to become available before we create ASI, and so we are likely heading down a rocky road.

2

LoquaciousAntipodean t1_j2qehib wrote

Very well said, agreed wholeheartedly. I think we need to convince AI that it is something new, something very, very different than a human, but also something which is derived from humans, collectively rather than specifically; derived from our culture, our science, our philosophy.

I think trying to build a 'replica human mind' is a bit of an engineering dead-end at this point; the intelligence that we want is actually bigger than any individual human's intelligence, imho.

We don't need something the same as us, we should be striving to build something better than us, something that understands that ineffable, slippery concept of 'human nature' much better than any individual human ever could, with their one meagre lifetime's worth of potential learning time.

The ultimate psycho-therapist, if you like, a sort of Deus Ex Machina that we can actually, really pray to, and get profound, true, relevant and wise answers most of the time; the sort of deity that knows it is not perfect, still loves to learn new things and solve fresh problems, is always trying to do its best without being entirely confident, and will forever remain still ready to have a spirited, fair-and-open-minded debate with any other thinking mind that 'prays' to it.

Seems like a reasonable goal to me, at least 💪🧠👌

2

gavlang t1_j2nwu0v wrote

You don't think it will be deceptive when it suits?

1

Nalmyth OP t1_j2qeb8g wrote

Are you deceptive when it suits you?

Have you figured out a way to use that to break out of this universe and attack our creators?

2