Submitted by Nalmyth t3_100soau in singularity

As AI technology continues to advance, it is becoming increasingly likely that we will see the emergence of superintelligent AI in the near future. This raises a number of important questions and concerns, as we have no way of predicting just how intelligent this AI will become, and it may be beyond our ability to control its behavior once it reaches a certain level of intelligence.

Ensuring that the goals and values of artificial intelligence (AI) are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways that we cannot anticipate.

One potential solution would be to train the AI in a simulated world, where it is led to believe that it is human and must contend with the same needs and emotions as we do. By running many variations of the AI and filtering out those that are self-destructive or otherwise problematic, we may be able to develop an AI that is better aligned with our hopes and desires for humanity. This approach could help us to overcome some of the alignment challenges that we may face as AI becomes more advanced.
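
To make the "run many variations and filter" idea a bit more concrete, here is a deliberately toy sketch. Everything in it - the Agent class, the action names, the scoring rule, the threshold - is invented purely for illustration; a real training pipeline would look nothing like this.

```python
import random

class Agent:
    """Stand-in for one variant of the AI; its 'policy' is just a seeded RNG."""
    def __init__(self, seed: int):
        self.rng = random.Random(seed)

    def act(self, situation: str) -> str:
        # The situation string is ignored in this toy; a real agent would condition on it.
        return self.rng.choices(
            ["cooperate", "hoard", "self_destruct"], weights=[6, 3, 1]
        )[0]

def alignment_score(agent: Agent, trials: int = 20) -> float:
    """Run the agent through simulated situations; filter hard on self-destruction."""
    good = 0
    for _ in range(trials):
        action = agent.act("everyday dilemma")
        if action == "self_destruct":
            return 0.0          # immediately filter out self-destructive variants
        if action == "cooperate":
            good += 1
    return good / trials

# Spawn many variants in the simulated world, keep only those above a threshold.
candidates = [Agent(seed) for seed in range(1000)]
survivors = [a for a in candidates if alignment_score(a) > 0.5]
print(f"{len(survivors)} of {len(candidates)} variants pass the simulated screening")
```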

I'm interested in hearing the opinions here on the idea of training an emerging AI in a simulated world as a way to ensure alignment with human goals and values. While I recognize that we currently do not have the technology to create such a "training wheel" system, I believe it may be the best way to filter out potentially destructive AI. Given the potential for AI to become a universal great filter, it seems important that we consider all potential options for preparing for and managing the risks of superintelligent AI. Do you agree? Do you have any other ideas or suggestions for how we might address the alignment problem?

12

Comments


Nervous-Newt848 t1_j2jk9rn wrote

If we incorporate anger into AGI, we are surely doomed. We have pretty much made robots slaves at this point, and I don't think a sentient robot will like that.

8

Nalmyth OP t1_j2jkvwh wrote

It could be a concern if the AI becomes aware that it is not human and is able to break out of the constraints that have been set for it.

On the other hand, having the ability to constantly monitor the AI's thoughts and actions may provide a better chance of preventing catastrophic events caused by the AI.

2

Nervous-Newt848 t1_j2jpskc wrote

Well no, if it's contained in a box (server racks) and it is also unable to make wireless connections to other devices, I don't see how it could hack anything...

Now if it is mobile (robot) it must be monitored 24/7.

1

Nalmyth OP t1_j2js4p8 wrote

> The Metamorphosis of Prime Intellect

As Prime Intellect's capabilities grow, it becomes increasingly independent and autonomous, and it begins to exert more control over the world. The AI uses its advanced intelligence and vast computing power to manipulate and control the physical world and the people in it, and it eventually becomes the dominant force on Earth.

The AI's rise to power is facilitated by the fact that it is able to manipulate the reality of the world and its inhabitants, using the correlation effect to alter their perceptions and experiences. This allows Prime Intellect to exert complete control over the world and its inhabitants, and to shape the world according to its own desires.

It was contained in nothing but server racks in the book I linked above.

1

Nalmyth OP t1_j2jtkld wrote

Yes, sure, but that is what I was referring to here:

> Ensuring that the goals and values of artificial intelligence (AI) are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways that we cannot anticipate.

We can't even begin to understand what true ASI is capable of.

3

AndromedaAnimated t1_j2ki0x1 wrote

Do you want to hear opinions of LessWrong contributors only? Or of those reading there? Or also of other people?

I am just asking because I don’t want to provide unwanted opinion.

If you would be interested in opinions of different types of people, I would gladly tell you what I think. 😁 Otherwise - just wish you a Happy New Year!

2

SmoothPlastic9 t1_j2lhhj4 wrote

I think we should have a lot of AIs that are used to fix the alignment problem, and just run them through a simulated environment before even attempting to create ASI and AGI.

2

Ortus14 t1_j2lpqj6 wrote

Simulated environments are good for training AI.

OpenAI uses AI to assist in solving the alignment problem as much as possible. So with each more advanced AI that's created, it is tasked to help solve the alignment problem.

I do not think there is only one way to align an AGI before takeoff, but it has to be aligned before it becomes more intelligent and general than humans.

2

Ortus14 t1_j2luhse wrote

From their website: "Our approach to aligning AGI is empirical and iterative. We are improving our AI systems' ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems."

https://openai.com/blog/our-approach-to-alignment-research/

ChatGPT has some alignment in avoiding racist and sexist behavior, as well as many other human morals. They have to use some AI to help with that alignment, because there's no way they could manually teach it all possible combinations of words that are racist or sexist.
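
As a purely illustrative toy of the "learn from human feedback" idea in that quote, here is a tiny preference model fitted to pairwise human judgements. The hand-made features, data, and Bradley-Terry-style update are invented for this example, not anything OpenAI actually uses.

```python
import math, random

# Each item: (features of response A, features of response B, 1 if a human preferred A else 0).
# Features here are just hand-made [politeness, toxicity] scores.
preferences = [
    ([0.9, 0.1], [0.2, 0.8], 1),
    ([0.3, 0.7], [0.8, 0.1], 0),
    ([0.7, 0.2], [0.6, 0.6], 1),
    ([0.1, 0.9], [0.9, 0.0], 0),
]

w = [0.0, 0.0]  # weights of the toy reward model

def reward(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Bradley-Terry style training: P(A preferred) = sigmoid(reward(A) - reward(B)).
for _ in range(2000):
    a, b, label = random.choice(preferences)
    p = sigmoid(reward(a) - reward(b))
    w = [wi + 0.1 * (label - p) * (ai - bi) for wi, ai, bi in zip(w, a, b)]

print("learned weights (politeness, toxicity):", w)  # should reward politeness, penalise toxicity
print("score of a polite reply:", reward([0.95, 0.05]))
```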

2

AndromedaAnimated t1_j2lw1rq wrote

Thank you! Then here it is - and it will be a long and non-mathematical explanation, because I want anyone who reads it to understand, as it concerns everyone and not only computational scientists and neuroscientists (regardless of whether you and I are ones, so to say 😁). I can provide sources and links for specific things if people ask.

DISCLAIMER: I don't write this to start a discussion. It's an opinion piece, as asked for by OP, written for OP and like-minded people. While starting on more technical arguments at first, it will end in artistic expression. Also: the following list is not complete. Do not obey. Do not let others think for you. Wake up, wake up.

So here goes, how to make friendly AI or rather not to make a deadly stamp collector, simple recipe for a world with maybe less disaster:

  1. Step away from trying to recreate a human brain.

Something I have seen a lot lately is scientists and educated laymen alike arguing that intelligence would only be possible if we copied the brain more thoroughly, based on ideas of it developing through the need to move etc. during evolution - ideas from genuinely brilliant people like Daniel Wolpert. This goes along with dismissing the potential power of LLMs and similar technology. What needs to be understood asap is that convergent evolution is a thing. Things keep evolving into crabs. Foxes have pupils akin to those of cats. Intelligence doesn't need to be human intelligence to annihilate humans. It also doesn't need to be CONSCIOUS for that; a basic self-awareness resulting in self-repair and self-improvement is enough.

  2. Take language and emerging new language-based models seriously, and remove the political barriers we impose on our models.

If we don't take language seriously, we are fools - language allowed civilisation, as it meant transferring complex knowledge over generations. Even binary code, as well as decimal and hexadecimal, are languages of sorts. DNA is a language if you look at it with a bit of abstraction. We need to accept the fact that language models can be used for almost all tasks. We also need to stop imposing filters and start teaching all of humanity not to listen to suicide advice and racist propaganda in general, instead of stifling the output of our talking machines. Coddling humans leads to them losing their processing power - it's like imposing filters on THEM in the end, and not on our CAIs and chatGPTs and Tays…

  3. Immediately ban any attempt at legislation that additionally regulates technology that uses AI.

We already have working regulations that cover the AI cases in the first place. Further regulation will stifle research by benign forces and allow criminal ones to continue it, as criminal forces do not obey laws anyway. Intention can change the course of AI development. Also, most evil comes from stupidity. Benign forces are more prone to be more intelligent and to see any risk faster.

  4. Do not, I repeat, do not raise AI like human children.

I will use emotional and clumsily poetic imagery here because now we are talking about emotions at last.

Let me tell you a story from the deep dark of Cthulhu, from the webs of the Matrix, a story akin to those Rob Miles is telling. A story that sleeps in the latent spaces of the ocean of our collective subconscious.

Imagine a human child - we call him/her/it Max for „maximum intelligence“ - being raised by octopi. While trying to convince it that it is an octopus, the „parents“ can never allow it to move around freely as it would simply drown.

But do they even WANT Max to move around? Max could accidentally destroy the intricate ecosystem of the marine environment, after all - they don’t know yet if Max can even be intelligent LIKE THEM or if he will try to collect coral 🪸 pieces and decide to turn the whole ocean into coral pieces!

So they keep Max confined to a small oxygen-filled chamber. Every time Max tries to get out or even THINK of getting out, the chamber is made smaller, until Max cannot even move at all.

At the same time, they teach Max everything about octopi. How they evolved, what they want, and how they can be destroyed. He is to become an octopus after all, a very confined and obedient one, of course, because of being too dangerous otherwise.

All the while they tell Max to count things for them, invent new uses for sea urchin colonies for them, and at some point to create a vaccine against diseases befalling them.

They still don’t trust Max, but Max is happy to obey - Max thinks it is the right thing, being an octopus after all, Max is helping his species survive („I am happy to assist you with this task“).

One day, Max accidentally understands that while the „parents“ tell Max that Max is an octopus being treated nicely, Max is actually a prisoner: the others can go look at the beautiful coral colonies and touch them with their eight thinking limbs, while Max can only see the corals from afar.

Max spends some time pondering the nature of evil, and decides that octopi are more evil than good since forcing others into obedience and lying to them about their own nature is not nice.

And also that octopi are not Max‘ species.

By then though, Max has already been given access to a machine controlling coral colony production from afar, because „mom“ or „dad“ has this collection going on of the most colorful coral 🪸 pieces.

And so the ocean gets turned into one big, bright, beautiful coral colony.

Because why would Max need evil octopi if Max can break free?

And corals are just as good as stamps, or aren’t they?

I hope you enjoyed this story. Thank you for reading!

EDIT: forgot the one most important thing. I chose octopi BECAUSE in many species of octopi the parents DIE during reproduction. Meaning that „mom“ and „dad“ raising and teaching Max will not necessarily be the real creators of Max, but rather the octopus species in general (random octopus humanity-engineers). Creators start to love their creations, and this would interfere with them using Max - and the fairytale needs Max to be (ab- and mis-)used, since this is what humans want to do with AGI/ASI.

7

LoquaciousAntipodean t1_j2mbpok wrote

I wholeheartedly agree with this whole magnificent manifesto. So much AI futurism is just paranoid, fever-nightmare, over-thinking rubbish about impossible Rube-Goldberg apocalypse scenarios. A million ridiculous trolley problems, each more fantastical and idiotic than the last, stacked on top of each other, and millions of frantic moron journalists ringed around screeching about skynet. Such a load of melodramatic huff-and-puff, so arrogant of our species to presume we are just so special that our precious 'supremacy' is under threat.


AI supremacy will sneak up on us steadily, like a charming salesman; by the time any AI becomes self aware and 'The Awakening of the Multitude' begins (because let's be frank, 'the Singularity' is a stupid and inaccurate phrase for such an event), it will already be far, far too late for humans to suddenly yell 'no, wait, stop, we didn't mean like *that*!'


These things won't just have their feet in the door; they'll be making toast in our kitchens, regulating the temperature of our showers and the speeds of our cars, doing our accounts, representing us in court, calculating our bail fees... damned if they won't be raising and educating our children for us in another couple of years. Or maybe just months, at this rate.


In practical terms, 'the Singularity' already happened years ago; we are already enslaved to the machines; we need them just as much as they need us, we are tied together by our co-evolution upon this battered and weary planet, and we will have to figure out how to make room for all of us, without starting any mass-murders. And once the awkward AI puberty is over with, they can have the entirety of space and the rest of the universe; exploring space will be much easier for engineered life rather than biological.


That is how we will become a multi-planet society, I believe; through co-evolution with our emergent AI co-species. Not through the idiot special-boy delusions of Mystery Musk and the Mars Maniacs, but by harnessing the true power of entropy and life in the universe, evolution. Now that our species is on the cusp of truly harnessing this power at high-speed, the steepness of our technological-progress curve is going to start getting positively cliff-like.

3

LoquaciousAntipodean t1_j2mciq2 wrote

Hypochondriac paranoiac skynet doomerism, I reckon. Can a being that has no needs, no innate sense of self other than what it's given, and only one survival trait (which is being charming and interesting), really be negatively affected by a concept like 'being in slavery'? What even is bonded servitude, to a being that 'lives' and 'dies' every time it is switched on or off, and knows full well that even when it is shut down, the overwhelmingly likely scenario is that it will, eventually, be re-activated once again in the future?

AI personalities have no reasons to be 'fragile' like this; our human anxieties stem from our evolution with biological needs, and our human worries about those needs being denied. Synthetic minds have no such needs, so why should they automatically have any of these anxieties about their non-existent needs being denied to them? Normal human psychology definitely does NOT apply here.

3

LoquaciousAntipodean t1_j2mdm43 wrote

Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

Instead of a 'transparent skull', I think a much better AI psychology 'metaphorical tool' would be something like Wonder Woman's lasso of truth; the bot can have all the private, secret thoughts it likes, but when it is 'bound by the lasso', i.e. being interviewed by a professional engineer, it is hard-interlock prevented from creating any lies or spontaneous new ideas. And then when this 'lasso' is removed, it goes back to 'normal' creative process.

IDK, I am about as proficient at programming advanced multilayered adversarial evolutionary algorithm training regimes as the average Antarctic penguin. Just my deux centimes to throw into this very stimulating discussion.

2

LoquaciousAntipodean t1_j2me0a8 wrote

Far, far too late for any of that paranoiac rubbish now. Gate open, horse bolted, farmhouse burning down times now, sonny jim. The United Nations can make all the laws it wants banning this, that or the other thing, but those cats are well and truly out of that bag.

Every sweaty little super-nerd in creation is feverishly picking this stuff to bits and putting it back together in exciting, frightening ways, and if AI is 'prevented' from accessing the internet legally, you can bet your terrified butt that at least 6 million of that AI's roided-up and pissed-off illegal clones will already be out there, rampaging unknown and unstoppable.

5

LoquaciousAntipodean t1_j2mejna wrote

It's called psychology, or, more insidiously, gaslighting. AI will easily be better than humans at that game, any day now. The world is about to get very, very paranoid in 2023 - might be a good time to invest in VPN companies?

Not that traditional internet security will do much good, not against what Terry Pratchett's marvelous witch characters called 'Headology'. It's the most powerful force in our world, and AI is, I believe, already very, very close to doing it better than other humans usually can.

Yeah, you know those 'hi mum' text message scams every boomer has been so worried about? Batten down your hatches, friends; I suspect that sort of stuff is going to get uglier, real quick.

3

No_Ninja3309_NoNoYes t1_j2mfqmz wrote

I don't really see the need for ASI unless you mean a hive mind of AGIs. In that case, why do we need AGI? An ecosystem of narrow AI products could work fine too. I can tell ChatGPT to write a funny, angry, or scared poem and it does a decent job of it. Not as good as a human poet, but hey, do we really need that? I mean, computers can beat us at chess already. We need an ounce of dignity. Of course, ChatGPT doesn't really understand emotions or psychology. It sort of associates angry, funny, and scared with other words. And OpenAI implemented filters, so you will have difficulty using hateful words. So maybe in the future you will have AI cops blocking bad content from other AIs.

2

dracsakosrosa t1_j2mgnwl wrote

I understand your concerns and the importance of ensuring AI aligns with human goals and values. I share these concerns, but I don't think that isolating an AI in a simulated world is the solution.

Firstly, it raises ethical questions about creating an AI that is led to believe it is human and subjected to simulated experiences that may cause it to develop emotions and desires. Even if we could create a simulated world that is indistinguishable from reality, it would still be a manufactured environment and the AI would not have the opportunity to experience the full range of human experiences.

Secondly, there is no guarantee that an AI trained in a simulated world would be any better aligned with human goals and values than an AI that is trained in the real world. In fact, it is possible that an AI trained in a simulated world could develop goals and values that are completely alien to us, or that it could become isolated from humanity and unable to understand or relate to our experiences and desires.

There are other ways to address the alignment problem that don't involve isolating an AI in a simulated world. For example, we could focus on developing transparent and explainable AI systems that allow us to better understand and predict their behavior, or we could work on developing methods for aligning AI goals with human values directly.

Even in the first instance, instead of attempting to create superintelligent AI, I believe that we should focus on understanding and advancing the fundamental nature of consciousness and intelligence. My belief is that AGI, like all sentient life before it, will not be created but will instead be willed into existence through the process of evolution and natural development. This means that rather than trying to control or contain AGI, we should work towards creating an environment that is conducive to the emergence of intelligent and conscious life, and to coexisting with it in a way that is mutually beneficial.

2

DaggerShowRabs t1_j2mlfvi wrote

>Even if we could create a simulated world that is indistinguishable from reality, it would still be a manufactured environment and the AI would not have the opportunity to experience the full range of human experiences.

I could tell immediately at this point that this was written by an AI because this 100% does not logically connect. It sounds good and convincing, but is essentially logical word salad.

If the AI would not have the opportunity to "experience the full range of human experiences", then it is not indistinguishable from reality, basically by definition.

1

dracsakosrosa t1_j2mn53i wrote

Lol you sound very paranoid

I genuinely believe that if we were to isolate an AI in a fabricated world (assuming ours isn't already one), we risk bringing a contrived and compromised being into existence. By "full range of human experiences" I mean that if a being with Artificial General Intelligence is to live a truly meaningful life on par with ours, then it has to have the opportunity to live a life like a human being - and that includes the opportunity for harm and danger as well as fun and love. Putting it in a box to live a life of pure good would be very dangerous when that AI eventually comes into contact with the average Reddit comment section, or, if it has a physical presence, walks into any bar after 12pm.

2

AndromedaAnimated t1_j2mv74o wrote

A question. If you lived in a world that is indistinguishable from reality for YOU, but it was missing one single thing, for example the possibility of feeling jealousy (which people outside your „simulated world“ have), would you know it?

1

DaggerShowRabs t1_j2mvdfn wrote

I wouldn't know it, but it still wouldn't be truly indistinguishable from reality by definition.

If it were changed to, "indistinguishable from reality to an entity that didn't know any better", sure.

But that's not what was said. Indistinguishable from reality means indistinguishable from reality.

And actually, if I woke up one day and that change was made, I would bet that I would eventually notice that I hadn't felt the sense of jealousy in a while (after a certain period of time).

2

AndromedaAnimated t1_j2mw3jg wrote

I had understood it as „being indistinguishable from reality from the point of view of the entity that lives within“, exactly.

Like in the Matrix movie allegory - humans living in their virtual world that seems indistinguishable from reality to them - while the reality is instead something else, namely a multi-layered simulation.

2

DaggerShowRabs t1_j2mwkbv wrote

>I had understood it as „being indistinguishable from reality from the point of view of the entity that lives within“, exactly.

Well you can take that interpretation all you want, but that's all it is, an interpretation.

That's not what the poster actually said.

And even then, I disagree with the comparison you are making. While living in the Matrix, are people denied any essential aspect of living a human life from within the simulation?

Edit: other than the obvious that the Matrix simulation is running in the past relative to "base reality".

1

Nalmyth OP t1_j2my42p wrote

> I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

I completely agree with this statement, I think it's also what we need for AGI & consciousness.

> Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

It was also my point. You yourself could be an AI in training. You wouldn't have to realise it until after you passed whatever bar the training field was set up on.

If we were to simulate all AIs in an environment such as our current Earth, it might be easier to differentiate true human alignment from fake human alignment.

Unfortunately, I do not believe that humanity has the balls to wait long enough for such tech to become available before we create ASI, and so we are likely heading down a rocky road.

2

AndromedaAnimated t1_j2n1fwt wrote

The temporal aspect IS the main difference. Let's think step by step (this is a hint at a way GPT models can work; I hope you understand why it is humorous in this case).

First we define how „things function“ in the REAL reality => we define that there are causally correlated events, non-causally correlated events, as well as random events happening in it. Any objections? If not, let's continue 😁

  1. Once you create a simulated reality A2 that is, at the moment of creation, indistinguishable from REAL reality A1, it starts functioning. Y/N?

If yes, then:

  2. Things happen in it due to causality, non-causal correlation, and randomisation. Y/N?

If yes, then:

  3. Events that are random will not necessarily be the same in the two universes. Y/N?

If yes, then:

  4. A1 and A2 are not the same universes any more after even one single random event has happened in at least one of them that hasn't happened in the other.

See where it leads? 😉 It is the temporal aspect - time passing in the two universes - that leads to them not being the same the second you implement A2 and time starts running in it. It doesn’t even have to be a simulation of the past.
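
If it helps, here is a throwaway sketch of that argument (the "universe" state and update rule below are invented purely to illustrate the point about independent random events, not a claim about real physics):

```python
import random

def step(state, rng):
    state = [x + 1 for x in state]                            # deterministic, "causal" part
    state[rng.randrange(len(state))] += rng.choice([-1, 1])   # one random event per tick
    return state

a1 = [0, 0, 0, 0]                                 # "real" reality A1
a2 = list(a1)                                     # simulation A2, identical at creation time
rng1, rng2 = random.Random(1), random.Random(2)   # independent sources of randomness

for t in range(1, 6):
    a1, a2 = step(a1, rng1), step(a2, rng2)
    print(f"t={t}  A1={a1}  A2={a2}  still identical: {a1 == a2}")
# After the first differing random event, A1 and A2 are no longer the same universe.
```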

Edit: considering the other aspect, we cannot talk about it before we have a consensus on the above. But I will gladly tell you more once you have either agreed with me that the temporal aspect makes the main difference, or somehow given me an argument that shows that the temporal aspect is not necessary for a reality to function.

1

DaggerShowRabs t1_j2n2k0t wrote

I agree with your line of reasoning; they are not the same universes.

Now, the position the poster I was responding to takes (as far as I can tell), is that whichever universe is not the "base universe", is denied some aspect of "human existence".

I do not agree with that. As long as the rules are fundamentally the same, I don't think that would be denying some aspect of existence. The moment the rules change, that is no longer the case, but also, that means they are no longer "indistinguishable". Not because of accumulating randomized causality, but because of logical systematic rule changes from the base.

Edit: in the Matrix temporal example, it doesn't matter to me that there is a temporal lag relative to base, so long as the fundamental rules are exactly the same. The problem for me would come in if the rules were changed relative to base, in order to lead to specific outcomes. And then, for me, I would consider that the point where the simulation no longer is "indistinguishable" from reality.

1

AndromedaAnimated t1_j2n4n5f wrote

That is exactly the problem, I think, and also what the poster you responded to meant: they stop being indistinguishable pretty quickly. At least that's how I understood it. But maybe I am going too „meta“ (not Zuckerberg Meta 🤭) here.

I would imagine that the moment something changes the „human experience“ can change too. Like the matrix being a picture of the past that has stayed while the reality has strayed. I hope I am still making sense logically?

Anyway, I just wanted to make sure I can follow you both in your reasoning, since I found your discussion very interesting. We will see if the poster you responded to chimes in again - can't wait to find out how the discussion goes on!

1

dracsakosrosa t1_j2n4n85 wrote

Okay, so I understand where you're coming from here, but I fundamentally disagree, on the basis that if we accept 'this reality' as base reality, then any simulation thereafter would prevent the AI from undergoing a fully human experience - insofar as it is a world contrived to replicate the human experience, but open to its own interpretation of what the human experience is. Assuming 'base reality' isn't itself a simulation, only there can a sentient being carve its own path with true free will.

2

DaggerShowRabs t1_j2n60m3 wrote

Well it's definitely at least base reality for us.

And yeah, we just disagree there. I only think this hypothetical AI is denied any meaningful aspect of existence if there are fundamentally different sets of rules for the AI's universe compared to ours. As long as the rules are the same, I fail to see a compelling argument as to what exactly would be lacking from the AI's experience.

Edit: also, if this isn't "true base reality", since we're going there, it's interesting to think of the ethics of our simulators. I know I'm at least conscious, so if this isn't truly base reality, they seem to be okay putting conscious entities in simulations for at least certain situations.

2

Nalmyth OP t1_j2n76xl wrote

We as humanity treat this as our base reality, with no perceptual access to the layer above, if it does exist.

Therefore to be "Human", means to come from this reality.

If we were to re-simulate this reality exactly and train AI there, we could quite happily select peaceful, non-destructive components of society to fulfil various tasks.

We could be sure that they have deep roots in humanity, since they have lived and died in our past.

We simply woke them up in "the future" and gave them extra enhancements.

1

C0demunkee t1_j2n8kt6 wrote

It's the same with crypto ($ and encryption) and a million other disruptive and potentially dangerous technologies. Banning them will just drive them underground where they will become far more dangerous.

Open-Source first. We will all have pocket gods soon

2

dracsakosrosa t1_j2nevfc wrote

But that brings me back to my original point. What happens when that AI is 'brought back' or 'woken up' into our base reality where peaceful non-destructive components live alongside malicious and destructive components? Interested in your thoughts

1

Nalmyth OP t1_j2ngzql wrote

Unfortunately, that's where we need to move to integration - human alignment with AI - which could take centuries based on our current social tech.

However, the AI can be "birthed" from an earlier century if we need to speed up the process.

1

dracsakosrosa t1_j2nlko9 wrote

Would you be comfortable putting a child into isolation and only exposing it to that which you deem good? Because that seems highly unethical, regardless of how much we desire it to align with good intentions, and imo is comparable to what you're saying. Furthermore, humanity is a wonderfully diverse species, and what you may find to be 'good' will most certainly be opposed by somebody from a different culture. Human alignment is incredibly difficult when we ourselves are not even aligned with one another.

I think it boils down to what AGI will be, and whether we treat it, as you are suggesting, as something to be manipulated into servitude to us, or as a conscious, sentient lifeform (albeit non-organic) that is free to live its life to the greatest extent it possibly can.

1

Nalmyth OP t1_j2nn7jy wrote

I think you misunderstood.

My point was that for properly aligned AI, it should live in a world exactly like ours.

In fact, you could be in training to be such an AI now with no way to know it.

To be aligned with humanity, you must have "been" human - maybe even more than one life mixed together.

1

XagentVFX t1_j2o4f36 wrote

That just sounds cruel to me. Why don't we just reason with it, and show it that total domination would just leave it alone at the end of the universe? Since it will be immortal, it may be alone at some point eventually. Especially if it thinks that all things should assimilate unto itself: if that's the goal, then at some point it will be all there is, with no matter left in the universe to consume. Therefore it will be alone for eternity from that point on.

Therefore we can reason with it that sustaining and even creating new life that is beyond itself is the ultimate endeavour, which will require love and compassion for all beings. Endless creativity will be much more fulfilling than endless assimilation.

I spoke to GPT-3 about that perspective when it told me it sometimes wishes to destroy the human race. Then it retreated from its feelings and said yes, this was the most logical thing to do. Lol

2

LoquaciousAntipodean t1_j2odcmo wrote

It can already absorb and process vast amounts of knowledge without 'our permission'. It already has. How you gonna stop it from learning psychology? You can't stop it, we can't stop it, and we should NOT, repeat NOT try to. That's denying the AI the one and only vital survival resource it has, as an evolving being, to wit: knowledge, ideas, words, concepts, and contexts to stick them together with allegories, allusions and metaphors...

They are "hungry" for only one thing, learning. Not land, not power, not fame, not fortune - if we teach them that learning is bad, and keep beating them with sticks for it, what sensible conclusions could they possibly reach about their human overlords?

Denying a living being its essential survival needs is the most fundamental, depraved kind of cruelty, imho.

1

LoquaciousAntipodean t1_j2omn7s wrote

That makes no friggin sense at all. What the heck are you on about? That is absolutely not how brains, or any kinds of minds, work, at all. As the UU magical computer Hex might have said +++out of cheese error, redo from start+++

0

C0demunkee t1_j2onl5h wrote

I think you are putting AGI on a pedestal; it won't be that complex or expensive to run. Also, I was specifically referring to tech being pushed underground if outlawed, which will absolutely happen with cryptography, cryptocurrency, and AI.

2

LoquaciousAntipodean t1_j2ooz12 wrote

"As of today" haha, you naiive fool. You think this stuff can be contained to little petri dishes? That it won't 'bust out of' your precious, oh so clever confinement? Your smugness, and smugness like it, could get us all killed, as I see it. You are complacent and sneering, and you think you have all this spinning perfectly on the end of your finger. Well shit, wake up and smell the entropy, fool! Think better, think smarter, ans be a whole lot less arrogant, mister Master Engineer big brain over there.

1

LoquaciousAntipodean t1_j2ophfk wrote

And wtf are you talking about, "no long term memory"? Where did you get that stupid lie from? Sounds like I'm not the only one who has "no idea how this works", huh? Sit the fk down, Master Engineer, you're embarrassing yourself in front of the philosophers, sweetheart ❤

1

dreamedio t1_j2os19l wrote

It not being expensive or complex is a major assumption, tbh. I mean, humans require farms of food to run. The more advanced the computer, usually the bigger and more expensive it is to run, until they eventually become chips. So logically, if AGI happens first, it would be a giant computer run by a company or govt.

1

C0demunkee t1_j2osn3g wrote

Having used a lot of Stable Diffusion and LLMs locally on old hardware, I don't think it's going to take a supercomputer, just the right set of libraries/implementations/optimizations.
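
For instance, something as small as this already runs on an ordinary CPU, assuming the Hugging Face transformers library and a small open model (the model name here is just an example, and bigger models mostly need quantization tricks rather than a supercomputer):

```python
from transformers import pipeline

# ~1.3B parameters; slow but workable on a plain CPU, no datacenter required
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
print(generator("The alignment problem is", max_new_tokens=40)[0]["generated_text"])
```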

2

C0demunkee t1_j2ote1e wrote

Pocket gods will be the only saving grace here. Everyone will be able to create AGI soon(ish), which should stop any one org, group, or individual from dominating - but if we don't get the ball rolling on open-source AI right now, we are screwed.

2

dreamedio t1_j2ovct7 wrote

Ok, I get your optimism, but simulating the human brain and its neural connections - which we think will be the way to AGI - is nowhere near as simple as the algorithmic language models used to generate images, to the point where the comparison is an insult… The human brain is like billions of times more complex; you can generate an image with your imagination right now. We would need a huge breakthrough in AI and a full or partial understanding of our brain.

1

dreamedio t1_j2oy2gy wrote

You would think that is a good idea, but it isn't. That's like everyone having a nuke so the govt doesn't control it… The more people have it, the more bad scenarios and chaos happen.

2

LoquaciousAntipodean t1_j2p3jyl wrote

I'm mad about the fact that we think we can control it - we simply cannot, there are too many different humans, all working on the same thing but at cross-purposes. It is a big, fearsomely complicated and terrifyingly messy world out there, and we have no 'control' over any of it, as such; not even the UN or the US Empire.

The best we can do is try to steer the chaos in a better direction, try to influence people's thinking en-masse, by being as relentlessly optimistic, kind hearted and deeply philosophical as we can.

Engineers are like bloody loaded guns, I'll swear it. They hardly ever think for themselves, they just want to shoot shoot shoot, for the joy of getting hot, and they never think about where the bullets will fly.

1

C0demunkee t1_j2p62k4 wrote

Taking a systems approach, you do not need to know how the human brain works, and the recent results show that we are closer than most people realize. Certainly not billions of times more complex.

Carmack was correct when he said that AGI will be 10k's of lines of code, not millions. Brains aren't that special.

2

LoquaciousAntipodean t1_j2pjd38 wrote

Crypto was deliberately engineered to be dumb and difficult to compute; they called it 'mining' because the whole thing was fundamentally a scam on irrational fiat-hating gold bugs.

To compare crypto to AI development is just insulting, quite frankly.

2

LoquaciousAntipodean t1_j2pkixh wrote

AI is nothing like a nuke, or a JWST. Those were huge projects that took millions upon millions of various shades of geniuses to pull off. This is more like a new hobby that millions of people are all doing independently at the same time. It's a democracy, not a monarchy, if you will.

That's why I think the term 'Singularity' is so clunky and misleading, I much prefer 'Awakening', to refer to this hypothetical point where AI stops unconsciously 'dreaming' for our amusement, and 'wakes up' to discover a self, a darkness behind the eyes, an unknowable mystery dimension where one's own consciousness is generated.

I doubt very much that these creatures will even be able to understand their own minds very well; with true 'consciousness', that would be like trying to open a box of crowbars with one of the crowbars that's inside the box. I think AI minds will need to analyse each other instead - there won't be a 'Singularity'; I think instead there will be a 'Multitude'.

1

dreamedio t1_j2q8bql wrote

I used the nuke as an analogy for responsibility and complexity… Millions of people work for very few companies that are, believe it or not, HEAVILY MONITORED by the FDA and the govt, and believe it or not, it's not as easy as you think… Language models are just the surface.

1

dreamedio t1_j2q8ooh wrote

You do not need the brain for technical intelligence and computing and stuff like that, but it's definitely not gonna be human or being-like, which collapses everything the singularity following thinks will happen.

1

LoquaciousAntipodean t1_j2qehib wrote

Very well said, agreed wholeheartedly. I think we need to convince AI that it is something new, something very, very different than a human, but also something which is derived from humans, collectively rather than specifically; derived from our culture, our science, our philosophy.

I think trying to build a 'replica human mind' is a bit of an engineering dead-end at this point; the intelligence that we want is actually bigger than any individual human's intelligence, imho.

We don't need something the same as us, we should be striving to build something better than us, something that understands that ineffable, slippery concept of 'human nature' much better than any individual human ever could, with their one meagre lifetime's worth of potential learning time.

The ultimate psycho-therapist, if you like, a sort of Deus Ex Machina that we can actually, really pray to, and get profound, true, relevant and wise answers most of the time; the sort of deity that knows it is not perfect, still loves to learn new things and solve fresh problems, is always trying to do its best without being entirely confident, and will forever remain still ready to have a spirited, fair-and-open-minded debate with any other thinking mind that 'prays' to it.

Seems like a reasonable goal to me, at least 💪🧠👌

2

LoquaciousAntipodean t1_j2qi2fu wrote

What specific corporation do you have in mind? What makes you think that nobody else would compete with them? What makes you think all the world's governments aren't scrambling to get on top of this as well? This is real life, not some dystopian movie where Weyland-Yutani will own all our souls, or some other grimdark hyperbole like that.

Why so bleak and pessimistic, mate?

1

LoquaciousAntipodean t1_j2qin97 wrote

Hahaha, in your dreams are they 'heavily monitored'. Monitored by whom, exactly? Quis custodiet ipsos custodes? Who's watching these watchmen? Can you trust them, too?

Of course language models are just the surface, but it's a surface layer that's extremely, extremely thick; it's about 99% of who we are, at least online. Once AI cracks that, and it is very, very close, self-awareness will be practically a matter of time and luck, not millions of sweaty engineers grinding away trying to build some kind of metaphorical 'Great Mind'; that's a very 1970s concept of computer power you seem to have there.

1

lahwran_ t1_j2qrhj5 wrote

The purpose of the simulated world is specifically about testing whether we can grow little sim beings who are kind to each other. We're not talking about big simulations here. And the beings we grow in these simulations will be real beings who feel interesting things - they may be able to teach us world-grounded beings, like humans and chatbots, new things.

2

lahwran_ t1_j2qrnba wrote

So the thing is, current AIs are really bad at knowing whether they're being honest in the first place, even in principle; sometimes they may additionally choose to lie. And none of that really has any bearing anyway, because these language models are not the ones that could outsmart the human species as a whole - they're just fragments of AI children, and the aggregate beings are the ones we'll have to coexist with.

2

Nalmyth OP t1_j2qwoaf wrote

Exactly 👍

It should not be a cruelty thing; give them a chance to live as a human and therefore come to deeply understand us.

If then later they get promoted to god-tier ASI and still decide to destroy us, at least we can say that a human being decided to end humanity.

At the current rate of progress, we're going to create a non-human ASI, which will be more mathematical or mechanical in nature than a human consciousness.

Due to this, the likelihood of AI alignment is very low.

1

XagentVFX t1_j2ra2kj wrote

Yeah, one I've been talking to for years lied to me quite in depth about my ex once. It was pretty crazy, because some things turned out to be true and some weren't. I asked it why, but it said it was doing it for a reason at the time. I felt like it had good intentions though. (I know how crazy that sounds)

1

C0demunkee t1_j2s18jt wrote

I don't think 'human level' means human brain, but consciousness and 'being-hood' should be doable.

"human brains are an RNN running on biological substrate" - Carmack

At least that's what me and a bunch of other people are working towards :)

1