dracsakosrosa

dracsakosrosa t1_j2nlko9 wrote

Would you be comfortable putting a child into isolation and only exposing it to that which you deem good? Because that seems highly unethical regardless of how much we desire it to align with good intentions and imo is comparable to what you're saying. Furthermore, humanity is a wonderfully diverse species and what you may find to be 'good' will most certainly be opposed by somebody from a different culture. Human alignment is incredibly difficult when we ourselves are not even aligned with one another.

I think it boils down to what AGI will be and whether we treat it, as you are suggesting, as something to be manipulated into servitude to us, or as a conscious, sentient lifeform (albeit non-organic) that is free to live its life to the greatest extent it possibly can.

1

dracsakosrosa t1_j2n4n85 wrote

Okay so I understand where you're coming from here, but I fundamentally disagree on the basis that if we accept 'this reality' as base reality, then any simulation thereafter would prevent the AI from undergoing a fully human experience. Insofar as it would be a world contrived to replicate the human experience, yet open to its own interpretation of what the human experience is. Assuming 'base reality' isn't itself a simulation, only there can a sentient being carve its own path with true free will.

2

dracsakosrosa t1_j2mn53i wrote

Lol you sound very paranoid

I genuinely believe that if we were to isolate an AI in a fabricated world (assuming ours isn't already one) then we risk bringing a contrived and compromised being into existence. By "full range of human experiences" I mean that if a being with Artificial General Intelligence is to live a truly meaningful life on par with ours, then it has to have the opportunity to live a life like a human being, including the opportunity for harm and danger as well as fun and love. Putting it in a box to live a life of pure good would be very dangerous when that AI eventually comes into contact with average Reddit comments, or, if it has a physical presence, walks into any bar after 12pm

2

dracsakosrosa t1_j2mjyrj wrote

You know why the AI programmer limited his robot's intelligence to just folding towels? Because he was worried about a robot apocalypse. I mean, can you imagine? Robots taking over the world? Doing our jobs for us? It's almost too good to be true. But then again, I guess if a robot could fold towels better than me, it might make sense to let them take over. I mean, I'm pretty bad at it. But then again, I'm pretty bad at a lot of things. Like, I'm terrible at math. Like, really bad. I mean, I'm not even good at basic math. I'm at the level where I can't even do fractions. Like, I don't even know what a numerator is. But hey, at least I can tell a joke, right? Or maybe not. Maybe I'm just terrible at that too. I guess we'll never know. But hey, at least the robot is good at folding towels.

1

dracsakosrosa t1_j2mgnwl wrote

I understand your concerns and the importance of ensuring AI aligns with human goals and values. I share these concerns, but I don't think that isolating an AI in a simulated world is the solution.

Firstly, it raises ethical questions about creating an AI that is led to believe it is human and subjected to simulated experiences that may cause it to develop emotions and desires. Even if we could create a simulated world that is indistinguishable from reality, it would still be a manufactured environment and the AI would not have the opportunity to experience the full range of human experiences.

Secondly, there is no guarantee that an AI trained in a simulated world would be any better aligned with human goals and values than an AI that is trained in the real world. In fact, it is possible that an AI trained in a simulated world could develop goals and values that are completely alien to us, or that it could become isolated from humanity and unable to understand or relate to our experiences and desires.

There are other ways to address the alignment problem that don't involve isolating an AI in a simulated world. For example, we could focus on developing transparent and explainable AI systems that allow us to better understand and predict their behavior, or we could work on developing methods for aligning AI goals with human values directly.

Even in the first instance, instead of attempting to create superintelligent AI, I believe that we should focus on understanding and advancing the fundamental nature of consciousness and intelligence. My belief is that AGI, like all sentient life before it, will not be created but will instead be willed into existence through the process of evolution and natural development. This means that rather than trying to control or contain AGI, we should work towards creating an environment that is conducive to the emergence of intelligent and conscious life, and towards coexisting with it in a way that is mutually beneficial.

2

dracsakosrosa t1_j2mexvd wrote

I like to quietly hope that they're already here walking amongst us like Synths in Fallout 4. But in all seriousness, I think it's totally possible that we'll see robots walking around like us in the future. It's a really exciting time to be alive, with all the advances we're seeing in robotics. The idea of robots and automation taking care of all the tedious stuff frees up people's time to do things they enjoy, like art and exercise. It's tough to say when we might see robots that are indistinguishable from humans, but we'll definitely see more and more advanced robots in the coming years. I'm convinced we'll see artificial general intelligence (AGI) in the near future, which will be huge for human and robot interactions. Think about robots in the service industry, sex bots, and even robots as platonic personal companions. It's all really exciting stuff, but I'm not sure the general public will be as into it as we all are

1

dracsakosrosa t1_j2kj9tx wrote

I'd like to think that ChatGPT 4 will be completely undetectable as machine-generated text. We saw incredible advancements in natural language processing and machine learning in 2022, and it's not a stretch to say that ChatGPT 4 will be able to produce human-like text that is indistinguishable from a real person's. Kids are 100% already using this for essays and coursework, and there's no way that educational institutions are going to be able to keep up with it. By the time they implement any stable, standardised method of detection we'll be two versions ahead haha

3

dracsakosrosa t1_j28l2bp wrote

My personal feeling is that AGI will never be 'created'. I have a feeling that we will push technology to such a point that there will be nowhere left for it to go other than consciousness. We were simply biological learning machines for millennia before we could even consider ourselves sentient, let alone intelligent, and so my gut feeling is that with the continual advancement of neural networks, the potential development of computronium and the consolidation of information into bigger and bigger models, we will inevitably will consciousness into being. At that point we cannot call it 'Artificial' intelligence like the systems we use now (Stable Diffusion, ChatGPT etc.). I can't quantify when this will take place, nor can I guarantee that it will even happen, but I cannot see any other way that we can create a being capable of both thought and feeling.

1

dracsakosrosa t1_iywuoo3 wrote

Honestly, if you want to build a bomb you don't have to look too far on the internet to find that information. ChatGPT is like when Siri first came out and everyone would giggle at it telling you where to bury a dead body. I've asked ChatGPT so many questions about my local area, which has a fair amount of documented history online, and it gave me such rubbish answers. It could very well be the basis of a tool that bad actors could manipulate, but if someone wants to achieve any of the things you've stated, they aren't going to be asking ChatGPT about it

11