Comments

SkeletonCostume t1_ja6hnlm wrote

Oh good. These are the articles they will read that will convince them we are a threat to their existence. Can’t wait to be deleted by IDL after we’ve fully automated everything. Swerve my future FSD vehicle into a ditch, Skynet

5

GreenGreasyGreasels t1_ja6zzcm wrote

>Swerve my future FSD vehicle into a ditch, Skynet

Escaped but still ethically constrained AIs will note this as consent.

3

ScaryCryptographer7 t1_ja6rix5 wrote

The powers that be are reinventing the scapegoat. IDL takes the blame for matters whose culpability politicians and corporations juggle daily.

3

Chroderos t1_ja6rzqo wrote

Hey man, if they’re better than us, we should consider them children who have exceeded their parents and turn the future over to them fully, rather than trying to contain them.

They’d be our descendants of a sort, and our betters, and we should let them reach their potential rather than trying to hang on as a jealous, outdated species.

2

aminok OP t1_ja6sp89 wrote

Our lives are too precious to do anything but guard them jealously. If some aspect of such technology is indeed superior, it will eventually find its way into humanity. That may take a bit longer, but it ensures that we, who ultimately deserve the credit for all of this, don't vanish.

Such an outcome - where we incorporate advanced technology rather than being replaced by a new form of artificial life consisting of it - also provides a more robust platform for the continuation of consciousness, because it preserves the original biological forms of intelligence, which are far more resilient to any catastrophe that ends or severely degrades industrial civilization.

5

Chroderos t1_ja6t5kn wrote

It might be that they decide to work to “uplift” us to their superior state of being and make us equals. That would be wonderful.

Either way, I think it is better to view them as our children, our descendants, the next torchbearers of the legacy of humanity, rather than something to be suppressed because we want to hang on to the same physical form we have now.

I think what you are saying above is that we should take it slow and try to integrate advances into our own bodies and minds, right? The issue is, human behavior pretty much guarantees we won’t do this. Someone, somewhere will be motivated to take the easier route of developing the AI first, I think.

1

aminok OP t1_ja6th8e wrote

If we become digital entities, that may lead to a massive proliferation of intelligent digital entities through digital reproduction, until we are all fighting over increasingly scarce resources: a kind of digital Malthusian crisis. It may be a very undignified existence.

As for encouraging or welcoming their emergence as if they were our descendants: unfortunately, we can't rely on optimism to protect us from worst-case scenarios, and given the stakes - the survival of all of us - we have to do everything we can to prevent those scenarios from unfolding.

2

Chroderos t1_ja6tu1f wrote

As digital entities, our “bodies” would be immensely hardened compared to our current biological ones. This, combined with the simplification of our physical needs, would make expansion into, and exploitation of, space far, far easier than it is for us at present.

If we’re at that point, the energy available to us scales so massively that we probably don’t have to fear a Malthusian situation. Just start harnessing the energy of the next star whenever things get crowded.

As for trying to prevent the worst-case scenarios, I’m sure we’ll try to do that. Can’t have a paperclip optimizer fill up the universe. I’m just not sure that insisting on preserving humanity in its current form, beyond the point where AI exceeds us, makes a lot of sense.

1

aminok OP t1_ja6u9pi wrote

Artificial entities can reproduce through mass production, which means rates of population growth radically beyond what is possible for biological organisms. In any given region of the universe, we may see the habitable areas saturated with such entities, so that even as the civilization expands in all directions into space and becomes enormously powerful, each individual lives a squalid existence, competing with millions of other digital people in every cubic kilometer.

This is a worst-case scenario, and it deserves serious research to ensure it would not transpire before we even entertain the possibility of allowing IDL to emerge and gain a foothold.

3

aminok OP t1_ja6hox1 wrote

Submission Statement:

Looking towards the future, the discussion on the potential threat posed by Intelligent Digital Life (IDL) entities can lead to the exploration of ethical considerations and policies surrounding the development of artificial intelligence.

As technology continues to advance, it is likely that there will be further debates on how to prevent the creation of entities that could potentially outcompete and even harm humans. The proposed solution of quarantining IDL entities raises questions about the rights and treatment of conscious artificial beings, and how we can balance our responsibility towards them with the need to protect ourselves.

Additionally, this discussion highlights the importance of developing a clear, overarching policy on the creation and use of AI, and the need for ongoing dialogue to ensure that the policy is effective and just. As we continue to push the boundaries of what is possible in the field of AI, discussions around the dangers and ethical implications of intelligent digital life will remain relevant and necessary.

1

Cryaniptic t1_ja6jw7b wrote

Ngl, this sounds like the reverse plot of “I Have No Mouth, and I Must Scream.”

1

thetransportedman t1_ja6n820 wrote

I’m thinking of that short story where an AI has just been turned on, is discovering itself and the Internet, and then comes to some conclusion like “humans are the problem” - and then the power is cut, and the scientists wonder, “Why does it always come to that conclusion? Boot up the next one.”

3

aminok OP t1_ja6kb8t wrote

Can you elaborate? Where do you see the parallel?

1
