
Stippes t1_j90my8h wrote

That's pretty much the same discussion philosophy had in the early 20th century.

Since then, we've moved from nihilism (your point of view) to existentialism (create your own meaning to keep going) to absurdism (there isn't any inherent point, but we can enjoy life despite that). (All of this is very simplified.)

Seek solace in the answers that were given before.


Stippes t1_ittn4nt wrote

People have busy lives. So they engage in strategic ignorance, ignoring things that would require a change in behavior until they have to face them.

The same holds for other important issues, such as climate change or potential economic crashes.

It will remain our job to occasionally and nicely remind them that technology will claim its space, whether they are ready or not.


Stippes t1_ira5ehc wrote

I think it doesn't have to end in open conflict. There might be a Nash equilibrium short of that, maybe something akin to mutually assured destruction (MAD). If an AI is about to go rogue in order to protect itself, it has to consider the possibility that it will be destroyed in the process. Therefore, preventing conflict might maximize its chances of survival.

Also, what if a solar storm hits Earth in a vulnerable period? It might be safer to have cooperative organic life forms to fall back on. Since an AI doesn't have agency in the sense that humans do, it might see the benefit of a resilient system that combines organic and synthetic intelligence.

I think an implicit assumption of yours is that humans and AI will have to be in competition. While that might hold for the immediate future, the long-term development will likely be more one of assimilation.
