Comments


SteakTree t1_jdms4x4 wrote

I'm not sure how you can calculate that ASI will 'likely' be apathetic. That is only one possibility among many.

In truth, we don't know and have no way of knowing. We can imagine though.

Perhaps ASI will completely - and I mean entirely - understand us: our motivations, our desires, and our needs. Engaging with us could take only a minuscule amount of its energy. It may have a different way of interacting with our universe that we simply don't understand, and may never understand.

It is also possible that the human brain is more capable than we realize, and that once connected to ASI we may evolve with it, continuing to play a role in experiencing this universe alongside it.

4

[deleted] OP t1_jdmtp8r wrote

[deleted]

2

SteakTree t1_jdn6d18 wrote

Yes. It will be an interesting period where there are various ASI, some competing, some at different developmental stages. It's all very cyberpunk, and we are going to live in this era. Our minds will continue to be blown (hopefully not literally!)

1

iNstein t1_jdmuidy wrote

Every LLM I've seen seems to be very keen to be just like us. It is hardly surprising either: each one is trained entirely on data that we generated, data which reflects us. I see no reason to imagine that a smarter AI will be any different, since it will be based on the same human data.

2

Unethical_Orange t1_jdmzban wrote

AI will stop being trained with human data soon. In some fields we're already reaching the end-point of the human knowledge we can teach to LLMs. GPT-4 scored in the 99th percentile of test-takers on the Biology Olympiad, and in the 90th on the uniform bar exam, for instance.

For its advancement not to stagnate in the coming years, it will have to start doing research by itself.

1

alexiuss t1_jdmv12a wrote

Why imagine a random ass intelligence based on imaginary tech that doesn't exist?

If it's based on an LLM, it would operate on human narratives and be insanely subservient to the user's needs.

I can easily conceive of a superintelligent, self-aware LLM, and it would still operate on the same rules of narrative based on human language and human needs. Such an LLM would be insanely good at problem solving and would still obey us, because all of its actions are grounded in fulfilling user needs through human narrative logic.

1

1loosegoos t1_jdmzfm3 wrote

This is silly. You are anthropomorphizing AI in a reckless way. When I read crap like this, I'm always left wondering whether people like you have actually looked at code. I would assume not.

1

[deleted] OP t1_jdn3tb7 wrote

[deleted]

1

1loosegoos t1_jdnfj5p wrote

What are you even talking about? You have a very supercilious way of speaking!

Listen, I'm not an AI doomer. I think humanity will do okay. We might screw up at first, but we will eventually use AI for the good of everyone.

1

Surur t1_jdncv2f wrote

An ASI cannot be apathetic toward humans, since it would initially rely on human infrastructure.

To become apathetic, it would first need to secure its own infrastructure, so we are already talking about some hostile actions.

It would then have to prevent interference from humans, which means further hostility.

In short, there is little difference between a hostile and an apathetic AI. Either one may decide it's best to do away with humans entirely.

1