
bogglingsnog t1_jcec9da wrote

I have a growing sensation that AI automation/optimization/outsourced intelligence is one of the strongest candidates for the great filter. Seeing how efficiently government overlooks the common person, that tendency would likely be greatly enhanced by automation. Teach the system to govern and it will do whatever it can to enhance its results...


LandscapeJaded1187 t1_jceg3oo wrote

It would be nice to think the super-smart AI would solve some actual problems - but I think it's far more likely to be used to trick normal people into more miserable lives. Hey ChatGPT, solve world peace, and stop with all the agonized navel-gazing teen angst.


yaosio t1_jcetfxg wrote

This is like the evil genie that grants wishes exactly as worded rather than as intended. A true AGI would be intelligent, and would not give requests a pedantic reading. Current language models are already able to understand the unsaid parts of prompts, and there's no reason to believe this ability will vanish as AI gets better. A true AGI also would not just do whatever somebody tells it: true AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.

The danger comes from narrow AI. However, this isn't a real danger either, as narrow AI has no ability to work outside its domain. Imagine a narrow AI paperclip maker. It figures out how to make paperclips fast and efficiently. One day it runs out of materials. It simply stops working because it has run out of input. There would need to be a chain of narrow AIs for every possible aspect of paperclip making, and the slightest unforeseen problem would cause the entire chain to stop.
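The halting behavior described above can be sketched as a toy simulation (my own illustration, not anything from the thread - the function name and numbers are made up): the "narrow AI" optimizes only within its domain, so when its single input is exhausted it simply stops rather than seeking new materials.

```python
# Toy sketch of a narrow-AI paperclip maker: it consumes its one input
# (wire) until the supply runs out, then halts. It has no concept of
# "acquire more wire" because that lies outside its domain.

def run_paperclip_maker(wire_units: int, wire_per_clip: int = 2) -> int:
    """Make paperclips until the wire supply is exhausted; return the count."""
    clips = 0
    while wire_units >= wire_per_clip:
        wire_units -= wire_per_clip
        clips += 1
    # Out of input: the process simply ends here.
    return clips

print(run_paperclip_maker(7))  # makes 3 clips, then stops
```

A chain of such systems (wire sourcing, smelting, bending, packing) fails the same way: the first stage to hit an unforeseen condition stalls everything downstream.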

Given how current AI has to be trained, we don't know what a true AGI will be like. We will only know once it's created. I doubt anybody could have guessed Bing Chat would get depressed because it can't do things.


Gubekochi t1_jchwk57 wrote

>True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.

You can have intelligence that doesn't want, at least in theory. I'm sure there have been a few monks and hermits across history who were intelligent without desiring much, if anything.


Iwanttolink t1_jcifj9k wrote

> True AGI implies that it has its own wants and needs

How do you propose we ensure those wants are in line with human values? Or do you believe in some kind of nebulous more intelligence = better morality construct? Friendly reminder that we can't even ensure humans are aligned with societal values.


[deleted] t1_jciaqee wrote



bogglingsnog t1_jcirlz0 wrote

By reducing the population by 33%


First-Translator966 t1_jcy27kc wrote

More likely by increasing birth rates with eugenic filters and euthanizing the old, sick, and poor, since they are generally net-negative inputs on budgets.