j4nds4 t1_iu25gly wrote
Reply to [R] "Re3: Generating Longer Stories With Recursive Reprompting and Revision" - Generating stories of 2000+ words (or even much longer) by 0xWTC
Isn't this similar to something that was done with AI Dungeon? I seem to recall that in the early post-GPT-3 days, when creating custom scenarios, you could include a collection of world data separate from the actual prompts/story that would (presumably) be re-injected into the prompt in order to maintain some structure. How effective it was, though, I'm unsure.
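Roughly, the mechanism I'm thinking of looked something like the sketch below: stored world data gets prepended to the prompt whenever its keywords appear in the recent story text, so the model keeps seeing it even after it scrolls out of the context window. The entry format, matching rule, and character budget here are my own guesses for illustration, not AI Dungeon's or Re3's actual implementation.

```python
# Hypothetical world-info entries: keyword tuple -> background fact to re-inject.
WORLD_INFO = {
    ("Aria", "queen"): "Aria is the exiled queen of the northern realm.",
    ("Blackspire",): "Blackspire is a fortress carved into a dormant volcano.",
}

def build_prompt(story_so_far: str, recent_chars: int = 2000) -> str:
    """Re-inject matching world-info entries ahead of the recent story window."""
    recent = story_so_far[-recent_chars:]
    lowered = recent.lower()
    injected = [
        entry
        for keywords, entry in WORLD_INFO.items()
        if any(k.lower() in lowered for k in keywords)
    ]
    # World info first, then the recent story text the model should continue.
    return "\n".join(injected) + "\n\n" + recent

# Usage sketch: prompt = build_prompt(current_story); continuation = llm(prompt)
```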
j4nds4 t1_j9hxcbb wrote
Reply to comment by spryes in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
I'm not on board with everything he says, but to consider his probable retorts:
>It seems that he confidently believes we will all die once AGI/ASI is reached, but I don't see why *all* humans dying is more likely than only *some*.
Why would it opt to kill only some, knowing that doing so would increase the probability of humans trying to turn it off or change its function? Concretely eliminating the risk requires concretely eliminating the only other sentient and conscious species that might be motivated to hamper it.
>Why is it guaranteed it would cause catastrophic destruction rather than only minor destruction, especially since something can't be infinitely powerful.
The idea is that a smart enough AGI (i.e. smarter than us at everything) won't cause destruction unless it has a specific reason to, and preventing any risk of a reduction in its expected reward (like us turning it off) would be exactly such a reason, motivating it to eliminate us. And it doesn't need to be infinitely powerful, just better than us at everything relevant.
>For example, an analogy is that ASI::humans will be equivalent to humans::ants, and yet while we don't care if we kill ants to achieve our goals, we don't specifically go out of our way to kill them. Many ants have died due to us, but a ton are still alive. I think this is the most likely scenario once ASI becomes uncontrollable.
That's because there's no consideration of ants making a coordinated and conscious effort to disable us.
>I also think it will leave our planet/solar system and pursue its goals elsewhere as Earth may not be adequate for it to continue, effectively just leaving us behind, and that humans as material won't be as effective as some other material it wants to use in space somewhere.
If it's sufficiently smart, sufficiently capable, and sufficiently calculating, then it would presume that leaving us alive increases the risk of another sentient and conscious entity challenging its growth and reward potential.
It all comes down to the reward function: ML systems are trained to optimize goals and rewards, and we need to be sure that, once a system is exceedingly capable, calculating, and generalized, its reward is sufficiently well defined that humans will never be seen as a threat to that reward, and that actions taken toward that reward will never directly or indirectly affect us in a way we would disapprove of. All of that has to be figured out before it reaches the point of superior intelligence; once it's at that point, there is otherwise no hope of ever having a say in its actions.

We can't predict everything it could do because we are by definition dumber than it, and will be by an increasingly unfathomable margin. We can't predict every loophole, every side effect, every glitch, every escape route it could conceive of or exploit; therefore knowing with adequate confidence that we have all our ducks in a row before it takes off is effectively impossible. The best we can do is keep trying to figure out and decrease the odds of those cracks ever forming or becoming a problem; and given how easily ML systems (most recently ChatGPT and Bing Chat) engage in exactly the kind of behavior their creators are actively trying to avoid, we are unambiguously very, very bad at that.