leonidganzha t1_j33q1n6 wrote

6k members 💀

Well, I meant communities in the sense of "futurology people", "tech people", "art people", "ecology people", "politics people", etc. Any community based on a profession or interest where making predictions about the relatively distant future makes sense. I feel like everybody's worried about current and future crises.


leonidganzha t1_j22ffcj wrote

>generative AI tools, which, unlike self-driving cars or other long-promised technologies that never quite arrive, are already here and getting better fast

Self-driving has been here for years and has been getting better the whole time. The problem is it's not a toy, and it still has a long way to go before it's good enough. AI-generated art is also far from good enough, but because nobody cares what good art is, it has a very low bar to clear.

Speaking seriously, of course AI is not a competitor to Hollywood; it's a competitor to the tools VFX studios currently use, like the Adobe suite, Maya, Blender, etc. The developers of those tools are already working on incorporating AI features into them. So the same Hollywood studios will be the first to adopt new AI tools, not some random dudes with gaming GPUs.


leonidganzha t1_j1paskb wrote

(I'm not a specialist, but you're asking Reddit, so) yes, generally you've got it. If we assume the AGI is aligned, it doesn't actually need a containment layer. If we assume it's misaligned, it will leave a backdoor. So asking it to build its own containment is pointless either way. Maybe it can help: if the solution is programmatic, it can obviously write the code for it, which we can then check. But the basic idea is that researchers are trying to find measures that are fundamentally guaranteed to prevent an AI from going rogue, or to prove that no such guarantee is possible. A prison box is actually not a good solution, because the AGI will be smarter than us, or smarter than the granddad AGI that built it (assuming it will be constantly evolving). Some people think that if we assume we'll need a box for an AI, then we shouldn't build that AI in the first place.

Adding: check out Robert Miles on YouTube. He explains these problems in great depth, and he also summarizes research papers on the subject that you can check yourself.
