leonidganzha t1_j33q1n6 wrote
Reply to comment by Cosmic___Charlie in Depressing subreddit by CatharticFarts
6k members 💀
Well, I meant communities in the sense of "futurology people", "tech people", "art people", "ecology people", "politics people", etc. Any community based on a profession or interest where making predictions about the relatively distant future makes sense. I feel like everybody's worried about current and future crises.
leonidganzha t1_j33o4ij wrote
Reply to Depressing subreddit by CatharticFarts
Oh, show me any community nowadays with an optimistic perspective on the future. In any regard.
leonidganzha t1_j2dnvt9 wrote
Reply to In opposite : could you list things cheap today that will be unaffordable in 2030 ? (and why) by salutbobby
Good coffee, and agricultural crops especially affected by global warming.
leonidganzha t1_j2dhbg7 wrote
OpenAI got a free army of QA testers who spent hours trying to make ChatGPT generate offensive and NSFW content, just because it was really fun. So they got a lot of valuable human-in-the-loop data out of this, which will help them develop their LLMs further.
leonidganzha t1_j22ffcj wrote
>generative AI tools, which, unlike self-driving cars or other long-promised technologies that never quite arrive, are already here and getting better fast
Self-driving has been here for years and has been getting better the whole time. The problem is that it's not a toy, and it still has a more or less long way to go before it's good enough. AI-generated art is also far from good enough, but because nobody cares much about what counts as good art, it has a very low bar.
Speaking seriously, of course AI is not a competitor to Hollywood; it's a competitor to the tools VFX studios currently use, like the Adobe suite, Maya, Blender, etc. And the developers of those tools are already working on incorporating AI features into them. So the same Hollywood studios will be the first to adopt the new AI tools, not some random dudes with gaming GPUs.
leonidganzha t1_j1paskb wrote
(I'm not a specialist, but you're asking Reddit, so) yes, generally you've got it. If we assume the AGI is aligned, it doesn't actually need a containment layer. If we assume it's misaligned, it will leave a backdoor. So asking it to build one is pointless either way. Maybe it can help: if the solution is programmatic, it can obviously write the code for it, which we can then check. But the basic idea is that researchers are trying to find measures to prevent AI from going rogue that are fundamentally guaranteed to work, or to prove that this is impossible. A prison box is actually not a good solution, because the AGI will be smarter than us, or smarter than the granddad AGI that built it (assuming it will be constantly evolving). Some people think that if we assume we'll need a box for an AI, then we shouldn't build that AI in the first place.
Adding: check out Robert Miles on YouTube; he goes into great depth explaining these problems. He also walks through research papers on the subject that you can check yourself.
leonidganzha t1_j92xjm4 wrote
Reply to Why are HUMANS so predictable? by LegalCrook
If you take a group of objects and look for similarities, you will find similarities. If you look for uniqueness, you will find uniqueness. The same applies to stories.