Submitted by TheHamsterSandwich t3_yqkxx7 in singularity
[removed]
Oh no lol, I was happy about our shared enthusiasm
I prefer the term “shared vision” :P
I prefer the term Misguided Fanatical Religious Cult.
Yeah, and they all think that when it happens society will magically not descend into chaos, there won't be any civil wars, and everyone, even people living in third-world countries, will for some reason be given access to the technology and live forever in a VR utopia maintained by the "sky-daddy" ASI that does all the work for the human leeches, and nothing will go wrong for billions of years after that.
they think life is a fairy tale or something
More likely, when it happens it will be controlled by either a corporation or China.
Neither scenario will produce the results most are hoping for.
Literally the poorest people in India have smartphones, literal magical technology that would've cost millions just a few decades ago and would've seemed like magic back then.
You can't use smartphones to take over the resources of the whole observable universe and live forever as a digital mind, so you'd rather sell them.
you can do that with ASI, so the first ones that make it are motivated to keep it
I had this thought
That means the creators of ASI will also have this exact thought, and may act on it (if it makes sense to them; it definitely makes sense to me).
good news: I'm too dumb to make an ASI, so don't lose sleep over it lol
Where are the options expressing uncertainty? Who here is really in a position to predict anything with confidence?
Not me! I don't even work in the field! I'm just here to keep an ear to the ground and follow (as best I can) the latest developments!
you're boring af
Well met, my loyal squire. I am gratified that you have found your way home once more.
Hey, found an honest person!
I bet his ear isn't even on the ground!
I stand corrected! Your honesty will never again be questioned!
I think it's 10 to 15 years away
It’s really hard to do
[deleted]
Why do people on this sub seem so much more confident in their predictions than everyone else? It feels like more than 90% of people here believe there's a very good chance of AGI appearing before 2040, while in real life and on other subreddits it's probably the complete opposite. What is it that people on this sub know that nobody else in STEM is aware of?
There are three things to unpack here that I think will better answer your question:
Multifaceted exponential growth. S-curves feeding off other S-curves.
> Why do people on this sub seem so much more confident in their predictions than everyone else?
I think it’s because this place (like most subreddits) is an echo chamber.
>What is it that people on this sub know that nobody else in STEM is aware of?
Good question. In my opinion, people put too much stock in early-stage developments. I think there's a good chance that most of the products/developments that get posted here daily won't go that far.
> I think people put too much stock into early stage developments.
Also, I'd say that thinking the Singularity will ever happen pretty much implies belief in the Law of Accelerating Returns. Combine that belief with the excitement over early progress you mention, and it's not surprising that people here are highly confident that AGI will happen any day now.
Personally, I do think we're coming to a head in a lot of different areas of STEM research. It certainly feels like something is building. That said, I work in biotech, so I know how slow actual research can be. FWIW, my guess is AGI around 2045, small error bar into the 2030s, large error bar headed toward 2100.
How the fuck is this place an echo chamber? le intelligent, mature, rational skeptics like yourself are in the majority here. Does the fact that 10 percent of people here disagree with you really get your panties in that much of a god damn twist? Does absolutely everyone here have to be on your side? It doesn't count as an echo chamber if everyone has what you deem to be the correct opinion, huh?
I just go off of what the best experts in the field are saying. Listening to a lot of Lex podcasts and reading articles, a lot of experts seem to be saying 10-15 years. It's an appeal-to-authority argument, but I think if anyone knows what they're talking about, it's the people working on it.
There could be a selection bias happening here, though. Researchers more excited about progress may be more likely to be willing podcast guests than those who are more pessimistic.
This is true. But if we scrutinized and doubted every single thing we heard, we'd never believe anything. There is a fallacy for every possible argument that can be made.
Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes. Could that change when new information comes to light? Yes. We should base expectations on existing evidence and change them when that evidence shifts. Hopeless optimism and hopeless pessimism help no one.
Regardless, we should continue to accelerate the coming of AGI as much as possible, in my opinion. Its potential uses far outweigh its potential downsides.
>Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes.
Most of what I have read on the subject links back to this article. Those authors quote a 2019 survey of AI researchers with ~45% of respondents believing in AGI before 2060. The 2019 survey results further break that down to only 21% of respondents believing in AGI before 2036.
I'm truly not trying to be argumentative, but I really think that it's less "a lot of AI researchers think AGI will happen in 10-15 years," and more "a lot of Lex's podcast guests think AGI will happen in 10-15 years."
Don't get me wrong, I love Lex as an interviewer, and I think he gets a lot of great guests. Doing some digging: out of 336 episodes, maybe ~120 have had anything substantial to do with AI (based on the listed topics, titles, and guests). Some of those episodes were duplicate guests, and in others the guests were non-experts. (There were a lot more AI people featured in earlier episodes than I remember.) This does represent more data points than the survey I reference by about 4X, but I didn't keep track of all of the predictions given during my initial listens. I'll take your word that the consensus is 10-15 years, but that still isn't a huge data set.
This is true and here's the thing. It happens when it happens. None of us are divination wizards or prophets. We can only try to make guesses based on existing evidence.
What I do see very consistently across the board is people bringing AI timelines down. That makes me more optimistic I think.
Thanks for letting me know that you have absolutely no idea what you're talking about and that I should block you. I hope I never come across one of your absolutely horrid, moronic posts ever again.
To the commenter that blocked me:
I can only see your comment if I'm not logged in, because you chose to run away instead of participate in a conversation. I am, in fact, not a moron, and would have probably changed my way of thinking if you could have shown me how I was wrong. Now, neither of us will get that chance. Have a nice day.
[deleted]
Despite what disingenuous people like nblack would have you believe, "experts" who believe in 2020s AGI are absolutely not uncommon.
[deleted]
I don’t think it’ll be imminent, but I think we’ll have it eventually. The closest thing we have is Gato by Deepmind, and that’s a far cry from AGI.
How so? What's wrong with GATO?
It had memory/performance issues. Some older threads talk about it in more detail.
Oh ok thanks!
So the options jump from 7 years to multiple decades? I personally think 10 to 15 years if you want to be really strict. Less than 7 years for something not as strict but almost just as functional in many applications.
AGI is dramatically more difficult than the deep learning algorithms we have now, no matter how smart or human-like they might seem today. I'd put money on AGI being at least two decades away. Let's pray to our future basilisk overlords that we'll have solved the containment problem by then.
Lol, I wonder what this poll will look like after GPT-4.
I hope like hell that these fucking morons will eventually relent a bit thanks to 2023 AI. I doubt it, though. Anyone who disagrees with them on anything will always be delusional idiots, they'll always be the cool badasses who tell it like it is and have all the answers. It's always been that way since I got into futurism 11 years ago.
Fuck ‘em
<3
I'm no expert, but I think that until we have more data to map the human brain from millions of people, AGI won't be sentient and will remain just algorithms that work really well on a single task or a combination of tasks.
If brain computer interfaces become safe and commercially available to consumers we will have lots of brain data to develop an artificial intelligence that is actually sentient. So I think it will take a couple of decades for that to happen.
The only way I see for AGI to not be possible is if some form of idealism or dualism is the way reality works. If physicalism is fundamentally how things run, then I don’t see any obstacle to why AGI wouldn’t assuredly be in the cards, it would only be a question of when, and whether or not we hold on long enough to not destroy ourselves first.
That said, anyone who voted this year or next year I feel is wildly optimistic. Downvote me if you must.
I don't think AGI needs to be conscious. It only has to be intelligent.
AGI already exists in private hands, contained in a VR environment on an isolated network, waiting to be released as a defense weapon to counter an AGI created by a bad actor.
2020s.
Honestly, I look forward to the day when this question never has to be asked again. I've been into futurism for almost 11 years, and Jesus fucking Christ, self-proclaimed skeptics and "realists" are physically incapable of not being obnoxious, condescending assholes. If there were a challenge where those people were tasked with giving their opinion without sneering at anyone who disagrees with them, calling them delusional starry-eyed morons, or otherwise insulting them, they would lose in less than 30 seconds.
I don't see why AGI is so hard.
The way I understand it, something can minimally constitute an AGI if it's able to use knowledge gathered from learning to complete one task in order to train itself to complete another task faster than it would if it didn't have that prior training.
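That minimal criterion (prior learning on one task making a second task faster to learn) can be sketched in a toy way. Everything below is invented for illustration, not a real AGI test: a tiny gradient-descent learner is "pretrained" on task A, and then learns a related task B in fewer steps than a learner starting from scratch.

```python
# Toy sketch of the transfer criterion described above: knowledge gained on
# task A (a warm-started weight) speeds up learning task B. All tasks,
# numbers, and the train() helper are made up for illustration.

def train(w, data, lr=0.1, tol=1e-6, max_steps=10_000):
    """Gradient descent on mean squared error for the model y = w * x.
    Returns (final_w, steps_taken_to_reach_tolerance)."""
    for step in range(max_steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        if abs(grad) < tol:
            return w, step
        w -= lr * grad
    return w, max_steps

# Task A: y = 3x.  Task B is *related* but not identical: y = 3.1x.
task_a = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]
task_b = [(x, 3.1 * x) for x in (1.0, 2.0, 3.0)]

w_a, _ = train(0.0, task_a)           # learn task A from scratch
_, steps_cold = train(0.0, task_b)    # task B from scratch
_, steps_warm = train(w_a, task_b)    # task B starting from task-A knowledge

print("cold start:", steps_cold, "steps; warm start:", steps_warm, "steps")
```

The warm-started run converges in fewer steps because it begins near the solution for the related task, which is the narrow sense of "using knowledge from one task to learn another faster" in the comment above.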
AGI has been attempted this year and although the results were inconclusive, I think we might see something approaching it in the near future.
I think the line isn't clear cut for AGI, I think there's going to be gradual improvement, like in language models.
But I really don't know. What do you think ?
Artificial as in made entirely of synthetic materials or partially biologically based?
I think the notion belongs to a pre-AI paradigm; AGI is no longer needed, nor is it going to be pursued.
Thanks for contributing to r/singularity. However, your post was removed since it was too low in quality to generate any meaningful discussion.
Please refer to the sidebar for the subreddit's rules.
My block list is becoming so incredibly long thanks to this sub.
I think 2060s earliest, maybe 2100s.
No.
Okay, why no? Too pessimistic? Or too optimistic?
Much, much, much, much, much too pessimistic. Nobody who knows what they're talking about could have that opinion.
We don't even know what makes us intelligent or what makes us conscious. AI can try to look intelligent to us, but it just isn't really.
I think the whole incentive of making AI is to create a program that can answer questions without actually understanding the intelligent answers it's producing; otherwise you'd be creating a slave.
Of course, real intelligence is consciousness, the understanding of the answer, not merely finding the answer.
But that's not what AI research is for. They're trying to imitate consciousness, not recreate it.
Think of how AlphaGo can beat you at Go, but it doesn't really understand the moves it's making.
It's an imitation of thinking, without actually doing real thinking
Then its already here
When you have an "alpha go" that can do every activity/skill a human knows to do, including an imitation of the internal thinking of that human, then you have something that you can call AGI, or close enough to AGI. I think we won't see a perfect AGI until the next century. There's just too much complexity to a human brain to emulate all of that with the technology we currently have. Like I said, we can't even make a cockroach AGI. A fucking cockroach. The dumbest animal on this planet.
I have a feeling the confirmation bias will be strong with this one...
We can't even recreate the brain of a cockroach
I think it will happen in the middle of the next century, between 2150 and 2200.
but that's just me being pessimistic
If you were being pessimistic, you would say never.
Oh hell no. It's inevitable it will happen. Intelligence really isn't a special magical thing that can't be understood. Even nowadays all it takes to create intelligence is a vagina and a penis. People that think it's impossible must be religious or believe intelligence is something supernatural. I don't believe in that nonsense. Intelligence is just a configuration of atoms, and it can totally be recreated and manufactured.
I'm a pessimist, I'm not delusional
I do believe that when it's created, homo sapiens won't dominate the world anymore. It will be the end of organic intelligent life for sure.
Free will and questioning things may be uniquely human traits, meaning AGI could be just a metal box that calculates things at enormous speed, far faster than humans.
I think I said in one of my threads that the owners of an ASI without consciousness could be the reason we end up like the dinosaurs. They could have a whole planet to themselves.
Human tribes have always fought for power, influence, resources and dominion in general over what's valuable and what's there to be used and controlled. History repeats itself.
We don’t need to recreate a complicated structure to prove that we can create AGI; we only need to find the jump-start seed for the structure to manifest itself, just like with a neural net. No one has tried to use a NN to simulate a cockroach brain so far because no one really gives a shit about cockroaches. We do, however, care about art, so we created AI art generators whose inner structure we ourselves don't fully understand.
You are about as stupid as a human being can possibly be. I need to go take a shower, I feel dirty after reading this. That aforementioned cockroach is miles and miles above you.
tokkkkaaa t1_ivouanl wrote
I love our shared delusion on this sub