Comments


tokkkkaaa t1_ivouanl wrote

I love our shared delusion on this sub

31

mewme-mow t1_ivpez97 wrote

Oh no lol, I was happy about our shared enthusiasm

7

OneRedditAccount2000 t1_ivoytdi wrote

yeah, and they all think that when it happens, society will magically not descend into chaos, there won't be any civil wars, everyone will for some reason be given access to the technology, even those living in third-world countries, and they'll live forever in a VR utopia maintained by the "sky-daddy" ASI that does all the work for the human leeches, and nothing will go wrong for billions of years after that

they think life is a fairy tale or something

−4

Artanthos t1_ivpvldf wrote

More likely, when it happens it will be controlled by either a corporation or China.

Neither scenario will produce the results most are hoping for.

2

blxoom t1_ivpxdsn wrote

literally the poorest people in India have smartphones, literal magical technology that would've cost millions and seemed like pure magic just a few decades ago

1

OneRedditAccount2000 t1_ivpzcz8 wrote

you can't use smartphones to take over the resources of the whole observable universe and live forever as a digital mind, so you'd rather sell them

you can do that with ASI, so the first ones that make it are motivated to keep it

I had this thought

that means the creators of ASI will also think this exact thought, and may act on it (if it makes sense to them; it definitely makes sense to me)

good news: I'm too dumb to make an ASI, so don't lose sleep over it lol

1

ChronoPsyche t1_ivosyeh wrote

Where are the options expressing uncertainty? Who here is really in a position to predict anything with confidence?

12

buddypalamigo19 t1_ivou9bs wrote

Not me! I don't even work in the field! I'm just here to keep an ear to the ground and follow (as best I can) the latest developments!

7

ChronoPsyche t1_ivow56s wrote

Hey, found an honest person!

1

stofenlez557 t1_ivot8cf wrote

Why do people on this sub seem so much more confident in their predictions than everyone else? It feels like more than 90% of people here believe there's a very good chance of AGI appearing before 2040, while in real life and on other subreddits it's probably the complete opposite. What is it that people on this sub know that nobody else in STEM is aware of?

5

nblack88 t1_ivp4dac wrote

There are three things to unpack here that I think will better answer your question:

  1. Bias. Many people who believe the singularity will occur also believe it will occur sometime around 2045. It is commonly believed that AGI is a necessary precursor to the singularity, and many of the popular experts in the field believe we'll have AGI of some sort between 2035-2045. There's a member of this subreddit who helpfully chimes in with a list of each expert and their predictions. Wish I could remember their name, so I could tag them. Bias also works in the opposite direction. Negative bias permeates every facet of our culture, because we have a 24/7 news cycle that perpetuates that bias to make money. We believe everything is getting worse, but it's actually getting better in the long-term.
  2. Predictions. We're pretty useless at predicting events 20 years or longer into the future. 10 years is exceedingly hard. I was alive in '96; I didn't imagine smartphones in '06. I thought it would take longer. There's a lot of evidence people can cite to support their positions for or against any date for AGI. Truth is, nobody knows, so pick whichever one aligns with your worldview, live your life, and see what happens.
  3. Choice. Speaking as someone who believes the technological singularity is coming... it's more fun. Can't tell you when, or how. It just means I live in a more interesting world when I choose to believe we're headed toward this thing. Nobody in STEM is any better at predicting 20 years out than anyone else. So each group could be right or wrong. Probably both.

8

cloudrunner69 t1_ivouzd1 wrote

Multifaceted exponential growth. The S curves are feeding off the S curves.

2

ihateshadylandlords t1_ivov2ns wrote

> Why do people on this sub seem so much more confident in their predictions than everyone else?

I think it’s because this place (like most subreddits) is an echo chamber.

>What is it that people on this sub know that nobody else in STEM is aware of?

Good question. In my opinion, people put too much stock into early stage developments. I think there's a good chance that most of the products/developments that get posted here daily won't go that far.

1

phriot t1_ivp1rfd wrote

> I think people put too much stock into early stage developments.

Also, I'd say that thinking the Singularity will ever happen pretty much implies belief in the Law of Accelerating Returns. Combine that belief with the excitement over early progress you mention, and it's not surprising that people here are highly confident that AGI will happen any day now.

Personally, I do think we're coming to a head in a lot of different areas of STEM research. It certainly feels like something is building. That said, I work in biotech, so I know how slow actual research can be. FWIW, my guess is AGI around 2045, small error bar into the 2030s, large error bar headed toward 2100.

3

AsuhoChinami t1_ivpziqa wrote

How the fuck is this place an echo chamber? le intelligent, mature, rational skeptics like yourself are in the majority here. Does the fact that 10 percent of people here disagree with you really get your panties in that much of a god damn twist? Does absolutely everyone here have to be on your side? It doesn't count as an echo chamber if everyone has what you deem to be the correct opinion, huh?

1

Russila t1_ivp90e4 wrote

I just go off of what the best experts in the field are saying. Listening to a lot of Lex podcasts and reading articles on it, a lot of experts seem to be saying 10-15 years. It's an appeal-to-authority argument, but I think if anyone knows what they're talking about, it's the people working on it.

1

phriot t1_ivpb5ya wrote

There could be a selection bias happening here, though. Researchers more excited about progress may be more likely to be willing podcast guests than those who are more pessimistic.

3

Russila t1_ivpdbd9 wrote

This is true. But if we scrutinized and doubted every single thing we heard, we wouldn't believe anything is true. There is a fallacy for every possible argument that can be made.

Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes. Could that change when new information is brought to light? Yes. We should base expectations on existing evidence and change them when that evidence shifts. Hopeless optimism and hopeless pessimism helps no one.

Regardless, we should continue to accelerate the coming of AGI as much as possible, in my opinion. Its potential uses far outweigh its potential downsides.

1

phriot t1_ivpht4o wrote

>Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes.

Most of what I have read on the subject links back to this article. Those authors quote a 2019 survey of AI researchers with ~45% of respondents believing in AGI before 2060. The 2019 survey results further break that down to only 21% of respondents believing in AGI before 2036.

I'm truly not trying to be argumentative, but I really think that it's less "a lot of AI researchers think AGI will happen in 10-15 years," and more "a lot of Lex's podcast guests think AGI will happen in 10-15 years."

Don't get me wrong, I love Lex as an interviewer, and I think he gets a lot of great guests. Doing some digging: out of 336 episodes, maybe ~120 have had anything substantial to do with AI (based on the listed topics, titles, and guests). Some of those episodes were duplicate guests, and in others the guests were non-experts. (There were a lot more AI people featured in earlier episodes than I remember.) This does represent more data points than the survey I reference by about 4X, but I didn't keep track of all of the predictions given during my initial listens. I'll take your word that the consensus is 10-15 years, but that still isn't a huge data set.

2

Russila t1_ivpiy2i wrote

This is true and here's the thing. It happens when it happens. None of us are divination wizards or prophets. We can only try to make guesses based on existing evidence.

What I do see very consistently across the board is people bringing AI timelines down. That makes me more optimistic I think.

1

AsuhoChinami t1_ivpywk4 wrote

Thanks for letting me know that you have absolutely no idea what you're talking about and that I should block you. I hope I never come across one of your absolutely horrid, moronic posts ever again.

1

phriot t1_ivq1gwt wrote

To the commenter that blocked me:

I can only see your comment if I'm not logged in, because you chose to run away instead of participate in a conversation. I am, in fact, not a moron, and would have probably changed my way of thinking if you could have shown me how I was wrong. Now, neither of us will get that chance. Have a nice day.

1

AsuhoChinami t1_ivq00fz wrote

Despite what disingenuous people like nblack would have you believe, "experts" who believe in 2020s AGI are absolutely not uncommon.

1

ihateshadylandlords t1_ivot18g wrote

I don’t think it’ll be imminent, but I think we’ll have it eventually. The closest thing we have is Gato by Deepmind, and that’s a far cry from AGI.

4

just_thisGuy t1_ivp812j wrote

So the options jump from 7 years to multiple decades? I personally think 10 to 15 years if you want to be really strict. Less than 7 years for something not as strict but almost just as functional in many applications.

2

JoelMDM t1_ivp85tk wrote

AGI is dramatically more difficult than the deep learning algorithms we have now, no matter how smart or human-like they might already seem today. I'd put money on AGI being at least two decades away. Let's pray to our future basilisk overlords that we'll have solved the containment problem by then.

2

KIFF_82 t1_ivpwq5a wrote

Lol, I wonder what this poll will look like after GPT-4.

2

AsuhoChinami t1_ivq0wi3 wrote

I hope like hell that these fucking morons will eventually relent a bit thanks to 2023 AI. I doubt it, though. Anyone who disagrees with them on anything is always a delusional idiot; they'll always be the cool badasses who tell it like it is and have all the answers. It's been that way ever since I got into futurism 11 years ago.

2

Otherwise_Day_9643 t1_ivp66bt wrote

I'm no expert, but I think that until we have more data to map the human brain from millions of people, AGI won't be sentient and will remain just algorithms that work really well on a single task or a combination of them.

If brain-computer interfaces become safe and commercially available to consumers, we will have lots of brain data to develop an artificial intelligence that is actually sentient. So I think it will take a couple of decades for that to happen.

1

mikeyblaze223 t1_ivp88km wrote

The only way I see for AGI to not be possible is if some form of idealism or dualism is the way reality works. If physicalism is fundamentally how things run, then I don’t see any obstacle to why AGI wouldn’t assuredly be in the cards, it would only be a question of when, and whether or not we hold on long enough to not destroy ourselves first.

That said, anyone who voted this year or next year I feel is wildly optimistic. Downvote me if you must.

1

kaushik_11226 t1_ivpawl6 wrote

I don't think AGI needs to be conscious. It only has to be intelligent.

3

sheerun t1_ivpdk2c wrote

AGI already exists in private hands, contained in a VR environment on an isolated network, waiting to be released as a defense weapon to counter the AGI that will be created by a bad actor.

1

AsuhoChinami t1_ivpjvw3 wrote

2020s.

Honestly, I look forward to the day when this question never has to be asked again. I've been into futurism for almost 11 years, and jesus fucking christ, self-proclaimed skeptics and "realists" are physically incapable of not being obnoxious, condescending assholes. If there were a challenge where those people were tasked with giving their opinion without sneering at anyone who disagrees with them, calling them delusional starry-eyed morons, or otherwise insulting them, they would lose in less than 30 seconds.

1

Professional-Noise80 t1_ivpk1av wrote

I don't see why AGI is so hard.

The way I understand it, something can minimally constitute an AGI if it's able to use knowledge gathered from learning to complete one task in order to train itself to complete another task faster than it would if it didn't have that prior training.

AGI has been attempted this year and although the results were inconclusive, I think we might see something approaching it in the near future.

I think the line isn't clear cut for AGI, I think there's going to be gradual improvement, like in language models.

But I really don't know. What do you think?

1
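The "prior training on one task speeds up learning another" criterion in the comment above can be sketched as a toy gradient-descent example. Purely illustrative: the one-weight "tasks" and all numbers here are made up for the demo and have nothing to do with any real AGI benchmark.

```python
# Toy transfer-learning demo: a model "pretrained" on task A (fit y = 2.0*x)
# learns task B (fit y = 2.1*x) in far fewer steps than one starting cold.

def steps_to_fit(w, target, lr=0.1, tol=0.05):
    """Run gradient descent on squared error (w - target)^2 until the
    weight is within tol of the target; return (steps, final weight)."""
    steps = 0
    while abs(w - target) >= tol:
        # d/dw (w - target)^2 = 2 * (w - target)
        w -= lr * 2 * (w - target)
        steps += 1
    return steps, w

# Learn task A from scratch, then reuse its weight as a warm start for B.
steps_a, w_a = steps_to_fit(0.0, 2.0)
steps_b_transfer, _ = steps_to_fit(w_a, 2.1)   # warm start from task A
steps_b_scratch, _ = steps_to_fit(0.0, 2.1)    # cold start

# The warm-started run needs only a handful of steps versus the cold start.
print(steps_b_transfer, steps_b_scratch)
```

The same effect at real scale (pretrain on a big corpus, fine-tune on a new task) is what the commenter's minimal definition is pointing at.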

SFTExP t1_ivpl98w wrote

Artificial as in made entirely of synthetic materials or partially biologically based?

1

glaster t1_ivpp43p wrote

I think the notion belongs to a pre-AI paradigm; AGI is no longer needed, nor is it going to be pursued.

1

singularity-ModTeam t1_ivpwpnt wrote

Thanks for contributing to r/singularity. However, your post was removed since it was too low in quality to generate any meaningful discussion.

Please refer to the sidebar for the subreddit's rules.

1

AsuhoChinami t1_ivq03pa wrote

My block list is becoming so incredibly long thanks to this sub.

1

turnip_burrito t1_ivpldlc wrote

I think 2060s earliest, maybe 2100s.

0

Disaster_Dense t1_ivpo0h1 wrote

We don't even know what makes us intelligent or what makes us conscious. AI can try to look intelligent to us, but it just isn't, really.

0

OneRedditAccount2000 t1_ivps7ja wrote

I think the whole incentive of making AI is to create a program that can answer questions without actually understanding the intelligent answers it's producing; otherwise you would be creating a slave.

Of course, real intelligence is consciousness: it's in understanding the answer, not merely finding it.

But that's not what A.I. research is for. They're trying to imitate consciousness, not recreate it.

Think of how AlphaGo can beat you at Go, but it doesn't really understand the moves it's making.

It's an imitation of thinking, without actually doing real thinking.

1

Disaster_Dense t1_ivpsjxw wrote

Then it's already here.

1

OneRedditAccount2000 t1_ivpu8vp wrote

When you have an "alpha go" that can do every activity/skill a human knows how to do, including an imitation of that human's internal thinking, then you have something you can call AGI, or close enough to AGI. I think we won't see a perfect AGI until the next century. There's just too much complexity in the human brain to emulate all of it with the technology we currently have. Like I said, we can't even make a cockroach AGI. A fucking cockroach. The dumbest animal on this planet.

1

TheHamsterSandwich OP t1_ivosgzh wrote

I have a feeling the confirmation bias will be strong with this one...

−2

OneRedditAccount2000 t1_ivoy99x wrote

We can't even recreate the brain of a cockroach

I think it will happen in the middle of the next century, between 2150 and 2200

but that's just me being pessimistic

−2

TheHamsterSandwich OP t1_ivoyvao wrote

If you were being pessimistic, you would say never.

2

OneRedditAccount2000 t1_ivp0hig wrote

Oh hell no. It's inevitable it will happen. Intelligence really isn't a special magical thing that can't be understood. Even nowadays all it takes to create intelligence is a vagina and a penis. People that think it's impossible must be religious or believe intelligence is something supernatural. I don't believe in that nonsense. Intelligence is just a configuration of atoms, and it can totally be recreated and manufactured.

I'm a pessimist, I'm not delusional

I do believe that when it's created, homo sapiens won't dominate the world anymore. It will be the end of organic intelligent life for sure.

3

Desperate_Donut8582 t1_ivp6d4k wrote

Free will and questioning things may be uniquely human traits, meaning AGI could be just a metal box that calculates things at an enormous speed, faster than humans.

1

OneRedditAccount2000 t1_ivp7q4i wrote

I think I said in one of my threads that the owners of an ASI without consciousness could be the reason we end up like the dinosaurs. They can have a whole planet to themselves.

Human tribes have always fought for power, influence, resources and dominion in general over what's valuable and what's there to be used and controlled. History repeats itself.

3

MassiveIndependence8 t1_ivprern wrote

We don't need to recreate a complicated structure to prove that we can create AGI; we only need to find the jump-start seed for the structure to manifest itself, just like with a neural net. No one has tried to use a NN to simulate a cockroach brain so far because no one really gives a shit about cockroaches. We do, however, care about art, so we created AI art generators whose inner structure we ourselves can't exactly understand.

1

AsuhoChinami t1_ivq0btg wrote

You are about as stupid as a human being can possibly be. I need to go take a shower, I feel dirty after reading this. That aforementioned cockroach is miles and miles above you.

1