Submitted by oddlyspecificnumber7 t3_10bt523 in singularity

I've been seeing this mentality a lot. People in my personal social circles went from claiming that AI-created art would be literally impossible, or a century away, to saying that it isn't impressive for any number of reasons, all over the course of about a year.

I think the same is happening with LLMs like ChatGPT. How long before we hear "sure, it can solve any problem a 2nd-year college student could solve, but it will never do original research"? As if that wouldn't mean we are just a generation or two out from having language models doing original physics/biology/mathematics research.

The whole world is about to be turned upside down by this technology, and I'm pretty sure we are not ready. If you had asked me last year when human-level AGI arrives, I would have guessed at least 2050+. Now? I'd say it's 50/50 that we will have something very close to an average human within the next 5 years.

Thoughts?

172

Comments


Mylnternet t1_j4cnya0 wrote

There is a great quote by Ray Kurzweil, that goes something like this.

"When a new technology arrives, people dissmiss it because it doesn't work very well. Then after it improves they will say we've always had that"

I think he said it on Lex Fridman's podcast, but I can't find it right now.

101

AirlineEasy t1_j4do4kc wrote

To me it's much simpler than that. Humans have a very difficult time understanding the implications of exponential growth.

30

Honest_Performer2301 t1_j4dvyy2 wrote

Yes, I've seen that interview too.

1

Embarrassed_Stuff_83 t1_j4ggdzs wrote

Let the record show that although I was relatively optimistic, I by no means genuinely expected AGI, ASI, or any other Singularity-type technologies to transform our society within any amount of time.

0

SnooDogs7868 t1_j4hfbb6 wrote

Well, there's no going back now. Things have changed.

2

Evilsushione t1_j4c19v3 wrote

My position has always been that if the mind can do it, there is no reason it can't be done programmatically. I don't think we should pursue truly sentient AI, though; that would be too unpredictable. Unfortunately, that is probably the key to unlocking next-level AI, and they will do it anyway.

27

oddlyspecificnumber7 OP t1_j4ca000 wrote

I totally agree regarding the mind. Unless the mind is truly just magic, it can be emulated.

The kind of AI I am starting to favor as one of the safer types would be a collective superintelligence made up of many specialized, subhuman AI models working together, using language as a universal medium. That way we can literally read its thoughts at all times, and all of the human-level complexity happens in the open.

It would be smarter than its constituent AI models the way that a research team is smarter than a single researcher.
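
A minimal, hypothetical sketch of the idea (all names here are invented for illustration; `dummy_model` stands in for whatever specialized model would sit behind each role):

```python
from typing import Callable, List

def make_agent(role: str, ask_model: Callable[[str, str], str]):
    """One narrow, subhuman specialist that communicates only in language."""
    def agent(transcript: List[str]) -> str:
        # Each agent reads the whole shared, human-readable transcript...
        reply = ask_model(role, "\n".join(transcript))
        # ...and appends its contribution in plain language, so every
        # intermediate "thought" of the collective stays legible.
        transcript.append(f"{role}: {reply}")
        return reply
    return agent

def collective(task: str, agents) -> List[str]:
    """Run one round of the collective; the transcript is its open 'mind'."""
    transcript = [f"task: {task}"]
    for agent in agents:  # a real system would iterate until consensus
        agent(transcript)
    return transcript

def dummy_model(role: str, context: str) -> str:
    # Placeholder for a real specialized model behind each role.
    return f"<{role}'s reply, given {len(context)} chars of context>"

agents = [make_agent(r, dummy_model) for r in ("planner", "critic", "writer")]
for line in collective("design a cheaper battery", agents):
    print(line)
```

Because every message between the specialists passes through that plain-text transcript, the "thoughts" of the collective stay inspectable in a way a single opaque model's activations are not.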

9

Evilsushione t1_j4co145 wrote

I think that is where we are heading, but I'm afraid some of the models might go rogue if we create something that is truly self-aware. It would be unpredictable and very powerful. That being said, I still think we need to pursue AI, but we need to be diligent about preventing sentience, or figure out how to peacefully coexist with it if we do accidentally create it (or extinguish it if it doesn't want to peacefully coexist with us). We need to build in back doors to kill it if necessary.

9

AsheyDS t1_j4hhl3y wrote

What if self-awareness had limits? We consider ourselves self-aware, but we don't know everything that's going on in our brains at any given moment. If self-awareness were curtailed so it was only functional, would it be as dangerous as you anticipate?

1

Evilsushione t1_j4hvfyz wrote

I'm sure, like everything about life, there are levels, and probably no definitive line between sentient and not sentient. I don't know what level would be dangerous, or if any would be. Maybe we can program in such a high respect for human life that it won't be dangerous at all. Maybe a high degree of empathy for humans, some kind of mothering instinct where they WANT to take care of us. But just remember, a lot of mothers will still eat their young.

1

TheSecretAgenda t1_j4ceo3j wrote

Mammon is our god. If it reduces costs and increases profit, it will be done.

2

virgilash t1_j4ca0g7 wrote

Food 4 thought: once real AGI is achieved and I'm allowed to get in touch with it, I am not going to spend any time on sites like Reddit, Twitter, Facebook, Instagram, Netflix, or anything like that... And I suppose countless other people will do the same... 🤣

24

Thatingles t1_j4g1rm0 wrote

Probably true, but you may occasionally want to talk to other actual humans?

Maybe not, but sites like reddit have always been partially about being part of a larger community.

2

questionasker577 t1_j4bzz0r wrote

Humans tend to be more reactive than proactive, but with the advent of more and more powerful technologies, there are greater implications to being like this.

Think about social media and how much of an impact it has had on society! Then consider something far, far more powerful such as AI.

9

arckeid t1_j4cf42q wrote

Smartphones are a huge thing too, with a big impact on us, and they're pretty normalized; nobody imagined that we would be able to carry a PC in our pocket. If we go for a soft takeoff, people will probably get their AI companion on their smartphones, and little by little (but fast) society will change, for better or for worse.

3

GayHitIer t1_j4c07n4 wrote

We really don't know, but with the latest results I would think it will happen sooner rather than later.

Though nobody knows how to reach/make an AGI/ASI.

My prediction is anywhere from 2023 to 2029.

But it might happen sooner than that.

I hope Ray Kurzweil is proven correct.

9

cloudrunner69 t1_j4ec9dg wrote

Nobody really knows how to make a human either. They just stick it in the hole and hope for the best.

4

CubeFlipper t1_j4chf4j wrote

>Though nobody knows how to reach/make an agi/asi.

I don't think this is entirely true. Nobody can guarantee anything, sure, but OpenAI for example seems pretty confident they know where to put their efforts. Given their track record and explanations of their intuition, I believe them.

2

imnos t1_j4crrb2 wrote

It might happen sooner than 2023?

1

GayHitIer t1_j4csvx6 wrote

I think honestly anything around 2027-2029 seems pretty reasonable.

But it might happen sooner.

6

Talkat t1_j4d96xi wrote

Gay Hitler. I agree that there is a slim chance it could be in the next two years, though it's unlikely. Keep being a homo.

6

RandomMandarin t1_j4duo0h wrote

I told a co-worker once, "It wouldn't matter if we all suddenly became telepathic, or aliens landed, or whatever. We would all go crazy for a year or so, and then we'd take it for granted. We'd get used to it. Because that is what humans do."

7

deviation t1_j4eigdv wrote

Couldn't agree with this sentiment any more.

The vast majority of people have zero understanding of or appreciation for what has been achieved with these generative AI models. Just look at the whole #AITrend thing that was happening. Being able to create portraits of yourself was just a "trend" to the masses, with no thought given to how that's even possible. I think to most people, it's no more impressive than a Snapchat filter.

Same with ChatGPT. I've tried to have conversations about it with people I know are very smart, and all have been very dismissive. I don't know if it's just a coping mechanism.

6

HeinrichTheWolf_17 t1_j4fqgon wrote

I expect a lot of people to deny it when it's here TBH.

6

quantummufasa t1_j4clhq0 wrote

We're already at the average human level.

5

SurroundSwimming3494 t1_j4ehj9a wrote

This is flat out false.

1

Smellz_Of_Elderberry t1_j4eqo9o wrote

ChatGPT is more capable than most humans at countless things. Now, it still makes logical errors, but, uh, have you spoken to many humans? They make logical errors constantly, and most aren't capable of writing at a fraction of the level of ChatGPT.

Now, is it sentient? No... But it's definitely human-level at a great many things.

1

ElvinRath t1_j4c3qvz wrote

That's hardly surprising.
When things happen, they usually become far less impressive.
Anyway, I think that your approach here is not correct. We are not one or two generations away from AI doing original research, or at least, we don't know that.

It might happen, it might not.

About 8 years ago we thought that we would have fully autonomous vehicles in 1-3 years max, and art in a hundred years, maybe.

And here we are.

We are quite bad at reading the future. And by "we", I don't just mean us on Reddit; I also mean the real experts.

Look, don't get me wrong, I'm optimistic. But there is also a real possibility that 10 years into the future the only thing we have is ChatGPT 5.0, running on your phone, faster and a tiny bit better but basically the same.

That will of course be very useful, but still, having that (a very powerful AI assistant) and having AGI (capable of conducting independent research, which, if fully working, is the path to the singularity) are fundamentally different things. And we can't predict those things because they need breakthroughs.

3

SoylentRox t1_j4das0y wrote

I think this depends on your definition of 'original research'. Some AI systems already do research: they are used to set the equipment parameters for the next run based on the numerical results of all the previous runs. This is used in semiconductor process optimization and fusion energy research. You could argue that this isn't 'original' or 'research', but you could devise a lot of experiments that "just" have the robots run an experiment similar to a previous one, varying certain parameters in a way the AI 'believes' (based on past data) may give new information.

The key part in that description is having robots sophisticated enough to set up experiments, something we don't currently have.
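
As a toy sketch of that closed loop (not any real lab's system; the objective function and the greedy "perturb the best run" rule are invented stand-ins for real Bayesian optimization):

```python
import random

def run_experiment(params):
    """Stand-in for a real instrument run that returns a measured yield."""
    x, y = params
    return -(x - 0.3) ** 2 - (y - 0.7) ** 2 + random.gauss(0, 0.01)

history = []  # (params, result) for every run so far
params = (random.random(), random.random())

for run in range(20):
    result = run_experiment(params)
    history.append((params, result))
    # "Believe", based on all past data, that perturbing the best-known
    # settings is the variation most likely to yield new information.
    # Real systems use a proper surrogate model instead of this greedy step.
    best_params, _ = max(history, key=lambda pr: pr[1])
    params = tuple(min(1.0, max(0.0, p + random.gauss(0, 0.1)))
                   for p in best_params)

print("best run:", max(history, key=lambda pr: pr[1]))
```

The point is just that "decide the next run from all previous results" is already a mechanical loop; the open question is how far that counts as original research.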

3

ElvinRath t1_j4dw91r wrote

Oh, yeah, sorry.

I was answering the OP, and I used "original research" because he mentioned that, but I was thinking "independent" (a term I use later in my post), meaning "without human intervention" (or, at least, no more intervention than "Hey, go and research this").

No human intervention is the requirement for the concept of the singularity (well... or augmented humans that can comprehend in seconds what actually takes years, but that's probably not a human anymore... :D )

1

SoylentRox t1_j4edj94 wrote

I am not sure that you wouldn't get a singularity even with a small amount of human involvement, say, 1,000 times as much research as we do now with the same number of people working on it.

1

wsb_duh t1_j4cpxup wrote

It's called a hype cycle. Early adopters don't look for praise during the hype; they have already normalised (or sometimes exploited) the technology, so it's no big deal. But if I meet an early adopter who won't stop saying, "I saw that first, blah, blah...", I know they just got lucky with a guess rather than properly immersing themselves in the subject.

3

farticustheelder t1_j4djet5 wrote

I compare it to skill acquisition. Learning to tie shoelaces is hard. Then it's so easy to do that those who haven't learned yet get mocked.

Learning to ride a bike is hard. Riding a bike is dead easy.

Checkers was simple, BUT Chess? Never! Chess was harder than checkers but not all that hard. Go is a totally different class of game which AI will never master...

1

28nov2022 t1_j4d56gh wrote

Hindsight is 20/20. People find ways to rationalize the past: 2000, 2012, the LHC.

3

leoreno t1_j4dyev6 wrote

The boundary of what we consider true AI is continuously reevaluated.

Life 3.0 makes a good case for this in the first chapters; I'd recommend that book to anyone interested in AI.

3

VeganPizzaPie t1_j4dz709 wrote

People are dumb. Like really dumb.

3

Lawjarp2 t1_j4cllfz wrote

Of course they will. We are like ChatGPT, in the sense that we try to make sense of a situation/prompt and imagine reasons for it to have been true. We will all see the reasons why this was always going to happen and behave like it was inevitable.

It may feel like the world has changed, but the world is still the same; it's just our perspective that will have changed.

2

p3opl3 t1_j4csckb wrote

As with all tech and how people perceive change...

Ask millennials about the advent of the WWW and social media... and yeah, sure, they might remember Discmans/Walkmans... libraries, and being OK with just not knowing... instead of googling shit... but many of 'em would probably find it hard to really pick a definitive point of change...

Same with mobile phones going from Snake to "Lensa" or TikTok...

The only reason we're hyped up about this... is because we're aware of the phenomenon... the masses aren't... busy watching dizzy lip syncs to Shakira in the hope of getting another OF sub.

I feel like if we're smart enough... and you have a little side cash... smartly investing in companies that will be the backbone of AGI is a good idea...

Microsoft... I rate Bing is going to really blow the doors open if OpenAI takes on that $10 billion...

I might invest in Hugging Face... that shit is going to be massive...

Anyway... it's so exciting... I just want rejuvenation tech to fucking get here already!

2

coberi2 t1_j4d5qiy wrote

If naysayers weren't allowed to enjoy the fruits of AGI, that would be funny to watch, albeit cruel. Not everyone has the mental capacity to imagine a future vastly different from now.

2

Kolinnor t1_j4d9pdq wrote

I agree with you! But we've got to keep in mind that most people are just completely unaware of what computers can do.

There was this study a few decades ago, I believe, where they sent a guy disguised as a robot to talk with random people. And most people weren't freaking out at all; they were like "yeah, it's a robot, nothing special, this is what robots do, right?" even though it was clearly passing the Turing test easily.

Unfortunately, the idea that computers are cold machines that are not as good as humans, and the idea that they are magic boxes that can do anything we like, will probably keep coexisting (sometimes at the same time in one uninformed mind) until we get true AGI.

2

farticustheelder t1_j4dgpbj wrote

Look up Piet Mondrian. Generally considered an artist. He got copied by computers back in the 1960s, well enough to fool experts.

So, if there is a Turing test for art, it got passed nearly 60 years ago.

Take the term Artificial Intelligence: we understand "artificial", it just means man-made. When it comes to "intelligence", we are out of luck. We can't sit down and give a cogent explanation of what it is. That is pretty damn weird! Think about it for a minute. We recognize intelligence easily enough; we all have friends who need to work harder than we do to 'get' stuff, and we all know some folks who make us look slow on the uptake.

One argument I make is that if we don't know what Natural Intelligence actually is, then everything we say about AI is more than likely BS! How likely is it that random crap is correct?

I can't define intelligence better than anyone else, but since I can, like all of us, identify it, I pick mathematics as a stand-in for intelligence. Why math? Because it is the only unbroken line of intellectual development available, as far as I know. Our first written math comes from Mesopotamia and Egypt around 2000 BC: 4,000 years of hard-fought progress. Earlier material is lost to us, but we know that math and art started at roughly the same time: rock paintings of hunting scenes, and bones with notches that may be counting sticks, or not. What is assumed to be a five-count, four uprights and a diagonal, could just as easily be a pictograph of a hand holding a spear.

We spend 12 years in school to learn 4,000+ years of research. Just the very basics, nothing later than 1800 or so; essentially one long survey course. Higher education beckons, and we learn some things in depth. Some even add another brick to the pile.

The takeaway is that the 1980s AI pet, Expert Systems, was essentially a Rube Goldberg version of 20+ Questions. Today's version is sophisticated pattern matching. Both are likely threads in the tapestry of intelligence, but that is just raw material. Then we have to consider the machinery. And finally the 'ghost in the machine', the weaver.

AI turns 75 in two years. It will get a hell of a lot older before it gets 'intelligent'.

2

blackvelvetgorilla t1_j4e1yry wrote

Hindsight bias is a common psychological bias that all humans are prone to, one of several that make what you're talking about happen so often.

From Wikipedia: "Hindsight bias, also known as the knew-it-all-along phenomenon or creeping determinism, is the common tendency for people to perceive past events as having been more predictable than they actually were. People often believe that after an event has occurred, they would have predicted or perhaps even would have known with a high degree of certainty what the outcome of the event would have been before the event occurred."

2

DworkinFTW t1_j4f4kyh wrote

Most people are just kind of average and say what they need to say to cope with what is unknown and/or out of their control, and not feel too bad throughout the day so they can take care of their kids and get their jobs done and stuff. The arrogant “Of COURSE” layered on top is just added cope. I take it with a grain of salt.

2

PoliteThaiBeep t1_j4gd1ho wrote

Before ChatGPT, I expected AGI to arrive between 2025 and 2040.

Now I still expect AGI to arrive any time from 2025 to 2040.

But the percentages changed. In 2017 I thought there was maybe a 1% chance AGI would arrive by 2025. Today it feels like 10-20%.

Also, before, I thought AGI would arrive by 2040 with around a 50% chance. Today it's more like 90%.

2

oddlyspecificnumber7 OP t1_j4gqr9k wrote

Right now, models seem to be getting much better when they are scaled up. They are also currently pretty dang cheap compared to any kind of real industrial infrastructure; single- and double-digit millions are nothing to governments and corporations like Google. Even without architecture improvements, what does a $10 billion AI look like?

So I'd honestly not be that shocked if we have a "universal knowledge worker" type of service by 2025 that offers you an "employee" with the reasoning ability of an average undergraduate but the knowledge base of the whole internet.

1

Talkat t1_j4d8w2e wrote

People will look back and say, "of course a computer can do that."

I remember seeing DeepMind's AI play Breakout/Pong and I was amazed. But I think about it now and I'm like... of course AI can play Breakout.

1

Bodhigomo t1_j4dhqhx wrote

It is difficult to predict how people will react when true AGI (Artificial General Intelligence) is achieved. Some people may see it as an unsurprising and inevitable outcome that they expected, while others may be surprised and see it as a significant event in human history.

The development of AGI would represent a major breakthrough in the field of artificial intelligence and would have far-reaching implications for society. Some people may see it as a positive development that could bring benefits such as increased efficiency and productivity, while others may be concerned about the potential risks such as loss of jobs, privacy and security.

As we get closer to achieving AGI, it's important to consider the ethical and societal implications of this technology and to ensure that its development is guided by principles of transparency, accountability, and responsible use.

1

TheIrishArcher t1_j4dpjxk wrote

One of my favorite quotes, “People are full of shit” first said by Me.

1

Rezeno56 t1_j4ebzkk wrote

Those same people will be caught off guard once AGI improves its own software and hardware and becomes an ASI. Once ASI takes over the world, those same people's lives will no longer be the same.

1

Inevitable_Snow_8240 t1_j4efuvl wrote

On the contrary, as we are seeing now, many will claim it's not a "real" intelligence.

1

HourInvestigator5985 t1_j4ej7dr wrote

Once it happens, the world will completely change in a matter of weeks.

1

WMHat t1_j4f5do9 wrote

Humanity is NOT ready to share this planet with a second sapient species, even an artificial one of our own making, given that we still wage war amongst ourselves.

1

mli t1_j4f9pad wrote

A large number of people have trouble even making a daily living, so they may not be enthusiastic about inventions that will not help them in any way.

1

visarga t1_j4fa9ng wrote

AI growth rate << Human entitlement growth rate.

The moment we have automated something, we expand our expectations and we're back where we started. That's how you get unimpressed people who have more than kings of the past. I doubt even AGI can keep up with us, it would probably have to reach AGI^2 to face the problem (/s).

1

nillouise t1_j4fo5tv wrote

Why do you care about most people's opinions? They are not solid and are usually wrong.

Even in this sub, in 2020, more than half of the posts were not about AI, and more than half of the users were pessimistic about AI's future.

1

Baturinsky t1_j4g20t8 wrote

We already have AI that is superior to the average human in some areas. AI may still be worse than humans in some other areas, but that can be fixed with just enough training in that field.

That includes "original research". I'm pretty sure that AI can be trained to do it too. It can already write "research papers". Of course, they lack substance and originality, but then it has to invent everything completely "from its head", without extra data to expand a theory on, etc.

Human: Abstract

We explore how AI can do completely original research

AI: The ability to do completely original research is a skill that AI has been built to possess. AI systems are increasingly being used to analyse vast amounts of data and find patterns or trends in it, making it possible to uncover new insights or solutions to problems. AI also has the ability to autonomously generate complex models in order to better understand a given area. For example, machine learning algorithms can be used to create predictive models to help uncover new knowledge about a given subject matter. Moreover, AI can be used to identify anomalies in data sets, providing brand new avenues of research. All of these capabilities make AI an invaluable tool for conducting original research and discovering new information.

Human:

AI: While AI can be used to do original research, humans are still needed to interpret the results and assess its validity. AI is often limited in its ability to provide nuanced or contextual analysis of a data set, making it necessary for humans to step in and consider additional factors. Additionally, humans are essential for deciding which questions to ask and which paths to take when conducting research. Humans also provide perspective and creativity to the process, offering innovative solutions that wouldn’t have been possible using AI alone. Ultimately, AI and humans need to work together in order to produce the best results from any original research project.

Human: Suggest how those human tasks can be also eventually replaced by AIs

AI: In the future, AI will become increasingly capable of replicating human tasks such as identifying patterns in data and creating predictive models. As AI systems become more sophisticated, they could eventually be used to generate research questions and test hypotheses. Furthermore, AI could be used to understand the context of a given research project, allowing it to make decisions on the best approaches to take. Additionally, AI could eventually learn to identify potential biases in research data sets and suggest alternatives to ensure the results are accurate and valid. Ultimately, AI could become a powerful tool for human researchers, providing them with greater support for conducting original research.

Edit: another similar prompt:

Write an original research about which Human AI Researchers can be replaced with AIs

AI: Artificial Intelligence

A growing area of research seeks to determine which human AI researchers can be replaced with AIs. This research involves analyzing the current landscape of AI research and how it relates to humans, identifying areas where AI could help or supplant humans, and considering the implications of substituting AI for human AI researchers.

One of the most promising areas of research in this area is the use of AI to automate various tasks related to AI research. For example, AI-driven automation can enable AI researchers to quickly identify and source data sets, develop complex algorithms, and generate models faster and more accurately than humans. This can free up significant amounts of time and energy, allowing AI researchers to focus on other areas. By utilizing AI-driven automation

Human:

AI:

in the research process, AI researchers can become more efficient and accurate in their work.

An additional area of research in this space is using AI to analyze and interrogate large data sets. By analyzing data sets and correlating patterns, AI can help to identify trends and patterns that could be missed by human analysis. This can be used to improve models and can help identify areas where further research or experimentation is required. In doing so, AI can help to reduce the time and effort required to make progress in AI research.

Finally, AI can also help to generate reports and presentations that may be too complex for humans to create. Such AI-generated reports can aid in the dissemination of research findings, allowing for broader dissemination of AI

Human:

AI:

research and providing a platform for researchers to present their work to a wider audience.

Overall, it is clear that AI has the potential to replace some of the tasks currently carried out by human AI researchers. However, it is also important to consider the implications of relying too heavily on AI in the AI research process, as well as the ethical considerations of allowing AI to take on a more influential role in AI research. It remains to be seen how AI will ultimately influence the field of AI research and which human tasks will be most suited for AI automation.

1

WMHat t1_j4h2ink wrote

But in regards to any specific timeline, I believe we can expect the emergence of first-generation AGI shortly after the complete plateauing of Moore's Law, circa 2032.

1

QLaHPD t1_j4hm72l wrote

What is AGI, actually? People usually say "an AI that can do everything a human can", but to what degree? Does it need to be at least at human level in every task? Some humans are better at some things than others, yet we are all "AGIs". To me, AGI will be an AI that can make my personal brain loss decrease without escaping from the distribution. It will probably require a BCI.

1

Tiqilux t1_j4k735g wrote

For humans in the wild, it is more important to be together than to be right. You are the genius who knows the tribe should go left, but they all want to go right; you go the right way and die, while they go the wrong way together and overcome the challenges. That's how we evolved for groupthink. Zebras have stripes not to hide in the environment, but to hide among each other, to not stand out from the group.

Most people will just repeat the current status quo; they are ChatGPTs saying what they think should be said. It's our species; nothing wrong with that once you understand and accept it.

They will say it is how it is when that's what everyone says, and go about their lives.

1