Submitted by Gortanian2 t3_123zgc1 in singularity

I see discussions on this sub that would lead people to cash out their 401k’s and sit on their hands waiting for ASI to save or enslave us all.

Guys, the singularity hypothesis is just that: A hypothesis. There are some very well-made arguments out there against the plausibility of a hard-takeoff.

A week ago I was having the same thoughts you are. How impossibly lucky am I to be alive during the invention of AGI and the intelligence explosion that will follow? What will I do with myself after nuclear fusion, the cure for aging, and interstellar travel are solved? Should I be worried about AI enslaving or eradicating humanity?

While all of those things would be wonderful (or terrible), it’s important for us to recognize the possibility that it might not happen in our lifetimes, if ever.

At the very least, you should read the articles below in their entirety before blowing your kids’ college funds on Nvidia stock.

I would love to read counter arguments to these. If we can’t come up with good explanations for the contrary, it should raise a red flag. And if this post gets downvoted to hell and we are unable to foster these types of debates as a community, then we are effectively treating the singularity hypothesis as a religion.




Ghostof2501 t1_jdx5kxy wrote

Look, I’m not here to be rational. I’m here to be sensationalized.


Queue_Bit t1_jdxle5m wrote

Sure, there could be some theoretical wall that stops progress in its tracks. But currently, there is zero reason to believe that a wall like that exists in the near future. Even if AI only improves by a single order of magnitude, 10x, it will STILL absolutely change the world as we know it in drastic ways.

And here's the funny part. Based on research, we KNOW a 10x improvement is already guaranteed. So, I get that you want to slow the hype and want people to think critically, but the truth is that many of us are. And importantly, a greater than 10x improvement is almost certainly guaranteed.

Imagine an AI that is JUST as good as humans are at everything. Not better. Just equal. But, with the caveat that this AI can output data at a rate that is unachievable for a human. This much is certain. We will create a general AI that is as good as humans at everything. Once that happens, even if it never gets better, we will live in a world so different than today that it will be unrecognizable.

If you had asked me this time last year if we were going to see a singularity-type event in my lifetime, I would have been unsure, maybe even leaning towards no. But now? If massive societal and economical change doesn't happen by 2030 I will be absolutely shocked. It looks inevitable at this point.


Gortanian2 OP t1_jdxofbi wrote

Thank you. I completely agree with all of this. The criticism I’m raising is against a literal singularity event. As in, unbounded recursive self-improvement where we will see ASI with godlike abilities weeks after AGI gets to touch its own brain.

But I agree that AGI is going to change the world in surprising ways.


ThePokemon_BandaiD t1_jdyqj9n wrote

If we reach human-level AGI, why would it stop there? Surely people will set AGIs on the task of self-improvement and AI development.


Gortanian2 OP t1_jdythpp wrote

It seems obvious right? Just tell the AI to rewrite and improve its own code repeatedly, and it takes off.

As it turns out, recursive self-improvement doesn’t necessarily work like that. There might be limits to how much improvement can be made this way. The second article I linked gives an intuitive explanation.


ThePokemon_BandaiD t1_jdyuo2r wrote

That article is from 2017, and includes no understanding whatsoever of the theories and technology being used in current generative AI.


ThePokemon_BandaiD t1_jdytnho wrote

Humans are definitely not the theoretical limit for intelligence.


Gortanian2 OP t1_jdywit9 wrote

I agree with you. I’m only questioning the mathematical probability of an unbounded intelligence explosion.


Ok_Faithlessness4197 t1_jdz5m2l wrote

I just read the second article you linked, and it does not provide any scientific basis for the bounds of an intelligence explosion. Given the recent uptrend in AI investment, I'd give it 5-10 years before an ASI emerges. In particular, once AI takes over microprocessor development, it will almost certainly kickstart this explosion.


theotherquantumjim t1_jdz19vm wrote

I think AGI (depending on your definition) is pretty close already. As you’ve alluded to, we may never get ASI. I’m not sure that matters, really. Singularity suggests a point where the tech is indistinguishable from magic, e.g. nanotech, FTL travel, etc. I don’t think we need that kind of event to fundamentally re-shape society, as others have said.


Ok_Tip5082 t1_jdyufwc wrote

When you're in the elbow it's really hard to tell if the growth is logistic, exponential, or hyperbolic.
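A quick numerical sketch of this point: below the inflection, an exponential and a logistic (S-shaped) curve are nearly indistinguishable, and they only separate as the logistic nears its ceiling. All parameters here are arbitrary and purely illustrative:

```python
import math

# Toy curves: a pure exponential vs. a logistic curve with ceiling k.
# Every parameter is made up for illustration.
def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, k=1000.0):
    return k / (1 + (k - 1) * math.exp(-r * t))

# In the "elbow" (small t) the two track each other closely;
# only later does the logistic flatten out.
for t in [2, 6, 10, 14, 18]:
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:9.1f}  logistic={l:7.1f}  ratio={e / l:5.2f}")
```

With these numbers the ratio stays near 1 through the early steps and only blows up once the logistic approaches its ceiling of 1000, which is the commenter's point: from inside the elbow, the data alone can't tell you which curve you're on.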


Dustangelms t1_jdzeymt wrote

10x improvement of what, precisely? Are you speaking figuratively or is there a certain objective metric you have in mind?


Queue_Bit t1_jdzlxht wrote

I mean that we've used about 1/10th of the high quality training data.

Which means that even with zero improvement in algorithms or methodology, and assuming that improvement is linear and that no new data is created, LLMs will get about 10x better. And who knows what that looks like.


Crackleflame35 t1_je0ugbz wrote

>some theoretical wall

Ever heard of this thing called climate change? AI needs power to run, and what do you think will be prioritized during a period of extended brownouts--household ACs or supercomputer server banks/processor farms? This is all a pipe dream because AI hasn't gotten here in time to create solutions for the mass transfer of organic carbon to CO2 that humans have caused along the way to enabling AI. At the very least, it'll be a very tight balance between powering the AIs to help us solve the problems while the problems get worse and worse.


BigZaddyZ3 t1_jdx67sp wrote

Both of your links feature relatively weak arguments that basically rely on moving the goalposts on what counts as “intelligence.” Neither one provides any concrete logistical issues that would actually prevent a singularity from occurring. Both just rely on pseudo-intellectual bullshit (imagine thinking that no one understands what “intelligence” is except you😂), and speculative philosophical nonsense. (With a hint of narcissism thrown in as well.)

You could even argue that the second link has already been debunked in certain ways, tbh. Considering the fact that modern AI can already do things that the average human cannot (such as design a near-photorealistic illustration in mere seconds), there’s no question that even a slightly more advanced AI will be “superhuman” by every definition. Which renders the author’s arrogant assumptions irrelevant already. (The author made the laughable claim that superhuman AI was merely science fiction 🤦‍♂️🤣)


SkyeandJett t1_jdx7bwx wrote

You said it better than I could. Two articles with vague musings on the metaphysical nature of intelligence don't really do much to refute the coming Singularity.


Gortanian2 OP t1_jdxdpev wrote

Thank you for your response. The logistical issues I see in these articles that get in the way of unbounded recursive self-improvement, which is thought by many to be the main driver of a singularity event, are as follows:

  1. The end of Moore’s law. This is something that the CEO of Nvidia himself has stated.
  2. The theoretical limits of algorithm optimization. There is such a thing as a perfect algorithm, and optimization beyond that is impossible.
  3. The philosophical argument that an intelligent entity cannot become smarter than its own environment or “creator.” A single person did not invent ChatGPT; it is instead the culmination of the sum total of civilization today. In other words, civilization creates AI, which is a dumber version of itself.

I do not believe these arguments are irrefutable. In fact, I would like them to be refuted. But I don’t believe you have given the opposition a fair representation.


BigZaddyZ3 t1_jdxfpvo wrote

Okay but even these aren’t particularly strong arguments in my opinion :

  1. The end of Moore’s law has been mentioned many times, but it doesn’t necessarily guarantee the end of technological progression. (We are making strong advancements in quantum computing for example.) Novel ways to increase power and efficiency within the architecture itself would likely make chip-size itself irrelevant at some point in the future. Fewer, better chips > more, smaller chips basically…

  2. It doesn’t have to be perfect to surpass all of humanity’s collective intelligence. That’s how far from perfect we are as a species. This is largely a non-argument in my opinion.

  3. This is just flat-out incorrect, and not based on anything concrete. It’s just speculative “philosophy” that doesn’t stand up to any real-world scrutiny. It’s like asserting that a parent could never create a child more talented or capable than themselves. It’s just blatantly untrue.


greatdrams23 t1_jdy192q wrote

Quantum computing is a long way away. You cannot just assume that or any other technology will give what is needed.

Once again. I look for evidence that AGI and singularity will happen, but see no evidence.

It just seems to be assumed singularity will happen, and therefore proof is not necessary.


BigZaddyZ3 t1_jdy1xyf wrote

Depends on what you define as a “long way,” I guess. But the question wasn’t whether or not the singularity would happen soon. It was about whether it would ever happen at all (barring some world-ending catastrophe, of course). So I think quantum computing is still relevant in the long run. Plus, it was just meant to be one example of a way around the limit of Moore’s law. There are other aspects that determine how powerful a technology can become besides the size of its chips.


drhugs t1_je5fefn wrote

> the size of it’s chips

If it's its it's its, if it's it is it's it's.


Gortanian2 OP t1_jdxkjna wrote

  1. Very strong counter argument. Love it.

  2. Again, strong, but I would argue that we don’t know where we are in terms of algorithm optimization. We could be very close or very far from perfect.

  3. I would push back and say that the parent doesn’t raise the child alone. The village raises the child. In today’s age, children are being raised by the internet. And it could be argued that the village/internet as a collective is a greater “intelligence agent” making a lesser one. Which does bring up the question of how exactly we made it this far.


SgathTriallair t1_jdxq29b wrote

Every single day people discover new things that they didn't learn from society, thus increasing the knowledge base. There are zero examples of an intelligence being limited by what trained it.


Gortanian2 OP t1_jdxrco9 wrote

The first sentence is true and I agree with you. The second sentence is not. Feral children, those who were cut off from human contact during their developmental years, have been found to be incapable of living normal lives afterwards.


SgathTriallair t1_jdy07jz wrote

But those feral children are smarter than the trees that "trained" them. I didn't say that teaching has no value, but it doesn't put a hard cap on what can be learned.

Let's assume you are correct. IQ is not real, but we can use it as a stand-in for overall intelligence. If I have an IQ of 150, then I can train multiple intelligences with an array of IQs, but the top level is 150. That is the top, though, not the bottom. So I can train something from 1-150.

The second key point is that intelligence is variable. We know that different people and machines have different levels of intelligence.

With these two principles we would see a degradation of intelligence. We can simulate the process by saying that intelligence has a variability of 10 points.

Generation 1 - start at 150, gen 2 is 148.

Gen 2 - start 148, gen 3 is 145.

Gen 3 - start 145, gen 4 is 135...

Since variation can only decrease the intelligence at each generation, society will become dumber.

However, we know that in the past we didn't understand quantum physics, we didn't understand hand washing, and if you go back far enough we didn't have speech.

We know through evolution that intelligence increases through generations. For society it is beyond obvious that knowledge and capability in the world increases over time (we can do more today than we could ten years ago).

Your hypothesis is exactly backwards. Intelligence and knowledge are tools that are used to build even greater knowledge and intelligence. On average, a thing will be more intelligent than the thing that trains it, because the trainer can synthesize and summarize their knowledge, pass it on, and the trainee can then add more knowledge and consideration on top of what they were handed.
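The generational argument traded back and forth above can be made concrete with a toy simulation. The numbers are invented purely for illustration (this is not a claim about real intelligence measurement): a strictly downward-capped lineage can only degrade, while even a small average upward bias compounds the other way.

```python
import random

random.seed(0)  # deterministic toy run

# Premise under dispute: each generation can only train successors at or
# below its own level, losing up to `max_loss` points of "IQ" per step.
def degrading_lineage(start=150, generations=20, max_loss=10):
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] - random.uniform(0, max_loss))
    return levels

# Counter-premise: trainees can exceed their trainers slightly on average,
# so the lineage trends upward despite per-generation noise.
def improving_lineage(start=150, generations=20, avg_gain=2, spread=5):
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] + avg_gain + random.uniform(-spread, spread))
    return levels

print("downward-only premise ends at:", round(degrading_lineage()[-1], 1))
print("slight upward bias ends at:   ", round(improving_lineage()[-1], 1))
```

Under the downward-only premise the lineage can never recover, which is exactly why the observed history of science (quantum physics, hand-washing, speech) suggests that premise is wrong.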


shmoculus t1_jdxyp11 wrote

Also, on 1: you might say the brain is a type of computer, and it has nothing to do with Moore's law. Imagine we can replicate a similar system using synthetic neurons.


shmoculus t1_jdxzbfk wrote

What thought experiment or real experiment would invalidate 3? You have to understand intelligence first to put system-wide constraints on it like that; I don't think we can make those assertions.

You also have human evolution which came about in a low intelligence environment and rapidly gained intelligence, so I'm not sure why that would be different for machines.


SgathTriallair t1_jdxprnf wrote

Moore's law is basically the principle that the use of tools allows one to build better tools. Technology has an exponential curve. It's possible that we run out of the ability to build smaller chips in the current style, but 3D chips, light-based computing, and quantum computing are examples of how we may be able to take the next step.

There is no good basis for a philosophical argument that dumb things can't create smart things. We only have a single data point, and that is humans. Inorganic matter (or if you want to skip that, then single-celled organisms) eventually became us. We weren't guided by something smarter than us but arose from dumb materials. ChatGPT has also demonstrated multiple emergent behaviors that were not built into it.


SuperSpaceEye t1_jdxhwi6 wrote

  1. Yeah, Moore's law is already ending, but it doesn't really matter for neural networks. Why? As they are massively parallelizable, GPU makers can just stack more cores on a chip (be it by making chips larger or thicker (3D stacking)) to speed up training further.
  2. True, but we don't know where that limit is, and it just has to be better than humans.
  3. I really doubt it.

Ok_Tip5082 t1_jdyvidj wrote

We're still going to be limited by fab capacity, rare earth minerals, energy, and maintenance technicians.

Supply chain still rules above all. Trade needs to exist until/unless post scarcity hits.


Ok_Tip5082 t1_jdyuuwy wrote

Energy is still finite, and AI uses an absolute fuck ton compared to the human brain. I don't see a practical way to scale it up with current technology that wouldn't also allow for genetic engineering to make us compete just as well, but more resiliently.

Also, we literally just had a 10-100x Carrington event miss us in the last two weeks. That shit would set us back to the industrial era at best, above-human AI or not.

If it turns out AGI can figure out a way to get infinite energy without destroying everything, hey, problem solved! No more conflict! Dark forest avoided!


Jeffy29 t1_jdzbcp7 wrote

>(imagine thinking that no one understands what “intelligence” is except you😂), and speculative philosophical nonsense. (With a hint of narcissism thrown in as well.)

I really get the sense, a lot of the time reading the doubters, that they think nobody else is even considering all the problems and challenges. All these PhD researchers are just mindless idiots chasing some fad. It just reeks of immense hubris.

>The author made the laughable claim that superhuman AI was merely science fiction

The thing is, this thing doesn't even need to be superhuman. I am not sure how many people know of John von Neumann, but he should have been as famous as Einstein, and he was arguably even smarter. His Wikipedia page reads like a piece of fiction: you look at his huge list of things he is known for, and at the end of the list you have (+93 more), what... His contribution to mathematics and computer science is beyond immense; it's very likely we would be quite a bit behind right now in a number of fields if he hadn't existed. Now imagine if, instead of a person of this kind of brilliance being born once or twice a century, we could have millions of them, on-demand at all times. If it wouldn't result in a singularity, it would be something very close to it.


WikiSummarizerBot t1_jdzbdxc wrote

John von Neumann

>John von Neumann ( von NOY-mən; Hungarian: Neumann János Lajos [ˈnɒjmɒn ˈjaːnoʃ ˈlɒjoʃ]; December 28, 1903 – February 8, 1957) was a Hungarian-American mathematician, physicist, computer scientist, engineer and polymath. He was regarded as having perhaps the widest coverage of any mathematician of his time and was said to have been "the last representative of the great mathematicians who were equally at home in both pure and applied mathematics". He integrated pure and applied sciences.



acutelychronicpanic t1_jdxrwiq wrote

Given how much has changed, I'm not sure how relevant any pre-GPT3 or even pre-GPT4 opinions are. Even my own opinion 6 months ago looks hilariously conservative and I'm an optimist.

I don't think anyone should be out there making life-changing decisions, but it's hard to ignore what's happening.


TitusPullo4 t1_jdxk5df wrote

What’s interesting to me is the shift in perspectives - ten years ago, both Skynet and the singularity were clearly hypotheses or conspiracy theories; now field leaders aren’t mincing words when they describe them as very real risks.


Gortanian2 OP t1_jdxp2f9 wrote

It’s truly fascinating. And I agree that it is a possible risk. But I don’t think people should start living their lives as if it is an absolute certainty that ASI will solve all their problems within the next couple decades.

My point is that people should consider both possibilities: either the singularity will happen, or it won’t. And there are well thought-out arguments for both sides even if we disagree with them.


TitusPullo4 t1_jdxpkrh wrote

>I don’t think people should start living their lives as if it is an absolute certainty that ASI will solve all their problems within the next couple decades

I 100% agree. People should be very skeptical of anyone selling that narrative - it means they want you to be complacent whilst they earn all of the money. Whoever's earning the money has the power. The odds of UBI ever happening are low - or at least far from guaranteed and we should act under the assumption that it won't happen.


TopicRepulsive7936 t1_jdzhe60 wrote

>...or at least far from guaranteed and we should act under the assumption that it won't happen.

That's horribly self fulfilling.


WonderFactory t1_jdy7v00 wrote

We actually don't need AI to develop much beyond where it is at the moment for crazy advances in medicine and technology over the next decade. Just applying ML where it is now to thousands of different applications will lead to crazy breakthroughs. Imagine thousands and thousands of models like AlphaFold and what they will bring to scientific advancement. A diffusion model that can literally read people's minds using brain MRIs was posted here yesterday. That's crazy sci-fi stuff already happening. Things are already happening that a year ago I wouldn't have thought would be possible in my lifetime.


norby2 t1_jdyp7k3 wrote

Maybe Alzheimer’s will be solved.


Ok_Faithlessness4197 t1_jdz68d9 wrote

I think it's unlikely Alzheimer's won't be solved.

!Remindme 10 years


TheSecretAgenda t1_jdxxoye wrote

There is certainly a danger of Singulatarianism devolving into some sort of Millerite cult.

I usually advise trying to live one's life normally with an awareness that certain job categories will be easily replaced by automation in the not-too-distant future.


DaCosmicHoop t1_jdxo6bx wrote

Honestly, forget the far future super crazy amazing stuff.

Even if the world in 50 years is only a bit better than the world of today, it's still something to be excited about.

Even in the least optimistic scenarios, I'll still be able to get a graphics card better than a 4090 from the toy in a McDonald's Happy Meal.


eve_of_distraction t1_jdymcst wrote

The least optimistic scenarios are significantly less optimistic than that. 😬


DaCosmicHoop t1_jdyywd7 wrote

"Everyone dies, but then realizes they are actually trapped inside 'I Have No Mouth, and I Must Scream' and spend eternity there."


Imherehithere t1_jdygf6l wrote

People in this sub already treat it as a certainty as a way of coping with their depressing life. It gives them hope that their current way of life will change for the better with new advancements in ai technology.


NanditoPapa t1_jdyxb5h wrote

Kind of like the 2.2 billion Christians in the world hoping things will be better in Heaven. Except they're treated as "normal" while people excited about the positive view of Singularity often get shit for it in this sub. The main difference is the AI crowd have demonstrable proof that their version might actually happen. That was your point, right?


Gortanian2 OP t1_jdyxz07 wrote

I don’t believe they’re treated as “normal,” but it’s almost impossible to refute something like faith.

There’s absolutely nothing wrong with being excited about the real possibility of a better future.


NanditoPapa t1_jdyyeko wrote

I think the 63% of Americans that call themselves Christians are absolutely treated as normal. When 77% of adult Americans say they believe in angels, that's normalizing. If someone espouses their faith, nobody bats an eye.

Anyway, at least we both agree that being excited for a possible fact-based optimistic future is a good thing.


YaAbsolyutnoNikto t1_jdzx3n4 wrote

Wait, angels? Aren’t Americans Protestant? Aren’t angels and saints a Catholic thing?

Yes, I’m completely ignorant on this topic.


Imherehithere t1_jdz7s6q wrote

Yeah, to add to your point, not just Christians but also Muslims and basically anyone who believes in an afterlife... I think they are not normal for believing in completely unfounded and unscientific fairy tales, but they are likely just as vulnerable as people on this sub. They may have lost a loved one or be in a desperate situation themselves, so they are not emotionally ready to rationally argue about the afterlife. I believe AI will significantly improve our lives and cure many diseases, but the singularity will not happen for 100 years at least.


NanditoPapa t1_jdzezb1 wrote

I agree AI will largely be beneficial. I'm not sure on the timeline. It could be 5 could be never (if we blow ourselves up first). Anyone who says they know when Singularity will happen... doesn't know when Singularity will happen.


Sure_Cicada_4459 t1_jdzmhl5 wrote

You could apply this line of reasoning to any expected improvement one is expecting in the future, that's just... silly.


prion t1_jdz18zc wrote

Whether we reach AGI or not the implications of what we already have are staggering once they are fully deployed.

All customer service jobs - GONE

Dramatically less need for assistants in medicine and law.

Dramatically less need for child care, teaching, farming, yard work, house keeping, elder care, etc.

We are going to see a dramatic decrease in the number of humans needed for gainful employment.

I would like to point out that neither business nor government have a plan to replace those jobs that are going to disappear in the next 10-20 years nor do we have any industries that are able to scale up and absorb available workers.

The impact on individual lives as well as the economy due to decreased consumption and massive defaults on car, home, and personal loans as well as the increase in homelessness and the stress on the social safety net will create a perfect storm if something is not put in place to redistribute the economic power of those businesses who replace human labor with automated labor.

And to be honest, they need to. It will dramatically change almost all aspects of the businesses that implement it. BUT,

Humans have to be cared for and must be considered first before the enrichment of business due to the ability to eliminate human labor. This is not negotiable. It can't be.

The outcome will be massive civil unrest if we try to do it any other way.

Massive civil unrest that will lead to civil war and if the US goes into civil war we will be fighting an unwinnable war on three fronts. Russia will invade from the East, and China from the west. Meanwhile our security forces will be fighting an internal war against their friends, their families, their neighbors, and their fellow citizens of the nation. And I'm betting that few in our military will be willing to kill people who are homeless and starving just so a minority class can get even richer.

Most people are not that heartless.


problematikUAV t1_jdyr7qb wrote

Clicking on your links requires thought and reading and those don’t seem like things the singularity will require of me


Comfortable_Slip4025 t1_jdyzi9p wrote

The "singularity" is an approximation. On a long enough timescale, current human advancements are already a near-singularity.


submarine-observer t1_jdxxts6 wrote

FDVR may be impossible, but AI is definitely taking my job. :(


No_Ninja3309_NoNoYes t1_jdz3zf1 wrote

Well, AFAIK there's nothing in history that resembles the singularity. We have evolution, but it took a long time. We can claim that computers are faster than evolution, but that's just something we think is plausible. Humans reproduce every 20+ years, so if computers can do it faster, we're going to witness a cyber evolution.

And add quantum computers. Quantum computers have the potential ability to search an exponentially growing space just by adding logical qubits. There's no equivalent of quantum computers in nature, AFAIK, but that doesn't rule them out.

The other option is biological computers using neural tissue. This doesn't seem as spectacular as quantum computers but still could potentially beat human evolution. I mean, this is not a religious argument. I'm not trying to prove the existence of God. There could be a path to AGI or not. It's more of an engineering question like can you break the sound barrier?

By the way I don't believe in the literal Singularity. There are many hard limits IMO that would prevent it.

TLDR: Something like the Singularity might be achievable, but maybe not the literal Singularity. Technology could be developed to get us close.


_psylosin_ t1_jdz3znn wrote

There’s honestly no point in drastically changing your life. Even if a hard limit stopped further development right this second, our lives are going to change in literally unimaginable ways, and sooner than we think.


CertainMiddle2382 t1_jdz4ra4 wrote

Recursive self-improvement is trivially exponential (no « singularity », aka limit, though).

Singularity comes from our need to extrapolate the past to anticipate the future.

And if the future comes too fast, the hypothesis is that we won’t be able to do that anymore.

Those exponentials often appear in recursive things, like population biology.

« Singularities » don’t really happen because, at one point, things outside the exponential mechanism start to « push » against it. It could be food, it could be space, it could be speed of light…

The fight between an exponential process and its environment leads to the ominous « logistics curve », better known as the S curve.

Maybe something, somewhere will push against AI, limiting it as all other exponential stuff is limited.

For various reasons, I don’t think it will happen.

IMO, Singularity is inevitable and will expand in the whole universe.
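The dynamic described above (an exponential growth term meeting environmental pushback) is the classic logistic model. A minimal sketch with arbitrary, purely illustrative parameters:

```python
# Discrete logistic growth: capability grows in proportion to itself
# (the recursive-improvement term) but is damped as it approaches an
# environmental ceiling k (energy, matter, fab capacity, ...).
def logistic_growth(x0=1.0, r=0.3, k=1_000_000.0, steps=80):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / k))  # reduces to pure exponential as k grows
    return xs

xs = logistic_growth()
print([round(v, 2) for v in xs[:5]])   # early steps: near-exponential
print(round(xs[-1] / 1_000_000.0, 4))  # late steps: pinned near the ceiling
```

Whether real-world recursive self-improvement behaves like this depends entirely on where the ceiling sits, which is the open question in the thread.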


Honest_Performer2301 t1_jdz567n wrote

The logical thing is not to do anything too drastic, but I think even if the (singularity) never comes, we will live in somewhat of a utopia. People tend to get the singularity confused with other things; the singularity is a whole other thing. In fact, I don't think everyone will be able to, or even want to, live in the singularity.


Memento_Viveri t1_jdzn6ts wrote

I don't disagree with much of what is stated in the first paper, but I think it sets the wrong goalposts. I have no idea what the author means by a three-orders-of-magnitude increase in intelligence. I am already in awe of the smartest humans. Even if you could produce a machine intelligence that was only as smart as the smartest humans, I struggle to fathom the consequences. The machine intelligences can be reproduced ad infinitum. They don't need to sleep and never die. They can communicate with each other in a nearly instantaneous and unbroken manner. They have access to the sum total of all human knowledge and near-instantaneous, inerrant recall. An army of Einsteins and von Neumanns in constant, rapid communication that never sleeps, never forgets, and never dies.

What are the abilities of such a creation? I don't need an explosion of intelligence of three orders of magnitude. I believe the existence of even one machine with the intelligence level of a highly intelligent human will shake the foundation of society and have implications that are unimaginable. It will be a turning point in human history. Maybe there will be an explosion of godlike intelligence through self improvement, but I don't think this is a necessary condition for society and life to undergo revolutionary and unimaginable changes as a result of machine intelligence.


Gortanian2 OP t1_je0yq76 wrote

“An army of Einsteins and von Neumanns in constant, rapid communication that never sleeps, never forgets, and never dies.”

I wonder how fruitful those conversations would be if one already knows everything the other one knows. I think it may become something more like an Einstein-level intelligence with an army of bodies to explore with. A hivemind.

Thank you for your comment, it has given me new ideas to ponder. And I agree. We would not need unbounded exponential growth to drastically shape our reality.


SlackerNinja717 t1_jdzx94a wrote

I agree. I enjoy this sub, the articles posted and discussions, but sometimes I lament that discourse is making it seem that the singularity will happen in the next 3-5 years, where a person in their late teens might forsake investing in education or working on building a career because they think a major societal overhaul is around the corner. My personal opinion - the level of automation will hit an inflection point in about 50 years or so - and then our economic system will have to be completely adapted to the new landscape.


qrayons t1_je06jmp wrote

I read Chollet's article since I have a lot of respect for him and read his book on machine learning in python several years ago.

His main argument seems to be that intelligence is dependent on its environment. That makes sense, but the environment for an AI is already way different than it is for humans. If I lived 80 years and read a book every day from the day I was born to the day I died, I'd have read less than 30k books. Compare that to GPT models which are able to read millions of books and even more text. And now that they're becoming multimodal, they'll be able to see more than we'll ever see in our lifetimes. I would say that's a drastically different environment, and one that could lead to an explosion in intelligence.

I'll grant that eventually even a self-improving AI could hit a limit, which would make the exponential curve look more sigmoidal (and even Chollet mentions near the end that improvement is often sigmoidal). However, we could still end up riding the steep part of the sigmoidal curve up until our knowledge has increased 1000-fold. I'd still call that a singularity event.


Gortanian2 OP t1_je0ww5u wrote

You make an excellent point. Even a basic AGI would be able to absorb an insane amount of knowledge from its environment in a matter of weeks. Thank you for your comment, it has altered my perspective.


SoylentRox t1_jdyhulw wrote

Fine, let's spend a little effort debunking this:


Intelligence is situational — there is no such thing as general intelligence.

This is empirically false and not worth debating. Current SOTA AI uses very, very simplistic algorithms and is still general, and slight changes to the algorithm result in large intelligence increases.

This is so wrong I will not bother with the rest of the claims; this author is unqualified.



Extraordinary claims require extraordinary evidence

You could have "debunked" nuclear fission in 1943 with this argument and sat comfortably, unworried, in the nice Japanese city of Hiroshima. Sometimes you're just wrong.

Good ideas become harder to find

This is true but misleading. We have many good ideas, like fusion rocket engines, flying cars, genetic treatments to disable aging, and nanotechnology. As it turns out, the implementation is insanely complicated and hard. An AI might do much better than us at it.


Bottlenecks

True but misleading. Each bottleneck can be reduced at an exponential rate. For example, if we actually had AGI right now, we'd be building as many robots and AI accelerator chips as we physically could, and also increasing the rate of production exponentially.

Physical constraints

True but misleading: the solar system has a lot of resources. Growth will stop only once we have exhausted the entire solar system of accessible solid matter.


Sublinearity of intelligence growth from accessible improvements

True but again misleading. Even if intelligence scales sublinearly, we can build enormous brains, and there are many tasks, mentioned above, that we as individual humans are too stupid to make short-term progress on, so investors won't pay to develop them.

So even if an AGI system has 1 million times the computational power of a human being but is "only 100 times" smarter, working 24 hours a day it can still make it possible to produce working examples of many technologies on short timelines. Figure out biology and aging in 6 months of frenetic round-the-clock experiments using millions of separate robots. Figure out a fusion rocket engine by building 300,000 prototypes of fusion devices at various scales. And so on.

Human beings are not capable of doing this; no human alive can even hold in their head the empirical results of 300k engine builds and field geometries. So various humans have to "summarize" all the information, and they will get it wrong.


Yomiel94 t1_jdyrrw5 wrote

> This is so wrong I will not bother with the rest of the claims, this author is unqualified

I find these comments pretty amusing. The author you’re referring to is François Chollet, an esteemed and widely published AI researcher whose code you’ve probably used if you’ve ever played around with ML (he created Keras and, as a Google employee, is a key contributor to Tensorflow).

So no, he’s not “unqualified,” and if you think he’s confused about a very basic area of human or machine cognition, you very likely don’t understand his claim, or are yourself confused.

Based on your response, you’re probably a little of both.


SoylentRox t1_jdze8ze wrote

I don't care who he is; his claim doesn't fit measurements he is aware exist.


just_thisGuy t1_jdypu97 wrote

I don’t know who is cashing out their 401k; that’s just stupid. I don’t know exactly what the singularity will be, but within 10 years your life is not going to be the same, and in 25 years you might as well be living 200 years from now, if not 1,000.


flyblackbox t1_je0867n wrote

Wait, your second sentence invalidates your first. Most people can’t access their 401k for at least another 10 years. If your second sentence is true, why would it be stupid to cash out a 401k when you are 30 years from retirement?


just_thisGuy t1_je0biv6 wrote

I don’t understand why you’d cash out a 401k early. What does that have to do with the singularity being near, or even with just very large changes?


flyblackbox t1_je0dehl wrote

So the money can be used before 30 years in the future.


just_thisGuy t1_je0g7oz wrote

Yes, but why?


flyblackbox t1_je0gsql wrote

Buy a house maybe. Or to have the option of investing in crypto because it’s not permitted in a 401k account.


RLMinMaxer t1_jdyr4ts wrote

You either spend your money expecting a Singularity that never comes, or save your money then watch it become worthless once the Singularity hits.

Screwed either way.


incelo2 t1_jdzmmgw wrote

The second half of this century is gonna be AWESOME. I won't make it, though...


Griff82 t1_jdzoamx wrote

I'm new to the sub, but as a Gen Xer I've seen great efficiencies develop in my lifetime, the fruits of which did not and will not accrue to the population at large. I expect to watch the same thing happen with AI.


TopicRepulsive7936 t1_jdxb3uf wrote

Noise. Learn to filter it out and get a solid worldview which you seem to lack.


Gortanian2 OP t1_jdxdvs9 wrote

This sounds very much like something a religious person would say. You haven’t refuted the arguments. You’re only ignoring them.