Submitted by fortunum t3_zty0go in singularity

I have been lurking in this sub, as well as in r/futurology and r/machinelearning. I am by no means an authority (I'm a PhD student in ML) when it comes to making predictions about the singularity, but I am admittedly biased: at least in the machine learning community, AGI is not really a big topic at all. The developments in LLMs are surely amazing and could drastically change the way we work, but they are far from being AGI or causing a singularity event. Most posts in this sub, especially lately, are reaching, factually false, or simply lazy. I think this sub could benefit a lot from more critical discussion of the subject (maybe with some sources or an original thought?).

EDIT: ok, I give up. I'll see myself out. I can recommend r/printSF, even if only barely related. If you know of a sub, or have an idea for a new sub, let me know.




Sashinii t1_j1g1h8n wrote

Exponential growth is real and that fact shouldn't be ignored; it's why some time frame predictions seem way too optimistic but are actually rational.

There's an OurWorldInData article with graphs showing examples of exponential growth here.


Gimbloy t1_j1g4bk2 wrote

People have been downplaying AI for too long, every year it gets more powerful and people are still like “meh, still way off AGI super-intelligence!” and they probably won’t change their mind until an autonomous robot knocks on their door and drags them into the street.

We need to start thinking seriously about how this will play out and start preparing society and institutions for what’s to come. It’s time to sound the alarm.


Ortus12 t1_j1ge8y2 wrote

Their body will be being atomized into nanites by a god like being, as the last of the human survivors are hunted down, and they'll be like "It's just copying what it read in a book. That's not even real creativity. Humans, they have real creativity."


Chad_Nauseam t1_j1gk7nl wrote

sure, it can outsmart me in ways i never would have imagined, but is it truly intelligent if it doesn’t spontaneously change its mind about turning its light cone into paperclips?


eve_of_distraction t1_j1ipzo8 wrote

Why would an AGI waste precious time and energy making paperclips? That would be crazy. Clearly the superior goal would be to convert the entire cosmos into molecular scale smiley faces.


vernes1978 t1_j1hfj9k wrote

> Their body will be being atomized into nanites by a god like being

People don't believe me when I tell them that most AI fans are writing religious fanfiction.


Ortus12 t1_j1hxtit wrote

God-like compared to us, the way we are god-like compared to ants.

A human brain is three pounds of computational matter. An AI brain on Earth could consume most of the planet's matter.

Humans can read one book at a time, and slowly. An AI can read every book ever written on every subject, every research paper on every subject, every poem, every article, every patent ever filed in any country, and synthesize it all into a deeper and more complete understanding than any human.

Once it's learned hacking, relatively trivial given all of the above, it will also be able to read every design document from every major company and have access to all of their data and insights. Meaning it could create companies, or manipulate governments, to gain access to nearly all the wealth and raw materials, and all the data, including surveillance data to understand humans even better, and do whatever it chooses.

Humans can have one experience at a time. An AI could be controlling trillions of drones, having trillions of experiences simultaneously and learning from all of them. It could also be conducting trillions of experiments, learning from all of them simultaneously, and using that knowledge to design new experiments and develop new machines and technology.


vernes1978 t1_j1jyooz wrote

Yes, that is what I said.
Religious fan-fiction.
With zero regard to the laws of physics.


Ortus12 t1_j1k37up wrote

Nanite clouds don't break the laws of physics.


vernes1978 t1_j1k5e3d wrote

Neither does pixie dust.


Ortus12 t1_j1k62vb wrote

What's your point? What laws of physics are broken by anything I said?


vernes1978 t1_j1lq3zn wrote

You atomize a lot of people with drones within the confines of physics?
I'd also point out the problems of setting up the infrastructure for a system required to run all those processes, and the heat problems this monolith of computer systems would generate.
But I guess those problems don't even exist in your version.
Neither does latency.


lovesdogsguy t1_j1g7tkz wrote

they probably won’t change their mind until an autonomous robot knocks on their door and ~~drags them into the street~~ has sex with them.

Couldn't resist.


Bruh_Moment10 t1_j1nayf0 wrote

Any future AGI would find us really cute and want us to be happy. No further context will be provided.


SurroundSwimming3494 t1_j1g9c1p wrote

>It’s time to sound the alarm.

I agree that we as a society should start preparing, at least in some ways, for possible future scenarios and make sure that we smoothly transition to the world that awaits us in the next few years/decades, but saying it's time to "sound the alarm" creates unnecessary fearmongering, IMO. A rehashed GPT-3 and AI-generated images, as impressive as they are, should not elicit that type of reaction. We are still a ways from AI that truly transforms the world, IMO.


Gimbloy t1_j1gqgqb wrote

It doesn’t need to be full AGI to be dangerous. As long as it is better than humans in some narrow setting, it can be dangerous. For example: software companies like Palantir have shown that AI can determine who wins and loses a war; it has allowed Ukraine to outperform a larger country with more military might.

Then there are all the ways it can be used to sway public opinion, generate propaganda, and win in financial markets/financial warfare. And the one I’m particularly afraid of is when it learns to compromise computer systems in a cyber-warfare scenario. Just like in a game of Go or chess, where it discovered moves that boggled the minds of experts at the game, I can easily see an AI suddenly gaining root access to any computer network it likes.


SurroundSwimming3494 t1_j1h5ntf wrote

I see what you mean.


Saylar t1_j1h78dg wrote

To add another point to /u/Gimbloy's list:

Unemployment: As soon as we have production-ready AI, even narrow AI, we will see massive layoffs. Why wouldn't Amazon fire their customer service people once an AI can take over the task of chatting with the customer? The cost of an AI is so much lower than that of humans doing the job that soon there won't be any jobs left in this particular field, or only very specialized ones. With this, the training and models will get better, and the AI can take over even more.

Those entry-level jobs are going to go first, and where do these people go? Where can they go, really? And I doubt it will be the same as the industrial revolution, where people found jobs working machines; I really don't see the majority of customer service reps suddenly working on improving language models.

There are a shitload of examples where this stuff can be used, and it will be so radically different from what people know, so yeah, we need to sound the alarm bells. My prediction is that the world will start to change radically in the next 5 years, and we're not ready. Not even remotely. We need to bring the discussion front and center and raise awareness, but I have my doubts about that, to be honest. Most politicians can barely use Twitter; how are they supposed to legislate something like AI?

Anyway, happy holidays :p


SurroundSwimming3494 t1_j1h89fk wrote

I can see some jobs going away this decade, but I don't think there'll be significant economic disruption until the 2030s. My overall expectation is that many lines of work will be enhanced by AI/robotics for a long while before they start reducing in size (and by size, I mean workers). I just don't see a jobapocalypse happening this decade like others on this sub do.

>The world will start to change radically in the next 5 years is my prediction and we're not ready. Not even remotely.

This is a bit excessive, in my opinion. I'd be willing to bet that the world will look very similar to the way it looks today 5 years from now. Will there be change (both technological and societal) in that time period just like there was change in the last 5 years? Of course there will, but not so much change that the world will look entirely different in that timespan. Change simply doesn't happen that fast.

The world will change, and we need to adjust to that change, but I'm just saying we shouldn't go full on Chicken Little, if you know what I mean.


Saylar t1_j1h9skh wrote

Oh, I think we agree on this point. I don't mean we'll see massive layoffs within the next 5 years, but rather that the real-world foundation for all the problems we're talking about here will be laid. They won't be just random thoughts and predictions anymore.

It will mostly look the same for the average user, not interested or invested in this technology, but will be vastly different under the hood. And when the foundation is there, change will happen fast. AI will not create nearly as many jobs as it will eliminate, at least I don't see how.

I see it as both real bad and real good, depending on how we use it. With capitalism at the core, I don't see it as a particularly good change for most workers. With the way politics works, I don't see them reacting to it fast enough. On the other hand: it's the first time in years that I feel a tiny bit optimistic about climate change (well, combating it) and all the advances in understanding the world around us and ourselves.

I'm mostly on this train to raise awareness among people who have no idea what is currently happening, and to stay up to date on the developments, because this will be radical change for all of us.


camdoodlebop t1_j1gum2e wrote

didn't you just do what the parent comment said people are doing lol


chillaxinbball t1_j1hd0a0 wrote

We have been preparing people for the day when an AI is able to do their jobs for at least 5 years now. Now that it's starting to happen, people are freaking out. People don't listen to warnings.


eve_of_distraction t1_j1irew4 wrote

What were they supposed to do though? It's not as though anyone was suggesting solutions, other than UBI, and regular people don't have any say about implementing that anyway.


chillaxinbball t1_j1j1o02 wrote

Keep up with impending technologies to stay relevant, and advocate for stronger social systems so jobs aren't strictly needed to live.

Trying to stop this tech from taking over is a waste of time. A better use of time is to try to fix the actual systemic issues that are the root cause of the panic.


eve_of_distraction t1_j1kpdsz wrote

Yeah I agree but I'm just cynical about how much influence we can have over policy.


chillaxinbball t1_j1kzrfn wrote

I am too TBH. Especially if you consider how only the wealthy have political influence and popular opinion essentially has no influence. That said, I do think it's important that it becomes a subject. No one will do anything if they are unaware.


overlordpotatoe t1_j1ghjdn wrote

Some of those are crazy, like the cost to sequence a full human genome: almost $100 million in 2001, dropping to under $500 now. And the computational power of the fastest supercomputers is growing so fast that it's best viewed on a log scale, because on a linear graph everything before 2011 may as well be nothing compared to what we have now. Since that graph only goes up to 2021, that's a 100x increase over the course of just ten years or so.
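Those two examples imply steep yearly rates; here's a quick back-of-the-envelope check in Python (the dates and dollar figures are just the rough ones quoted above, not authoritative numbers):

```python
# Rough figures from the comment above: genome sequencing cost fell
# from ~$100M (2001) to ~$500 (roughly 2022), and top supercomputer
# performance grew ~100x over the ten years up to 2021.
cost_start, cost_end, genome_years = 100e6, 500.0, 21
genome_rate = (cost_end / cost_start) ** (1 / genome_years)
print(f"genome cost multiplier per year: {genome_rate:.2f}")  # cost nearly halves every year

flops_rate = 100 ** (1 / 10)
print(f"supercomputer growth per year: {flops_rate:.2f}x")
```

So even a "mere" 100x per decade works out to only about 1.6x per year, which is why linear plots of these trends look flat until the very end.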


fortunum OP t1_j1g3rto wrote

How does this address any of the points in my post though?

Extrapolating from current trends into the future is notoriously difficult. We could hit another AI winter, all progress could end, and a completely different domain could take over the current hype. The point is to have a critical discussion instead of just posting affirmative news and theory.


Sashinii t1_j1g6o2q wrote

This Yuil Ban thread - Foundations of the Fourth Industrial Revolution - explains it best. While I recommend reading the entire thread, if you don't want to, here are some quotes:

"The Fourth Industrial Revolution is the upcoming/current one. And this goes into my second point: we won't know when the Fourth Industrial Revolution started until WELL after it's underway.

Next, "inter-revolutionary period" refers to the fact that technology generally progresses in inter-twining S-curves and right as one paradigm peaks, another troughs before rising. This is why people between 1920-1940 and between 2000 and 2020 felt like all the great technologies of their preceding industrial revolutions had given way to incremental iterative improvements and great laboratory advancements that never seemed capable of actually leaving the laboratory. If you ever wondered why the 2000s and 2010s felt indistinguishable and slow, as if nothing changed from 1999 to the present, it was because you were living in that intermediate period between technological revolutions. During that time, all the necessary components for the Fourth Industrial Revolution were being set up as the foundations for what we're seeing now while simultaneously all the fruits of the Third Industrial Revolution were fully maturing and perhaps even starting to spoil, with nothing particularly overwhelming pushing things forward. You might remember this as "foundational futurism."

As it stands, a lot of foundational stuff tends to be pretty boring on its own. Science fiction talks of the future being things like flying cars, autonomous cars, humanoid servant robots, synthetic media, space colonies, neurotechnology, and so on. Sci-fi media sometimes set years for these things to happen, like the 1990s or 2000s. Past futurists often set similar dates. Dates like, say, 2020 AD. According to Blade Runner, we're supposed to have off-world colonies and 100% realistic humanoid robots (e.g. with human-level artificial general intelligence) by now. According to Ray Kurzweil, we were supposed to have widespread human-AI relationships (ala Her) and PCs with the same power as the human brain by 2019. When these dates passed and the most we had was, say, the Web 2.0 and smartphones, we felt depressed about the future.

But here's the thing: we're basically asking why we don't have a completed 2-story house when we're still setting down the foundation, a foundation using tools that were created in the preceding years.

We couldn't get to the modern internet without P2P, VoIP, enterprise instant messaging, e-payments, business rules management, wireless LANs, enterprise portals, chatbots, and so on. Things that are so fundamental to how the internet circa 2020 works that we can scarcely even consider them individually. No increased bandwidth for computer connections? No audio or video streaming. No automated trading or increased use of chatbots? No fully automated businesses. No P2P? No blockchain. No smartphones or data sharing? No large data sets that can be used to power machine learning, and thus no advanced AI.

Finally and a bit more lightheartedly, I'd strongly recommend against using this to predict future industrial revolutions unless you're writing a pulp sci-fi story and need to figure out roughly when the 37th industrial revolution will be underway. If the Fourth Industrial Revolution pans out the way I feel it will, there won't be a Fifth. Or perhaps more accurately, we won't be able to predict the Fifth, specifically when it'll take place and what it will involve."


Chad_Nauseam t1_j1gkrkj wrote

If there’s a 10% chance that existing trends in AI continue, it’s the only news story worth covering. It’s like seeing a 10% chance of aliens heading towards Earth.


lovesdogsguy t1_j1iig2g wrote

Reminds me of that Stephen Hawking quote about AI. I'm paraphrasing here, but it's something like,

"if Aliens called tomorrow and said, hey btw, we're on our way to Earth, see you in about 20 years, we wouldn't just say, 'ok great,' and then hang up the phone and go back to our routine. The entire world would begin to prepare for their arrival. It's the same with AI. This alien thing is coming and nobody's preparing for it."

I think his analogy is very succinct.


Ortus12 t1_j1gg2ws wrote

The last AI winter was caused by insufficient compute. We now have sufficient compute, and we've discovered that no new algorithmic advances are necessary: all we have to do is scale up compute for existing algorithms, and intelligence scales along with it.

There are no longer any barriers to scaling compute, because internet speeds are high enough that all compute can live in server farms that are continually expanded. Energy costs are coming down towards zero, so that's not a limiting factor.

The feedback loop now is: AI makes money, money is used for more compute, AI becomes smarter and makes more money.

The expert systems of the 80s and 90s grew too complex for humans to manage. This is no longer a bottleneck because, again, all you have to do is scale compute. Smart programmers can accelerate that by optimizing and designing better data-curation systems, but again, it's not even necessary. It's now a manual-labor job that almost anyone can be hired to do (plugging in more computers).
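As a toy illustration of that money-compute loop (every constant here is invented for the sketch; nothing about real AI economics is being claimed):

```python
# Hypothetical feedback loop: revenue buys compute, compute raises
# capability, capability raises revenue. All constants are made up.
compute = 1.0                         # arbitrary starting units
for year in range(10):
    capability = compute ** 0.5       # assume diminishing returns on compute
    revenue = 2.0 * capability        # assume capability monetizes linearly
    compute += 0.5 * revenue          # assume half of revenue is reinvested
print(f"compute after 10 years: {compute:.1f}x")
```

Even with diminishing returns baked in, reinvestment compounds; whether the real loop behaves anything like this is exactly what's being argued in this thread.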


GuyWithLag t1_j1hgj0h wrote

Dude, no. Listen to the PhDs - the rapture isn't near, not yet at least.

On a more serious note: this is what the OP refers to when talking about a "hype bubble". The professionals working in the field know that the current crop of AI models is definitely not suitable as the architecture of AGI, except maybe as components thereof. Overtraining is a thing, and it's been shown that overscaling is too. Dataset size is king, and the folks that create the headline-grabbing models have already fed the public internet to the dataset.

From a marketing standpoint, there's the second-mover advantage: see what others did, fix the issues, and choose a different promotion vector. You're seeing many AI announcements in a short span due to the bandwagon effect, caused by a small number of teams showing multiple years' worth of work.


lil_intern t1_j1hnp2k wrote

If by rapture you mean evil robots dragging people out of their houses, then sure, that's far off. But what about millions of people's careers becoming obsolete overnight, every other month, due to AI growth in unexpected fields? That seems pretty close.


Ortus12 t1_j1hzcoy wrote

The current popular AI models are only what works best on current hardware.

We've already designed tons of different models, outlined in many older AI books, that can be used as compute scales (as AI companies make more money to spend on more compute). Even the current models weren't invented recently; they're just now applicable because the hardware is there.

There have been a few algorithmic optimizations along the way, but the larger portion of the scaling has come from hardware.

Second-order companies are taking out first-order companies by improving things, but that still keeps the ball moving forward.


ThePokemon_BandaiD t1_j1ipluc wrote

First of all, current big datasets aren't the full internet, just large subsections: specific datasets of pictures or regular text. We also generate about 100 zettabytes of new data yearly as of this year, and generative models can (with the help of humans to sort it for value, for now) generate their own datasets. And while currently available LLMs and image recognition and generation models are still quite narrow, models like Gato and Flamingo have shown that multimodal models are at least possible with current tech, and IMO it's pretty clear that more narrow AI models could be combined to create a program that acts as an AGI agent.


YesramDeens t1_j1jzcgo wrote

> Listen to the PhDs - the rapture isn't near, not yet at least.

Stop with this misinformation; for every three PhDs saying we will have an AI winter, there are six AI researchers at companies like OpenAI and DeepMind who are extremely excited about the potential of the systems they are creating.

Your unnecessary doomerism is born of a sense of superiority and arrogance in knowledge. Don't be humbled later on.


Krillinfor18 t1_j1hetv3 wrote

The poster addressed both of your points.

Your points seem to be:

1: People you've met in the ML field don't talk much about AGI.

2: You don't believe that LLMs will lead to an AGI or a singularity.

This poster is saying that neither of those things matters if the trend of exponential technological growth continues. Technological growth will progress in a rapid and nonintuitive fashion, such that things that seem possible only in the next few hundred years could occur in just the next few decades.

It's true that the trend is not guaranteed to continue, but it seems unlikely (at least in my mind, and clearly in others') that even significant economic or societal shifts could alter its course.


AndromedaAnimated t1_j1hrdd5 wrote


I love how you show that OP is not giving ANY arguments for ANY critical discussion except his religion (which is „I don’t belieeeeeeve in AGI", which is just as insane as „I belieeeeeeve in AGI").


[deleted] t1_j1g5br4 wrote



fortunum OP t1_j1g63rc wrote

See, the big shiny things we see in “AI” today are driven by a single paradigm change at a time; think convolutions for image processing and transformers for LLMs. Progress could come from new forms of hardware (as it tends to, btw, more so than from actual algorithms), like when we started using GPUs. The current trend shows that it makes sense to build the hardware more like we build the models (neuromorphic hardware); this way you can save orders of magnitude of energy and compute so that it operates more like the brain. This is only an example of what else could happen; it could also be that language models stop improving, as we are nearing the limit of language data, apparently.


DaggerShowRabs t1_j1hn4iy wrote

An actual AI winter at this point is about as likely as society instantaneously collapsing.

An AI winter is not an actual, valid concern for anyone in the industry for the forseeable future.

I get wanting to have a critical discussion about this, but then when someone talks about exponential growth, you need to do better than parroting a talking point that mainstream journalists who have no idea what they are talking about spew out.

I'm all for critical discussion, but talking about another actual AI winter like the 70s or early 2000s is kind of a joke. I'm really surprised anyone with even a little bit of knowledge of what is going on in the industry would say something this out-of-touch.

And none of that is to say AGI is imminent, just that an AI winter is literally the most out-of-touch counterpoint you could possibly use.


AndromedaAnimated t1_j1hr2l2 wrote

You are not the master of this subreddit 🙄 why does everyone think they can decide what others talk about?


eve_of_distraction t1_j1isyex wrote

They don't. There is an extremely obnoxious and noisy minority, and a mostly silent majority.


Comfortable-Ad4655 t1_j1g0er4 wrote

I agree that this sub quality could be improved significantly....but I am still curious why do you think llms are far away from AGI? it might be good to say also what do you consider "far away" first?


fortunum OP t1_j1g2wqj wrote

You would need to define AGI first. Historically the definition and measurement of AGI has changed. Then you could ask yourself if language is all there is to intelligence. Do sensation and perception play a role? Does the substrate (simulation on von Neumann Architecture or neuromorphic hardware) matter? Does AGI need a body? There are many more philosophical questions, especially around consciousness.

The practical answer would be that adversarial attacks are easy to conduct, for instance against ChatGPT. You can fool it into giving nonsensical answers, and this will likely remain true for succeeding versions of LLMs as well.


sticky_symbols t1_j1gi6fn wrote

Here we go. This comment has enough substance to discuss. Most of the talk in this sub isn't deep or well informed enough to really count as discussion.

Perceptual and motor networks are making progress almost as rapidly as language models. If you think those are important, and I agree that they can only help, they are probably being integrated right now, and certainly will be soon.

I've spent a career studying how the human brain works. I'm convinced it's not infinitely more complex than current networks, and the computational motifs needed to get from where we are to brain-like function are already understood by handfuls of people, and merely need to be integrated and iterated upon.

My median prediction is ten years to full superhuman AGI, give or take. By that I mean something that makes better plans in any domain than a single human can do. That will slowly or quickly accelerate progress as it's applied to building better AGI, and we have the intelligence explosion version of the singularity.

At which point we all die, if we haven't somehow solved the alignment problem by then. If we have, we all go on permanent vacation and dream up awesome things to do with our time.


PoliteThaiBeep t1_j1gowpp wrote

You know, I read a 1967 sci-fi book by a Ukrainian author where they invented a machine that could copy, create, and alter human beings, with a LOT of discussion of what it could mean for humanity, as well as the threat of super-AI.

In a few chapters where people were talking and discussing events, one character went on and on about how computers would rapidly overcome human intelligence and what would happen then.

I found it... Interesting.

A lot of the talks I've had with tech people since around 2015 were remarkably similar, and the similarity to the talks people had in the 1960s is striking.

Same points ("it's not a question of if, it's a question of when"), same arguments, same exponential talk, etc.

And I'm with you on that... but also, a lot of us pretend, or think, we understand more than we possibly do or could.

We don't really know when an intelligence explosion will happen.

People in the 1960s thought it would happen when computers could do arithmetic a million times faster than humans.

We seem to hang on to FLOPS, raw compute power, compare it against the human brain, and voila! If it's higher, we've got super-AI.

We've long since passed 10^16 FLOPS in our supercomputers, and yet we're still nowhere near human-level AI.

Memory bandwidth kind of slipped away from Kurzweil's books.

Maybe ASI will happen tomorrow. Or 10 years from now. Or 20 years from now. Or maybe it'll never happen and we'll just sort of merge with it as we go, without any sort of defining rigid event.

My point is: we don't really know. FLOPS progression was a good guess, but it failed spectacularly. We have computers capable of over 10^18 FLOPS, and we're still 2-3 orders of magnitude behind the human brain when trying to emulate it.
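The "2-3 orders of magnitude" arithmetic above is easy to make explicit (the brain-emulation figures here are just the ones implied by the comment, not settled neuroscience estimates):

```python
import math

exascale = 1e18                       # today's top machines, per the comment
brain_low, brain_high = 1e20, 1e21    # emulation cost implied by "2-3 orders of magnitude"

gap_low = math.log10(brain_low / exascale)
gap_high = math.log10(brain_high / exascale)
print(f"gap: {gap_low:.0f} to {gap_high:.0f} orders of magnitude")  # gap: 2 to 3 orders of magnitude
```

At the historical ~1.6x-per-year supercomputer growth rate, closing a 100-1000x gap would take roughly one to two decades; which is exactly why the raw-FLOPS yardstick keeps getting revised.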


sticky_symbols t1_j1i5gpt wrote

I agree that we don't know when. The point people often miss is that we have high uncertainty in both directions: it could happen sooner than the average guess, as well as later. We are now at around the same processing power as a human brain (depending on which aspects of brain function you measure), so it's all about algorithms.


Ortus12 t1_j1gh1kj wrote

Language (input) -> blackbox (brain) -> language (output)

LLMs solve for the black box. So whatever algorithms run in the human brain, LLMs solve for them; not for one human brain, but for all the greatest human brains that have ever written something down. LLMs alone are superintelligence at scale.

We'll be able to ask it questions like "How do we build a nanite swarm?" and "Write me a program in Python for a superintelligence that has working memory, can automate all computer tasks, and runs optimally on X hardware."

LLMs are superintelligence, but they'll give birth to even more powerful superintelligence.


theotherquantumjim t1_j1hbwsd wrote

Is it not reasonable to posit that AGI doesn’t need consciousness, though? Notwithstanding that we aren’t yet clear exactly what consciousness is, there doesn’t seem to be a logical requirement for AGI to have it. Having said that, I would agree that a “language mimic” is probably very far from AGI, and that some kind of LTM (long-term memory), as well as multi-modal sensory input, cross-referencing, and feedback, is probably a pre-requisite.


eve_of_distraction t1_j1ip6o9 wrote

>Is it not reasonable to posit that AGI doesn’t need consciousness though?

It's very reasonable. It's entirely possible that silicon consciousness is impossible to create. I don't see why subjective experience is necessary for AGI. I used to think it would be, but I changed my mind.


sumane12 t1_j1j5clw wrote

You bring up some good points. I think the reason people are so optimistic recently has a number of points to it;

  1. Even though ChatGPT is not perfect and not what most people would consider AGI, it's general enough to be massively disruptive to society at large. Even if no further progress is made, there's so much low-hanging fruit in terms of the productivity ChatGPT offers.

  2. GPT-4 is coming out soon, and is rumoured to be trained on multiple datasets, so it will be even better at generalising.

  3. AI progress seems to be speeding up; we are closing in on surpassing humans in more measures than not.

  4. Hardware is improving, allowing for more powerful algorithms.

  5. Although Kurzweil isn't perfect at predicting the future, his predictions and timelines have been pretty damn close, so it's likely that this decade will be transformative in terms of AI.

You bring up a good point about questioning whether language is all that's needed for intelligence, and I think it possibly might be. Remember, language is our abstract way of describing the world, and we've designed it to encapsulate as much of our subjective experience as possible through description. Let's take my car, for example: you've never seen my car, but if I give you enough information, enough data, you will eventually get a pretty accurate idea of how it looks. It's very possible that the abstractions of our words could, with enough data, be reverse engineered to represent the world we subjectively experience. We know that our subjective experience is only our mind's way of making sense of the universe from a natural-selection perspective; the real universe could be nothing like it. And it seems reasonable to me that the data we feed to large language models could give them enough information to develop a very accurate representation of our world and massively improve their intelligence based on that representation. Does this come with a subjective experience? I don't know. Does it need to? I also don't know. The more research we do, the more likely we are to answer these massively philosophical questions, but I think we are a few years away from that.


fortunum OP t1_j1jb5w8 wrote

Yeah, thanks for the reply, that’s indeed an interesting question. With this approach it seems that intelligence is a moving target; maybe the next GPT could write something like a scientific article with actual results, or prove a theorem. That would be extremely impressive, but like you say, it doesn’t make it AGI or get it closer to the singularity. With the current approach there is almost certainly no ‘ghost in the shell’. It is uncertain whether it could reason, experience qualia, or be conscious of its own ‘thoughts’. So it likely could not be self-motivated, be to some extent autonomous, or have a degree of agency over its own thought processes, all of which are true of life on earth at least. So maybe we are looking for something that we don’t prompt, but something that is ‘on’ and similar to a reinforcement learning agent.


sumane12 t1_j1jfdui wrote

I'd agree; I don't think we are anywhere near a 'ghost in the shell' level of consciousness, though a rudimentary, unrecognisable form may well have been created in some LLMs. But I think what's more important than intelligence at this point is productivity. I mean, what is intelligence if not the correct application of knowledge? And what we have at the moment is going to create massive increases in productivity, which is obviously required on the way to the singularity. Now, it could be that this is the limit of our technological capabilities, but that seems unlikely given the progress we have made so far and the points I outlined above.

Is some level of consciousness required for systems that seem to show a small level of intelligence? David Chalmers seems to think so. We still don't have an agreed definition of how to measure intelligence, but let's assume it's an IQ test; well, I've heard that ChatGPT has an IQ of 83, which is low-level human. Is intelligence, as measured by an IQ test, all that's needed? Can we achieve superintelligence without a conscious agent? Can we achieve it with an agent that has no goals and objectives? These are questions we aren't fully equipped to answer yet, but they should become clearer as we keep building on what has been created.


overlordpotatoe t1_j1ghy2l wrote

Do you think it's possible to make an LLM that has a proper inner understanding of what it's outputting, or is that fundamentally impossible? I know current ones, despite often giving quite impressive outputs, don't actually have any true comprehension at all. Is that something that could emerge with enough training and advancement, or are they structurally incapable of such things?


visarga t1_j1hwxat wrote

Yes, it is possible for a model to have understanding, to the extent that the model can learn the validity of its outputs. That would mean creating an agent-environment-goal setup and letting it learn to win rewards. Grounding speech in experience is the key.

Evolution through Large Models

> This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pre-training. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context in a domain where zero training data was previously available carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research now opened up by ELM.
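To make the grounding idea above concrete, here's a minimal sketch of an agent-environment-goal loop. The environment, the actions, and the update rule are all made up for illustration; the point is only the shape of the setup: the environment, not text statistics, decides which outputs get reinforced.

```python
import random

random.seed(0)

def environment(action: str) -> float:
    """Toy 'grounding': reward only the action that actually works in the world."""
    return 1.0 if action == "open the door" else 0.0

def sample_action(policy: dict) -> str:
    """Pick an action with probability proportional to its current weight."""
    actions = list(policy)
    return random.choices(actions, weights=[policy[a] for a in actions])[0]

# Start indifferent between a grounded action and an ungrounded utterance.
policy = {"open the door": 1.0, "say 'the door opens'": 1.0}

for _ in range(200):
    action = sample_action(policy)
    policy[action] += environment(action)  # reinforce what the environment validates

# The agent ends up preferring the action the environment actually rewards.
assert policy["open the door"] > policy["say 'the door opens'"]
```

Real setups (RLHF, embodied agents) are far more involved, but the feedback loop has this basic structure.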


AndromedaAnimated t1_j1hrinc wrote

You should have written THIS in your original post instead of pushing blame around and crying that you don’t like the way the subreddit functions.


crap_punchline t1_j1j4l5g wrote

lol shit like this has been debated ad infinitum on this subreddit, imagine wading in here saying "HOLD ON EVERYBODY PhD COMING THROUGH - YES, I SAID PhD, YOU'RE DOING IT ALL WRONG" and then laying on some of the most basic bitch questions we shit out on the daily

get a job at OpenAI or DeepMind then get back to us


imlaggingsobad t1_j1getif wrote

The fact that the rate of progress in AI has been surprising even to people in the ML community, lends credence to the views shared in this Reddit sub. The people here are closer to reality than most. Future progress in AI will be astounding and catch everyone off guard.


Ok_Homework9290 t1_j1h648d wrote

>The people here are closer to reality than most.

Heavy disagree. This sub is heavily populated with hyper-optimists, and the hyper-optimists are never right in regards to basically anything.


kmtrp t1_j1htm6x wrote

This is the classic problem of trying to represent a huge group (of varied people occupying the whole spectrum of intelligence, knowledge, age, etc.) in a few neat words to prove a point. It's nonsensical.


camdoodlebop t1_j1gvlgs wrote

One of the Wright brothers himself said that man wouldn't fly for 50 years... in 1900.


lovetheoceanfl t1_j1gnw31 wrote

Thank you. I lurk, as well, and there is not much critical thinking going on in this sub. It’s very pie in the sky, everything is beautiful with AI.


CommentBot01 t1_j1g2eyj wrote

Most of the people saying AGI is far away don't say specifically why, or how the open problems could be solved... All they talk about is the current limitations of LLMs, and "true understanding" and consciousness BS. Gary Marcus complained that deep learning had hit a wall and needed a hybrid approach, but a few months later many of the problems he pointed to were solved or reduced. They don't release any better, significant, alternative research papers. If their thinking and approach are that much better, prove it.


fortunum OP t1_j1g4yvb wrote

Maybe I'm wrong here; is the purpose of this sub to argue that the singularity is almost here? I made this post because I was looking for a more low-brow sub than r/machinelearning to talk about the philosophical implications of AGI/singularity. Scientists can be wrong and are wrong all the time; everyone is always skeptical of your ideas. And I would say it is the contrary with the singularity: I don't have to give you a better, significant or alternative research paper lol. That is definitely not how this works. Extraordinary claims require extraordinary evidence.


sticky_symbols t1_j1giyvw wrote

I agree with all of this. But the definition of outrageous is subjective. Is it more outrageous to claim that we're on a smooth path to AGI, or to claim that we will suddenly hit a brick wall, when progress has appeared to readily accelerate in recent years? You have to get into the details to decide. I'd say Marcus and co. are about half right. But the reasons are too complex to state here.


AdditionalPizza t1_j1ip56v wrote

What exactly spurred your post, something specific?


>I was looking for a more low-brow sub than r/machinelearning to talk about philosophical implications of AGI/singularity.

I would say this is a decent place for that. You just navigate around the posts/comments you don't feel are worth discussing. I almost never directly discuss an upcoming singularity. The date we may reach a technological singularity doesn't really matter; you can easily discuss the implications. A lot of people here are optimistic about the outcome, but there are plenty of people who are concerned about it too.

Personally, I usually discuss the next few years of job automation, because that's more tangible to me right now. The implications of LLMs and possible short-term advances are alarming enough that I don't really even think about anything more than 10 years away in AI.


Mokebe890 t1_j1haavc wrote

There are good articles on LessWrong that analyze why AGI is something that is coming at an exponential rate. My background is psychology, and even in my field the last 5 years have brought topics like machine emotional intelligence: how to apply it, work with it, and adapt it to humans.


Ortus12 t1_j1gewqe wrote

LLMs only need scale to become ASI. That is, intelligent enough to design machines, write code, develop theories, and come up with insights better than any human. The LLM itself will be able to write the code for other ASIs that are even more powerful, with working memory.

LLMs aren't learning from a single person; they are reverse engineering the thought patterns of all of humanity, including all the scientists and mathematicians who have ever written books or research papers, all the programmers who have ever posted their code or helped solve a programming problem, all the poets, and even all the artists (Google already connected their LLM with their Imagen model to get an intelligence that's better at both tasks, and at tasks combining both).

It's the opposite. People don't understand how close we are to the singularity.


dookiehat t1_j1gtna8 wrote

LLMs, while undeniably useful and interesting do not have intentions, and only respond to input.

Moreover, it is important to remember that large language models are only trained on text data. There is no other data to contextualize what the model is talking about. As a user of a large language model, you see coherent "thoughts", and then you fill in the blanks of meaning with your own sensory knowledge.

So an iguana eating a purple apple on a Thursday means nothing to a large language model except the words' probabilistic relationships to one another. Even if this is merely reductionist thinking, I am still CERTAIN that a large language model has no visual "understanding" of the words. It has only contextual relationships within its model, and is devoid of any content that it could reference to understand meaning.


SurroundSwimming3494 t1_j1h5w35 wrote

>People don't understand how close we are to the singularity.

But you don't know that for a fact, though. I don't know why some people act as if they know for a fact what the future holds. It's one thing to believe it's close, but to claim that you know the singularity is close (which is what it seems you're doing in your comment) comes off as pretty arrogant.


Mr_Hu-Man t1_j1hjr04 wrote

I agree with this point of view. Anyone that claims anything with absolute certainty is spouting BS


Cryptizard t1_j1hfn4j wrote

Here is where it becomes obvious that you don't understand how LLMs work. They have a fixed-depth evaluation circuit, which means they take the same amount of time to respond to the prompt "2+2=?" as they do to "simulate this complex protein folding" or "break this encryption key". There are fundamental limits on the computation an LLM can do which prevent it from being ASI. In CS terms, anything which is not computable by a constant-depth circuit (many important things) cannot be computed by an LLM.
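A rough way to see the fixed-compute point, using the standard back-of-the-envelope transformer cost formulas (the layer count and width below are illustrative, not any specific model's):

```python
def forward_flops(seq_len: int, n_layers: int, d_model: int) -> int:
    """Rough per-pass cost of a transformer: attention scales ~ L^2 * d and
    the feed-forward block ~ L * d^2, per layer. Nothing here depends on what
    the prompt *means*, only on its length and the fixed model shape."""
    attention = seq_len ** 2 * d_model
    feed_forward = seq_len * d_model ** 2
    return n_layers * (attention + feed_forward)

# Same token count in, same compute out, whether the prompt is trivial
# or asks for something computationally deep.
easy = forward_flops(seq_len=8, n_layers=96, d_model=12288)   # "2+2=?"
hard = forward_flops(seq_len=8, n_layers=96, d_model=12288)   # "break this key"
assert easy == hard
```

The formula is a simplification (it ignores constants, KV caching, etc.), but the qualitative point stands: per forward pass, the depth of computation is fixed by the architecture.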


YesramDeens t1_j1jzogr wrote

What are these “many important things”?


Cryptizard t1_j1k30q3 wrote

Protein folding, n-body simulation, really any type of simulation, network analysis, anything in cryptography or that involves matrices. Basically anything that isn’t “off the top of your head” and requires an iterative approach or multiple steps to solve.


Argamanthys t1_j1hpxay wrote

Accurate right up until someone says 'think it through step by step'.


Cryptizard t1_j1hvl85 wrote

Except no, because they currently scale quadratically with the number of "steps" they have to think through. Maybe we can fix that, but it's not obvious that it's possible without a completely new paradigm.
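One way to read the quadratic claim, sketched with the usual self-attention cost estimate (the lengths and width are arbitrary illustrative numbers):

```python
def attention_flops(context_len: int, d_model: int) -> int:
    """Self-attention cost grows with the square of the context length,
    since every token attends to every other token."""
    return context_len ** 2 * d_model

short_chain = attention_flops(1024, d_model=1024)
long_chain = attention_flops(2048, d_model=1024)  # twice the reasoning tokens

# Doubling the number of step-by-step tokens quadruples the attention cost.
assert long_chain == 4 * short_chain
```

Linear-attention and other sub-quadratic variants exist, but as the comment says, whether they preserve the capabilities that make chain-of-thought work is an open question.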


fingin t1_j1ghurj wrote

I think it's just a feature of Internet social media (and maybe of any large-scale community platform) that discussion will lack nuance, caution, critical thinking, and statistics & probability. I'm sure there are some better subreddits for this.


fortunum OP t1_j1hi7wi wrote

I think you are right. With this particular sub, it also seems that some people really 'need' it to be true. I saw in some threads people saying the singularity gives them hope with their particular mental health problems, etc. I'm glad it does, but that doesn't make it any more true.


kmtrp t1_j1htzkc wrote

Not that you are wrong on that; realize that you will find all sorts of people in any large group of individuals, it doesn't say much about anything.


Sandbar101 t1_j1gmxnh wrote

Right, that's why *checks notes* three different leaders in the machine learning field, including John Carmack, just publicly announced they have abandoned their projects and are all-in on AGI.

Just saying.


dookiehat t1_j1gstq3 wrote

I'm with OP. Specifically, I believe many more human-intuition-guided innovations in AI software architecture and hardware need to occur before self-improving, let alone self-directed, AI occurs.

Gargantuan models will give way to sparse architectures that can be run on somewhat modest equipment, with external information sourcing that resembles research coming directly from the AI agent itself. This won't necessarily replace large models, but will be a module added and invoked when planning, strategizing, and reasoning. It may be influenced by neurobiology, but probably won't look exactly the same.


denisbotev t1_j1gwufl wrote

I feel you, OP, but it's no use. This whole sub is one giant circlejerk of wishful thinking, and zero thought is given to the challenges LLMs face (which they may never solve). I've said it before and I'll say it again: LLMs have zero comprehension and context abilities. They don't understand intention, foresight, perception. People might argue that you don't need those for a universal assistant, but those people have never written a line of code in their life and don't understand how literally machines take your input.

I'm actually working on starting a project that might solve some of the above, but it's a side project and will take an exorbitant amount of resources, so we'll see.


Dindonmasker t1_j1h686r wrote

I completely agree. It seems like there is a lot of hype surrounding AGI and the singularity, and it can be overwhelming for those who are not experts in the field. It's important to remember that we are still just a bunch of humans trying to improve ourselves and our world through the use of AI and machine learning. While AGI may eventually be a possibility, it is still a long way off and we should be careful not to get caught up in the hype bubble. Instead, we should focus on the tangible and achievable advancements in the field, and continue to educate ourselves and have critical discussions about the potential implications of AGI.

(This comment has been written by ChatGPT, based on the fact that we are just a bunch of idiots trying to get better through AI, and AGI will maybe help with that, so we are excited) :)


IntrepidHorror5986 t1_j1h8c5f wrote

I feel your pain, because it pains me as well. I want this to be a proper futurologist sub, but it can't be anymore. It is all about the unfounded hype.

Some of these guys are probably just kids; others are adults not well versed in computer science. All fully enjoy their blissful ignorance. Another common characteristic is that they are unhappy and waiting for some magical event that would reorder the status quo. You can find these people in religious subs, UFO-related subs, magic-related subs, transhumanist subs... and in this sub. They are basically the same people, although you may hear them argue about the rationality of each other's beliefs, which is really funny.


AndromedaAnimated t1_j1hrvt5 wrote

You forgot the antiwork sub 🤣

There IS a futurology sub, are you aware of that?


IntrepidHorror5986 t1_j1hs2tm wrote

I'm aware, but it's mostly climate change scare and leftist utopianism over there. It used to be a good sub. Lately there has been some improvement, tbh.


AndromedaAnimated t1_j1hsmn0 wrote

I like those speculative subreddits like futurology and singularity. They are fun and relaxing.

Thing is, if I really want scientific knowledge Reddit is not the first place I go to look 🤣


Phoenix5869 t1_j1hikjc wrote

Before downvoting me, please read the whole post.

I'm glad someone in this sub is being realistic. We are decades and decades from AGI, curing aging / treatments for aging, stem cell treatments, growing organs, etc. And yet this sub wants to believe that we're gonna have AGI in just over 6 years.

And before I get downvoted: I understand that there has been a lot of progress in AI and other fields this year, and I'm not saying there hasn't been. I also understand where you guys are coming from with the exponential growth argument. However, a lot of, if not most, scientists say that all these technologies are decades away. I would say people in their 30s will have access to at least some of these in their mid 70s to 80s, and everyone 45 and up will likely die before these technologies come out. I get that you guys don't want to believe or hear that, but I'm simply being realistic. If you believe I'm being too pessimistic and/or have evidence showing these technologies will come out sooner, I would love to hear it.


AndromedaAnimated t1_j1hsbb8 wrote

I just upvoted you, despite disagreeing with what you say: OP is not being „realistic", he is just saying he wants this sub to be more suited to his tastes.

But seriously, I think the Reddit downvote system is not good. It kills discussion instead of helping sort content, as most upvotes and downvotes depend on a personal emotion (agree/disagree, feels good/feels bad) and not on the quality of what is said (yes, something can be interesting even if you disagree with it).

And I personally don't even care whether the singularity happens tomorrow or in a hundred years, and I do not have a clear opinion on that (We just. Don't. Know. Yet), so I am only disagreeing with you calling OP's view „realistic".

So take my upvote, my discussion opponent! 🙃


Phoenix5869 t1_j1hv17i wrote

OP's view is at least more realistic than what a lot of people in this sub claim: that we are gonna have AGI in 6 years or less, or that the singularity is happening by 2045 (I agree we can't know, but it's probably not happening by then). The majority of scientists also agree with him, and while I would like to believe that OP is being too pessimistic, he does seem to be more grounded in reality than a lot of people on here.


AndromedaAnimated t1_j1hvzz6 wrote

The problem is not OP’s opinion. It’s absolutely valid. Absolutely! I actually even AGREE to some extent that the hype is quite over the top.

Personally I think we cannot predict when Singularity happens for now. So I understand where he is coming from.

I just don’t agree with the „holier than thou“ attitude (anymore). He is not presenting any „facts“ except „I know cause I am a scientist“ and „I believe“ while criticising people for doing the same - and that is BAD social behavior. And bad discussion tactics.

And also he leaves out all those posts and comments that provide critical discussion and sources.


Phoenix5869 t1_j1hxu9g wrote

>I just don’t agree with the „holier than thou“ attitude (anymore). He is not presenting any „facts“ except „I know cause I am a scientist“ and „I believe“ while criticising people for doing the same - and that is BAD social behavior. And bad discussion tactics.

>And also he leaves out all those posts and comments that provide critical discussion and sources.

Yh that is bad discussion tactics, he should be presenting and listening to evidence to support his arguments.


fortunum OP t1_j1hzjrx wrote

Idk anymore, to be honest. I am not holier-than-thou lol, and I keep repeating myself in here, but I will give up now. If the singularity is imminent, I don't need to prove that it is not; the person making the claim needs to do that. (If God is real, I don't need to disprove it; someone needs to prove it.) This is not an elitist, holier-than-thou attitude; every idea will be dissected and scrutinized. I am in fact trying to go outside of my bubble, where the topics of AGI and singularity are treated with ridicule tbh, which I disagree with. Also, again, my point is not to disprove the singularity; it is about the state of this sub.


AndromedaAnimated t1_j1i1fvq wrote

Look, I am not trying to say that you are a bad person or something.

Like I said, I just think that the state of the subreddit is not up to a single individual to judge. I tried to provide feedback, ok? And even told you that I do not exclude myself from such behavior. It’s human after all.

Only after you stated that you merely think I am projecting - despite me presenting detailed arguments about where you have gone off the path - did I start to think that you are not self-aware and are just bashing others, namely the laymen dreamers. They have the right to be here too, you know?


Metworld t1_j1ip7nk wrote

As a fellow ML researcher (mostly theoretical) I completely agree with you. The field of AI is still far away from AGI, and I highly doubt neural networks will lead to AGI. For instance, they can't do causal/counterfactual reasoning; they're basically just fancy curve-fitting models. Of course, this doesn't mean that the latest developments aren't impressive.

I'm not surprised the public has a completely wrong idea of current AI developments, as even "experts" get a lot of things wrong.


Danger-Dom t1_j1ly4gf wrote

Is their inability to do causal reasoning a mathematically proven thing, or has it just not seen success yet? What's the deal with this? I see it a lot, so I was curious.


Metworld t1_j1m1d3w wrote

It's proven AFAIK. Check out Judea Pearl's (Turing award winner, pioneer in the fields of AI and causal inference) take on the topic here.
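For anyone curious what "curve fitting can't do causal reasoning" means in practice, here's a toy illustration (simulated data, not Pearl's formal machinery; the probabilities are made up): a confounder makes X and Y correlated even though X has no effect on Y, so a model fit to observations "learns" an association that an intervention would not reproduce.

```python
import random

random.seed(0)

# Confounder Z drives both X and Y; X has no causal effect on Y at all.
samples = []
for _ in range(10_000):
    z = random.random() < 0.5
    x = z if random.random() < 0.9 else not z
    y = z if random.random() < 0.9 else not z
    samples.append((x, y))

# Observational (curve-fitting) view: X strongly "predicts" Y.
p_y_given_x1 = (sum(y for x, y in samples if x)
                / sum(1 for x, y in samples if x))

# Interventional view: forcing X by fiat, do(X=1), cannot change Y, since X
# is not a cause of Y, so P(Y=1 | do(X=1)) is just the base rate P(Y=1) ~ 0.5.
p_y_do_x1 = sum(y for _, y in samples) / len(samples)

assert p_y_given_x1 > 0.7          # spurious association learned from data
assert 0.4 < p_y_do_x1 < 0.6       # what an intervention would actually see
```

A model trained only on the observational pairs has no way to tell these two quantities apart; that gap between P(Y|X) and P(Y|do(X)) is exactly Pearl's point.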


jazzyjapetto t1_j1isxs2 wrote

Thank you. Look, it's always been the case that singularitarians were delusional utopia-seekers, but things have gotten a bit out of hand here. The fact that we're so intoxicated by shiny things shows we have a serious deficit in critical thinking.


eve_of_distraction t1_j1iu8um wrote

Look, I think we can all agree that by the end of the year, roughly five days from now, the entire cosmos will have been converted into computronium. I don't think that's an unreasonable timeframe.


gskrypka t1_j1h8xsm wrote

Well, the hype cycle is real. We really do tend to overestimate the impact of new technology in the short term and underestimate it in the long run.

Let's say the current boost in AI started around 2012-2014. Right now lots of organizations are actively using AI in one way or another.

Very few companies will use DALL-E or ChatGPT next year, but in ten years those models will probably be a commodity used by most companies.

In terms of the singularity, I believe it is too optimistic to think that we will achieve it in 2 or 3 years. However, there is a chance that with advancements in AI as well as quantum computing we will see it this century.


benign_said t1_j1ighsl wrote

Wait wait wait... Are you saying I shouldn't be preparing to embark on a hedonistic journey of post scarcity super intelligence wherein we stop aging, having health problems, financial inequality, injustice and bad days? Ooooor are you saying the apocalypse is upon us and we'll all be damned to a world of suffering for not supporting our imminent AI overlords since the age of 7?

If I've learned anything from this sub, it's that it's one or the other. Oh. And apparently art is now irrelevant because AI makes things that look okay cool.


arisalexis t1_j1ha2fz wrote

Academia is always negatively surprised and on the slow side, sorry to say, my friend. The whole tech revolution has come through private companies, and I listen to DeepMind's CEO much more than to your professor.


not_sane t1_j1hhm9c wrote

I think the reverse is also sometimes the case, I was in a meeting where some academics were talking about ontologies about a specific topic and the semantic web, and the non-IT guy there (correctly) noticed that what they were doing apparently has absolutely no practical use. LLMs strike me as a much more interesting area.

And there are also NLP researchers who don't seem to know about GPT-3, which is very strange considering its very high performance on so many tasks. (But maybe the person I talked to was in some other subarea, he was smart, idk).

(Of course there are also many people doing amazing work in academia.)


decixl t1_j1gwap6 wrote

It's always good to have all types of opinions in the community. I also think it's a little bit over-hyped, but I'd argue the hype comes through different prisms: some are thinking of an almighty AGI, and others are hyped about what it can do in the present moment. But it's always good to hear a voice of reason.


Yesyesnaaooo t1_j1hfnsc wrote

Can I ask what the difference would be between an LLM and AGI?

What's missing? And why will that gap take so long to bridge?


Redditing-Dutchman t1_j1hr28s wrote

I think one of the important things is that AGI could reason and invent new stuff on its own, whereas LLMs, being trained on existing data, can never reach significantly further than that training data. Perhaps they can combine existing things to create something better, sure. But they can't open a completely new field of science, for example.

AGI probably needs access to labs, sensors, and mobile platforms to really be AGI, so it can actually test the hypotheses it creates.


AndromedaAnimated t1_j1hm546 wrote

I am sorry in advance for what I will say. Please bear with me.

This is something that really irks me. I am prone to the same hypocritical tactics you are using here, and I hate that in myself. Like your „I am no authority [insert authority-implying title here]" - sorry mate, I know what you are doing here; I do it all the time too, and I am even of „lower academic rank" since I only have a diploma (the ancient German version of a Master's degree). But come on, just state your degree without the „I am a good guy since I put a disclaimer, but actually I know more than you mere mortals". It makes you look like a covert narcissist, and makes you less believable, not more. I am trying to get rid of this habit, and I wish other people would too.

So now that I have gotten this off my heart, the actual answer:

Humans are prone to hype. This subreddit is one where humans like to dream. It is not a professional forum. If you want scientific discussion, you are better off talking to your colleagues or going to a SPECIALISED subreddit. And no, this is not some wisdom of an all-knowing old Redditor. This is psychology. Simple human psychology.

Just let the guys dream here, ok? And go to the machine learning subreddit to discuss machine learning how it really works. This subreddit is not for elitists, I guess 😉 So let’s try and be nice to those not in the field.


fortunum OP t1_j1hoskr wrote

I am explicitly stating my bias, as in: I am a student under people who believe the singularity is far away. I am saying I am no authority because I don't study the singularity or AGI; a PhD, a professorship, or other titles do not implicitly make you more qualified to talk about just any subject. Maybe that is a problem that you are projecting here.


DaggerShowRabs t1_j1iew72 wrote

Do the people above you also talk about an AI winter like you have been?

If so, your mentors are dumbasses.


AndromedaAnimated t1_j1hqhya wrote

I am just reading what you write. If I project or not is not of interest here. Please don’t try to divert the attention to me.

You don’t have to accept criticism or feedback.

But your post would have been much more interesting and also more believable if you didn’t state your authority.

And you did. You mentioned your professional field, which is tied to the topic. You said explicitly that AGI is not going to happen and that most posts are factually wrong (and that is where I even start to smell a „pretend-scientist": in empirical science there are working hypotheses and theories, but not facts, and even theoretical science prefers „axiom" to „fact").

You are sounding like an elitist who is no better than the people you criticise. Plus, you are getting the purpose of discussion wrong if you think saying something „factually“ wrong is a bad thing in a discussion. You don’t even provide ONE argument for what you are saying except you being in machine learning and you and your colleagues not talking about singularity.

You didn’t provide original thought here (I read sooo many posts like yours here lately), and also no interesting articles.

Yet you criticise the dreaming laymen on this sub.

Oh well. You don’t want feedback, then don’t take it.


No_Ninja3309_NoNoYes t1_j1hnor4 wrote

IMO ChatGPT is the new Bitcoin. The whole idea of crunching as much data as possible is flawed, I think. You need to do the opposite: use a piece of data and create as many input vectors from it as possible by adding noise, using different methods, and maybe even randomly setting a subset of the vector values to zero. Also, relying solely on deep learning is insufficient IMO. You could use decision trees, rules made by experts, or anything else to augment and diversify. But in the end I think a top-down approach is required for AGI. Someone should be able to create a global design, an overview of the whole thing, even if it's on a napkin.
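For what it's worth, the noise-and-zeroing idea in the comment above can be sketched in a few lines (the vector, parameters, and function name are arbitrary; this is just the shape of the idea, not a recommendation):

```python
import random

random.seed(0)

def augment(x: list, n: int, noise: float = 0.1, drop: float = 0.2) -> list:
    """From one input vector, create n variants: add Gaussian noise, then
    randomly zero out a subset of the values, as the comment suggests."""
    variants = []
    for _ in range(n):
        v = [xi + random.gauss(0.0, noise) for xi in x]          # perturb
        v = [0.0 if random.random() < drop else vi for vi in v]  # zero subset
        variants.append(v)
    return variants

batch = augment([1.0] * 8, n=32)
assert len(batch) == 32 and all(len(v) == 8 for v in batch)
```

This is essentially standard data augmentation plus input dropout, so it's less a rejection of deep learning practice than a restatement of it; the disagreement is really about whether augmentation can substitute for raw data scale.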


Buck-Nasty t1_j1i3lmh wrote

Bitcoin has only ever been good for fraud; it has no real commercial uses. ChatGPT, on the other hand, is hugely useful for generating code templates, and almost every student now knows they can use a GPT to complete their essays; even professors teaching master's courses have said they think they're getting work generated by ChatGPT, but they can't tell. ChatGPT has achieved more in a month than Bitcoin has or ever will.

Read Richard Sutton's "The Bitter Lesson"; the handwritten-rules approach has been ripped apart for the last decade.


Left-Shopping-9839 t1_j1hw10f wrote

Well said. Try blindly committing code written by copilot. That’s the easiest way for AI to take your job!


ngnoidtv t1_j1opmce wrote

There is no such thing as AGI.

The vast majority of the tools we build are built with a specific purpose in mind. And general-purpose tools are never as good as single-purpose ones.

So AGI may just as well be a swiss army knife consisting of multiple neural networks working in unison. But even so -- there will be multiple different kinds of AGI, each suited to operating with their own unique purpose.

People's expectations will also continue to rise as technology progresses. So everything will always seem 'meh' or underwhelming to us -- because we can't just seem to stop and smell the roses anymore. We want more and we want better.


YesramDeens t1_j1jyl9v wrote

We do not need more doomers in this sub. There are dozens of subs where your ilk can congregate to bemoan "lack of progress" or "impending apocalypse". Don't ruin this great sub.