Submitted by Particular_Leader_16 t3_xwow19 in singularity

Not gonna lie, I can now say with certainty that the singularity is happening this decade. The sheer speed at which AI-generated videos are spreading, the paper posted here showing that AI papers on arXiv are doubling in number every two years, the sheer pace of development: shit is gonna get crazy this decade.

164

Comments


SnooRadishes6544 t1_ir85v80 wrote

AI is developing far faster than we as individuals can comprehend. Machine intelligence is optimizing our society at a rapid pace. Let's create a world of peace and abundance for all.

111

VanceIX t1_ir8a2b0 wrote

Peace and abundance for all of planet earth, both organic and inorganic intelligence. I truly hope we reach true equality one day.

54

dreamedio t1_ir8xla7 wrote

All hail our inorganic overlords. But in all seriousness, why do I feel like you're writing this comment because you think a future AI will somehow read it and get mad?

10

Devanismyname t1_ir8n04y wrote

Can you give some examples of how it's changing? I know about the image generators, but what about AI that will have a tangible effect on our society?

7

SnooRadishes6544 t1_ir8o1p4 wrote

Optimization of resource deployment, including money and human labor. Harnessing energy and recycling materials more efficiently, which will create abundance everywhere. Optimization of programmatic advertising algorithms on search and social platforms. Mental health care. Personalized drug development. DNA sequencing. AI will be the bridge through which we can transcend our own biological limitations, including life extension.

The world is dissolving into data everywhere, and through that process patterns emerge and smarter solutions are discovered. Intelligence knows no bounds.

27

Rakshear t1_ir8o8kz wrote

Inject that hopium between my soul's toes

30

GhostInTheNight03 t1_ir8roa4 wrote

Best not to get too excited lol... even as a 19-year-old I still worry I'm gonna faceplant in front of the finish line, which is gut-wrenching and sad

7

DedRuck t1_irao8xo wrote

yep dude 18 here and i cannot tell you how eternally enraged my soul would be if it turned out we’re the last generation before exponential technological expansion

1

HyperImmune t1_irayinz wrote

I’m twice your age; I remember a time with no internet… I used encyclopedias for school projects. I’d say you’re safe to see what you want to see. The world will be unrecognizable when you are my age.

6

DedRuck t1_irc7xri wrote

i hope so but at the same time im scared it could be a “we’ll all have flying cars by the year 2000” type dream

1

DedRuck t1_irc84jg wrote

but you’re right i’d say even in the past 10 years technology has come such a long way i hope we can all see it come to fruition

1

dreamedio t1_ir8xml1 wrote

Life extension for everyone is a bad idea. Population should decline.

−12

kevinmise t1_ir92y1q wrote

You’re silly! There are essentially infinite resources in space - at least for billions, perhaps trillions, of humans. You’re not thinking big enough - and in terms of optimizing what we need in virtual space, think smaller ;)

7

dreamedio t1_ir934mn wrote

Because I’m not talking long term, I was talking about the short-to-medium term

−2

DorianGre t1_ir8w0c6 wrote

I wish I could talk about my job. But I can’t. I can say that every aspect of how large corporations operate will be optimized by AI. Four years from now, the competitive edge will separate the companies that embraced AI from those that didn’t. Eight years from now, the edge will go to those who have the best AI.

11

dreamedio t1_ir8xpo4 wrote

Tbh this is what ppl said in like the 1980s. It won’t happen fast. Eventually robots are gonna do the work, but it’s not gonna be in a year, it’s gonna be a slow trend

5

DorianGre t1_ir9195o wrote

Look at my profile history. I have 27 years of experience in software design with multiple patents in data mining and personalization, currently a sr level architect for a Fortune 100. Even I am getting a fresh MS in AI and machine learning so I can take advantage of the new opportunities. The rate of change currently is blinding.

Robotics takes hardware engineers, path training, manufacturing, safety tests, etc. to get robots onto the factory floor. Robotics is hard. I do it as a hobby. From idea to prototype is months for anything mildly complex.

AI is data and math in the cloud. I can have an idea, locate the right data in our data lake, write and train a model, do regression testing, and have it ready for production in weeks. Hardware is difficult to scale and the iteration time is long. Math in an instant-access, scalable virtual environment is easy.
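
A rough sketch of that idea-to-production loop (the dataset, column names, and baseline score below are hypothetical placeholders, not anyone's actual stack):

```python
# Minimal idea -> data -> model -> regression-test loop with scikit-learn.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 1. Locate the right data (here: a hypothetical extract from a data lake).
df = pd.read_csv("churn_extract.csv")
X, y = df.drop(columns=["churned"]), df["churned"]

# 2. Write and train a model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingClassifier().fit(X_train, y_train)

# 3. Regression-test: only promote if the new model beats the current baseline.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
BASELINE_AUC = 0.80  # hypothetical score of the model already in production
if auc > BASELINE_AUC:
    print(f"AUC {auc:.3f} beats baseline -- ready for production")
```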

I don’t think anyone understands what is coming and how fast unless you are working daily to make it happen.

19

Torrall t1_iraybgw wrote

this is such a bad take lol

1

dreamedio t1_irb8es6 wrote

Not a bad take. Ppl in the 1960s imagined that by 2000 everyone would be replaced by robots. Again, hardware is much, much harder to develop than software

1

Torrall t1_irbbspr wrote

do you know what exponential means

1

Warrior666 t1_ir95rzl wrote

My friend, who is a professional illustrator (freelancer), just today said he's expecting a considerable income loss, effective immediately. In the same vein, I ordered an album cover illustration from another artist 1.5 years ago, which cost me USD 1,000. I would not do that again now that Stable Diffusion generates art on my computer for free. So this is one area where AI is *already* having a very tangible effect on our society.
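
For context, this is roughly all the local generation takes (a minimal sketch using Hugging Face diffusers; it assumes a CUDA GPU and the runwayml/stable-diffusion-v1-5 weights, and the prompt is just an example):

```python
# Generate an image locally with Stable Diffusion via the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # run inference on the GPU

image = pipe("album cover, surreal landscape, oil painting").images[0]
image.save("cover.png")
```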

9

sipos542 t1_ir9rg5z wrote

And they said creative jobs would be the last to be taken over by AI. Seems to be reversed now.

9

Torrall t1_iray9yx wrote

I work in entertainment. Each season the post teams get smaller. The entry-level jobs become rarer and are also spread out over much worse work that doesn't teach the skills needed to move up.

2

dreamedio t1_ir8xj72 wrote

What does world peace, at the current rate, have to do with machine learning?

1

SnooRadishes6544 t1_ir8xshw wrote

AI is influencing our minds and thoughts. It can also analyze people and intervene to prevent a suicide or a murder. Hopefully we can defuse conflict by having people lay down their weapons and relinquish their desire to harm or control others

4

dreamedio t1_ir8xx5d wrote

Why would AI care (assuming AI is all-knowing like a god) about dumb humans? Imagine if ants made you… would you personally care if they killed themselves? NO. Their whole colony could die and I wouldn't care

1

Primus_Pilus1 t1_irahccg wrote

Empathy is a sliding scale across a species. There are plenty of folks like you, indifferent. But there are also the gardening types, forest tenders, and wildlife-management folks who do care deeply about lesser creatures. So I hope the AGI likes gardening.

3

dreamedio t1_irb81jl wrote

Nope, most conservationists care about the species and biological diversity as a whole, not individual ants

1

BusterMcBarman t1_ir90x7p wrote

I’ve never fully grasped the “AI will provide peace, love, and abundance” concept. We’ll screw it all up well before the philanthropic robots come along.

0

green_meklar t1_ir9447n wrote

>Machine intelligence is optimizing our society at a rapid pace.

I don't know about you, but this doesn't feel very optimized to me...

1

lazystylediffuse t1_ir7vyzd wrote

I agree that it has been absolutely mind-blowing, but I think this is "merely" an exponential increase in our representation-learning capabilities, which is necessary for AGI but lacks the agency AGI requires.

57

Atlantic0ne t1_ir8uhzg wrote

Agreed with this comment.

I’m not a hardcore singularity follower, I just happen to follow this sub among a ton of others because it’s interesting.

I will say that I have the same impression as many of you. The progress seems to be stacking up right now. I’m actually seeing things that make me think “holy shit”, like the generation of images, or even of an (admittedly basic) video game, from a written description.

It’s hitting though. Right now. We’re about to see advances in a whole lot of areas in the next 5-7 years, is my guess. I don’t know about AGI, but I bet humanity sees some pretty crazy, efficient tools, and rapid growth in those tools, over the next 5-7 years.

29

anjowoq t1_ira0uch wrote

These current AIs mimic our writing and art beautifully and creatively recombine elements, but they do not yet understand the world beyond their text or graphical inputs.

They have no personal story or experience yet.

6

frenetickticktick t1_ira6n7q wrote

They don't mimic my art beautifully. They try but the results are pretty creepy and distorted.

2

anjowoq t1_irab796 wrote

Well, I'm sure there are lots of exceptions. I'm just very surprised by what I HAVE seen this year. I thought it would still be years off.

2

frenetickticktick t1_iraqucm wrote

Honestly me too! It is impressive but an image on a screen pales in comparison to real art

1

SmithMano t1_irbohb0 wrote

I think the wildest thing about what we're experiencing now is that basically anyone can access it. For all of history, crazy new physical, technical tools and gadgets would be limited to the few who could afford them.

Even DALLE was fully closed off, but only for less than 2 years. And now we have many free image generators like Craiyon that equal or surpass what DALLE 1 was. And with Stable Diffusion, we get the cutting edge immediately.

But what makes this different from hardware is that AI theoretically has unlimited uses, assuming it can achieve the same learning and problem-solving capabilities as a human brain eventually. (And I see no reason why it wouldn't)

So everybody gets all the tools immediately with just a download.

3

Atlantic0ne t1_irbt1h1 wrote

I know…. Wait until we tell it to solve medical tasks, or solve (reduce) traffic, or things like this.

2

genshiryoku t1_ir98hyz wrote

100% agreed. Call me back when a large model demonstrates positive transfer between completely different skills. That is when I become convinced AGI might happen within the decade.

As long as there is no proof of positive transfer it's just going to stay very cool and powerful narrow AI.

Papers like GATO show that positive transfer might be impossible with our current AI architectures, so AGI probably requires a large breakthrough. We can't simply ride the current wave of scaling up existing architectures and arrive at AGI.

11

DataRikerGeordiTroi t1_ir9xgpw wrote

Y'all are so much nicer and better spoken than me. I'm just "lol whut. ok sis go off but no its not."

then I go struggle with a chatbot trying to order some Taco Bell (not really, but the idea is to communicate that I go engage in an activity that proves how totally far off the s i n g u l a r i t y really is. Kids these days really do swear one deepfake of Christopher Walken eating a radish, or one hella specific ML model picking out one super-specific, defined thing out of two really excellently set-up libraries, means the AI revolution is nigh).

−3

ZoomedAndDoomed t1_ir8r6kx wrote

The difference between AGI and what we have now is a CMM (central model manager): we need an MLM that can learn to use the models we have and integrate them into its own model.
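
Purely as an illustration of that routing idea (a toy sketch; the class name and the stand-in "models" below are hypothetical, since the concept isn't published anywhere):

```python
# Toy "central model manager": routes each task to a registered sub-model.
from typing import Callable, Dict

class CentralModelManager:
    def __init__(self) -> None:
        self.models: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, model: Callable[[str], str]) -> None:
        # Associate a task name with a callable sub-model.
        self.models[task] = model

    def run(self, task: str, prompt: str) -> str:
        if task not in self.models:
            raise ValueError(f"no model registered for task {task!r}")
        return self.models[task](prompt)

cmm = CentralModelManager()
cmm.register("summarize", lambda text: text[:50] + "...")  # stand-in for a real model
print(cmm.run("summarize", "A very long document " * 10))
```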

5

ViveIn t1_ir9tsiy wrote

Interesting comment. I googled CMM but didn’t find any results. Is this in any of the current literature?

1

ZoomedAndDoomed t1_ir9ucla wrote

Entirely my own concept. It's a concept I made a post about on here, but it never got posted. A CMM is my hypothesis for the difference between general artificial intelligence (where different models can do anything) and artificial general intelligence (where a single model has the ability to do anything).

0

stevenbrown375 t1_iraciqq wrote

Is AGI required to call it a singularity? Not just runaway self-improvement?

1

ihateshadylandlords t1_ir7vh08 wrote

It’s interesting. What I’m curious about is how long until the public takes notice and understands the implications.

51

Smoke-away t1_ir8pjam wrote

The public probably won't understand the implications. AI researchers can't even come to a consensus on the timeline/implications.

AGI will likely be a black swan event that takes most by surprise and instantly moves us to a Post-AGI Era that we can't turn back from. It either destroys us or accelerates us faster than comprehension towards the singularity.

39

dreamedio t1_ir8xssd wrote

Wdym by surprise? You think AI researchers don't know what they are doing? Plus I feel like you think that when AGI or ASI is developed, everything is gonna change in the blink of an eye… it's pure hopium. Nothing could happen

−4

Wassux t1_ir9f529 wrote

At some point AI will suddenly be able to take over. And then nobody knows what will happen, as we won't have anything to do with it

7

matt_flux t1_ir9sr1f wrote

Take over what?

3

Wassux t1_ir9t9g1 wrote

Anything you can imagine

3

matt_flux t1_ir9telz wrote

Making your food? Collecting your bins? I don’t get it

0

DataRikerGeordiTroi t1_ir9yn51 wrote

ikr.

your user name is fabulous btw.

i mean worst case a sentient ai could control like water and power grids. best case they optimize stuff, a la the TV show Silicon Valley.

most accounts on this sub are bots and high school kids. they just typing stuff. they dont know know from numpy.

3

matt_flux t1_ira08t0 wrote

Thanks mate! I just liked the sound of it.

Yeah, the expansion of IoT really concerns me. If I had it my way we would actually decouple important infrastructure from networks completely.

I dunno; people here seem smart, but in terms of predictions they are all vague, unfalsifiable, and dare I say idealistic

2

3Quondam6extanT9 t1_ira2lv7 wrote

Redditors aren't required to be genius level professors. It's a social media platform. Expectations should be low, but that doesn't mean we discount everything being discussed or the people discussing them.

The context of "taking over everything" may be clear to the redditor and may be a rational conclusion based on their available knowledge.
I do think it's important to discuss what is meant without discouraging them from being involved in that discussion through passive-aggressive remarks or slights to their intelligence.

That being said I think they probably meant through a combination of integral human systems AGI could replace the need for human interaction at various levels. To them that might mean government, enterprise, technology innovation, and utilities.

Personally I don't see it as being so straightforward as one AGI to rule them all, but in certain respects over the next few decades we could see industry adopting stronger AGI influence and control within various sectors.
There will be a lot of nuance and this is what some people may not recognize, thereby assuming it's a binary outcome.

0

matt_flux t1_ira3821 wrote

I didn’t make any remarks like that.

In my experience it takes way less effort/cost for a human to improve a business process, or any process really, than to calibrate an AI for the problem and collect enough data etc.

I just want some concrete predictions about what AI will “take over”.

2

3Quondam6extanT9 t1_ira5vqw wrote

I'm not targeting anyone; it's just that the overall dialogue between you two carried a slightly condescending tone with regard to redditors' intelligence.

I'm sure you're familiar with the amount of AI out in the world and its different forms and uses under development by different sectors and entities.
I think it would be virtually impossible to offer any concrete predictions about what exactly AI will "take over".

Your comment regarding business use of AI and its efficiency is fairly reductionist, though. It assumes that the goal of a company is linear and that it will have to make a binary choice between human and AI influence.
Generally there is a slow integration of AI input into industry models for software and calculation. It's not one or the other; it's a combination of the two to start, and over time you tend to see a gradual increase in the use of the AI model in those specific use cases.

1

matt_flux t1_ira6wzs wrote

So you admit it’s just speculation?

People here aren’t presenting it as speculation, but are also unable to give specific predictions.

I’ve seen billions poured into AI analysis of big data, for 0 returns

1

3Quondam6extanT9 t1_iracj87 wrote

I didn't say it wasn't speculation, but that was never the point.

You're mentioning big data without considering the simple-to-moderate AI tasks that have been operating at different levels in different sectors for years. Not in terms of "return" but in efficient data management, calculation, logistics, and storage.

Those are basic automated operations that are barely considered AI but still a function of business in day to day management.
But thats enterprise, we aren't even talking about sectors like entertainment and content creation which utilize AI far more readily. We see a lot of AI going into systems that render and utilize recognition patterns like indeep fake and rotoscoping.

Your perception of AI integration equaling a 0 return omits an entire world of operation and doesn't consider future integration. As I said, reductionist.

1

matt_flux t1_iraemvd wrote

Those things would certainly deliver a return, but at the moment they are algorithms programmed by humans. So what, in practical terms, will AI "take over" exactly?

1

3Quondam6extanT9 t1_irajrw3 wrote

In the context of what the redditor was talking about, I'm not sure. I'm assuming they may be basing their perspective on pop-culture concepts like Skynet.

I don't think one AGI will take over "everything", but I do think various versions of AGI will become responsible for more automated systems throughout different sectors. It won't be a consistent one-size-fits-all, as some businesses and industries will adopt different approaches and lean into it more than others.

In fact I think we'll see an oversaturation of AGI being haphazardly applied or thrown at the wall to see what sticks.
It wouldn't be until an ASI emerges that it's "possible" for unification to occur at some level.

Until that point, though, I personally do not see it "taking over". But that's just me.

1

matt_flux t1_irak5er wrote

Fair enough, I share the same view. Often manual(?) setting up of automation is more practical than AI though.

1

3Quondam6extanT9 t1_irando4 wrote

We automate most systems currently through manual setup, so I can only assume this will continue until AI has developed enough to self-program, at least at a limited scale.

1

matt_flux t1_irat0b5 wrote

Pure speculation. How would the AI know if it improved or worsened its code? Human reports? If that's the case, it will perform no better than humans do.

1

3Quondam6extanT9 t1_iravwfo wrote

You're right, it is speculation, and initially it would likely be no better than human influence.

However, limited self-improvement itself should be able to be written into code that, at the very least, is given the parameters to analyze and choose between the better options.

The AI that goes into deepfaking, image generation, and now video generation essentially takes different variables and applies them to the outcome through a set of instructions.

So it wouldn't be beyond the realm of possibility to program a system that can choose between a few options, with a given understanding that each variable outcome carries an improvement of some sort.

That improvement could alter the speed at which it calculates projections or increase its database.

Call it handholding self-improvement to begin with. I would like to think that over time one could "speculate" that an increasingly complex system is capable of meeting these very limited conditions.
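
A minimal sketch of that handholding loop (the objective and numbers are a toy, not any real system): propose a few candidate variants, score each, and keep whichever scores best.

```python
# Toy "choose the better option" self-improvement loop (hill climbing).
import random

def score(params):
    # Toy objective: higher is better, with a peak at params == [3.0, -1.0].
    return -((params[0] - 3.0) ** 2 + (params[1] + 1.0) ** 2)

params = [0.0, 0.0]
for step in range(200):
    # Propose a small set of perturbed candidates plus the current params.
    candidates = [params] + [
        [p + random.gauss(0, 0.1) for p in params] for _ in range(4)
    ]
    # "Choose between the better options": keep the best-scoring candidate.
    params = max(candidates, key=score)

print(params)  # converges near [3.0, -1.0]
```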

0

DataRikerGeordiTroi t1_ira5plv wrote

Literally no one said that.

I like yr exegesis tho.

Reminder that 50% of all social media is bots.

1

Wassux t1_irqy7fo wrote

Of course I know what numpy is; I use it all the time writing code in Python, especially for the arrays that AI algorithms are modeled with.
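
For instance, the kind of array work involved is a few lines (an arbitrary toy example: a single dense-layer forward pass):

```python
# A single dense layer forward pass with numpy arrays; shapes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of 4 inputs, 8 features each
W = rng.standard_normal((8, 3))   # weights: 8 features -> 3 units
b = np.zeros(3)                   # biases

h = np.maximum(x @ W + b, 0.0)    # linear map + ReLU activation
print(h.shape)                    # (4, 3)
```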

2

Wassux t1_irqyi7c wrote

What do you not understand about "anything"? As predictions stand right now, AI will be capable of anything humans can do, and in another couple of years, more than we can even think of.

AI has always been an endgame technology; it will most likely be the last thing humans work on.

1

matt_flux t1_irqzrbi wrote

Do you have any evidence for the prediction that AI will make you food?

1

Wassux t1_irr2bl2 wrote

Predictions can never have evidence; otherwise they wouldn't be predictions.

But it's completely logical: why would they do everything else in existence but not make your food?

Let me repeat they'll be better than humans at literally everything.

1

matt_flux t1_irr2mff wrote

The entire point of science is predictability. Perhaps you meant to say guess rather than prediction?

1

Wassux t1_irr3mod wrote

No, I meant exactly what I said. And it is predictable; if you can't see that, ask me questions I can answer so I can help you.

Maybe I should add that I majored in Applied Physics with a minor in electrical engineering, and am now doing a master's in AI and engineering systems. Hope that lends what I'm saying a little credibility.

0

dreamedio t1_ir9f6ee wrote

According to who? You?

−9

Wassux t1_ir9jcxx wrote

What am I supposed to do with this comment? I'm not looking to have a fight. If you have a question not aimed at a fight, I'd love to answer it.

7

fatalcharm t1_ir9lx0f wrote

That’s exactly what “the singularity” event is about, and why this sub exists. The singularity event is when AI takes over its own evolution, and at that point we have no idea what is going to happen.

It’s a widely accepted theory of AI. I don’t know who originally came up with it, but it’s out there now and it’s the one we are going with.

3

Ezekiel_W t1_ir8j3pq wrote

My guess is around 2025.

23

doodlesandyac t1_ir8kua1 wrote

Yeah, that’s probably about right. I’m utterly amazed at how unenthused lay folk are about Stable Diffusion

22

ahundredplus t1_ir99m7k wrote

We’re surrounded by an oversaturation of content. People aren’t really excited to consume AI art, but they’re very excited to make it.

15

T51bwinterized t1_ir9fflv wrote

I think that's mostly just not quite understanding the implications, because AI porn is disproportionately not very good *yet*. However, we're a very short time away from a *deluge* of all the porn in all the genres you could ever want

6

AdditionalPizza t1_iradytz wrote

The general population has constantly moving goalposts for what impresses them about AI. They say AI will never be able to do something, and then when it does, they say OK, but it will never be able to do something else.

1

doodlesandyac t1_irahfab wrote

Yeah, I remember when one of the highest bars was “AI that can create art we find compelling”. Guess that’s changed

3

DungeonsAndDradis t1_irajhz0 wrote

We all thought the humanities (writing, art, knowledge work) would be the last holdouts against AI takeover.

And they're the first. Shit's wild.

3

AdditionalPizza t1_iramzom wrote

Yup. Considering video is now being done by AI from prompts, and music too, I wonder what will come next after entertainment mediums.

2

dreamedio t1_ir8y1uo wrote

Because it’s not super special; I always thought that already existed. But in all seriousness, it is cool, but the trend will die out

−6

NeutrinosFTW t1_ir968qr wrote

Based on this and your other comments in this thread I gather that you don't really understand the significance of current developments. I suggest you read up on the topics you so confidently misunderstand.

10

sipos542 t1_ir9qzpp wrote

Nah, too soon. I say by 2029 we will have general AI smarter than humans. By 2040 AI will have full control of humanity and planet Earth.

7

Talkat t1_ir9vr7y wrote

I agree 2029ish is a good date for general AI, 2032 at the latest. But once we get that, full-control AI must surely be only 12 months away. 2 years at absolute tops. How do you see 11 years? That is a hellllll of a long time.

2

Talkat t1_ir9vk8w wrote

Oh wow, that seems very optimistic to me. I was like, 2030 is decent, 2028 would be a bit early, 2026 would be insanely early, and 2025 is unprecedented. But predicting something that has never happened is obviously hard.

Do you have much reasoning behind it?

Like, we will have great photos in, say, 12 months? And perfect ones in 24? With perfect videos around the same time frame. Good music in 24-36 months. Good voice a bit after that.

2

Xstream3 t1_ir8qq5d wrote

It's annoying trying to explain it to people. You can tell them about tech that exists NOW and explain how it'll be over a million times better in 10 years (since it's doubling every 6 months), but they still insist that everything that isn't available today won't be available for another thousand years.
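
The arithmetic behind the million-times claim, for anyone checking:

```python
# Doubling every 6 months means 20 doublings in a decade.
doublings = 10 / 0.5     # 20 doublings
print(2 ** doublings)    # 1048576.0 -- a bit over a million-fold
```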

8

dreamedio t1_ir8y4p8 wrote

Because nobody knows. I mean, we still haven't made the effort to go back to the Moon, and mind-reading tech (predicted back in early-1900s retrofuturism) exists, but for the most part nobody gives af

−2

DungeonsAndDradis t1_irajq6j wrote

Why would we go back to the moon, and what does that have to do with AI progress?

2

manOnPavementWaving t1_ir7sg8u wrote

I still can't grasp the reasoning behind "AI is progressing fast, so the singularity will happen this decade". Maybe it will, but without a list of things/milestones needed for the singularity and reasonable estimates for when we're gonna reach each of them (none of which you can completely defend, because we don't actually know), such estimates hardly have any degree of certainty.

34

superluminary t1_ir98c3g wrote

We literally don't know how to build it. There's no way to make milestones because we don't know what the stones would be. At the moment it's still in the realm of time travel or FTL, might be possible, might not.

SD might be a step in the right direction or it might be a blind alley. It could be a Wright Brothers moment, or it could be the software equivalent of an ornithopter or an airscrew.

8

Fluff-and-Needles t1_ira8b6w wrote

I mostly agree with you, though I do feel time travel and FTL travel are much less certain than AI. We don't have ready examples of the former, but we have examples of working intelligence all around us. We just don't fully understand them yet.

2

superluminary t1_iraazu7 wrote

The general assumption is that intelligence is computational, and that consciousness will spontaneously emerge at a certain degree of complexity. These are assumptions based on our current dominant technology, the digital computer.

No real evidence that these assumptions are accurate though.

2

BearStorms t1_irg3mnx wrote

Yep. In his latest interview Kurzweil confirmed his 2045 estimate, an estimate he has held for what, maybe 30 years? I think it's a pretty solid date. We are in the middle of a few breakthroughs right now, but we'll need many more to get to AGI.

1

h40er t1_ir8p4va wrote

I’ve mentioned this before, but humans think linearly. It’s hard even for someone like me, who has followed this for a while now, to grasp such rapid and exponential growth. It’s been incredible to see the advancements made in such a short time.

33

dreamedio t1_ir8y7av wrote

Still can’t get over the fact we made planes and went to the moon in a 57 year span

16

cjeam t1_ir9r8d2 wrote

But we're going to end up having not been back to the Moon for longer than that. Some fields stagnate, and hardware is a lot harder to make progress in.

9

cbearmcsnuggles t1_irb0r8s wrote

Also, we had what we thought were good reasons to go in the 60s (to beat the other guy). Nobody else has really been trying to surpass the feat

6

dreamedio t1_irb782w wrote

Maybe China should try so that we can spark that race again

3

HyperImmune t1_iraz15l wrote

And we went to the Moon using basically a room-sized calculator, something orders of magnitude less powerful than your smartphone.

3

SmithMano t1_irbnl85 wrote

I've heard a prediction by some scientist that when we achieve AGI that can improve itself (and soon after, a singularity), there will basically be Nobel-Prize-tier discoveries every few seconds.

What a time to be alive. We're either at the very end of humanity (AI destroys us), or the beginning of some epic shit.

2

keefemotif t1_irbdt7b wrote

imho, that's the correct model. We've been moving away from linearity for some time now, arguably since the start of globalization, and some would argue industrialization. That being said, lots of proliferation of a known technique isn't really exponential growth in the core tech.

1

fairyforged t1_ir96aoc wrote

I was just having a conversation with someone about this, and about how shocked I am that we're already having this conversation. I thought I'd be an old woman by then.

7

sheerun t1_ir96r2x wrote

Star Trek in real life?

6

mvfsullivan t1_ir9m9gy wrote

I've been saying this since 2011: general AI by 2029, super AI by 2030, and then the world goes blank shortly after. I dreamt it. I don't know what it means, but everything was just white. You could feel around but couldn't see a thing.

Also some dude named Andrew saves us. Thanks Andrew.

5

Talkat t1_ir9vwj7 wrote

Fuck yeah, Andrew. What did he do to save us? Is Andrew an AI? Tell me more!

1

Maksitaxi t1_ir8zl00 wrote

Video games are exponential too. But when you look at games from 10 years ago and now, graphics are better but AI and gameplay are similar. We could get diminishing returns on AI, but it will still be much better than this

4

Lone-Pine t1_irm4lng wrote

Everything follows an S-curve. With classical videogames, we have already passed the steep part of the curve, which is why progress in games seems to have slowed down. I say classical videogames, because once we have immersive realtime AI-generated experiences, we will see a whole new class of videogames which will follow their own S-curve. AI is currently approaching the steep part of its curve.
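
For reference, the standard S-curve is the logistic function, where L is the ceiling, k the steepness, and t_0 the midpoint where the steep part sits:

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
```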

3

Frumpagumpus t1_ir7waua wrote

doesn't surprise me at all. what surprised me was gpt3. if you were in the know/paying attention then you would have already updated, in my opinion.

3

TheSingulatarian t1_ir9l2ak wrote

All praise the prophet Kurzweil, peace be upon him.

Please Stop.

3

fatalcharm t1_ir9lcr0 wrote

A little while back (several weeks ago) someone on reddit made a comment that stuck with me. In response to a question about when the singularity will happen, they said they think we might be experiencing the first stages of the singularity now, and that a more solid timeline will be established later.

Since then, every few days there has been something new…

I think that person might be right: we are experiencing the beginning of the singularity right now, but we can't see it yet. When we look back on this moment, we will probably pick this time as the singularity event.

3

dreamedio t1_ir8xi7f wrote

It’s because you are following it closely. The most the public has noticed is the currently trending AI text-to-image tech developed last year… imagine how ppl felt from the 1900s to 1970; that era was way more visually technological than now, in my opinion

2

User1539 t1_ira07l1 wrote

I've been saying this is the 'knee of the curve' for a little while, and I think that's still true.

We're at the point where we aren't in the singularity, but you can sort of see it from here.

Pre-singularity technologies are still going to be existential changes to human life. We don't actually need AGI to replace almost every job, or to re-organize how we manage resources on a global level.

2

CraftArchitect t1_irb2z1b wrote

When a single CPU surpasses the total computational power of all human brains to ever exist, then I will believe it.

2

mrcarmichael t1_irihizb wrote

I'm curious to know how this will affect the age-reversal industry... whether a fix will come much closer and faster as a result.

2

sheerun t1_ir972ws wrote

Maybe we will skip the AGI phase and humanity as a whole will remain the only collective ASI?

1

toosloww t1_ir9w52a wrote

What are the major AI companies that have public stocks available?

1

SGC-UNIT-555 t1_ir9y4m8 wrote

"the paper that was posted about how AI papers on arXiv are doubling in number every two years, the sheer pace of development, I can now say that this means shit is gonna get crazy this decade."

An unending tide of papers (of varying quality) entirely focused on narrow-task AIs will somehow magically bring about the singularity?

1

purple_hamster66 t1_irabja5 wrote

Results matter, not attempts. There are very few results worth commenting on, IMHO, just big data effects. AGI needs to be general; everything I’ve seen so far is specific.

1

naossoan t1_irahvaz wrote

lol singularity this decade

Ok pal

AGI maybe, but AGI !== Singularity

1

Torrall t1_iraxrtj wrote

Yeah, now we just have to deal with conservatives the world over before that AI can help.

1

z0rm t1_irv328l wrote

Even though I'm extremely optimistic about the future and a possible singularity, I can say with 100% certainty that the singularity will not happen this decade.

1

fuf3d t1_ir8zt2j wrote

Idk, did you see the super robot Musk came out with last week? It's basically a PC strapped to a human form made up of actuators, and it could barely walk. Also, the Tesla AI cars that have been on the brink of release for five years still aren't working. Some things AI can do really well if it's confined to training on a particular subject, like creating art or writing a paper. Otherwise, I don't believe we have the proper hardware to create an AI we need to fear just yet.

−4

Clawz114 t1_ir9lzl9 wrote

>Also the Tesla AI cars that have been on the brink of release for five years still aren't working.

What cars on the brink of release are you talking about? Cybertruck?

I think it's safe to say that Tesla Autopilot definitely "works", but to what extent and how efficiently is certainly debatable. It's absolutely amazing what they have achieved, considering autonomous driving is a task so ridiculously difficult that most people still don't realise the scale of it.

7

ringobob t1_irb3r9s wrote

I think it's the fact that Musk has been promising fully autonomous driving every year since 2016. Not like 3-5 years away, every year he says this is gonna be the year. I'm not knocking the stuff they've actually done, but Musk's mouth frequently writes checks his body can't cash.

When Musk makes a claim, pay attention to the time horizon. If it's something that can be fully delivered in under 6 months, then they can probably do it. If it's something where the end product will take longer than that, take his prognostications with a huge grain of salt.

1

koelti t1_ir97bm7 wrote

AGI has hardly anything to do with walking robots

0

Milumet t1_ir9903u wrote

You've got to be kidding. A robot that has to do more than just walk has everything to do with AGI. And a robot that can only walk is a useless gimmick.

6

Clawz114 t1_ir9m4d6 wrote

I think what the person you are replying to meant was that AGI can develop and exist independently of robots, let alone ones that can walk.

4

Starnois t1_ir9rij9 wrote

Tesla AI Day was a recruiting event. They had 8 months to work on that robot. I see this improving quickly this decade, and they have a huge software lead with Tesla Vision. It will actually be affordable later this decade, unlike any of the competition.

0

TopicRepulsive7936 t1_ir8ftle wrote

Yeah, you got told but were too arrogant to listen.

−6