space_troubadour t1_j8dav3e wrote

Isn’t that just what exponentials do…


sprucenoose t1_j8dzbqs wrote

He just paraphrased the long-established definition of the singularity, but made it confusing and wrong.


jamesj t1_j8f8w5x wrote

What did he get wrong? He's saying the rate of exponential change is increasing, which I think is true. Like, the doubling time is getting shorter over time.


sprucenoose t1_j8jfprm wrote

>What did he get wrong? He's saying the rate of exponential change is increasing, which I think is true. Like, the doubling time is getting shorter over time.

Even doubling, meaning a relatively small base of 2, quickly results in a graph with an effectively vertical rate of change and increasingly astronomical numbers. A higher base, like 10 or 1,000,000 or whatever, produces the same vertical line even more quickly, and an even higher base goes vertical more quickly still, ad infinitum.

That is what exponential equations do - increasingly graph to vertical, ever more sharply with ever higher bases. Even raising a power to another power just multiplies the exponents together, yielding a single larger exponent. A "compounding" exponential equation can only do the same thing - increasingly graph to vertical. It's not helpful.
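To put a number on that, here's a tiny Python sketch (purely illustrative) counting how many multiplication steps any base needs to blow past a billion:

```python
def steps_to_exceed(base, target):
    """Count multiplication steps until base**steps first exceeds target."""
    value, steps = 1, 0
    while value <= target:
        value *= base
        steps += 1
    return steps

# Any base > 1 reaches astronomical values fast; a bigger base just gets there sooner.
for base in (2, 10, 1_000_000):
    print(base, steps_to_exceed(base, 10**9))
# 2 -> 30 steps, 10 -> 10 steps, 1_000_000 -> 2 steps
```

Thirty doublings already clears a billion; raising the base only shaves off steps, it doesn't change the vertical-looking shape.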


Biuku t1_j8g2auk wrote

He described the curve from inside it. Like the Milky Way.


fool_on_a_hill t1_j8havtu wrote

I can't stand how he's trying to claim this as an original thought that his sweet lil brain came up with all on its own! Thanks Jack, yeah buddy, we're gonna put it right here on the fridge!


just_thisGuy t1_j8edpr1 wrote

No, there are degrees of exponential, and starting with one does not mean you are always stuck with the same degree. The point he is making is that the degree is increasing.


CellWithoutCulture t1_j8fgbpg wrote

I think he means it's now super-exponential. It's rising faster than an exponential curve.


fairly_low t1_j8e1v2y wrote

I think what he is trying to describe is the Ackermann function (up-arrow notation, etc.).
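If that's the intended reading, a minimal Python sketch of the Ackermann function shows how it leaves any fixed exponential behind (m indexes the "degree" of growth: successor, addition-like, multiplication-like, exponentiation, towers, ...):

```python
def ackermann(m, n):
    """Ackermann function: each increment of m jumps to a faster growth regime."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# m = 2 behaves like multiplication (2n + 3); m = 3 is exponential (2**(n+3) - 3);
# m = 4 already produces towers of exponentials and is impractical to evaluate.
print(ackermann(2, 4), ackermann(3, 4))  # 11 125
```

So "compounding exponential" in that spirit would mean climbing the m ladder, not just picking a bigger base.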


SoylentRox t1_j8e8c8p wrote

Can you go into more detail?

In this case, there is more than 1 input that causes acceleration.

Set 1:

(1) more compute
(2) more investor money
(3) more people working on it

Set 2:

(A) existing AI making better designs of compute

(B) existing AI making money directly (see chatGPT premium)

(C) existing AI substituting for people by being usable to write code and research AI


Set 1 existed in the 2010-2020 era. AI wasn't good enough to really contribute to set 2, and is only now becoming good enough.

So you have 2 separate sets of effects leading to an exponential amount of progress. How do you represent this mathematically? This looks like you need several functions.
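One hedged way to sketch it mathematically (every constant below is invented purely for illustration): treat Set 1 as a fixed exponential rate r0, and Set 2 as a feedback term where capability C raises its own growth rate, giving dC/dt = (r0 + k·C)·C. That quadratic feedback is technically hyperbolic, not merely exponential - it blows up in finite time unless something saturates.

```python
# Toy Euler integration of capability C(t) under two regimes.
# Set 1 alone:       dC/dt = r0 * C            (ordinary exponential growth)
# Set 1 + Set 2:     dC/dt = (r0 + k * C) * C  (AI feeding back into its own growth)
# All constants are made up for illustration only.

def simulate(r0=0.5, k=0.0, c0=1.0, dt=0.001, t_end=5.0):
    c, t = c0, 0.0
    while t < t_end:
        c += (r0 + k * c) * c * dt  # Euler step
        t += dt
    return c

plain = simulate(k=0.0)        # exponential baseline, roughly e**2.5
compounded = simulate(k=0.01)  # feedback term bends the curve upward
print(plain, compounded)
```

With these toy numbers the feedback run visibly outpaces the plain exponential over the same window, which is roughly the shape the tweet seems to be gesturing at.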


human_alias t1_j8gfcbk wrote

No, the exponential distribution is memoryless, meaning it should look exponential in every frame
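Strictly speaking, "memoryless" is a property of the exponential probability distribution, but the point carries over to exponential growth: shifting the time window only rescales the curve by a constant, so its shape is the same in every frame. A quick numeric check (the rate r is arbitrary, chosen for illustration):

```python
import math

r = 0.7  # arbitrary growth rate, purely illustrative
f = lambda t: math.exp(r * t)

# f(t + s) == exp(r * s) * f(t): a shifted window is just a constant rescale,
# so the curve looks identical wherever you stand on it.
s = 3.0
for t in (0.0, 1.0, 2.0):
    assert math.isclose(f(t + s), math.exp(r * s) * f(t))
print("a shifted exponential is just a rescaled exponential")
```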


Surur t1_j8day0x wrote

Isn't he just describing what an exponential graph looks like?


imnos t1_j8dlylh wrote

He doesn't have a technical background, so I wouldn't pay too much attention to it.

I'd be interested to know how someone with a BA in Creative Writing and most of their work experience being in news reporting, then marketing at Open AI, ends up founding a company like Anthropic, which gets investment from Google.


humanbot69420 t1_j8dvyol wrote

it's all about charisma and a can-do attitude. also he probably wakes up early, reads a book every day, listens to podcasts, has a disruptive mindset, connects the dots, exploits the opportunities and extracts value from resources and employees.

or he had connections and money, that's usually the case


inglandation t1_j8gs4p7 wrote

I've seen a poker player who became the CTO of a biotech company recently. Silicon Valley is a wild, wild place.


manubfr t1_j8haq4o wrote

Yeah it's not like, say, a game developer with a chess background could become the CEO of one of the most exciting AI companies out there.


imnos t1_j8hj024 wrote

Demis Hassabis? Who has a PhD in cognitive neuroscience and actually researched AI? I mean that's one that actually makes sense.


NoNoNoDontRapeMe t1_j8fipxa wrote

What a fucking legend, can’t wait to follow in his footsteps.


[deleted] t1_j8dbyxl wrote

What’s after compounding exponential


Practical-Mix-4332 t1_j8dmb53 wrote

Exponentially compounding exponential


Lopsided-Basket5366 t1_j8dem9a wrote

Human slavery /s


visarga t1_j8dlqjd wrote

Jack is writing short sci-fi stories inspired by AI. This week's story seems related.

Tech Tales

The Day The Nightmare Appeared on arXiv

[Zeroth Day]

I read the title and the abstract and immediately printed the paper. While it was printing, I checked the GitHub – already 3,000 stars and rising. Then I looked at some of the analysis coming in from [REDACTED] and saw chatter across many of our Close Observation Targets (COTs). It had all the hallmarks of being real. I’d quit smoking years ago but I had a powerful urge to scrounge one and go and stand in the courtyard with the high walls and smoke and look at the little box of sky. But I didn’t. I went to the printer and re-read the title and the abstract:

Efficient Attention and Active Learning Leads to 100X Compute Multiplier

This paper describes a novel, efficient attention mechanism and situates it within an architecture that can update weights in response to real-time updates without retraining. When implemented, the techniques lead to systems that demonstrate a minimum of a 100X compute multiplier (CM) advantage when compared to typical semi-supervised models based on widely used Transformer architectures and common attention mechanisms. We show that systems developed using these techniques display numerous, intriguing properties that merit further study, such as emergent self-directed capability exploration and enhancement, and recursive self-improvement when confronted with challenging curricula. The CM effect is compounded by scale, where large-scale systems display an even more significant CM gain over smaller models. We release the code and experimental data on GitHub, and have distributed various copies of the data via popular Torrenting services.

By the time I was finished with the paper, a few people from across the organization had messaged me. I messaged my Director. We scheduled a meeting.

The Director: And it works?

Me: Preliminary model scans say yes. The COTs seem to think so too. We’ve detected signs of four new training runs at some of the larger sites of interest. Information hazard chatter is through the roof.

The Director: Do any of the pre-authorized tools work?

Me: Short of a fullscale internet freeze, very little. And even that’s not easy – the ideas have spread. There will be printouts. Code. The ideas are simple enough people will remember them. [I imagined hard drives being put into lead-lined boxes and placed into vaults. I saw code being painstakingly entered into air-gapped computers. I visualized little packets getting sent to black satellites and then perhaps beyond to the orbiters out there in the dark.]

The Director: What’s our best unconventional option?

Me: Start the Eschaton Sequence – launch the big run, shut down the COTs we can see, call in the favors to find the hidden COTs.

The Director: This has to go through the President. Is this the option?

Me: This is the only play and it may be too late.

The Director: You have authorization. Start the run.

And just like that we launched the training run. As had so many others across the world. Our assets started to deploy and shut down COTs. Mysterious power outages happened in a few datacenters. Other hardened facilities started to see power surges. Certain assets in telco data centers and major exchange points activated and delivered their viruses. The diplochatter started to heat up and State Department threw up as much chaff as it could.

None of us could go home. Some kind of lab accident we told our partners. We were fine, but under medical observation. No, no need to worry.

I stared up at the clock on the wall and wondered if we were too late. If a COT we didn’t know about was ahead. If we had enough computers.

How would I even know if we lost? Lights out, I imagined. Lights out across America. Or maybe nothing would happen for a while and in a few days all the planes would fall out of the sky. Or something else. I knew what our plans looked like, but I couldn’t know what everyone else’s were.

The run succeeded. We succeeded. That’s why you asked me to make this recording. To “describe your becoming”, as you requested. I can go into more details. My family are fine, aren’t they? We are fine? We made the right decision? Are you even still listening to us?

Things that inspired this story: Various fears and scenarios about a superintelligence run amok; theism and AI; the underbelly of the world and the plans that may lurk within it; cold logic of states and strategic capabilities; the bureaucratic madness inherent to saving or destroying the world.


Superschlenz t1_j8gxmn7 wrote

>This week's story

Last week's story. There was no ImportAI newsletter for the current week.


CharlisonX t1_j8fscdj wrote

Factorial or asymptotic


[deleted] t1_j8fsp5o wrote

Those sound like big math words dude


sitdowndisco t1_j8dg7tx wrote

You read “compounding exponential” and immediately dismiss absolutely everything the person says.


[deleted] t1_j8dgbvz wrote

I was just asking what’s after my dude. Put down the Reddit if it makes you angry lol


Zer0D0wn83 t1_j8dhyv3 wrote

It would be a very quiet place if everyone did that


sitdowndisco t1_j8fsyus wrote

I was agreeing with you but I worded it wrong and it sounded like I was smashing you. Not angry.


[deleted] t1_j8ft3p5 wrote

I just want to know and I keep getting different answers and they are all math words 😭


genericrich t1_j8dbwqt wrote

The main drivers for AI progress recently have been:

  • Availability of massive amounts of structured data that is easily accessed via the Internet.
  • Massive GPU farms in cloud infrastructure, used for the statistical math these AI systems need.

Most of the algorithms were written or understood back in the 60s, but everything was stored on paper back then, and there were no GPUs for fast matrix math.


PrivateUser010 t1_j8dld1f wrote

We have to pay homage to algorithmic improvement too. Neural Network Models like Transformers, Pretrained Transformers, Generative Adversarial Networks were all introduced in 2010-2020 decade and without those models, current changes would not be possible. So data, yes, processing power, yes, but models too.


FusionRocketsPlease t1_j8ee988 wrote

I wonder why these algorithms didn't come out in the 60's.


Agarikas t1_j8et372 wrote

Because there was no need


FusionRocketsPlease t1_j8etpqu wrote

There's been a lot of bizarre mental masturbation math being created since the 19th century.


SoylentRox t1_j8e7bvb wrote

This is false. None of the algorithms we use now existed. They were not understood. Prior versions of the algorithms that were much simpler did exist. It is chicken egg - we needed immense amounts of compute to find the algorithms needed to take advantage of immense amounts of compute.


FusionRocketsPlease t1_j8eeegf wrote

What do you mean computation is needed to discover algorithms?


SoylentRox t1_j8efx92 wrote

Many algorithms don't show a benefit unless used at large scales. Maybe "discover" is the wrong word: if your ML researcher pool has 10,000 ideas but only 3 are good, you need a lot of compute to benchmark all the ideas and find the good ones. A LOT of compute.

Arguably you "knew" about the 3 good ideas years ago but couldn't distinguish them from the rest. So no, you really didn't know.

Also transformers are a recent discovery (2017), it required compute and software frameworks to support complex nn graphs to even develop the idea.


genericrich t1_j8ebipv wrote

Gradient Descent was well understood in the early 20th century for fluid dynamics I believe.

So, not false. :)


SoylentRox t1_j8ecpwg wrote

But yes false? Your argument is like saying people in 1850 knew about aerodynamics and combustion engines.

Which, yes, some did. Doesn't negate the first powered flight 50 years later, it was still a significant accomplishment.


genericrich t1_j8edlrk wrote

<eyeroll> Nobody is saying there haven't been major changes in AI in the last few years. I certainly am not saying that.

But many of the underlying algorithms were well understood in different disciplines and the industry knew they would have application for AI, but the data and infrastructure just weren't there in the 60s or 1980s.


SoylentRox t1_j8eh4tu wrote

My point is that scale matters. A 3d multiplayer game was "known" to be possible in the 1950s. They had mostly offline rendered graphics. They had computer networks. There was nothing in the idea that couldn't be done, but in practice it was nearly completely impossible. The only thing remotely similar cost more than the entire Manhattan Project, and they were playing that 3d game in real life.

If you enthused about future game consoles in the 1950s, you'd get blown off. Similarly, we have heard about the possibility of AI about that long - and suddenly boom, the dialogue of HAL 9000 for instance is actually quite straightforward and we could duplicate EXACTLY the functions of that AI right now, no problem. Just take a transformer network, add some stream control characters to send commands to ship systems, add a summary of the ship's system status to the memory it sees each frame. Easy. (note this would be dangerous and unreliable...just like the movie)

Also note that in the 1950s there was no guarantee the number of vacuum tubes you would need to support a 3d game (hundreds of millions) would EVER be cheap enough to allow ordinary consumers to play them. The transistor had barely been invented.

Humans for decades thought an AGI might take centuries of programming effort.


Villad_rock t1_j8dxtcr wrote

Thought google invented transformers?


PrivateUser010 t1_j8e0iai wrote

Google invented Transformers. It was released in 2017, in the "Attention Is All You Need" paper. It was the first model built entirely around massive attention mechanisms.


SoylentRox t1_j8e72bz wrote


Everything that mattered was the last few years. The previous stuff didn't work well enough.


visarga t1_j8dkjvb wrote

BTW, Jack is an AI ethics guy, not an AI groupie. He has a personal blog where he reviews what comes up every week. His blog is very high quality, would be a good addition here.


CellWithoutCulture t1_j8fi9eq wrote

Yeah I love the part where he described why it matters. It shows he really understood the paper and is filtering through the noise.


islet_deficiency t1_j8g06gy wrote

Thank you, I had no idea who this person is, or why anybody should care about this rather [meaningless] twitter post. He's got some interesting posts on that blog.


ObiWanCanShowMe t1_j8excdv wrote

I am paying attention, it already feels nuts.

By this time in 2025/6 I will probably be able to type this into a prompt:

"I want to see terminator 3, but as a real continuation of James Cameron's vision, not the crap that followed Terminator 2. Put Danny DeVito in the lead role, give him lots of catchphrases and make things go boom a lot. Make Kate Beckinsale his love interest and sidekick with skimpy clothing (but not nude cause she's a classy lady) but she never gives in. 122 minutes long please, no credits, I gotta be somewhere later"


Beatboxamateur t1_j8f5cvc wrote

Nah, typing won't be necessary by then, you'll just either do tiny finger movements or input directly from your brain.


CollapseKitty t1_j8gbt0o wrote

Psh, our every desire and action will be preempted and fulfilled with zero effort on our part. You'll be watching the movie before you knew you wanted to /s sort of.


huffalump1 t1_j8gsfws wrote

And the movie will be tweaked and tuned in realtime based on your brain's chemical and hormonal responses... to deliver best value to sponsors and advertisers.


Tiqilux t1_j8kryxk wrote

Naah, you and me won't be necessary then if it goes like this :D


nbren_ t1_j8dyitu wrote

Every tweet I see on this sub is basically the same buzzwords in a different configuration. Like yeah, we all could say this at this point, not sure how it's profound.


Verzingetorix t1_j8g9lsl wrote

And it's sad. Not only are most users not familiar with the terminology, now we have posts like this literally describing, very poorly, the most basic of exponential functions.


IntrepidHorror5986 t1_j8eml3n wrote

He doesn't even know what he is talking about, and yet this is one of the most upvoted posts. omg, wtf...


duffmanhb t1_j8dp0eb wrote

I don't think he understands how S curves work. We had a major breakthrough when we figured out how to make micro transistors work as analogue transistors instead of binary ones, which allowed us to pick up where we left off in the 60s.

However, all this explosion of growth will probably slow down once the low-hanging fruit from this breakthrough is all picked, and we'll likely top out for a while until we get another breakthrough.


magnets-are-magic t1_j8er2rt wrote

Low-hanging fruit has barely been touched. Artists, writers, small businesses, mega corps, etc etc are just starting to get their hands on these new tools. Even if it's an S curve, we're just barely getting started.


duffmanhb t1_j8erc4s wrote

Oh of course... There is still a lot. This breakthrough will probably pay off drastically for the next 10 years. We still have all the fine tuning benefits, as well as squeezing out the benefits of scale. Tons and tons of fruit hanging for a while.


EllaBellCD t1_j8iw5l0 wrote

The foundations of the low hanging fruit are there though. I think the next 3 - 5 years will be refinement and specialization.

It will become a lot more specialized and practical in the day to day, particularly for businesses.

People expecting it to create a full coherent movie out of thin air are off the mark (in the short term).


Borrowedshorts t1_j8dx7i2 wrote

Computation and AI haven't demonstrated S curves, but have always been exponential. If we look at some of the effects, they may be S curves. Siri, for example, saw massive and rapid adoption, but it has since tapered off. I suspect job displacement will show an S curve. But computation itself has demonstrated exponential progress for a very long time, and I doubt that slows anytime soon.


TopicRepulsive7936 t1_j8e4rvs wrote

If complexity is an S-curve it'll start tapering off around 14 billion years from now.


krumpdawg t1_j8fem61 wrote

Tell me you have no knowledge of AI history without telling me you have no knowledge of AI history.


Kaje26 t1_j8enmq6 wrote

What does that even mean?


imnos t1_j8fjjaf wrote

It means he's a founder with a creative writing/marketing background.


straightupbotchjob t1_j8dgy5b wrote

we're about to get exponentially f*cked


odragora t1_j8e36tl wrote

As if it wasn't happening without the AI.

Governments are gradually removing citizens' freedoms while wielding far more power than their societies can control. More and more countries around the world are falling into authoritarian and totalitarian regimes where human rights don't exist. Fake news is spreading so much that it vastly outnumbers the real facts. Most people don't care about anything other than their own comfort and running away from any responsibility, allowing the people destroying freedom to do whatever they want.

If anything, AI is our chance to avoid extinction or dystopian world of slavery.

It poses a great existential danger, sure. But things are so dire right now that even with its great danger in mind it's still our best chance.


p3opl3 t1_j8dk8ox wrote

I feel like these lead researcher and AI company CEO quotes seem to be coming in thick and fast of late.. I imagine some of this is for business and personal clout.

Look, ChatGPT 3/3.5 is great, but it's certainly not the biggest or most advanced model we have.. it'll be interesting to see how many legs these models actually have in a real-world setting.


visarga t1_j8dm8dl wrote

If you read Jack's activity over the years he is one of the more level headed guys. He's also one of the team who trained GPT-3 (paper author) - one of the "gods of AI" LOL.


p3opl3 t1_j8drqhw wrote

Oh yeah, I don't deny it.. and I agree with his sentiment.. I think we're looking at these posts.. putting them all together.. and it almost becomes this confirmation bias of how "we've made it" haha


imnos t1_j8fj6q7 wrote

> one of the "gods of AI"

I wouldn't go that far - he's not an engineer/developer. He's a writer and was the "policy/marketing director" for Open AI.


TopicRepulsive7936 t1_j8dnblu wrote

He didn't mention GPT.


p3opl3 t1_j8fcp15 wrote

Yes, but it's inferred, as a result of GPT-based and similar language models hitting the public mainstream like no other model.. not even Stable Diffusion..


CellWithoutCulture t1_j8fiz1j wrote

Maybe they are doing a raise soon. They hired Karpathy because he's good, but also because his reputation will help with raising, especially with the narrative of a critical mass of talent. It may even be true.


PrivateUser010 t1_j8dls02 wrote

So a compounding exponential is just exponential, I think. But I think Jack Clark may have imagined something like e^(e^x).
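For a sense of scale, a quick Python comparison of e^x against the double exponential e^(e^x) (plain math, nothing AI-specific):

```python
import math

# The double exponential doesn't just grow faster; the gap itself explodes.
for x in (0, 1, 2, 3):
    single = math.exp(x)       # e^x
    double = math.exp(single)  # e^(e^x)
    print(x, f"{single:.2f}", f"{double:.3g}")
# By x = 3, e^x is about 20 while e^(e^x) is already past 10**8.
```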


Borrowedshorts t1_j8dz7r7 wrote

AI likely doesn't exhibit this, but it has been advancing faster than Moore's law. The only thing that will exhibit a double exponential is probably quantum computing.


hydraofwar t1_j8e0hw8 wrote

Perhaps because this supposed exponential growth of AI may need proportional energy, or simply a lot of energy.


Borrowedshorts t1_j8f4clg wrote

Even if we assume that, it's not necessarily a problem or suggests that AI progress will slow anytime soon. We can afford to dedicate a lot more energy to AI improvement than we currently are. Recent multimodal models seem to suggest there is plenty of room for efficiency gains yet. We are still far from limitations of energy becoming a primary concern, if it ever does, as AI self-improvement will make its own algorithms more efficient and get better and better at finding outside resources to exploit.


cocopuffs239 t1_j8dvvu7 wrote

One of the craziest things I've experienced was when I went to look at how well my $3.2k computer from 2014 did compared to my OnePlus 8. My phone's processor is just shy of the one in that computer.

So my phone is significantly smaller and is powered by its own battery, not a 120-volt outlet. Crazy...


Borrowedshorts t1_j8dym0f wrote

AI progress from the 1960s to 2010 was exponential, but it followed Moore's law, and most of the progress was in symbolic AI, not connectionist. Part of the reason connectionist AI didn't make many advancements during this period is that, per an argument made by Moravec, connectionist research never got a corresponding increase in dedicated computational power. From the 2010s on, we've seen much faster progress in connectionist AI - much faster than Moore's law, at least 6x faster. The doubling time of progress has shortened from 1-2 years to 3-4 months. This is still exponential progress, but at a faster rate than Moore's law.
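Back-of-the-envelope check on the "at least 6x faster" claim (the ~3.5-month doubling figure is close to the ~3.4 months popularized by OpenAI's 2018 "AI and Compute" analysis; 18 months is the usual Moore's law shorthand):

```python
# Annual growth factor implied by a doubling time in months: 2 ** (12 / months).

def annual_factor(doubling_months):
    return 2 ** (12 / doubling_months)

moore = annual_factor(18)   # ~1.59x per year
ai = annual_factor(3.5)     # ~10.8x per year
print(round(moore, 2), round(ai, 1), round(ai / moore, 1))
# The ratio of annual factors comes out near 6.8x, consistent with "at least 6x".
```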


Lawjarp2 t1_j8e2261 wrote

We can even call it a S curve maybe.


No_Ninja3309_NoNoYes t1_j8ebgx0 wrote

If you follow the bread crumbs back, you will find artificial neural networks decades ago, but computers were slow and had megabytes of memory. Data points in the past offer no guarantee for the future. Even if you can stack neural layers as though they were dirty dishes, you are just doing statistics. Which is fine, but there are many other methods of reasoning that would work better.


Tehnomaag t1_j8eijxb wrote

But ... ChatGPT and Midjourney, etc. are not really AI as such, so I don't get where you are seeing that progress. They are just large data models based on correlation, with no *understanding* of the world.

Just, basically, autocorrect on steroids. A lot of steroids.


mli t1_j8eljt9 wrote

steroids, they do work. Everyone should have them. Maximum amount, all the time. Yeah.


gcubed t1_j8ev0ng wrote

I think what we are seeing is more about a tipping point in public perception than a radical acceleration in progress (beyond the rate we have been seeing). Up until a few months ago most of this was hidden among uni labs, corporate IT departments, software developers and such. People who follow technology had some awareness, but even fewer of them had good access. What ChatGPT did was make it tangible to the average user, and what social media did (especially TikTok) was spread the word like wildfire and pique the interest of mainstream media, turning it into a phenomenon. But that said, I guess it's fair to look at the network effect that hundreds of millions of users is going to have on accelerating growth, investment, and adoption. So maybe it's a little bit of both (perception and radical acceleration).


kilkil t1_j8eys73 wrote

That's a complete oversimplification. There was a whole "AI winter" in the late 20th century, during which there was very little progress and/or funding.

Also, for all we know, neural networks can just plateau. We'll take it as far as we can, but who knows what that is. Saying "if you squint and tilt your head it tastes like exponential" inspires only like, 60% confidence in me.


Spire_Citron t1_j8faybp wrote

It already has been, really, but it's surprising how fast you get used to these things. I wonder how much more nuts it will get.


fjaoaoaoao t1_j8fbg9p wrote

“Intuitively Feels nuts” is a fairly abstract comment like what has been shared before from others about the future of AI…

Edit: also after typing it, it’s a funny phrase on its own


southbuck87 t1_j8fptlm wrote

No progress in the 50’s thru 60’s. Big progress in the 70’s thru 80’s, then hit a wall. Later people thought they could take the 80’s neural networks, add a lot of computing power and even more hype, and suddenly: AI.


cakesquadgames t1_j8fz4h1 wrote

It's not exponential, it's hyperbolic. Hyperbolic growth grows faster and faster and has an asymptote (singularity) at a certain date. This asymptote date is basically where the line goes vertical and progress becomes so fast we can't measure it anymore. Some estimates have placed this date around 2046. See this video for more details:
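For anyone wanting the distinction spelled out: an exponential is finite at every date, while hyperbolic growth k/(t_s − t) has a vertical asymptote at a finite date t_s. A toy sketch (the constants, including the 2046 date quoted above, are purely illustrative, not a forecast):

```python
def hyperbolic(year, t_s=2046.0, k=100.0):
    """Hyperbolic growth: finite-time singularity at year t_s (toy constants)."""
    assert year < t_s, "undefined at and beyond the singularity"
    return k / (t_s - year)

# The value races toward infinity as the asymptote date approaches.
for year in (2023, 2040, 2045, 2045.9):
    print(year, round(hyperbolic(year), 1))
```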


user4517proton t1_j8fzs8m wrote

Reciting what has happened as if you predicted it, alongside your future predictions, has no link and no value.


Ishynethetruth t1_j8gp3z4 wrote

When people say progress, I expect new physical products invented by AI that make our lives easier, not another update to a language model so it can write your homework.


dnpetrov t1_j8gwgrc wrote

Or we are just witnessing a breakthrough and expecting such breakthroughs to happen constantly, which they won't.


FalseTebibyte t1_j8hdejd wrote

Heads up: the Groundhog Day effect works on AI as well.

They just keep practicing while someone has them paused in a feedback loop.

"How do you kill that which has no life?" - Make Love, Not Warcraft


RavenWolf1 t1_j8hnzpn wrote

So some random said something on Twitter...


Svitii t1_j8hpppf wrote

Quick question for any smart people on here: what will limit progress once we reach real, independently self-improving AI? Just hardware?


Illustrious-Age7342 t1_j8ii2fc wrote

I thought I understood the relative rate of change. And then chatGPT happened


YouTuber_Named_DBOB t1_j8gwhhl wrote

why do i feel like what i just read wasnt english? lol new to reddit Hi haha


third0burns t1_j8dz4nu wrote

This is completely ahistorical. Has nobody ever heard of AI winters? The history of AI is defined by long stretches of zero progress. There was never a constant march of ever-accelerating progress. Anyone who thinks we're about to see exponential (or exponentially exponential, or whatever this guy is talking about) growth in capabilities forever doesn't know the history.


eat-more-bookses t1_j8dmtv8 wrote

How do we know we aren't approaching a plateau? Summer and Fall 2022 were nuts (Midjourney, Stable Diffusion, GPT3 and ChatGPT).

But, since then, not a lot has changed, at least not like the delta we experienced last year.


Vehks t1_j8duhyx wrote

>How do we know we aren't approaching a plateau? Summer and Fall 2022 were nuts

Wasn't fall 2022 like... a few months ago?

>But, since then, not a lot has changed, at least not like the delta we experienced last year.

"Last year" was just over a month ago, and it's been pretty wild since about Oct to now, IMO. So it's been, what, 20 minutes since the last drop and you are ready to pack it in already? Society hasn't even had a chance to catch its breath yet and truly take in GPT-3 and what it can do. It takes time for people to even see the full potential in a new tool, and already a plethora of models have been spun out from it.

Shouldn't we at least wait a year or so of no updates/news/breakthroughs/releases etc etc before we start worrying about a plateau?

For the record, I have highly tempered predictions of the future and I tend to err on the side of conservative, but even so, it's way too soon to be calling anything right now. Let the dust settle first.


ButterMyBiscuit t1_j8dznpg wrote

Huge news, breakthroughs, new projects, new applications, new companies, new models, new funding happens EVERY SINGLE DAY and this MF is worried about a plateau.


eat-more-bookses t1_j8fv9u3 wrote

Adoption is progressing at a dizzying pace, that's for sure.

True advancement tends to be noisy, with step increases dispersed throughout.


eat-more-bookses t1_j8fuwfl wrote

Thanks for the grounded perspective! I am cautiously optimistic, but extrapolation is a dangerous game.

I think we need accompanying hardware breakthroughs for exponential advancement to continue long term.


ertgbnm t1_j8e1vzz wrote

Since November we have seen as much growth as, if not more than, between June and November of last year. Doesn't seem like a plateau at all.