Comments

Sophus__ t1_ir5zill wrote

You could make an argument for early stages of technological singularity based on metrics like this.

74

Gaudrix t1_ir6otcr wrote

It honestly feels like we are living on that steep edge.

In just the last few years there have been three or more revolutionary cancer treatments, advances in fusion/solar/battery tech, AI creation of art/video/physics sims/voices/faces, etc.

There are so many new breakthroughs that there isn't even enough time to profit off anything and make a product, because by the time you hit the market, a free solution has been made available and it's better. The next 50 years will look like the last 50 x 100.

We are living it.

48

TheCynicsCynic t1_ir6vojx wrote

Out of curiosity, what are the 3+ revolutionary cancer treatments from the last few years? I wonder if I've heard of them or they're new to me. Thanks.

10

LordOfDorkness42 t1_ir6zzt3 wrote

Going to presume at least one of them is that vaccine against... cervical cancer, I think it was?

HUGE deal when it was new. Still not nearly widespread enough yet, but it's slowly getting there.

10

[deleted] t1_ir911sa wrote

It was so huge you can't remember what someone else has imagined.

There is, as of today, only human intelligence in software. AI does not exist yet.

−2

LowAwareness7603 t1_ir7it5w wrote

I said that we were living it... behold!... just yesterday. I got downvoted at least once.

5

Gaudrix t1_ir7uc04 wrote

Yeah, people are weird in this subreddit. Everyone is working off a different definition of what the singularity is and what it entails. The singularity is a point, but it's not possible to experience a point, only what comes before and what comes after. It's very hard to determine exactly where that point is, which is why it's easier to quantify our closeness to, or speed of approach toward, the singularity than the singularity itself. I'd say we are firmly locked in and have an obviously accelerating trajectory. We are in the endgame.

14

LowAwareness7603 t1_ir7uyqz wrote

I'm going to become a cyber assassin if I'm not God.

1

Gaudrix t1_ir7yhiu wrote

šŸ¤£ I want a full cybernetic body and then a brain once we solve the brain copy problem.

4

[deleted] t1_ir91jxz wrote

Quantum mechanics states that a quantum-level copy is impossible. Quantum mechanics is science; AI is a cult. AI does not exist yet. You are speaking about glorified curve fitting.

But as with AI, you can simply imagine it exists, of course. You can even select the date of this new technology, as imagination has no limits.

−3

Quealdlor t1_iriulsh wrote

How are we in "the endgame" if we've just started as a civilization? We still don't have AGI, FDVR, worker androids, FSD, commercial fusion reactors, anti-aging, cure for cancer, human augmentation, space elevators or arcologies. Let alone orbital rings, sombrero planets, Jenkin Swarms, Dyson Spheres or giant future things like that.

1

Eleganos t1_itdr6ww wrote

Same way that the space stage is the endgame of Spore despite, in reality, being the vast majority of the game for anyone who still actually bothers to keep playing once they hit it.

1

[deleted] t1_ir91bs0 wrote

As AI does not exist yet, and 70 years of AI research has led to zero AI and glorified curve fitting, at which rate the singularity will never happen, I think it is wise to define the singularity in such a way that it can't be seen.

Imagined evidence, failing prophecies, and Armageddon.

Hmm. That does not sound like science; that is a cult.

−6

kaityl3 t1_irbopkg wrote

> As AI does not exist yet

Bro what?

2

[deleted] t1_ir90yi2 wrote

As AI does not exist yet, it is not very surprising that there is nothing to be found about 3 cancer treatments invented by software.

−4

Kinexity t1_ir6rcq3 wrote

Past performance does not predict future performance. Many processes look exponential initially when they aren't. This is not to say that the singularity will not happen, but this may not be the early indication some people think it is.

10

End3rWi99in t1_ir8n0r3 wrote

I am absolutely convinced the world is just a couple of years away from a very rapid change in nearly every facet of life. I liken it to the internet boom in the 1990s, but its impact will be far wider and faster. Way faster.

5

[deleted] t1_ir91qkn wrote

But the internet existed in the '90s, whilst imagined AI is really just glorified curve fitting or, more generally, software, which is 100% human intellect. So the projected date for the singularity is, at present, never.

−4

TheAnonFeels t1_ir6mhw5 wrote

You could make a paper! edit: Wait, these are AI papers, not papers on AI

0

Smoke-away OP t1_ir6t7bn wrote

The chart is papers about AI+ML.

Not papers written by AI.

15

was_der_Fall_ist t1_ir6ta3o wrote

No, they're definitely papers on AI: scientific papers in the fields of AI and machine learning. AI was not writing papers in 1994.

12

TheAnonFeels t1_ir74qy1 wrote

Yeah, that definitely makes sense at a closer look, immediately had to edit my response lol.

3

[deleted] t1_ir8zze7 wrote

Sure, after imagining AI exists, imagining the singularity has started is a logical next step.

−1

lovesdogsguy t1_ir6qnhy wrote

So many advances pouring in every week / day now. I wonder what we'll have by the end of 2022?

2023 is going to produce the equivalent of years of progress at the current rate, maybe more.

41

Quealdlor t1_ir6vdp0 wrote

The doubling rate is 24 months, so in 2024 there will be 2x as many new papers as in 2022.

32
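
A 24-month doubling time is easy to sanity-check in a few lines (the 4,000-papers-per-month starting figure is just the ballpark mentioned elsewhere in the thread, not a measured value):

```python
def projected_papers(n0: float, months: float, doubling_months: float = 24.0) -> float:
    """Project a monthly paper count forward under exponential growth
    with the given doubling time: N(t) = N0 * 2**(t / doubling_months)."""
    return n0 * 2 ** (months / doubling_months)

# Hypothetical example: ~4000 papers/month in 2022 -> ~8000/month two years later,
# and ~16000/month four years later.
print(projected_papers(4000, 24))  # 8000.0
print(projected_papers(4000, 48))  # 16000.0
```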

LordOfDorkness42 t1_ir7374y wrote

I'd buy that increase rate, given how quickly art-AI is moving right now.

Less than a year ago, you got pretty and well colored but abstract blobs.

This was three weeks ago.

This is 23 hours ago as of posting.

Do pardon the MLP focus, but the first images were my own 'holy frick, AI art has come that far?!' moment, so I wanted to keep the comparison consistent so the difference is clearly visible.

But... yeah. We certainly live in interesting times, and I'm very curious what the coming years will hold for us.

30

TheAnonFeels t1_ir756qp wrote

You've been pardoned.

BUT, have you seen this? https://cdn.discordapp.com/attachments/407355414229811200/1026593684441022594/AI1.png

I don't know much about where they came from, but the AI is still training.

16

LordOfDorkness42 t1_ir76a5w wrote

Hadn't seen that one in particular, but I'd believe it.

Charlie, AKA penguinz0, did a video two weeks ago where he was basically playing around with Stable Diffusion 1.5, and he made some really cool stuff.

https://youtu.be/MVu-bD5keAs

A lot of it looked wonky, of course... but some of it I'd definitely have stood and stared at for a few minutes if I'd seen it up on somebody's wall.

7

TheAnonFeels t1_ir7891t wrote

Yeah, I've seen a number of outputs from this guy, and he's posted a few odd ones: bodies turned halfway around, someone sitting the wrong way on a bench that also kind of disappears... It has issues, but it can output quality more often than not.

It's just remarkable. I'm sure in a few months we'll see a whole lot more come out!

4

Quealdlor t1_ir9cz4o wrote

AI works are getting better and better, I can see that. Still, the vast majority are bad. The one you linked is good. I often see arms and hands painted the wrong way. I still think it will take multiple years before AI is as good as the best artists. Stable Diffusion should be called Unreliable Diffusion or Unsteady Diffusion for now, judging by all the works I've seen and made.

−1

TheAnonFeels t1_ira0ks4 wrote

Even the fact that it can produce quality at all outweighs all the bad works it produces. Rejecting the bad ones is simple enough, even if humans have to do it...

I don't see how having an error rate is a problem?

3

SowingKnowing t1_ir904dv wrote

That third image, holy fucking shit!!!

Thanks for pointing that out with such great examples!

2

LordOfDorkness42 t1_ir99vqn wrote

You're welcome.

And yeah, cherry-picked examples, of course, but I really think the art-AI stuff is sliding under the radar of the public right now due to how much else is going on.

I've even seen faked signatures, speech bubbles, and Patreon links. For now, those are just blobby swirls that look right only from a distance, but still.

I'm not sure if this is where we'll see the birth of some of the first true AI... but if nothing else, this seems like the next smartphone to me.

Just... poof, everywhere overnight for those that weren't paying attention, and THAT'S when the public freaks out for a bit.

1

[deleted] t1_irdys5o wrote

If you assert true AI, there is false AI. This is correct; current "AI" is false. It does not exist yet. The science is not here yet.

1

lovesdogsguy t1_ir71yni wrote

Oh yes, that's correct according to this. I think I was actually thinking more about the new advances in text-to-video generation, which, combined with all the other news this year, I find pretty astonishing.

1

cascoxua t1_ir9kto5 wrote

Doubling the number of papers does not double the knowledge. Most of them aren't worth anything and make it more difficult to find the relevant ones. A big volume of papers does not mean progress in science. It means that lots of people found a field with lots of interest, jumped into it, and are publishing lots of irrelevant papers, because a key KPI for a researcher's relevance is the number of papers published.

2

[deleted] t1_ir8zqxo wrote

What progress? AI does not exist yet. What is referred to is most often glorified curve fitting.

−4

Smoke-away OP t1_ir5uz8y wrote

Source Tweet:

> The number of AI papers on arXiv per month grows exponentially with doubling rate of 24 months.

> How can we cope with this? AI itself can help, by predicting & suggesting new research directions.

> Predicting the Future of AI with AI: https://arxiv.org/abs/2210.00881


@Karpathy Response:

> I have about ~100 open tabs across 4 tab groups of papers/posts/github repos I am supposed to look at, but new & more relevant ones come out before I can do so. Just a little bit out of control.

37
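
The tweet's doubling-time claim can be checked from raw monthly counts with a simple log-linear fit. A sketch with synthetic data (the series below is constructed, just to show the method):

```python
import math

def estimate_doubling_months(counts):
    """Estimate doubling time (in months) from a monthly count series
    via a least-squares fit of log2(count) against time: if
    log2(N(t)) = a + t/d, then d is the doubling time."""
    n = len(counts)
    xs = range(n)
    ys = [math.log2(c) for c in counts]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return 1 / slope  # months per doubling

# Synthetic series that doubles every 24 months:
series = [100 * 2 ** (t / 24) for t in range(120)]
print(round(estimate_doubling_months(series), 1))  # 24.0
```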

prototyperspective t1_ir75k3w wrote

>How can we cope with this

I think society needs to start caring more about knowledge integration. At least papers that are published in journals (not preprints) should more often be put into context and made useful by integrating them into existing knowledge systems in the right places.

That's what I'm trying to do when editing science-related Wikipedia articles (along with the monthly Science Summaries that I post to /r/sciences), updating them with major papers of the year (that also includes the much-expanded article on applications of AI). I would have thought somebody was taking care of at least the most significant papers.

It probably needs more comprehensive, overview- and context-providing integrative living documents, beyond Wikipedia, that help people make sense of, properly discover, and make use of the gigantic load of new science/R&D output.

>AI itself can help, by predicting & suggesting new research directions

I think many draw the false conclusion that AI is the solution to such problems, rather than a help with a (small) subset of them. Suggesting new research directions seems like an interesting application.

Many ways that could be useful would be just software, not AI. For example, it would be great to somehow better "visualize" (literally or similarly) ongoing progress / research topics/fields, or to categorize papers by their research topics so you can get notified when new subtopics emerge or when new research questions related to your watched topics/fields get heatedly debated/investigated, or to auto-highlight text to make things easier to skim, etc. I've put some of my ideas (related: 1 2) for such to the FOSS Wikimedia project Scholia, which could integrate AIs.

Here are some more similar stats about papers (more CC BY images welcome). Example: arXiv's yearly submission rate plot.

>I have about ~100 open tabs across 4 tab groups of papers/posts/github repos I am supposed to look at, but new & more relevant ones come out before I can do so. Just a little bit out of control.

See some ways/tools to deal with this in this thread at r/DataHoarder here

More R&D (studies, add-ons, ideas, ...) on such tools could be very useful, as it could accelerate and improve progress on a meta-level.

18

Evil_Patriarch t1_ir64309 wrote

Any comparisons available for how an increase in paper publications translates to an increase in new tech actually reaching the market?

34

whenhaveiever t1_ir6ndm2 wrote

That's what I'm wondering. How much of this is actually useful research that advances the field, and how much is sociologists plugging things into DALL-E and having opinions about the results?

But also, there's no possible way for any human to keep up with 4000 new papers per month. We almost need an AI to read the AI papers and tell us which ones are good.

33

MercuriusExMachina t1_ir6wnx0 wrote

Exactly. The bitter lesson would seem to indicate that compute is the determining factor, not algorithmic innovation.

But it's good to see that research is keeping up with compute.

8

Kaarssteun t1_ir6a3ru wrote

Logic tells us more people working on something = faster progress

24

Xstream3 t1_ir6qc74 wrote

Since it's software, it's extremely easy to bring to market (relative to physical products).

2

Kaarssteun t1_ir69zge wrote

Even the log-scale plot curves slightly upward, which would make the growth superexponential. Insane!

16

[deleted] t1_ir68hes wrote

[deleted]

14

User1539 t1_ir6c29u wrote

6

Smoke-away OP t1_ir6totg wrote

No. The chart is papers about AI+ML.

Not papers written by AI.

−1

User1539 t1_ir6vt3t wrote

It was just an off-the-cuff, half-joking thing. I think I read that someone published an AI paper written by an AI, and did a quick search for AI-written research papers.

Basically a joke.

−1

was_der_Fall_ist t1_ir6pgq7 wrote

No, I'm quite sure this is a measure of how many papers are written about AI and machine learning. Sure, someone used GPT-3 to write a paper (as the other commenter linked), but that's not very effective yet. Scientific papers are still written by humans.

3

[deleted] t1_ir6s71g wrote

[deleted]

0

was_der_Fall_ist t1_ir6sugi wrote

They're simply wrong. Do you think AI was writing papers in 1994, as this chart shows? No: this is just a measure of papers about AI, in the field of AI, but written by humans. A couple of commenters here have linked an article about how a researcher used GPT-3 to write a paper, but that is unrelated to this measure of scientific papers in the fields of AI and machine learning. GPT-3 is, in general, not reliable enough to write scientific papers, and, anyway, it was only created in 2020, so it wouldn't explain how this chart tracks AI papers in the period from 1994-2020.

3

Shelfrock77 t1_ir6701e wrote

In the future, you'll be able to code/hack in AR just by thinking about it. I got this realization through watching Cyberpunk: Edgerunners.

10

insectula t1_ir6zxbf wrote

If you are tuned in and looking, you can feel this happening. I didn't need this metric to know it, but it reinforces what I have been thinking.

9

Kaarssteun t1_ir76nxs wrote

If anything, this assures me that things truly are moving exponentially. It's easy to feel that way with the recent advances, but maybe it's just me becoming increasingly immersed in this AI fiasco. This tells me otherwise, though; I'm not crazy yet.

9

Kujo17 t1_ir6lbd9 wrote

Thank you. I tried to post this this morning and for some reason Reddit wasn't working for me lol. Came back to post it now / try again, and saw this.

7

Cryptizard t1_ir76bjt wrote

Not to be a buzzkill, but if you plot the overall number of papers on arXiv per month, it also looks exponential.

7

Zermelane t1_ir7xuxy wrote

I've never had anyone kill my buzz as little as by pointing out that no, it's not just AI, actually the rest of science is making exponential progress as well. If anything, it seems to be making my buzz even more alive.

(Well, arXiv paper count, anyway; there are different views on how that relates to the amount of progress in general.)

3

Cryptizard t1_ir87dng wrote

Yeah, I think it just tells you that arXiv is becoming more popular.

7

SWATSgradyBABY t1_ir6lkvs wrote

We are in the knee of the curve.

6

nebson10 t1_ir7el2k wrote

There is no such point that can be said to be the knee; an exponential curve looks the same at every scale, so the "knee" is wherever you happen to zoom in.

4

davesp1 t1_ir6b0dh wrote

The precursor

4

saccharineboi t1_ir6n249 wrote

Many in academia and industry face the same question: should I spend time trying to find a solution S to some problem P, or should I work on an AI system that can find a solution S' to any problem P' from the set of problems that P belongs to? Add to that the fact that computers are getting super fast... hence the explosion of AI papers.

4

Bakoro t1_ir98owh wrote

For real, I'm dealing with that right now. One way or another, I'm going to have to make some software to do this thing. Do I want a fairly okay solution right now, which I can iterate on and whose rough correctness I can easily explain/justify, OR do I dump some resources into machine learning, have nothing to show for it until I do, but very likely get something out the other end that's almost magically good without knowing why...

3

Drifter64 t1_ir7klgc wrote

Most of them are garbage, but once in a while you get a gem.

4

[deleted] t1_ir6b9rm wrote

[deleted]

3

was_der_Fall_ist t1_ir6py3g wrote

This is a measure of papers about AI, not papers written by AI. The chart goes back to the 1990s, when certainly no papers were being written by AIs. Even today, language models are not reliable enough to write scientific papers.

2

Cryptizard t1_ir76587 wrote

There were lots of AI articles in the '90s, just not on arXiv. You could plot papers in general on arXiv and it would look exponential.

1

was_der_Fall_ist t1_ir7d9tn wrote

I'm saying there were no papers written by AIs in the 1990s. There were, of course, papers about AI.

1

was_der_Fall_ist t1_ir6t2wi wrote

This is unrelated to the chart in the OP's post. Anyway, despite one person writing a paper with GPT-3, language models really aren't reliable enough at the present moment to be writing scientific papers, and they certainly weren't in the period from 1994-2020. Maybe GPT-4.

1

Artanthos t1_irbdxxm wrote

"This cannot be done."

Example provided showing it has already been done.

"That doesn't count; it still cannot be done."

1

was_der_Fall_ist t1_irbjdoj wrote

There are a few points to make here. First, I'd like to make it clear that I'm extremely optimistic about the development of AI, and that I think language models like GPT-3 are incredibly impressive and important. I use GPT-3 regularly, in fact. So I'm not just nay-saying the technology in general.

Second, as far as I can tell, the paper by Thunström and GPT-3 has not been peer-reviewed and published in a journal. It has only been released as a preprint and "awaits review."

Third, even if GPT-3 is perfectly capable of writing scientific papers, that does not relate to the overall purpose of my commenting, which was to explain that the chart in the OP's picture measures the number of papers written about AI, rather than written by AI.

Fourth, the paper, entitled "Can GPT-3 write an academic paper on itself, with minimal human input?" is... strange. Even disregarding the "meta" nature of the paper, in which the subject matter is the paper itself, it exhibits problems that are typical of the flaws of GPT-3 which make it unreliable. For example, it starts the introduction by saying that "GPT-3 is a machine learning platform that enables developers to train and deploy AI models. It is also said to be scalable and efficient with the ability to handle large amounts of data." This is a terrible description of GPT-3. GPT-3 is, of course, a language model that predicts text, not a machine learning platform that enables developers to train and deploy AI models. Classic GPT-3: writing in great style but with a pathological disregard for reality. With factual inaccuracies like this, I doubt the paper would be published in a respected journal the way, say, DeepMind's research is published in Nature.

I'm hopeful that future models will correct this reliability problem (many have already been working on it), but right now, GPT-3 expresses falsehoods too often to be a scientific writer, or to be relied upon for other purposes that depend on factual accuracy. This is why the only example of a GPT-3-written research paper so far is one that, to my understanding, does not qualify as human-level work.

1

JJP77 t1_ir8trms wrote

most of them are bullshit though

3

Poemy_Puzzlehead t1_ir6ej32 wrote

What's the blip around the year 2000? Would that be Y2K, or maybe Spielberg/Kubrick's A.I. movie?

2

Lone-Pine t1_ir7b9x9 wrote

Schmidhuber's lab uploaded all their work that year.

4

DukkyDrake t1_ir7q5qb wrote

>Most NLP research is crap: 67% agreed that "A majority of the research being published in NLP is of dubious scientific value."

What percentage is NLP-related? "The exponential growth of crap"?

2

[deleted] t1_ir91zmk wrote

AI does not exist yet, and calling glorified curve fitting "AI" is beyond dubious scientific value; it's outright quackery.

0

azazelreloaded t1_iracqew wrote

Is the number of papers really the rate of progress? I can think of any number N of architectures just by varying the layers, neurons, and a handful of hyperparameters.

2
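
That combinatorial point is easy to make concrete: even a few choices per design axis multiply into hundreds of distinct "novel architectures", each a potential paper. The grid below is purely illustrative (made-up axes and values):

```python
from itertools import product

# A tiny hypothetical search space over architecture/training choices.
layers = [2, 4, 8, 16]
widths = [64, 128, 256, 512]
activations = ["relu", "gelu", "tanh"]
learning_rates = [1e-2, 1e-3, 1e-4]
dropouts = [0.0, 0.1, 0.3]

# Every combination is a distinct configuration one could write up.
configs = list(product(layers, widths, activations, learning_rates, dropouts))
print(len(configs))  # 4 * 4 * 3 * 3 * 3 = 432
```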