
TemetN t1_ix0dajz wrote

  1. Progress on generative audio/video to a point similar to where generative images were last summer.
  2. Gato 2 (or whatever they call the scaled Gato they're working on) drops, confirms scale is all we need.
  3. Breakthroughs in data (one or more of synthetic, access to more through opening up video content, transfer learning, etc).
  4. Model size begins to grow significantly again.
  5. Further expansion (as in new cities) for robotaxis; I'd particularly watch Waymo.
  6. Rapid increase in competition in cultured meat.
  7. Further integration of generative models into other products.
  8. Something comes out of the investment in public ML R&D.

There's honestly a lot of other stuff on my bingo card too that I'm less certain of (and to be fair, this stuff is mostly just 'things I think are substantially more likely than not'). But past this I'll also be watching for things like repeatable fusion ignition, early immunotherapy results, a humanoid-robotics jump, a fault-tolerance/scalability breakthrough in quantum computing, etc.

47

michael_mullet t1_ix0nvq2 wrote

>Gato 2 (or whatever they call the scaled Gato they're working on) drops, confirms scale is all we need.

If scale is all we need, AGI by end of 2023.

It may not be released publicly but will become apparent to those in the industry and copied where possible.

I am not convinced that scale is all we need but would be happy to be wrong.

18

TemetN t1_ix0okxi wrote

I'm (repeatedly) on record as expecting AGI (as in the Metaculus weak operationalization) by 2025. So while I broadly agree with this, I do think it only applies to a relatively specific sense of the term, closer to its original use, rather than the more volitional one.

11

michael_mullet t1_ix18qed wrote

I think I understand you. Likely scale is all that is needed for a non-volitional AGI, and that may be all that is needed for accelerating technological change. Humans can provide the volitional aspect of the system.

7

-ZeroRelevance- t1_ix1crev wrote

Do we even want a volitional AGI, though? A non-volitional AGI seems like all the benefits with none of the problems, since the main draw of an AGI is its problem-solving ability, which doesn't require volition.

Also, it shouldn't have any problem pretending to be one if we want it to, given how current language models already make very convincing chatbots. It's just that in such a case, we'd ultimately stay in control, since a non-volitional AI would have no actual desire for things like self-preservation.

9

TemetN t1_ix1fdw6 wrote

This. Plus I think that volition is unlikely to simply be emergent, which means it's likely to take its own research. And I don't see a lot of call for, or effort at, research in that direction (Numenta? Mostly Numenta).

5

CosmicVo t1_ix2q8f1 wrote

Scale is indeed not all we need. In fact, GPT-4 reportedly has fewer parameters than GPT-3, or about the same; I don't know. Anyway, the focus is shifting toward training data and hyperparameters (learning rate, batch size, sequence length, etc.). The aim is finding optimal models rather than just bigger ones. Exhaustive hyperparameter tuning is infeasible on the largest models, but tuning a small model and transferring the settings can yield a performance increase equivalent to doubling the parameter count.
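For a rough sense of what "optimal rather than just bigger" means, here's a toy sketch of the Chinchilla-style rule of thumb: roughly 20 training tokens per parameter, with training compute approximated as C ≈ 6·N·D FLOPs. The ratio and the constant are assumptions borrowed from the Chinchilla paper, not anything known about GPT-4:

```python
# Toy sketch of the Chinchilla-style compute-optimal heuristic (assumed
# numbers: ~20 training tokens per parameter, compute ~ 6*N*D FLOPs).

TOKENS_PER_PARAM = 20  # rule-of-thumb ratio from the Chinchilla paper

def compute_optimal_tokens(n_params: float) -> float:
    """Training tokens suggested for a model with n_params parameters."""
    return TOKENS_PER_PARAM * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: C = 6 * N * D."""
    return 6 * n_params * n_tokens

n = 175e9  # GPT-3 scale, purely for illustration
d = compute_optimal_tokens(n)
print(f"{n:.1e} params -> ~{d:.1e} tokens, ~{training_flops(n, d):.1e} FLOPs")
# GPT-3 was actually trained on ~3e11 tokens, far below this ratio,
# which is why smaller-but-better-trained models can match it.
```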

4

SoylentRox t1_ix0pvgm wrote

Do you have a prediction from last year? Didn't this image-generation stuff come out of left field this year?

I am wondering if your predictions are way too conservative.

Those of us who survive next year will find out, but the last year has seemed suspicious to me: too many advances, and they work too well. Not the usual empty promises and hype, but stuff that is starting to work.

If the singularity hypothesis is correct, this pattern is going to continue: progress on AI itself accelerating as AI chain-reacts with itself. And if correct, then progress will accelerate until the end.
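To make the chain-reaction intuition concrete (a toy model, nothing more): if each generation of AI tooling speeds development of the next by some constant factor g > 1, the total time to the limit is a convergent geometric series, i.e. finite:

```python
# Toy recursive-improvement model (a made-up illustration, not a forecast):
# generation i takes first_gen_years / g**i years, so total time converges
# to the geometric-series limit first_gen_years * g / (g - 1) for any g > 1.

def time_to_limit(first_gen_years: float, g: float, generations: int = 200) -> float:
    """Truncated sum of T + T/g + T/g^2 + ..."""
    total, term = 0.0, first_gen_years
    for _ in range(generations):
        total += term
        term /= g
    return total

for g in (1.5, 2.0, 10.0):
    print(f"g={g}: ~{time_to_limit(5.0, g):.2f} years "
          f"(closed form {5.0 * g / (g - 1):.2f})")
```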

15

TemetN t1_ix0s6u3 wrote

Kind of and not really? I (along with everyone else) was awaiting DALL-E 2, but the explosion did come out of left field. That said, I don't think I had a prediction on that, and my only prior predictions were either high level (AGI median 2024) or framed differently (I have a number of predictions on Metaculus from that period, for example).

As for whether they're 'too conservative': honestly, while it'd be nice, I can't (or at least won't) make predictions without some basis for extrapolation. So things that come out of the blue (such as the aforementioned explosion of image-generation models) aren't likely to show up in that context. I can acknowledge they happen, but they aren't easily modeled, generally speaking.

7

was_der_Fall_ist t1_ix1ebvz wrote

I agree that the past year has been “suspicious” and suggests that we may see even faster rates of progress in the coming years. If the singularity hypothesis, as you put it, is correct, 2023 should include even more profound advancements than we’ve seen so far. If it doesn’t, then we’ve got something wrong in our thinking.

2

SoylentRox t1_ix1f1g2 wrote

Agree mostly. One confounding variable: the coming recession may cut funding. I don't know how much gain we are getting from "singularity feedback". What this means is that as AGI gets closer, AGI subcomponents become advanced enough to speed up progress toward AGI itself. Concrete examples include AutoML, the transformer, and mass-produced AI accelerator boards; a true AGI would contain a more advanced version of each of those components, and each speeds up progress.

The other form of singularity feedback: as it becomes increasingly obvious that AGI is near in calendar time, more money gets spent on it because of a higher probability of making an ROI. You might have heard that Stability AI, the startup behind Stable Diffusion (which matched and arguably bettered OpenAI's image-generation work), reached a paper value of a billion dollars basically overnight.

This is similar to how as humanity got closer to a nuke multiple teams were trying in multiple countries.

Anyway, if the singularity gain is, say, 2x and funding gets cut to a quarter, then in 2023 we will see half the previous rate of progress.

That's just an example; if the gain is 10x, the funding cut will barely matter.

And in principle the gain scales toward, well, technically infinity, though since the singularity is a physical process it won't actually get that high as the singularity happens; presumably AGIs advance themselves and technology in lockstep until we hit the laws of physics.

That last phase would I guess be limited by energy input and the speed of computers.
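A minimal sketch of that arithmetic, assuming the feedback gain and the funding level simply multiply against a pre-feedback baseline (the multiplicative model is just my toy assumption):

```python
# Toy model: effective progress relative to a pre-feedback, pre-cut
# baseline, assuming the feedback gain and funding level just multiply.

def effective_progress(gain: float, funding_fraction: float) -> float:
    """Progress rate as a multiple of the old baseline."""
    return gain * funding_fraction

print(effective_progress(gain=2.0, funding_fraction=0.25))   # 0.5 -> half progress
print(effective_progress(gain=10.0, funding_fraction=0.25))  # 2.5 -> the cut barely matters
```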

3

was_der_Fall_ist t1_ixgc64a wrote

> The other form of singularity feedback: as it becomes increasingly obvious that AGI is near in calendar time, more money gets spent on it because of a higher probability of making an ROI.

In my thinking, if we are as close to transformative AI as recent trends suggest, the inevitable increase in funding should nullify any effect of an economic recession; stagnation of critical research would likely require a more catastrophic intervention.

The people in charge of funding AI research (that is, the CEOs and other leadership of all relevant companies) are, almost universally, extremely interested in spending a lot of money on AI research, and they have the funds to do it even in a recession.

1

SoylentRox t1_ixgnzr5 wrote

In theory. In practice, Intel laid off parts of its AI accelerator teams, Amazon let go many Amazon Robotics and Alexa workers, and Argo AI closed.

Though, yeah, purer AI plays like Hugging Face have raised at unicorn valuations.

The outcomes seem mixed.

1

-ZeroRelevance- t1_ix1dcmd wrote

Good predictions, I mostly agree with all of them. Don’t forget about GPT-4 though. GPT-3 is still one of the best LLMs out there, so I imagine GPT-4 is going to outshine all competition by far.

2

TemetN t1_ix1f7fl wrote

Still not sure it'll come out this year; or, more precisely, I think it's only slightly more likely than not to come out this year.

2

-ZeroRelevance- t1_ix1fmtb wrote

Fair enough. Knowing OpenAI, though, there's a good chance they'll say "we're not going to release this to the public right now, for their own safety" and not let anyone use it for a few months anyway.

3