
gaudiocomplex t1_j4xi8tl wrote

The CEO of Rippling already came out and said that 4 is basically AGI. My guess is he got drunk one night, spilled the beans on Twitter, and then deleted the tweet when he realized he'd pissed off his Silicon Valley bros.

It's a pretty common belief right now in the right circles that 4 is going to be problematic for society. I think all indications point to 3.5 being a trial balloon for how the common folk will receive it. I've been in tech marketing for quite a long time, and I could not wrap my mind around the notion of introducing a half-cocked product (to describe the chat as lightweight is generous) when you have another one that is clearly superior only two quarters away.

And then to tease it as though 2022 is going to look like a "sleepy year" by comparison? I don't think you even need the non-verbal cues here. It's pretty clear that Altman knows what's going on and he's sitting on something big.

What's problematic here is... if this is indeed AGI, or something proximate to AGI, there's not a lot they're going to hold back when they're in competition with DeepMind. There's too much money at stake to be as careful as they need to be.

Another thing I'm not hearing about right now is whether the Department of Defense is involved. It's hard to imagine AGI being privately developed without them putting their thumb on the scale.

Edit: grammar.

52

Magicdinmyasshole OP t1_j4xkbzp wrote

Yeah, I agree it's not that revelatory, but it was kind of cool to become totally convinced through this little exercise.

And I agree. Unless everyone is way dumber or more oblivious than I thought, the DOD is heavily involved. There's no way they can afford to just let this happen. I'll admit, though, that I'm a little surprised at how much of this has happened in the public eye. I would have figured the billionaires and state leaders would have swooped in a while ago with offers that couldn't really be refused.

19

gaudiocomplex t1_j4xljx5 wrote

Well, another problem here is that they've really just completely destroyed their own moat with 3.5. Unless, again... they know they have 4 and they're not worried about somebody else getting there in the interim. I don't know if there's much that's proprietary here for them... that's the head-scratcher for me.

7

MrEloi t1_j4z9zid wrote

>I would have figured the billionaires and state leaders would have swooped in

I think that they got caught out by OpenAI dumping ChatGPT into the open.

Perhaps Altman got sick of the secrecy and decided to do something about it?

Anyway, it looks like the secret is out... and that OpenAI are getting smacked about the head. That would explain their sudden reluctance to release GPT-4.

6

Direita_Pragmatica t1_j50g6c3 wrote

This... He decided to open the bottle so nobody could use it in secrecy.

3

Yomiel94 t1_j4xzlr9 wrote

This seems like a stretch. GPT might be the most general form of artificial intelligence we’ve seen, but it’s still not an agent, and it’s still not cognitively flexible enough to really be general on a human level.

And just scaling up the existing model probably won’t get us there. Another large conceptual advancement that can give it something like executive function and tiered memory seems like a necessary precondition. Is there any indication at this point that such a breakthrough has been made?

19

[deleted] t1_j4yshql wrote

I'm being naive here, but the local/temporary memory ChatGPT keeps within each of its 'tabs' is, in some sense, a set of memories...

If there were a way for those 'memories' to be grouped, with a type of soft recollection across them, I imagine that would be a pathway to a full agent -- think: perhaps you do >50% of your coding work through GPT directly, and the agent can see the rest of the work you are doing.

It sees your calendar.

It knows you have done x lines of code on y project and it knows exactly how close you are to completion (based on requirements outlined in your Outlook).

I think it's almost trivial (in the grand scheme) to hook ChatGPT into several different programs and achieve a fairly limited 'consciousness' -- particularly if we are simply defining 'consciousness' as intelligence times the ability to plan ahead.

Basically it has intelligence *almost* covered; its ability to plan ahead is dependent on calendars in the first instance.

Further on, I believe it will need access to all spoken word and experience, but that is just too creepy too soon, I think. Otherwise, how else will it have sufficient data to be an 'Agent'?
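
For what it's worth, here's a minimal sketch of what "grouping memories with soft recollection" could look like. Everything in it is hypothetical: `embed()` is a crude bag-of-words stand-in for a real embedding model, and `MemoryStore` just ranks stored snippets against a query.

```python
# A toy "soft recollection" store: each conversation tab contributes
# snippets, and the agent later retrieves the most relevant ones.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Crude bag-of-words stand-in so the sketch runs without an
    # external model; a real system would call an embedding API.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class MemoryStore:
    """Groups 'memories' from many tabs and softly recalls them."""

    def __init__(self) -> None:
        self.memories: list[tuple[str, str, Counter]] = []

    def remember(self, tab: str, text: str) -> None:
        self.memories.append((tab, text, embed(text)))

    def recall(self, query: str, k: int = 3) -> list[tuple[str, str]]:
        # Rank every stored snippet by similarity to the query and
        # return the top k -- "soft" because nothing has to match exactly.
        qv = embed(query)
        ranked = sorted(self.memories, key=lambda m: cosine(qv, m[2]), reverse=True)
        return [(tab, text) for tab, text, _ in ranked[:k]]

store = MemoryStore()
store.remember("tab-1", "refactored the auth module on project y, two tasks left")
store.remember("tab-2", "calendar: design review moved to friday")
print(store.recall("how close is project y to completion"))
```

A real agent would obviously swap in a proper embedding model and a persistent store, but the "soft recollection" shape would be the same.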

5

theonlybutler t1_j4zd3pg wrote

Yeah, I agree. I think the key thing would be its ability to fact-check itself: discern whether a statement it makes is implied to be factual or not (probably a spectrum) and then check it. If it could do this, it'd be a game changer.
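
A naive sketch of what that self-check loop might look like, assuming a hypothetical `llm()` helper standing in for any chat-completion call:

```python
# A naive self-check loop. llm() is a hypothetical stand-in for any
# chat-completion call -- nothing here is OpenAI's actual mechanism.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def answer_with_self_check(question: str) -> str:
    draft = llm(question)
    # Ask the model to place each of its own claims on a
    # factual/not-factual spectrum, per the comment above.
    critique = llm(
        "List each claim in the answer below and rate how confident "
        "you are that it is factual:\n\n" + draft
    )
    # Revise, hedging or dropping whatever it flagged as shaky.
    return llm(
        "Rewrite the answer, removing or hedging any claim the critique "
        "rated as uncertain.\n\nAnswer:\n" + draft + "\n\nCritique:\n" + critique
    )
```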

2

Bierculles t1_j4zj7h1 wrote

It's a proto-AGI: an AI that can communicate on a human level but is still far from being able to do everything a human can. I think, at least; maybe I'm wrong.

1

WaveyGravyyy t1_j4xjnkj wrote

Do you know what 4 can do that 3 can't? I keep hearing all the hype around 4 and I'm really curious what 4 can do better than 3. 3 is already mind-blowing lol.

13

gaudiocomplex t1_j4xkj75 wrote

It may be multimodal. And that may have been the difference in achieving some semblance of AGI. That is 100% speculation, but I worked for a long time with an NLP company that focused on human-level metadata editing of sound files at scale. There is plenty of data out there to feed into the machine.

But on a more certain level, you have to realize that language itself models reality, and when LLMs are able to model language more accurately, they're able to produce a more convincing reality. Some of the errors and dumb mistakes it's making right now won't be happening anymore. We will have a much more difficult time sussing out what's real and what's not. The banal way it communicates now... I don't think that will be the case either.

16

Northcliff t1_j4zmasl wrote

It’s 100% definitely not multimodal

The level of making shit up in this sub is astronomical

12

gay_manta_ray t1_j4ymqch wrote

If I had to guess, it's possible it's capable of general abstraction, or abstraction in relation to things like mathematics. That could give it the ability to solve hard mathematical and physics problems. If that's true, it would be earth-shattering, even if it isn't AGI.

7

Northcliff t1_j4zlwtb wrote

> When asked about one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion), Altman called it “complete bullshit.”

> “The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” said the OpenAI CEO. “People are begging to be disappointed and they will be. The hype is just like... We don’t have an actual AGI and that’s sort of what’s expected of us.”

2

[deleted] t1_j4y4494 wrote

[deleted]

−1

gaudiocomplex t1_j4y4cmm wrote

I stopped reading when I realized you're a cunt. So, a few words in. 🤷‍♂️

Edit: ah, what the hell, I feel like jumping in on at least the first part. I read that much.

It just goes to show how very little you understand about the world (which also explains the cuntiness, no doubt) when you can't grasp the notion that many Silicon Valley CEOs are quite chummy with each other. They attend the same parties, restaurants, gyms, even the same book club. They sit on each other's boards.

At that, Rippling isn't just another HR startup. It's a unicorn, and well ingrained in tech culture.

And as such, that offers the C-suite a certain level of access that can provide the kind of information he could get and carelessly post on Twitter... because who doesn't like breaking a big story?

9

technofuture8 t1_j4ygack wrote

>I stopped reading when I realized you're a cunt. So, a few words in. 🤷‍♂️

What the fuck?

0

[deleted] t1_j4y7j74 wrote

[deleted]

−7

gaudiocomplex t1_j4yae51 wrote

1. No more of a conspiracy theory than your poor reading of human nature. And:
2. You don't need credibility if you have ears and an ass in the right place, you stupid fuck. 😂

1

technofuture8 t1_j4ygecv wrote

>And: 2) You don't need credibility if you have ears and an ass in the right place, you stupid fuck. 😂

What the fuck?

−3

gaudiocomplex t1_j4yjsa8 wrote

Just an evocative way of saying I wasn't claiming Sankar knows anything about this space as an SME. I'm saying he's in a very tight circle of people who are in the know, and big secrets are hard to keep.

5