AdditionalPizza

AdditionalPizza OP t1_jegl6jr wrote

Well, we know for a fact that the version of LaMDA Bard uses is not the best model they have. Which is why I'm asking the question: what's the point of releasing Bard as it is? Pichai even reiterated recently that Bard is weak and not even close to their better models.

It just doesn't make sense. Google is definitely not further behind in general; every preview they have given has been exceptional except Bard. There's no way Google shows off PaLM-E and then winds up like Blockbuster.

Besides, Google is so fucking massive, I don't think companies that large can plummet.

0

AdditionalPizza OP t1_jegd5x9 wrote

Hmm. I would have to assume they probably wouldn't really need that data at this point, considering the hoarding they have done already. But it could just be a basic preliminary test to see how the public interacts with bots like this, because people don't always post the darkest stuff they might ask an AI.

I still question why they would risk so much with such an inferior model though.

3

AdditionalPizza OP t1_jegcex4 wrote

Either you don't know what a straw man is, or I'm really failing to see where I made one, because I'm not proposing any kind of argument with anyone specifically...

Example:

Person A: "Animal testing for cosmetics should be banned."

Person B: "You must be against medical research since many medical advancements come from animal testing."

___

But the point is we know Google has a better model than the version of LaMDA used, so why base Bard on the weaker one in the first place?

1

AdditionalPizza OP t1_jaedsf6 wrote

I do think that segment missed a lot of crucial points and focused on very near-term issues that will no doubt be overcome relatively easily.

But the hallucination problem has to be solved, and it needs to happen very soon. Once that is tackled, the train won't stop. I think hallucination will be reduced over the coming months to the degree that it becomes a non-issue in most cases. Google has a lot riding on that.

We also shouldn't underestimate how much more useful a model with access to the internet will be compared to the current ChatGPT. Access to recent events will prove very useful.

6

AdditionalPizza OP t1_jaecxy9 wrote

Oh ok, I thought maybe I made a slip in my post somewhere implying that.

But yeah, although I think we will all adapt very quickly to this upcoming shift in how we access the internet, in hindsight it will be one of the big moments we remember for the rest of our lives.

5

AdditionalPizza OP t1_jae9nsz wrote

Today it's just an LLM. When the next generation drops and it's widely implemented across several products and industries, I think we will have a very different definition of "cool and useful." I can't say what all of that will be, but I do believe it starts very soon. Sooner than anyone is comfortable saying out loud. A month, maybe two? Then from there it's like dominoes: companies adopting an ultra-useful AI into their products.

8

AdditionalPizza OP t1_jae8yaq wrote

>I would wager within half a decade a multi-model proto-agi will be available that could do all the cognitive tasks a human can do at least at acceptable (but not necessarily extraordinary) levels. Not within a year, thats bonkers.

Did I imply that in my post somewhere? I don't mean anything that capable within the year. I'm talking about a drastic change in the average person's life, caused directly by the impact AI will have this year once the "next gen" is in-your-face on search engines and widely used in place of our primitive search today.

10

AdditionalPizza t1_j1ip56v wrote

Reply to comment by fortunum in Hype bubble by fortunum

What exactly spurred your post, something specific?

___

>I was looking for a more low-brow sub than r/machinelearning to talk about philosophical implications of AGI/singularity.

I would say this is a decent place for that. You can just skip past the posts/comments you don't feel are worth discussing. I almost never directly discuss an upcoming singularity; the date we might actually reach a technological singularity doesn't really matter, and you can easily discuss the implications without it. A lot of people here are optimistic about the outcome, but there are plenty of people who are concerned about it too.

Personally, I usually discuss job automation over the next few years because that's more tangible to me right now. The implications of LLMs and possible short-term advances are alarming enough that I don't really think about AI more than 10 years out.

2

AdditionalPizza t1_j1ij4zi wrote

Which differing opinions are you referring to? Optimistic vs pessimistic outcomes of the singularity or believers vs non-believers?

The former I think is fine, and hopefully discussions can remain civil. The latter, though: if you don't think there's an impending singularity, I don't understand why you'd even be in this sub.

15

AdditionalPizza t1_j1fmu3y wrote

I try to only join the discussions that interest me, rather than the ones that are, well, stupid. Sometimes I find myself in a long, drawn-out, pointless argument with someone, but usually I only keep the debate going if they make reasonable arguments instead of focusing on one dumb little thing they think gives them the upper hand.

This goes for all discussions, really. Join in, but try to recognize when you're wasting your breath. I struggle with it sometimes, but ultimately it's still nice to engage with like-minded people.

5

AdditionalPizza t1_j1baqbv wrote

>If we are all able to each have our own ASI

Everything after that doesn't apply. If we each have our own ASI, the world will be so radically different that I highly doubt anyone is going to take issue with using another's likeness. There would be no reason for law inside the virtual world if it is entirely your own. Do the simulated beings within your world have their own consciousness?

Impossible to predict.

If we're talking full dive, commercially available content supplied by some kind of company within the next decade or two, then there will probably be a ton of limitations.

14

AdditionalPizza t1_j1aeql5 wrote

The internet and social media as a whole are already that powerful. In the near future, I think we may be better at combating "fake news" than we have been in the past 10 years. The reason: people will be much more reluctant to believe anything online because it will likely be presumed to be AI-generated. Right now, not enough people are aware of it. In two years, everyone and their brother will be more than aware of how pervasive AI is on the internet. Each side of the spectrum will assume the other side is trying to feed them propaganda.

That's my 2 cents anyway.

3

AdditionalPizza t1_j1a2i1v wrote

No, I hear you. I'm just saying I think this sub in general leans toward optimism and salvation over pessimism and doom.

When AI attains a certain level of ability and intelligence, I think it's a wise concern, and I mean well before AI actually has the chance to cause havoc. Slowing down is just probably not feasible, because it's essentially an arms race and no corporation or government will slow progress willingly.

1

AdditionalPizza t1_j19zzlu wrote

I think you might have the wrong sub. A lot of people here want tech/AI to advance as quickly as possible and are quite optimistic about it. There are some people here who fear humanity is doomed, but the majority probably see the singularity and AGI as a sort of salvation that ends current suffering.

9