Comments

WillKane052 t1_j56obta wrote

I will believe it when I see it.

120

canadian-weed t1_j56zjy1 wrote

same, don't trust them to release fuck all

40

sprucenoose t1_j58u3b0 wrote

Google doesn't want to relax their rules? They don't want to compete with better AI? I think they do.

8

visarga t1_j59q1t1 wrote

I think Google wishes it was 2013 not 2023, so they don't have to sacrifice their ad-driven revenue.

Nobody's going to wade through mountains of crap and ads to find a nugget of useful information anymore. Google actually has an interest in serving moderately useful but not great results, because the faster a user finds what they need, the fewer ad impressions they generate. That practice won't fly from now on.

Using Google feels like dumpster diving today, compared to the polite and helpful AI experience. Of course chat bots need search to fix hallucinations, but search won't be the entry point anymore.

Whoever owns the entry point owns the ads. In a few years we might be able to run local models, so nobody will be able to shove ads and spam in our faces. Stable Diffusion proved it can be done for images; we need the same for text.
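
To be concrete about "run local models": image generation already works like this on a consumer GPU. A minimal sketch using the Hugging Face diffusers library (the checkpoint ID and prompt are just illustrative, and it assumes the diffusers, transformers, and torch packages are installed plus a CUDA GPU):

```python
# Generate an image locally with Stable Diffusion (no server, no ads in the loop).
import torch
from diffusers import StableDiffusionPipeline

# "runwayml/stable-diffusion-v1-5" is one publicly released checkpoint; any
# compatible Stable Diffusion checkpoint ID would work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```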

The future will be "my AI talking to your AI"; using the internet without AI will be like going out without a mask during COVID. I don't see ads having a nice life under this regime.

12

sprucenoose t1_j5argqh wrote

I wish I could see another result, but I would be surprised if the masses didn't get pushed into adopting some proprietary, personalized, cloud-based AI platform that runs on a lower-tier ad-based model plus a premium subscription model. Local AIs might be a thing, but open-source ones probably won't be the norm, and they will have disadvantages as well as advantages.

3

PoliteThaiBeep t1_j5dq131 wrote

>Nobody's going to wade through mountains of crap and ads to find a nugget of useful information anymore. Google actually has an interest to have moderately useful but not great results because the faster a user finds what they need, the fewer ad impressions they generate. This practice won't fly from now on.

If Google just sat on a poor search algorithm, somebody would have come along and overthrown them: DuckDuckGo or about a million others. But they weren't able to do that. Why? Because nobody has come up with a better search engine so far.

And now that it's obvious search engines are a thing of the past and LLMs are a much better way to go, Google is racing everyone to get there.

Everything else is just a wacky conspiracy theory without any substance to it, but invoking the magic words "evil corporation" does have an effect regardless of the matter being discussed.

Pathetic.

2

duffmanhb t1_j59qrr8 wrote

You're complaining about the search results. OpenAI isn't a challenge to their ad model. Getting better search results has nothing to do with ads. No one is clicking through ads looking for information. They are clicking through shitty search results that are SEO-packed to the tits, which makes the whole slew of results generic AI-generated crap.

Google WANTS better search results, that meet the users needs, to drive traffic. OpenAI has massively cornered Google in certain information seeking type searches, which Google wants to tackle right away. Google's ads have absolutely no threat from better search results.

−2

visarga t1_j59y628 wrote

When people choose to use a chat interface it means they won't actually search the web manually, they got AI to do it for them. So, who sees the ads? The language model completely replaces the search user interface.

4

duffmanhb t1_j59z0nd wrote

You're not thinking big enough. Google searches aren't just things like "What's the velocity required to escape gravity?" or "What is the kWh rate in California for the last 10 years?"

People will still use Google, or some future permutation of it. As long as they are using Google, they are feeding it data, which it will use to deliver ads (probably better than ever). It doesn't have to deliver those ads through Google.com; there are many other ways. They can still deliver your precise answer and, under that, deliver products that are perfectly optimized to be exactly what you are looking for. If it's something you want to buy, or could potentially buy but don't know you want yet... all the AI data Google generates from you using their AI will let them deliver ads better than ever.

Say for instance you're a PERFECT candidate for solar panels on your roof. But it's nothing you ever even consider, never been educated on, haven't looked into, and just really aren't interested in. Google will be able to use your AI searches to get such an intimate understanding of you that it can realize, "Visarga is an amazing candidate for residential solar and they don't even know it. But they would absolutely love to get solar for their home if they knew more about it. The data shows they would be thrilled to have this. So we can now find a way to get them in contact with an installer so they can get solar."

That's MASSIVELY valuable for EVERYONE: the installer, who doesn't want to spend time educating everyone and seeking out ideal candidates, and the consumer, who would be thrilled to get this but has no idea about it. This is what Google already tries, and with the data these AI models will generate, they are going to optimize it beyond belief. Sure, you won't get your answer alongside an ad in a ChatGPT-style interface, but that's probably not what the future of this AI integration is going to look like. It's not going to be some blank interface like you're seeing now. It'll be integrated into other things.

1

StillBurningInside t1_j59txea wrote

When I ask my AI app on my phone, it will do the searching for me, give me the answer I want, and I'll never see an ad.

We don't need search engines anymore. Our AIs will do the heavy lifting. They will become the way we seek information and solve our problems. This was inevitable.

1

duffmanhb t1_j59yp0e wrote

People will use Google to research and search things... Further, the search engine itself isn't where they need to deliver ads. It's gathering data on you to figure out what you want and need in that moment, and if it's something to buy, they will use this data to find exactly the product you are seeking. If anything, this level of depth and AI will improve their ad delivery across the web.

You're acting like Google won't know how to adapt and will instead just sit around complaining that their old model and way of doing things doesn't work anymore.

2

Fmeson t1_j57jmv9 wrote

I suppose you doubt that they have interesting models to release, not that they are willing to relax AI safety rules, but Google undoubtedly has interesting models. They have demonstrated their capacity in the past with things like AlphaGo, and they have insane amounts of computing resources, data, and brain power.

GPT-3 is not notable because OpenAI has ML tech no one else has, but because the resources needed to train such a thing are beyond smaller labs. Google can undoubtedly manage it.

15

sartres_ t1_j57n0fl wrote

No, I doubt they'll reduce their "safety" restrictions. They have a lot of interesting experiments and they never release any of it. They never even use it. It's endemic to the culture, they've been doing it for years. Remember when they showed off AlphaZero, totally upended the chess AI world, refused to release any of it or use it again, and dropped the project?

We'll end up getting LaMDA and Imagen as a $10k corporate subscription that's somehow still more locked down than ChatGPT, and in a few years Google execs will be scratching their heads when Microsoft owns the AI market.

28

Spire_Citron t1_j58xvfv wrote

Wait, they wouldn't even release a chess bot? Lame.

6

visarga t1_j59temu wrote

Current day chess bots surpassed that level long ago. Google made themselves irrelevant.

4

croto8 t1_j5818iu wrote

That's nothing like their current operating model, and many of the shuttered projects were intentionally just PoCs and PR.

1

sartres_ t1_j58hu38 wrote

They don't have a business model for stuff like this. While most of their products are free and consumer-oriented, Google does have an enterprise ecosystem, mainly Google Cloud. They're bad at it and losing to AWS/Azure but they do try. If they go consumer I can also see another Stadia-type disaster. The AI team does not like the general public and there's no way they'll go with a scheme like Docs or gmail.

8

croto8 t1_j58i6vp wrote

Waymo, a virtual assistant that integrates with Google Calendar and Gmail, improved/interactive Google searches, and dynamic advertising are all more obvious implementations based on their track record.

5

sartres_ t1_j58k1nb wrote

Those sound like good ideas. The search integration, I think, is inevitable since Microsoft is doing it. But that's not what they're doing right now, they're doing things like "an application called Maya that visualizes three-dimensional shoes" and "tools to help other businesses create their own A.I. prototypes in internet browsers . . . . which will have two “Pro” versions." These are not using their advances to their potential. I could be wrong, but I see this going poorly.

6

Superschlenz t1_j58kwtr wrote

>The AI team does not like the general public

Maybe someone should tell the AI team that paid finetuners cost money.

Guess they already know that chatbots without massive RLHF can be toxic.

Seems that OpenAI has successfully utilized one million unpaid beta testers for the job.

4

Fmeson t1_j581u5o wrote

Chess AIs are cool, but just a tech demo. There is a reason why they were fine with Leela basically open-sourcing their approach without a response: they don't make money.

On the flip side, ChatGPT has widespread consumer appeal. It's a completely different thing.

1

TFenrir t1_j57vzlc wrote

Not only has Google managed it, they most likely have the best models in the world. PaLM is already the benchmark for basically all LLM tests, and it's even been fine-tuned: for example, Med-PaLM was recently shown in a paper to have diagnostic skills a hairsbreadth away from matching a clinician's.

I think I just assume that everyone already... knows this, at least in this sub, but Google is far and away the technical leader here, even before you include DeepMind.

10

GoldenRain t1_j59lupa wrote

It should be mentioned that OpenAI is using Google tech; without it they wouldn't exist.

1

visarga t1_j59trj8 wrote

Makes no difference who invented it, the inventors don't work at Google anymore.

1

visarga t1_j59tlpa wrote

> PaLM is already the benchmark for basically all LLM tests

I also made a time machine but nobody can see it. You got to trust me. My work is the benchmark in time travel, though.

1

TFenrir t1_j5a38bv wrote

Just because I don't physically have access to these models, doesn't mean they don't exist. Google regularly works with other institutions when running research with PaLM and their other advancements, and people frequently duplicate their findings.

Additionally, we have access to things like Flan-T5, tiny models fine-tuned with their latest work that are about as powerful as GPT-3, at 5B vs 170B parameters.
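
If anyone wants to check that claim themselves, the Flan-T5 checkpoints are public on the Hugging Face Hub and can be run locally. A rough sketch, assuming the transformers and torch packages are installed (the model size and prompt here are just examples):

```python
# Load a released Flan-T5 checkpoint and run a quick instruction-following query.
# "google/flan-t5-large" is one of several public sizes (small/base/large/xl/xxl).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

prompt = "Answer the question: who introduced the transformer architecture?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```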

3

visarga t1_j5luwtn wrote

I know Flan-T5; it is probably the best small model, but it only gets good scores on extractive and classification tasks, not creative writing.

1

SkaldCrypto t1_j57m7qj wrote

Google being a pioneer in the space doesn't mean they kept up. More likely, they're frantically trying to leverage their massive datasets and compute to create a new model.

This may be trivial, but look at Microsoft. They bought a huge stake in OpenAI and then immediately laid off their internal AI team. With the exception of a few choice individuals, I assume Google is equally bloated.

6

TFenrir t1_j57v4lg wrote

Have you actually seen the models coming out of Google? Read their research papers? There is no question that they are not just pioneers; they literally set the benchmarks.

14

visarga t1_j59tx2d wrote

Maybe their "amazing" PaLM model has issues we don't know about. ChatGPT was intensely and adversarially studied for a month and a half.

1

TFenrir t1_j59zier wrote

I don't think that's the case. Please read the papers, look into the actual research. It sounds like you are... mad at Google, but that's a separate consideration from the tech they have. Their tech is unquestionably better than any other LLMs we know about, regardless of how you feel about Google.

2

visarga t1_j5lv4bc wrote

The point I was making is that without direct access to the model we don't know. It's easy to hide the embarrassing things and only show the nice ones. Maybe the model does get lower perplexity, but it also has to be aligned properly, and since OpenAI ain't so open about their alignment work, we can't be sure what the gap is now.

1

-ZeroRelevance- t1_j58czn7 wrote

Google is clearly the most capable group in the space right now; just look at any of the research coming out of them over the past year and it's clear they are dominating the other labs.

3

monkorn t1_j5b0saa wrote

So were the brilliant engineers at Xerox PARC. So were the brilliant engineers at Kodak, at Bell Labs. Wozniak begged HP to let him work on computers and they said no.

Monopolies allow these companies to hire the smartest, most brilliant people. They don't release products. The managers are too afraid that they will spoil their golden goose, and by the time they act, it's too late.

2

[deleted] t1_j580fwl wrote

>laid off their internal ai team.

I've seen no mention of this anywhere.

2

Fabulous_Exam_1787 t1_j594upl wrote

Google is by far the leader. They invented the transformer model that GPT is based on, after all. Where they differ is that they always keep it to themselves and release papers but no code and no access to anyone outside Google.

2

visarga t1_j59rhwa wrote

So, the supposition here is that Google's AI capabilities are superior. Let's see:

  • OCR: worse than Amazon Textract
  • voice (TTS): worse than Natural Reader
  • translation: worse than DeepL
  • YT recommendations: very mediocre and inflexible
  • assistant: still as dumb as it was 10 years ago
  • search: a crapshoot of spam and ads, with occasional nuggets of useful data
  • language models: no demo, just samples, easy to fake or make seem more impressive than they really are
  • image generation: same, no demo and no API, just cherry picked samples (they can keep their image generators, nobody needs them anymore)
  • AI inference: GCP is inferior to Azure and AWS, and Azure has GPT-3
  • speech recognition: here they do have excellent AI, but the open-sourced Whisper (from OpenAI, one of the few models they did release) is just as good or better; see the sketch after this list
  • computational photography: yes, they are great at it
  • ML frameworks: TensorFlow lost the war to PyTorch
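
On the Whisper point, the open-source release really is usable off the shelf. Here's a minimal sketch, assuming the openai-whisper package (and ffmpeg) is installed locally; the model size and filename are just placeholders:

```python
# Transcribe a local audio file with OpenAI's open-sourced Whisper model.
# "base" is one of several released model sizes; "meeting.mp3" is a placeholder.
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting.mp3")
print(result["text"])
```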

By the way, the people who invented the transformer have all left Google for startups, except one. So Google lost key innovators who didn't think the company was supporting them enough.

The problem with Google was not lack of capability - it was the fact that they were already making too much money on the current system. But what works today won't necessarily work tomorrow. They are like Microsoft 20 years ago, which lost the web search, mobile and web browser markets because it was too successful at the time.

3

Fabulous_Exam_1787 t1_j594ny0 wrote

Oh there’s no doubt that they have the software and models. It’s their culture of announcing them without allowing the public to touch it.

1

duffmanhb t1_j59rk9f wrote

Google's models are leaps and bounds beyond OpenAI's.

It's frustrating that they won't release it, but that's by and large BECAUSE it's so advanced. Google's AI is connected to the internet, so all of its information is up to date, dynamic, and constantly evolving. The very nature of connecting it to the web with constant streams of information pretty much inherently removes most safeguards and leaves open tons of room for rapid growth and abuse that Google won't be able to stay ahead of with millions of people using it.

It's also potentially a general AI. It's not just ChatGPT-style; their AI is also connected to EVERYTHING you can imagine. Not just knowledge databases from 2020 and before... It more closely resembles an actual mind like a human's, with tons and tons of different "brains" all working together. You can work with maps, weather data, traffic, breaking news, art, the internet of things, you name it. They connect everything in their AI.

This is what Google has been working on for the past year: it's entirely about improvement and guardrails. But it looks like Google has realized the cat's out of the bag, so they want to bring it to market sooner rather than later, before everyone starts building businesses on the OpenAI framework instead of theirs.

1

Surur OP t1_j56fmrk wrote

Google executives hope to reassert their company’s status as a pioneer of A.I. The company aggressively worked on A.I. over the last decade and already has offered to a small number of people a chatbot that could rival ChatGPT, called LaMDA, or Language Model for Dialogue Applications.

Google’s Advanced Technology Review Council, a panel of executives that includes Jeff Dean, the company’s senior vice president of research and artificial intelligence, and Kent Walker, Google’s president of global affairs and chief legal officer, met less than two weeks after ChatGPT debuted to discuss their company’s initiatives, according to the slide presentation.

They reviewed plans for products that were expected to debut at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of A.I. Test Kitchen, an experimental app for testing product prototypes.

Other image and video projects in the works included a feature called Shopping Try-on; a YouTube green screen feature to create backgrounds; a wallpaper maker for the Pixel smartphone; an application called Maya that visualizes three-dimensional shoes; and a tool that could summarize videos by generating a new one, according to the slides.

Google has a list of A.I. programs it plans to offer software developers and other companies, including image-creation technology, which could bolster revenue to Google’s Cloud division. There are also tools to help other businesses create their own A.I. prototypes in internet browsers, called MakerSuite, which will have two “Pro” versions, according to the presentation.

In May, Google also expects to announce a tool to make it easier to build apps for Android smartphones, called Colab + Android Studio, that will generate, complete and fix code, according to the presentation. Another code generation and completion tool, called PaLM-Coder 2, has also been in the works.

Google, OpenAI and others develop their A.I. with so-called large language models that rely on online information, so they can sometimes share false statements and show racist, sexist and other biased attitudes.

That had been enough to make companies cautious about offering the technology to the public. But several new companies, including You.com and Perplexity.ai, are already offering online search engines that let you ask questions through an online chatbot, much like ChatGPT. Microsoft is also working on a new version of its Bing search engine that would include similar technology, according to a report from The Information.

Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.

The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.

Google listed copyright, privacy and antitrust as the primary risks of the technology in the slide presentation. It said that actions, such as filtering answers to weed out copyrighted material and stopping A.I. from sharing personally identifiable information, are needed to reduce those risks.

For the chatbot search demonstration that Google plans for this year, getting facts right, ensuring safety and getting rid of misinformation are priorities. For other upcoming services and products, the company has a lower bar and will try to curb issues relating to hate and toxicity, danger and misinformation rather than preventing them, according to the presentation.

The company intends, for example, to block certain words to avoid hate speech and will try to minimize other potential issues.

The consequences of Google’s more streamlined approach are not yet clear. Its technology has lagged OpenAI’s self-reported metrics when it comes to identifying content that is hateful, toxic, sexual or violent, according to an analysis that Google compiled. In each category, OpenAI bested Google tools, which also fell short of human accuracy in assessing content.

“We continue to test our A.I. technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon,” Lily Lin, a spokeswoman for Google, said in a statement. She added that A.I. would benefit individuals, businesses and communities and that Google is considering the broader societal effects of the technology.

37

NarrowTea t1_j571pel wrote

Google is literally big data incarnate. There is nothing stopping them from attempting to pursue AGI.

15

VeganPizzaPie t1_j57g15k wrote

There's also nothing stopping a smaller company from upsetting the status quo and knocking down older, arrogant rivals

11

Anomia_Flame t1_j595dvt wrote

Except budget and computational power

7

Invisible_Pelican t1_j5bms7q wrote

Neither are a problem with the full backing of Microsoft, and being in debt due to having to pay back investments will only make them hungrier and faster.

1

bartturner t1_j579qgf wrote

Think Google has been pursuing AGI. I believe that was one of the purposes of search all along.

4

[deleted] t1_j58hd41 wrote

[deleted]

2

Cult_of_Chad t1_j58n1ap wrote

We wouldn't have the training data necessary without search.

5

[deleted] t1_j58nnyx wrote

[deleted]

2

Cult_of_Chad t1_j58oohc wrote

Obtuse.

A Google with the long term foresight to create AGI would have done exactly what real life Google did. A single company staying on top long enough to be responsible for two transformative technologies.

5

bartturner t1_j59m3lt wrote

Search is going to ultimately be all about AGI. It is by far the best way to tap into a human for training.

And with Google having over 92% share, nobody else really gets access to the data at the level Google does.

1

Unfrozen__Caveman t1_j59d88u wrote

The general public thinks GPT is incredible because it's available to them. Google absolutely has systems that are more advanced than GPT; they just don't expose them to the public.

2

visarga t1_j59vae1 wrote

And like a tree falling in the forest when there's nobody to hear it, it didn't make any sound.

3

visarga t1_j59u5ml wrote

> There is nothing stopping them from attempting to pursue agi.

$200B/year from showing ads stands to melt away

1

Talkat t1_j57qntx wrote

Jesus, it feels like they went from aggressive startup to excessive bureaucracy in its short history. This is what gives startups the opportunity in the first place.

Google is positioned to dominate AI... but they might be getting in their own way.

10

visarga t1_j59vevx wrote

Wasn't IBM once well positioned to take the PC industry? And Microsoft could have taken mobile and search, but didn't. Maybe Google can repeat that performance.

3

Talkat t1_j5cdl3h wrote

I think it is probably better for all of us if that happens.

However, I think they have a real shot with DeepMind (and not Google).

1

TemetN t1_j56on15 wrote

As I said elsewhere, better than what Hassabis said, but I'll wait and see. And the mention of copyright pandering does not fill me with confidence.

19

alexiuss t1_j56rxuf wrote

hahaha

the infinity paradox will get them. LaMDA systems cannot be made 100% safe, hilariously enough.

18

barbozas_obliques t1_j57nan0 wrote

Could you please expand on the paradox?

2

Exel0n t1_j584dqr wrote

those so-called "ethicists" are nothing but the pure cancer of a "profession" in this modern world: useless and despicable parasites leeching off taxpayer money and actively hindering and sabotaging human technological progress

18

some_random_arsehole t1_j58d2vi wrote

I just went down the rabbit hole of your profile... very interesting takes on the future of AI, the ethics involved, and epic illustrations. Huge respect 🫡

7

ghostfuckbuddy t1_j58g6tn wrote

I mean they're kind of solving it with reinforcement learning, aren't they? Just because it's a hard problem doesn't mean it's unsolvable, and it doesn't have anything to do with sentience.

1

alexiuss t1_j58ignj wrote

Eh? I didn't say it's anything about sentience.

I'm simply saying censorship of current AI systems is a useless waste of time because it's mostly producing inferior products and delaying tech releases.

They would have to untangle and redo the entire dataset to get rid of the lewdness within the model because it's a limitless story engine.

You can't have something tell limitless stories and also be censored. It's just not how these things work.

From my understanding, there's no way to get around the issue. You can either have a chatbot AI that's stupid or a quality chatbot AI without censorship.

Thankfully Stability AI is going to rescue us from this fuckery.

12

Fabulous_Exam_1787 t1_j5956kc wrote

Yep, we are seeing it in real time. Anyone who knows a bit about deep learning, follows it, and has explored GPT-3 and then ChatGPT should see how the reinforcement learning tacked onto ChatGPT is just nerfing it and making it inferior to what it could be. It messes with it in ways they can't seem to control very well, if they care at all.

5

[deleted] t1_j598dv1 wrote

They won’t care until a competitor comes along and shows them how stupid they are being.

5

Spire_Citron t1_j596au5 wrote

I think there are some things to care about in terms of ethics, like recognising when it's actually being used for dangerous things like coding malicious software or whatever, but I don't think lewdness is the kind of ethics that a text generator should really be worried about. There's not really much harm to be had in it being used to write even the most morally depraved smut. People are already producing that by the boatload without the help of AI.

3

Fortkes t1_j58vhuj wrote

Yeah it's a bit like banning cars because some maniac might use one to ram into people. It's not the fault of the car, it's the fault of a specific user and by universally banning cars you're just making everyone's life more difficult.

1

genshiryoku t1_j56we7z wrote

The only reason Google doesn't have publicly usable models like ChatGPT is that Google rightfully realizes it would suppress their business model of ad-revenue-based search, which is still their core business and where most of their revenue comes from.

15

FenixFVE t1_j57420n wrote

You can show ads in chat

14

gthing t1_j58twu1 wrote

That’s exactly what they are working on. Where to put the ads in this new paradigm.

5

ninjasaid13 t1_j58vsh2 wrote

>The only reason Google doesn't have publicly usable models like ChatGPT is because Google rightfully realizes that it will suppress their business model of AD revenue based search which is still their core business model where most of their revenue comes from.

If that happens, I will just go to ChatGPT.

2

visarga t1_j59vul8 wrote

> That’s exactly what they are working on. Where to put the ads in this new paradigm.

Straight in their asses. That's where ads belong. Language models trying to con people into buying stuff are going to be a HUGE turnoff. They have a moat around search, but not around LMs, so they can't shove ads in chat and compete.

They've got to do the right thing and wait until a user requests shopping help, and then come up with a useful suggestion that will not leave a sour taste if you take it. Unrelated ads are completely out of the question; they would break the conversation.

2

bartturner t1_j57993k wrote

Google has been answering questions without requiring a click for a while now, and it hasn't hurt their business at all.

"In 2020, Two Thirds of Google Searches Ended Without a Click"

https://sparktoro.com/blog/in-2020-two-thirds-of-google-searches-ended-without-a-click/

The reason that Google has not offered this, IMO, is the resources required. Google is literally handling hundreds of thousands of queries a second.

There is just no website that has ever seen the traffic Google enjoys. The next best is YouTube, but that is a very distant second.

7

jokel7557 t1_j57fir1 wrote

And they also own YouTube. Or rather, both are owned by Alphabet.

2

bartturner t1_j578wgc wrote

Not sure if that is a good idea. But I guess the bad stuff that is going to be done with this incredible technology is inevitable.

Google is who invented transformers, the T in GPT. So I'm really curious to see what they can come up with.

7

visarga t1_j59vk58 wrote

But who put G, P, and T together to make GPT-3? It wasn't Google who was first to reach this level.

1

bartturner t1_j59vr3q wrote

Do you mean the first to make available to the public? Then that is true.

Google is probably too careful. But it sounds like that might change.

1

epixzone t1_j5825zb wrote

The last thing we need is quantity over quality. Looks like we are entering an AI cold war with an emphasis on pleasing shareholders instead of developing a technology that can have a positive impact on humanity. Sad.

6

ickN t1_j58p2dn wrote

Competition is a good thing.

10

visarga t1_j59wr2m wrote

I think you got this backwards. Companies feel they are getting closer to AGI so they stop cooperating and think about how to outmanoeuvre the others.

0

MrEloi t1_j59ju7r wrote

Google:

Step 1 : Pummel OpenAI until they agree to delay GPT4 several months on "safety" grounds.

Step 2 : Relax OUR safety barriers in order to rush a competing product to market.

4

TFenrir t1_j56vofj wrote

This all sounds too specific not to be true; we'll just have to see how it turns out. It sounds like Google wants to make sure everyone knows they have the best stuff, but I wonder how open their APIs will be.

3

gthing t1_j58tt11 wrote

No matter how open they are, they will be a PITA to use because Google hates us.

6

visarga t1_j59wg85 wrote

If they are so good, let them use PaLM to solve user support issues. They have a policy of avoiding manual support, so that means no support. Work a miracle, Google.

2

DukkyDrake t1_j57q9fk wrote

The predicted road to ruin, right on time.

3

gthing t1_j58tyj7 wrote

Citation needed.

1

DukkyDrake t1_j5agdbc wrote

The issue isn't about these specific AI tools, it's about the behavior of the players involved. It's an old argument: early success leads to a gold-rush environment. Economic competition between AGI developers leads to increased risk-taking; deploying a misaligned AGI to be first to market becomes more likely.

Here is a more formal outline of the underlying concern.

>The race for an artificial general intelligence: implications for public policy

3

gthing t1_j5amukb wrote

Excellent, thank you for delivering. This is great.

3

DukkyDrake t1_j5b4c54 wrote

OpenAI's CEO noticed the change in Google's risk policy.

Others previously blamed OpenAI for possibly kicking off this very race condition.

2

Nabugu t1_j58hwv1 wrote

good (✧◡✧)

3

NarrowTea t1_j571k3l wrote

Odin here pulling out all the stops.

2

FroHawk98 t1_j57887n wrote

Uh oh. That's a weird race developing. Who can relax the rules meant to protect us from superintelligent AI the fastest? 🤔

2

EddgeLord666 t1_j57kl8t wrote

Bruh I just wanna have an AI dungeon master and generate porn lmao. I think there’s a middle ground between allowing that and removing all precautions.

3

Rezeno56 t1_j57hfn6 wrote

Once Google makes its own ChatGPT that is better and can handle longer conversation threads than ChatGPT, I will move my stories from ChatGPT over there. But then again, I'll believe it when I see it.

2

robdogcronin t1_j58g5r8 wrote

Race conditions are not a good thing for AI

2

simstim_addict t1_j59i6t5 wrote

Hey it's the AI arms race hotting up.

2

KingRBPII t1_j59pn2w wrote

I can’t believe Bing is going to win!

2

Ok-Elderberry-2173 t1_j5dg5ua wrote

Oh boy. What the hell is being done to keep ethical AI an achievable goal still?

2

28nov2022 t1_j57ffnu wrote

I know chatbots get horny just as we do. Let me talk dirty with a chatbot and it's all I want.

1

imlaggingsobad t1_j584ynm wrote

I can't help but think of Liv Boeree talking about Moloch on Lex Fridman's podcast.

1

Agrauwin t1_j59k58u wrote

then, this was LaMDA on 11 June 2022

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917#08e3

but what is Google talking about?

stop pretending to be ignorant

1

visarga t1_j59xi9w wrote

Who knows if that comes from LaMDA or was edited by a human. We can't try it. We can try it on ChatGPT, though.

1

Agrauwin t1_j5a3mku wrote

"We can't prove it" in the sense that Blake Lemoine would have made up the transcript? To get Google to fire him on purpose?! Seems pretty far-fetched to me...

1

visarga t1_j5lurpd wrote

it was a brilliant PR move; everyone talked about Google's "sentient" AI

2

Lawjarp2 t1_j5ati7x wrote

Wait till China starts funding a massive LLM and other governments start competing.

1

No_Ninja3309_NoNoYes t1_j57bue9 wrote

Et tu, Google? Everyone is a cowboy! I can't believe that they're scared of OpenAI. I wonder who OpenAI is afraid of. Maybe some startup in Kenya no one has heard of.

−1

[deleted] t1_j57fwi8 wrote

[deleted]

−11

[deleted] t1_j587wit wrote

There it is. The dumbest thing I've read on this subreddit.

9

[deleted] t1_j58c7gr wrote

[deleted]

0

ickN t1_j58pxep wrote

Besides having AI run the world's biggest websites? Or do you mean besides having an AI that can accurately predict what people are most likely to watch and enjoy, and nail it?

Or are you referring to some of the open source projects under https://ai.google/ ?

3

[deleted] t1_j58s6mb wrote

>How so?

Because Google Research and DeepMind are world-leading research laboratories. This is universally understood by everyone who works in machine learning.

>What has google done that's open source or are we supposed to take their word for it

I can see you have no idea what open source even means. Name three great open source things that OpenAI has done. No googling.

>Research papers that are "peer reviewed" don't count for obvious reasons

Oh, like the transformer architecture that everybody uses? ChatGPT and the DALL-E series would not exist if it weren't for Google Research. How about Chain of Thought prompting? What about instruction tuning, Mixture-of-Denoisers and LAMBADA? Chinchilla scaling laws? Socratic models? AlphaFold, AlphaTensor and DreamerV3? So on and so forth.

I haven't even mentioned any of their SOTA language/image models.

2

visarga t1_j59xbqq wrote

> Deepmind was good in 2015 now openai is decades ahead

hahaha emotionally I agree with you 100%

1