
TFenrir t1_ixcp3ka wrote

>They have a bunch of good models but they are 1-2 years late.

I have absolutely no idea what you mean by "1-2 years late" - in what way are they late?

> Also Google stands to lose from the next wave of AI, from a business perspective. The writing on the wall is that traditional search is on its way out now that more advanced AI can do direct question answering. This means ads won't get displayed. They are dragging their feet for this reason - that's my theory. The days of good old web search are numbered.

Mmm, maybe - but Google is already looking at integrating language models into traditional search. They showed this off years ago with MUM. They have also written hands down the most papers on methodologies for improving the accuracy of language models and for connecting language models to the internet/search, and they hold SOTA on every LLM accuracy metric I've seen, at least.

> But hey, you could say they might ask the language model to shill for various products. True, but language models can also run on the edge, so we could have our own models that listen to our priorities and wishes.

> That was never possible with web search, but it becomes accessible through AI. The moral of the story is that Google's centralised system is getting eroded and they are losing control and ad impressions.

Eh, I mean, this is a lot of somewhat interesting speculation - in my mind the most relevant part is how Google will get inference costs low enough to scale any sort of language model architecture (their work on inference is also bleeding edge). But while there is an opportunity to replace search with language models, Google has probably been working on exactly that for longer than anyone else - heck, we heard them talking about it almost 3 years ago at I/O.

But back to the core point, Google is still easily, easily the leader in AI research.


visarga t1_ixd6ygt wrote

> I have absolutely no idea what you mean by "1-2 years late" - in what way are they late?

GPT-3 was published in May 2020, PaLM in Apr 2022. There were a few other models in between, but they were not on the same level.

DALL-E was published in Jan 2021; Google's Imagen is from May 2022.

> Google is already looking at integrating language models

Yes, they are. But do a search and you'll see how poor the results are in reality. They don't want us to actually find what we're looking for - not immediately. They stand to lose money.

Look at Google Assistant - language models can write convincing prose and handle long dialogues, yet Assistant defaults to web search for 90% of questions and can't hold much context. Why? Because Assistant is cutting into their profits.

I think Google wants to monopolise research while quietly delaying its deployment as much as possible. That way their researchers stay happy and don't leave to build competing products, while we stay happy waiting for upgrades.


TFenrir t1_ixdcbvj wrote

> GPT-3 was published in May 2020, PaLM in Apr 2022. There were a few other models in between, but they were not on the same level.

> DALL-E was published in Jan 2021; Google's Imagen is from May 2022.

Yes, but the research that enabled GPT in the first place - the Transformer - came out of Google. GPT-3 didn't invent the language model, and things like BERT are still the open-source standard.

Even the research on image generation goes back years at Google - think DeepDream in 2015. They had lots and lots of research papers on generating realistic images from text for years and years before even the first DALL-E model.

On top of that, today they have shown the highest quality models. Which, going back to my original point, highlights that if we're talking about organizations that will achieve AGI first, Google - with its software talent, research, and hardware strengths (TPUs) - is very, very likely to be the one.

> Yes, they are. But do a search and you'll see how poor the results are in reality. They don't want us to actually find what we're looking for - not immediately. They stand to lose money.

This is essentially a conspiracy theory, as well as subjective opinion.

> Look at Google Assistant - language models can write convincing prose and handle long dialogues, yet Assistant defaults to web search for 90% of questions and can't hold much context. Why? Because Assistant is cutting into their profits.

It's because they can't risk anything as hallucination-prone and unpredictable as language models yet - this is clear from the research being done, and not just by Google. Alignment isn't just about existential risk.

> I think Google wants to monopolise research while quietly delaying its deployment as much as possible. That way their researchers stay happy and don't leave to build competing products, while we stay happy waiting for upgrades.

Again, more conspiracy theories. Take a look at the work Jeff Dean does out of Google - not even for the content, but for the intent of what he is trying to build. Your expectations of Google rest on the idea that they should already be using language models in production, but the models just aren't ready yet, at least not for search, and Google can't risk the backlash that comes when these models ship undercooked. Look at what happened with Facebook's most recent model (Galactica) and the controversy around it. No conspiracy theories necessary.


visarga t1_ixfdcaj wrote

I don't believe that - OpenAI and a slew of other companies manage to make a buck on cutting-edge language/image models.

My problem with Google is that it often fails to understand the semantics of my queries and replies with totally unrelated content, so I don't believe in their deployed AI. It's dumb as a rock. They might have shiny AI in the labs, but the product is painfully bad. And their research teams almost always block the release of their models and don't even put up demos. What's the point in admiring such a bunch? Where's the access to PaLM, Imagen, Flamingo, and the other toys they dangled in front of us?

Given this situation, I don't think they really align themselves with AI advancement; instead they align with short-term profit making, which is to be expected. Am I spinning conspiracies or just saying what we all know - that companies work for profit, not for art?
