Submitted by BobbyWOWO t3_116c4pg in singularity

Even with all the public progress that OpenAI/Microsoft have made in the past few months, I still think DeepMind will be the first to create a general intelligence. They seem to have cracked the code on reinforcement learning, and I think that's probably a very intrinsic part of general intelligence and problem solving.

Either way, I usually like to keep up to date with DeepMind's progress. For the past 3 or 4 years, they've made blog posts or released papers something like 2-3 times a week. But since Dec. 14, they haven't released a single thing. I think that was around the same time ChatGPT came out.

I would have hoped to hear something about GATO2 or an updated Sparrow, but it’s been complete radio silence for nearly 2 months. Very unlike DeepMind…




GoldenRain t1_j95z7vc wrote

I think Google realized that funding all the research and then making it available to OpenAI for free, while OpenAI doesn't return the favor, isn't a viable strategy.


TFenrir t1_j960ilw wrote

That's probably part of it, but not the only reason. Demis Hassabis himself said recently in a Time magazine article that he thinks OpenAI (without naming them) doesn't contribute to the science but takes a lot from the science out there - which they use to push AI out into the world faster than he would like. So they probably aren't going to share as much going forward.


[deleted] t1_j96da21 wrote



TFenrir t1_j96hi1f wrote

I generally appreciate what you are saying, and I feel more or less the same way, in the sense that I think that these models should be in our hands sooner, rather than later, so that we can give appropriate large scale feedback... But I also think the reasoning to hold back is more complicated. I get the impression that fear of bad results is a big part of the anxiety people like Demis feel.


Gagarin1961 t1_j97l74o wrote

> get the impression that fear of bad results is a big part of the anxiety people like Demis feel.

Then he shouldn’t be upset with ChatGPT at all, as their product hasn’t produced a particularly “bad” result.

It’s been nothing but positive for millions. He was wrong, the time is right.


[deleted] t1_j96uhks wrote



TFenrir t1_j96v7w5 wrote

It's too easy to look at people who don't give you what you want as monsters, but I think we do ourselves a disservice if we eschew nuance for thoughts that affirm our frustrations.


[deleted] t1_j96z39f wrote



TFenrir t1_j971ped wrote

You're not displaying any ability to look at situations like this with nuance. It's extremely simplistic to look at the world like it's composed of good guys and bad guys, and you do yourself a disservice when you fall into that trap.

It's not dick-riding to say "maybe there are more complicated reasons that people want to be cautious about the AI they release other than being power-hungry, mustache-twirling villains".

As a creative exercise, could you imagine a reason that you may even begrudgingly agree with, that someone like Demis would have to hesitate to share their AI? If you can't, don't you think that's telling?


Honest_Performer2301 t1_j973cwh wrote

They pioneered this shi while you sat at home watching cartoons farting. Show some respect. I'm so sick of ignorant ungrateful people


BlessedBobo t1_j97e1qv wrote

you seem to be pretty damn entitled for someone who contributes absolutely nothing to humanity


ccnmncc t1_j97ntm6 wrote

Every consciousness has something to contribute to the universe.

But yeah - some more than others.


MuseBlessed t1_j97ugs1 wrote

Want to thank you for helping combat dehumanization.


hydraofwar t1_j96r06y wrote

At the end of the day, keeping AI for yourself or sharing it with the people is dangerous either way. But it's probably less dangerous to give the people access than to keep it for the elite.


[deleted] t1_j96ukuu wrote



hydraofwar t1_j96zhpp wrote

I also forgot to mention that Google could already be literally taking advantage of its powerful models without anyone knowing.


tangent26_18 t1_j97zbix wrote

I think decentralization is a utopian fantasy. Look at any history book: the powerful have always been the minority, and they call all the shots. It’s baked into reality.


BlipOnNobodysRadar t1_j98dug9 wrote

Yet when centralized establishment power is disrupted we see the greatest progress. It may not be a natural state, but it's one worth working towards.


PoliteThaiBeep t1_j9acefq wrote

When powerful call all the shots it shifts wealth dramatically towards the elite and away from the public reducing quality of life and innovation.

It would also mean any friends and family of the powerful would hold the keys to major industry sectors and companies and wouldn't let anyone new in. So incumbents can never be overthrown by a new business (Blockbuster -> Netflix).

This is exactly what Russia is - Putin holds all the power, and whenever a new company comes up that does things in an innovative way, forcing incumbents out - like Yandex, VK, Tinkoff and many others - he'd either buy it out for cheap (Yandex) or, if that's not successful, threaten the CEO, publicly defame him on state TV and force him out of the country, forcing him to sell for pennies (VK, Tinkoff). All of these companies now belong to Putin's friends via one scheme or another.

And when you look at export data by country, you wonder how, despite such a massive stream of wealth from oil and gas, Russians have the worst quality of life in Europe (tied with Ukraine and Belarus). Many countries have nothing and yet enjoy a significantly better quality of life (Estonia, Singapore, etc.).

Basically, if you look at a country where some guy/girl who was nobody was allowed to force a powerful corporation out through their innovation and ingenuity - that's a good sign that democracy is working there.

Of course it's not black and white; it's a spectrum. If we look at any society decades or hundreds of years ago, their best societies would look far worse than most today, and their worst societies would be far worse than North Korea today.

Still, it's obvious that more democracy means more progress, faster innovation, better quality of life and reduced power of the wealthy.


gosu_link0 t1_j98ae0y wrote

Except DeepMind and Demis were the ones who invented the technology behind ChatGPT and made it available for free for others like OpenAI to copy.

Literally the opposite of "keeping it for himself"


ReignOfKaos t1_j99o0pe wrote

Bit of a nitpick but Transformers were invented by Google Brain, not DeepMind.


Anen-o-me t1_j96pgnq wrote

OpenAI also refuses to release GPT3 so they're also violating our trust.


ipatimo t1_j96fneh wrote

When papers on nuclear topics stopped being published, the Soviet Union understood that the USA was close to creating a nuclear bomb.


hydraofwar t1_j96r7mt wrote

Lol, don't create that hype, friend, but maybe you're right.


nomorsecrets t1_j97ykqd wrote

Nukes are the only thing I can think of to compare it to, even though I know the comparison doesn't fully hold.
Nuclear capability for every man, woman and child.

The threat of Mutually Assured Destruction will not hold up on a grand scale.


IluvBsissa t1_j96ox3l wrote

Hol' up.


Agreeable_Bid7037 t1_j9720op wrote

It's likely DeepMind is ahead with language models and is simply not saying anything.


Aggravating-Act-1092 t1_j97jrcu wrote

Yeah, this. It seems unlikely that DeepMind is behind OAI from a science perspective. OAI has done more/better at monetising LLMs, but between Sparrow, Chinchilla, Gato and Flamingo, DeepMind definitely appears to have a good grasp.

As mentioned already, Demis said they would be cutting back on publications, what we are seeing is just that.


Superschlenz t1_j98leen wrote

>It seems unlikely that DeepMind is behind OAI from a science perspective

So it seems unlikely that Alphabet is not just pouring another $10B into DeepMind as Microsoft did with OpenAI?

Hahaha, just kidding. The people at DeepMind are so much more intelligent than the people at OpenAI, they can run all the new models perfectly inside their heads and don't need massive compute to verify and fix their buggy ideas (or hire a load of paid workers for RLHF).


Aggravating-Act-1092 t1_j99j61t wrote

Huh? Of course Google is pouring money into DeepMind, most likely in similar quantities to MS to OAI.

Where did you derive that statement from?


prodoosh t1_j9a3ruk wrote

Google is a publicly traded company with open balance sheets. You don’t have to speculate my man


Aggravating-Act-1092 t1_j9bk0x0 wrote

That’s a good point. Their accounts for 2021 are here:

That works out at just under 2 billion USD in 2021. Given their own and the industry trend we can probably assume 2022 is higher and 2023 will be higher still.

OAI gave no timeline over which their $10B injection will be spent, but presumably more than 1 or 2 years. So these two are definitely in the same league.


NTIASAAHMLGTTUD t1_j971f6z wrote

Sounds interesting, source?


glaster t1_j978o82 wrote

Trust me, bro. (A quick answer by AI follows.)

During World War II, the US government formed the Manhattan Project, a top-secret research program dedicated to developing the world's first nuclear weapons. The papers published during this time on nuclear topics were often focused on the technical details of creating and using nuclear fission for military purposes.

One of the most important papers from this period was "The Production of Radioactive Substances by Neutron Bombardment" by Glenn T. Seaborg and Arthur C. Wahl. Published in 1945, this paper described the discovery and isolation of several new elements through neutron bombardment, including plutonium, which would later be used in the construction of the atomic bomb.

Another key paper from this time was "Theoretical Possibility of a Nuclear Bomb" by Edward Teller. This paper explored the feasibility of creating a nuclear bomb and outlined the basic principles behind its design.

Other papers from this period focused on the design and construction of nuclear reactors, such as "The Thermal Neutron in Reactors" by Enrico Fermi and "Nuclear Chain Reaction in Uranium and Thorium" by Eugene Wigner. These papers helped lay the foundation for the development of nuclear power.

However, not all papers from this time were focused solely on technical details. Some also explored the ethical implications of using nuclear weapons in warfare. One such paper was "The Social Responsibilities of the Scientist" by James Franck, which called on scientists to consider the potential consequences of their research and to take an active role in promoting peace.

Overall, the papers published on nuclear topics during the Manhattan Project were instrumental in advancing our understanding of nuclear science and technology, and in shaping the world we live in today.


bass6c t1_j96dhu0 wrote

Most of the technologies being used by OpenAI are from either Google or DeepMind. The Transformer, instruction fine-tuning, etc. came from Google Brain. OpenAI's recent success comes at a heavy cost for the AI community. Companies such as Google, Meta and Amazon will most likely stop publishing influential papers.


FusionRocketsPlease t1_j96um3f wrote

>OpenAI's recent success comes at a heavy cost for the AI community. Companies such as Google, Meta and Amazon will most likely stop publishing influential papers.

What the hell did they expect?


Utoko t1_j97767s wrote

Hm, they probably expected the non-profit "Open"AI not to completely switch around, stop publishing papers, and become a for-profit company. (In the past, that hasn't usually been the norm or the purpose of a nonprofit.)

We had a couple of years where the corporations somehow realized that sharing their research advanced progress a lot faster. OpenAI will get the companies back to protectionism.


visarga t1_j97eebr wrote

> OpenAI will get the companies back to protectionism.

Now that's a 180.


bass6c t1_j976we4 wrote

Collaborative Work and openness drive progress.


FirstOrderCat t1_j97i6py wrote

> Most of the technologies being used by openai are either from Google or from Deepmind.

It's just an indication that Google and DeepMind create theoretical concepts but can't execute them into a complete product.


bass6c t1_j97jd0e wrote

As if Google or DeepMind doesn't have, or couldn't build, models like OpenAI's. As of now, Google probably holds the most powerful language model in the world. PaLM beats GPT models in every major benchmark. I'm not even talking about U-PaLM or Flan-PaLM (more advanced versions of PaLM).


FirstOrderCat t1_j97jovv wrote

> PaLM beats GPT models in every major benchmark.

PaLM is much larger, which makes it harder to run in production serving many users' requests, so it's an example of an enormous waste of resources.

Also, current NLP benchmarks are not reliable, simply because models can be pretrained on them and you can't verify this.


bass6c t1_j97mvsk wrote

This was a reply to your comment stating that Google can't convert theoretical concepts into an actual product. That's not the case. The thing is, Google isn't interested in shipping a costly LLM only to hurt its own business. It's not that they can't; it's that they won't.


FirstOrderCat t1_j97o5hw wrote

I think my point still stands:

- Google hasn't shipped an LLM as a product yet, and is now forced to catch up because it lost the innovation race (even though you think they're just not interested, lol)

- OpenAI has shipped multiple generations of LLM products already


thesofakillers t1_j9al7xs wrote

The RLHF paper from Christiano et al. in 2017 was a DeepMind-OpenAI collab


Villad_rock t1_j96njv3 wrote

Doesn’t Microsoft do any AI research?


PeedLearning t1_j97x4yf wrote

No, basically nothing in terms of fundamental research.


DukkyDrake t1_j984g11 wrote

A statement from extreme ignorance of reality.


MrEloi t1_j96kdix wrote

They are in a deep sulk about OpenAI getting all the kudos and publicity.

On top of that, they are getting beaten up by Alphabet to produce something which looks good in the media.

Their main task recently has been to throw mud at OpenAI and ChatGPT.
I suppose they want to slow them down with "concerns about safety" whilst Google tries to duct tape its AI systems into a working chat system.

OpenAI's very successful launch of ChatGPT seems to have upset quite a lot of others in the AI sector .. especially those who are usually in the media spotlight.

All that said, it now seems that OpenAI have succumbed to external pressures and have been brought back into line. They have delayed the release of GPT-4 "on safety grounds".

They are also now suggesting that AI systems, hardware, training, models etc should be regulated .. again for "safety".

Being a cynic, I think that OpenAI, Google (and the US government?) have done a deal. They will retain control of the AI platforms, thus becoming a duopoly.

Startups etc will be encouraged - but will of course have to source their AI power from the big boys.

Open Source etc AI systems will be blocked .. due to "safety issues".

High power AI GPUs will only be available to the big boys.

Getty Images, Shutterstock and the like will do licensing deals with the duopoly .. but Open Source systems will be sued for Copyright infringement.

The US government will be happy with all this : they can control the AI systems if required.

Anyway, that's the way I see things turning out.


visarga t1_j97eu47 wrote

This whole elaborate scheme falls down in 3 months, when we get a small-scale, open-sourced ChatGPT model from Stability or others. There are many people working on reproducing the dataset, code and models.


MrEloi t1_j97jzfv wrote

And suppose all parts of such systems and related activities are declared illegal - or even terrorist devices?

The media are in the government's and big corporations' pockets .. I can just imagine the steady propaganda against "dangerous private AI" they could pump out.


YobaiYamete t1_j97rvdp wrote

> Being a cynic, I think that OpenAI, Google (and the US government?) have done a deal. They will retain control of the AI platforms, thus becoming a duopoly.

Lol people say these things and don't realize that Amazon and Apple and Nvidia and the other big companies also have their own AI in the works, as well as, y'know, countries outside the US

The genie is out of the bottle, there's zero chance just two or three companies will get to keep it. Every billionaire worth their salt is focusing heavily on the AI field right now


MrEloi t1_j97x9gk wrote

>The genie is out of the bottle, there's zero chance just two or three companies will get to keep it. Every billionaire worth their salt is focusing heavily on the AI field right now

Agreed .. but these big firms will all do their darndest to 'tax' the population's use of AI.


tangent26_18 t1_j98188y wrote

What good is the current AI? For the private knowledge of curious citizens, and for education. Well, the government basically owns education, and the media will control how citizens are informed. I see this as an opportunity to bottleneck all of our new knowledge generation even more than the university specialization system already does. We will all be informed by a centralized monopoly of knowledge owners. This will lead to a monopoly on our past, present and future. There may be fewer wars, except between West and East…the final conflict. If we can all live globally in peace, then our survival will depend on this centralized knowledge center.


imlaggingsobad t1_j97tpc5 wrote

i don't think this has happened yet, but certainly will in the future


paulyivgotsomething t1_j960jno wrote

OpenAI is reading their papers, then implementing and distributing the resulting models. I think they were unhappy about that and stopped sharing. Oh yeah, and OpenAI is getting rich off the work of others while destroying the parent company that paid for the research. So I don't think you will be seeing anything for a while.


BobbyWOWO OP t1_j96119o wrote

Well, I would argue that most companies take the work of others to build products. Apple didn’t invent hard drives, monitors, CPUs or operating systems; they just implemented them in an innovative way. DeepMind at its core is a research company… they had to expect that others would use their science to build products.


TampaBai t1_j98pfta wrote

Yes, this fiasco reminds me of Steve Jobs stealing the intellectual property of the GUI from Xerox. Xerox was sitting on perfectly implementable technology but didn't seem to think there was any need for ordinary consumers to use such an interface. Jobs evidently never signed any kind of confidentiality agreement, as Xerox assumed his intentions for touring the facility were educational. Soon thereafter, Jobs pilfered Xerox's technology -- and the rest is history, as we are all accustomed to using what became the "mouse". I hope there are others who, like Jobs, will do what it takes to get this tech into all of our hands as soon as possible.


[deleted] t1_j96dk9j wrote



Stakbrok t1_j9717f6 wrote

Here is a free guitar, drumset and microphone. 😊


Wait, why the hell did you make a pop song and get #1 on the billboards? 🤬🤬🤬


YobaiYamete t1_j97sau0 wrote

More like "Here's exactly how to make a car that can run for 200,000 miles on one drop of water. I'm not going to make it though, because won't someone think of the poor oil barons?"


"ZOMG!!!! Someone made the car and is selling it for billions???"

It's baffling that Google has sat on the tech for so long, and it's fully justified that an upstart is now castrating them after actually using it.


EasternBeyond t1_j98p2rv wrote

Google doesn't want to cannibalize its own business, which is search advertising. They probably realized early on that LLMs would compete with their main profit generator, so they decided not to allocate significant capital to making them a publicly available product.


nomorsecrets t1_j9ad5fu wrote

Yeah well too bad. They don't have a choice in the matter and that should be crystal clear to them.
They have no one to blame but themselves.
Cannibalization and adaptation is their one and only move.


gosu_link0 t1_j98ava1 wrote

OpenAI has turned into ClosedAI


nomorsecrets t1_j9adikm wrote

ClosedAI also opened Pandora's box, so they do deserve some credit for getting the ball rolling.

Now it will likely be on the next OpenAI to release the next milestone advancement, unless OpenAI can strike twice with GPT-4 and move the medium forward past the ChatGPT-plus-web-search capability of the new Bing.

I have no faith in Google doing it, if that isn't clear. I hope they prove me wrong though.


Redditing-Dutchman t1_j966dak wrote

At the beginning of this year, DeepMind laid off a big part of its staff (just like many tech companies). The DeepMind research facility in Edmonton was even closed completely.

Could be that progress has actually slowed or even halted because of this, or they kept the core team and fired people like the blog writer, for example.


TFenrir t1_j968cot wrote

They only really laid off operational staff, and closed their Edmonton office. All the Edmonton engineers were offered roles in Toronto/Montreal offices.


Redditing-Dutchman t1_j96t4el wrote

Ok, that's good to hear. So it wasn't as bad as I feared.


uishax t1_j98a46e wrote

It sounds extremely dumb to fire your AI engineers at the beginning of 2023, when it's plainly obvious the AI tsunami is about to hit. They've been employed for a decade now, through years when AI produced no economic returns, so there's no point laying them off now.


lehcarfugu t1_j95ytw9 wrote

Google is realizing how disruptive chatbots are to its business model. They may want to stifle innovation until they have a gun to their head and are forced to release (see Bard).


vivehelpme t1_j96uht6 wrote

>to its business model.

Google's business model seems to be sitting around doing nothing


visarga t1_j97ffuy wrote

> Google's business model seems to be sitting around doing nothing

They are making record profits. Look at the charts.


TFenrir t1_j960l6m wrote

? Can you clarify what you mean by stifle innovation?


lehcarfugu t1_j9639ur wrote

Well, it appears that previous inventions they open-sourced are going to hurt their bottom line. The Transformer came from Google, and most of what you're seeing now stems from Google.

It might be in their best interest to stop open-sourcing stuff that will only benefit their competition.


TFenrir t1_j96815w wrote

Ah I get you. Yeah, here's the complicated thing though - Google generally provides the most valuable AI research every year, especially if you include DeepMind.

If suddenly they decide that it's more important to be... Let's say cautious, about what papers they release, what impact is that going to have? Are other companies going to step up and provide more research, or are they all going to be more cautious about sharing their findings?


blueSGL t1_j96kgc1 wrote

It's all fine and good being a benevolent company that decides it's going to fund (but not release) research.

Are the people actually developing this research going to be happy grinding away at problems at a company without anything they've created being shared?

And watching another research institute gain kudos for something they'd already created 6 months to a year prior, but which is locked in the Google vault?


TFenrir t1_j96l9m3 wrote

Yeah I think this is already playing out to some degree, with some attrition from Google Brain to OpenAI.

I don't know how much is just... Normal poaching and attrition, and how much is related to different ideologies, but I think Google will have to pivot significantly to prevent something more substantial happening to their greatest asset.


BobbyWOWO OP t1_j95z8va wrote

I would hope that other products and models would still be available to the public - things that wouldn’t take from Google’s business model like AlphaFold


hold_my_fish t1_j99gwpn wrote

Wow, what you've noticed about DeepMind's blog is quite striking. To have a two-months-and-counting blackout there is strange.


DukkyDrake t1_j98doqz wrote

We're transitioning to the monetization phase of the journey. This is where we start building out AI services for any and everything under the sun that can generate enough revenue. By the turn of the decade, after all these distributed AI services have permeated all of human society, they will collectively be viewed as an AGI.


nillouise t1_j96sr78 wrote

I am also curious about this, but IMO using AI to advance science is the wrong tech route. Anyway, if DeepMind keeps silent, they had better be making something big instead of just losing the game.


TemetN t1_j97ctj9 wrote

Honestly, in contrast with a lot of people here I'm less certain this was against OpenAI specifically, but that's partially because OpenAI promptly went and said they were going to do the same thing. If anything, I'm more unnerved that it's a general movement away from sharing research - and we've seen the damage this song and dance does before. Frankly I'm disgusted with both OpenAI and DeepMind at this point.


helpskinissues t1_j9730cv wrote

DeepMind is the worst enemy of Google. Most people seem to think it's just a coincidence that Google AI competes with DeepMind. No. Google is consciously moving money from DeepMind to Google AI, because DeepMind is against the corporate mindset.


NutInBobby t1_j97rwks wrote

Deepmind is owned by Google.


helpskinissues t1_j97t384 wrote

DeepMind (Demis) is against corporate approaches. Google bought DeepMind, and Demis later regretted that transaction. They're in a tense relationship, which explains why in recent years Alphabet has heavily invested in Google AI to separate itself from DeepMind. Anyone who follows AI news closely would know that Google ignores most DeepMind news. They don't even tweet about DeepMind's progress, yet they tweet everything about Google AI.

They have two LLMs (LaMDA and Sparrow), and the one that's going to be released in Google products is LaMDA, not Sparrow (DeepMind's). DeepMind is a rebel inner research team inside Google. I wouldn't even say they're inside Google; they're not even in the same country.


ChipsAhoiMcCoy t1_j983ecw wrote

I thought Google recently came out and said they were no longer going to share academic progress publicly?