Comments


Dr_Love2-14 t1_j7lxp7k wrote

Leader in the space?... It is starting to irk me to see so many articles and discussions about this "AI war" between OpenAI and Google and their respective chatbots. OpenAI's main chatbot is GPT3, Google has LaMDA among many others. One thing for sure, they are both large and perform differently depending on the metric used.

Companies such as Facebook, Google, NVIDIA, and Chinese ones like Baidu, etc. all heavily invest in AI research. The contributions of these research scientists, nationally and worldwide, are all noteworthy and build on each other. Google employs far more research scientists than OpenAI, so the volume of its ML publications, and their collective impact, is greater. DeepMind, an AI research subsidiary of Google, has been a leader in AI research and deep learning for many years.

But to directly answer your question, and for what it's worth, I would say NASA is the leader in space. Honestly, your question is vague and poorly defined, and you shouldn't equate chatbots with their companies.

32

wonderingandthinking OP t1_j7pbiy7 wrote

I left it purposefully vague to increase the chances of getting an answer like this. Thanks for the info. And don't underestimate or undervalue something that appears not well thought out or developed.

2

ElectroNight t1_j7poggj wrote

Meh, the size of a research team does not strongly correlate with outcome quality or innovation. Furthermore, bulky teams can reinforce momentum on a certain approach that turns into a dead end in the long term. Meanwhile, small teams elsewhere start from a completely orthogonal approach and sometimes truly innovate. I'm not convinced Google has the right approach for the long term, organizationally or technically. Not saying ChatGPT is a Google killer either, yet.

0

impossiblefork t1_j7mtzzy wrote

I doubt it. Research teams associated with these companies are not known for any important novelties.

They're probably mostly special because they know how to train large transformer architectures and have the resources to do so.

6

MrEloi t1_j7mbb7y wrote

Does it matter?

The situation is so busy and so fluid ... and shrouded too ... that we can have no real idea.

Also, the situation could be totally different in a year or so.

1

wonderingandthinking OP t1_j7mbx3l wrote

As a way of being exposed to other players in the field, it does matter. Some of the best and most effective examples may be nestled away under someone less known, or someone relatively known who just isn't getting the press that only the most obvious examples are currently getting.

Edit - typo

1

gamerx88 t1_j7smwbb wrote

Leader in what space, and in what sense? Fundamental research? Innovation? Market share for LLMs? Hype?

1

Fast_Goat_9613 t1_j7m51kj wrote

Wu Dao 2 seems like a total beast 🤯

−8

Zetus t1_j7ls05s wrote

Perhaps in America, but worldwide, you may want to check out Wu Dao 2.0.

Beyond the current state of the art.

−9

farmingvillein t1_j7n41tv wrote

There seems to be basically zero info about Wu Dao 2, which makes it hard to take it seriously as SOTA.

10

BrotherAmazing t1_j7nt1za wrote

Possibly, I guess, but how would you or anyone else know? Wu Dao 2 is like a mythical beast, like the Loch Ness Monster, that we catch blurry glimpses of and that’s it.

Also, even supposing Wu Dao 2 is SOTA despite no one being able to confirm it (trust me bro!), the problem is that it was trained by copying what Google and OpenAI had published and trying to scale it up. I'm not sure I would call that a "leader in the space" if you have no clue how to make any innovations yourself, so you wait for someone else to publish an innovation, then copy it and try to scale it up.

6