VelveteenAmbush t1_jefpifg wrote

The biggest issue is programming, not privacy.

Should we have allowed the USSR to operate a major television broadcasting network in the US at the height of the Cold War?

The GDPR has nothing to do with that concern.

Anyway, "citizens can decide for themselves" is not how we usually handle trade disputes. If Country X tariffs or bans our widgets, we usually respond by tariffing or banning their doohickeys. It isn't up to our citizens to decide for themselves whether to use Country X's doohickeys.

0

VelveteenAmbush t1_jce5y2v wrote

If only it were like paying philosophers. More often it is like paying anti-corporate activists to sit inside the corporation and cause trouble. There's no incentive for them to stay focused on things that are actually unethical -- nor is there any agreement on what those things are. So they have a structural incentive to complain and block, because that is how they demonstrate impact and accrue power.

5

VelveteenAmbush t1_jcdxc8v wrote

Maybe you're onto something.

I guess the trick is coming up with foundational patents that can't be traced back to a large tech company that would worry about being countersued. If you make these inventions at Google and Google contributes them to the GPL-esque patent-enforcer entity, and that entity then starts suing other tech co's, you can bet those tech co's will start asserting their own patents against Google -- so Google, anticipating that, likely wouldn't be willing to contribute the patents in the first place.

Also patent litigation is really expensive, and you have to prove damages.

But maybe I'm just reaching to find problems at this point. It's not a crazy idea.

5

VelveteenAmbush t1_jcd6opg wrote

You could patent your algorithm and offer some sort of GPL-like patent license, but no one respects software patents anyway (for good reason IMO) and you'd be viewed as a patent troll if you tried to sue to enforce it.

GPL itself is a copyright license and does you no good if OpenAI is using your ideas but not your code. (Plus you'd actually want AGPL to force code release for an API-gated service, but that's a separate issue.)

8

VelveteenAmbush t1_jccksp9 wrote

Right, Google's use of this whole field has been limited to optimizing existing products. As far as I know, after all their billions in investment, it hasn't driven the launch of a single new product. And the viscerally exciting stuff -- what we're calling "generative AI" these days -- never saw the light of day from inside Google in any form except, arguably, Gmail's suggested replies and the occasional sentence-completion suggestion.

> it's a different mode of launching with higher risks, many of which have different risk profiles for Google-scale big tech than it does for OpenAI

This is textbook innovator's dilemma. I largely agree with the summary, but I think basically the whole job of Google's leadership boils down to two things: (1) keep the good times rolling, and (2) stay nimble and avoid getting disrupted by the next thing. On the second point, they failed... or at least they're a lot closer to failure than they should be.

> Example: ChatGPT would tell you how to cook meth when it first came out, and people loved it. Google got a tiny fact about JWST semi-wrong in one tiny sub-bullet of a Bard example, got widely panned and lost $100B+ in market value.

Common narrative, but I think the real reason Google's market cap tanked at the Bard announcement is two other things: (1) they showed their hand, and it turns out they don't have a miraculous ChatGPT-killer up their sleeves after all, and (2) the cost structure of LLM-driven search results is much worse than that of classical search tech, so Google is going to be less profitable in that world.

Tech journalists love to freak out about everything, including LLM hallucinations, bias, toxic output, etc., because they get paid based on engagement -- but I absolutely don't believe that stuff actually matters, and OpenAI's success is proving it. Google's mistake was putting too much stock in the noise that tech journalists create.

9

VelveteenAmbush t1_jcc4mvf wrote

Transformers aren't products; they're technology. Search, Maps, Ads, Translation, etc. -- those were the products. Those products had their own business models and competitive moats that had nothing to do with the technical details of the transformer.

Whereas GPT-4 is the product. Access to it is what OpenAI is selling, and its proprietary technology is the only thing that prevents others from commoditizing it. They'd be crazy to open up those secrets.

−4

VelveteenAmbush t1_jcbw6mx wrote

DeepMind's leaders would love to hoard their secrets. The reason they don't is that it would make them a dead end for the careers of their research scientists -- because aside from the occasional public spectacle (AlphaGo vs. Lee Sedol) nothing would ever see the light of day. If they stopped publishing, they'd hemorrhage talent and die.

OpenAI doesn't have this dilemma because they actually commercialize their cutting-edge research. Commercializing the research makes its capabilities apparent to everyone, and being involved in its creation advances your career even without a paper on arXiv.

61

VelveteenAmbush t1_jcbv0rs wrote

GPT-4 is an actual commercial product though. AlphaGo was just a research project. No sane company is going to treat the proprietary technological innovations at the core of their commercial strategy as an intellectual commons. It's like asking them to give away the keys to the kingdom.

−2

VelveteenAmbush t1_jcbu8nr wrote

> While they also potentially don't release every model (see Google's PaLM, LaMDA) or only with non-commercial licenses after request (see Meta's OPT, LLaMA), they are at least very transparent when it comes to ideas, architectures, trainings, and so on.

They do this because they don't ship. If you're a research scientist or ML research engineer, publication is the only way to advance your career at a company like that. Nothing else would ever see the light of day. It's basically a better-funded version of academia, because it doesn't seem to be set up to actually create and ship products.

Whereas if you can say "worked at OpenAI from 2018-2023, team of 5 researchers that built GPT-4 architecture" or whatever, that speaks for itself. The products you release and the role you had on the teams that built them are enough to build a resume -- and probably a more valuable resume at that.

14