Submitted by minimaxir t3_11fbccz in MachineLearning
JackBlemming t1_jaisvp4 wrote
Reply to comment by Educational-Net303 in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
Definitely. This is so they can become entrenched and collect massive amounts of data. It also discourages competition, since smaller players won't be able to compete against these artificially low prices. This is not good for the community. It would be equivalent to opening a restaurant and giving away food for free, then jacking up prices once the adjacent restaurants go bankrupt. OpenAI are not good guys.
I will rescind my comment and personally apologize if they release the ChatGPT code, but we all know that will never happen unless they have a better product lined up.
jturp-sc t1_jaj45ek wrote
The entry costs have always been so high that LLMs as a service was going to be a winner-take-most marketplace.
I think the best hope is to see other major players enter the space either commercially or as FOSS. I think the former is more likely, and I was really hoping that we would see PaLM on GCP or even something crazier like a Meta-Amazon partnership for LLaMa on AWS.
Unfortunately, I don't think any of those orgs will pivot fast enough until some damage is done.
badabummbadabing t1_jajdjmr wrote
Honestly, I have become a lot more optimistic regarding the prospect of monopolies in this space.
When we were still in the 'just add even more parameters' phase, the future did seem to be headed toward monopoly. With Chinchilla scaling (and looking at the results of e.g. LLaMA), things look quite a bit more optimistic. Consider that ChatGPT is reportedly much lighter than GPT-3. At some point, the availability of data will be the bottleneck (which is where an early entry into the market can confer an advantage in collecting said data), whereas compute will just keep getting cheaper.
The training costs lie in the low millions (10M was the cited number for GPT3), which is a joke compared to the startup costs of many, many industries. So while this won't be something that anyone can train, I think it's more likely that there will be a few big players (rather than a single one) going forward.
I think one big question is whether OpenAI can leverage user interaction for training purposes -- if that is the case, they can gain an advantage that will be much harder to catch up to.
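The "low millions" figure is consistent with a simple Chinchilla-style back-of-envelope calculation, using the common approximations that training compute is about 6·N·D FLOPs and that the compute-optimal data budget is roughly 20 tokens per parameter. A minimal sketch (the price per PFLOP/s-day is an assumed, illustrative number, not anyone's actual cloud bill):

```python
# Back-of-envelope Chinchilla-style training cost estimate.
# Approximations: training compute C ~= 6 * N * D FLOPs, and the
# compute-optimal token count D ~= 20 tokens per parameter.
# The dollar figure per PFLOP/s-day below is an illustrative assumption.

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Compute-optimal training tokens: roughly 20 per parameter."""
    return 20.0 * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: C ~= 6 * N * D FLOPs."""
    return 6.0 * n_params * n_tokens

def training_cost_usd(n_params: float, usd_per_pflops_day: float = 500.0) -> float:
    """Rough dollar cost, given an assumed price per PFLOP/s-day."""
    flops = training_flops(n_params, chinchilla_optimal_tokens(n_params))
    pflops_days = flops / (1e15 * 86_400)
    return pflops_days * usd_per_pflops_day

# Example: a 70B-parameter model trained compute-optimally lands in
# the low millions of dollars under these assumptions.
print(f"{training_flops(70e9, chinchilla_optimal_tokens(70e9)):.2e} FLOPs")
print(f"~${training_cost_usd(70e9):,.0f}")
```

Under these assumptions a compute-optimal 70B model costs a few million dollars of raw compute, which supports the "few big players rather than a single one" view.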
farmingvillein t1_jajw0yj wrote
> The training costs lie in the low millions (10M was the cited number for GPT3), which is a joke compared to the startup costs of many, many industries. So while this won't be something that anyone can train, I think it's more likely that there will be a few big players (rather than a single one) going forward.
Yeah, I think there are two big additional unknowns here:
-
How hard is it to optimize inference costs? If--for the sake of argument--$100M of engineering could drop your inference unit costs by 10x, that could end up being a very large and very hidden barrier to entry.
-
How much will SOTA LLMs really cost to train in, say, 1-2-3 years? And how much will SOTA matter?
The current generation will, presumably, get cheaper and easier to train.
But if it turns out that, say, multimodal training at scale is critical to leveling up performance across all modes, that could jack up training costs really, really quickly--e.g., think the costs to suck down and train against a large subset of public video. Potentially layer in synthetic data from agents exploring worlds (basically, videogames...), as well.
Now, it could be that the incremental gains to, say, language are not that high--in which case the LLM (at least as these models exist right now) business probably heavily commoditizes over the next few years.
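On the first unknown, a similarly rough model of inference unit cost makes the point concrete: cost per token is FLOPs per token divided by effective hardware throughput, times the hardware's dollar rate. All hardware numbers below are illustrative assumptions:

```python
# Rough per-token serving cost. A decoder forward pass costs about
# 2 * N FLOPs per generated token (N = parameter count); dividing by
# the hardware's effective FLOP/s and multiplying by its dollar rate
# gives cost per token. Hardware numbers are illustrative assumptions.

def cost_per_1k_tokens(n_params: float,
                       hw_flops: float,       # peak accelerator FLOP/s
                       utilization: float,    # fraction of peak achieved
                       hw_usd_per_hour: float) -> float:
    flops_per_token = 2.0 * n_params
    tokens_per_second = hw_flops * utilization / flops_per_token
    usd_per_second = hw_usd_per_hour / 3600.0
    return 1000.0 * usd_per_second / tokens_per_second

# A hypothetical 13B-parameter model on a single accelerator:
# 300 TFLOP/s peak, 30% utilization, $2/hour.
print(f"${cost_per_1k_tokens(13e9, 300e12, 0.30, 2.0):.5f} per 1k tokens")
```

The lever is visible in the formula: better utilization (batching, quantization, kernel work) or cheaper hardware divides straight into the unit cost, and that engineering spend is exactly the kind of hidden moat described above.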
Derpy_Snout t1_jajfxrw wrote
> This would be equivalent to opening up a restaurant and giving away food for free, then jacking up prices when the adjacent restaurants go bankrupt.
The good old Walmart strategy
VertexMachine t1_jajjq8b wrote
Yeah, but one thing is not adding up: it's not like I can go to a competitor and get access to an API of similar quality.
Plus, if it's a price war... with Google... that would be stupid. Even with Microsoft's money, Alphabet Inc. is not someone you want to fight a price war with.
Also, they updated their policies on using user data, so the data-gathering argument doesn't seem valid either (if you trust them).
Edit: ah, btw, I'm not saying there is no ulterior motive here. I don't really trust "Open"AI since the "GPT-2-is-too-dangerous-to-release" BS (and the corporate restructuring). Just that I don't think it's that simple.
farmingvillein t1_jajtmly wrote
> Plus if it's a price war... with Google.. that would be stupid
If it is a price war strategy...my guess is that they're not worried about Google.
Or, put another way: if it ends up as Google versus OpenAI, OpenAI is pretty happy with the resulting duopoly. Crushing everyone else in the womb, though, would be valuable.
astrange t1_jajpps3 wrote
"They're just gathering data" is literally never true. That kind of data isn't good for anything.
TrueBirch t1_jakosce wrote
I worked in adtech. It's often true.
Purplekeyboard t1_jajcnb5 wrote
> This is not good for the community.
When GPT-3 first came out and prices were posted, everyone complained about how expensive it was, and that it was prohibitively expensive for a lot of uses. Now it's too cheap? What is the acceptable price range?
JackBlemming t1_jajg4dz wrote
It's not about the price, it's about the strategy. The Google Maps API was dirt cheap, so nobody competed; then they cranked up prices 1400% once they had years of advantage and market lock-in. That's not ok.
If OpenAI keeps prices stable, nobody will complain, but this is likely a market-capture play. They even said they were losing money on every request, though maybe that's no longer true.
Beli_Mawrr t1_jajvgax wrote
I use the API as a dev. I can say that if Bard works anything like OpenAI, it will be super easy to switch.
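The switching cost mostly depends on how thin your abstraction layer is. A minimal sketch of a provider-agnostic chat interface (the backends here are hypothetical stand-ins, not real SDK calls):

```python
# Minimal provider-agnostic chat interface. The backends below are
# hypothetical stand-ins; a real adapter would wrap each vendor's SDK.
from dataclasses import dataclass
from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": "user"|"assistant"|"system", "content": "..."}

@dataclass
class ChatClient:
    backend: Callable[[List[Message]], str]  # vendor-specific call goes here

    def chat(self, messages: List[Message]) -> str:
        return self.backend(messages)

def openai_backend(messages: List[Message]) -> str:
    # a real adapter would call the OpenAI SDK here
    return "reply from OpenAI"

def bard_backend(messages: List[Message]) -> str:
    # a real adapter would call a Google API here
    return "reply from Bard"

client = ChatClient(backend=openai_backend)
print(client.chat([{"role": "user", "content": "hello"}]))
client.backend = bard_backend  # one-line provider switch
print(client.chat([{"role": "user", "content": "hello"}]))
```

As long as application code only talks to the wrapper, swapping providers is a one-line change, which limits how much lock-in the API itself can create.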
bmc2 t1_jajjjvd wrote
Training on submitted data is going to be curtailed, according to their announcement:
“Data submitted through the API is no longer used for service improvements (including model training) unless the organization opts in”