
BCBCC t1_j6x9axw wrote

I know what the Pareto principle is, and I don't think 20% of users will pay this subscription fee, that's a pretty wild assumption

22

TrevorIRL t1_j6xhuvh wrote

You're right, it was just some quick napkin math; I'm not saying it's guaranteed.

I would say, however, that even if only 10% of users paid, you're still at $20 000 000.

5% is still $10 000 000.
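For reference, a minimal Python sketch of that napkin math; the ~10 million active-user base is an assumption backed out of the quoted totals, not a confirmed figure:

```python
# Napkin math behind the figures above (monthly revenue).
# ASSUMED_USERS is inferred from the quoted totals, not a confirmed number.
MONTHLY_PRICE = 20            # USD per subscriber per month
ASSUMED_USERS = 10_000_000    # assumed active user base

for conversion in (0.20, 0.10, 0.05):
    revenue = ASSUMED_USERS * conversion * MONTHLY_PRICE
    print(f"{conversion:.0%} paying -> ${revenue:,.0f} per month")
```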

Imagine having a product better than Google, being able to improve productivity and save hours in your business, and not having to fear that too many people are using it when you need it most.

I guarantee we'll see more than 5% of users willing to shell out $20/mo for this.

Edit: This is also a product that’s going to continue to get better over time!

2

ResetThePlayClock t1_j6xkc53 wrote

I agree with this take. It’s already gotten me out of several jams at work, and it is DEFINITELY better than google.

5

arhetorical t1_j6xxijd wrote

$20 is frankly a very reasonable price for anyone who uses it professionally. For people who just use it to generate memes, or students who want to cheat on homework, it's less reasonable, but I don't think that's their target market (and in the case of cheating, something they actually want to avoid).

4

2blazen t1_j6ykrcq wrote

I've been using the GPT-3 API at around 0.4c per request with zero downtime. With my current usage that adds up to around 10c a day, or about $3/month. I don't see how $20 is reasonable.
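As a rough sketch of how that usage-based cost compares (the per-request price and daily request count are ballpark assumptions; the API actually bills per 1k tokens):

```python
# Rough usage-based cost estimate, assuming ~25 requests/day at ~0.4 cents each.
# Both figures are ballpark assumptions chosen to match the ~10c/day estimate.
COST_PER_REQUEST = 0.004   # USD, ~0.4 cents
REQUESTS_PER_DAY = 25      # assumed

daily = COST_PER_REQUEST * REQUESTS_PER_DAY
monthly = daily * 30
print(f"~${daily:.2f}/day, ~${monthly:.2f}/month vs. a $20 flat subscription")
```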

1

CowardlyVelociraptor t1_j7025ho wrote

You're paying a premium for the nice UI

1

2blazen t1_j70vh2g wrote

Might just be me, but I really hate how the reply is returned in the UI. Even if the subscription solves the random interruptions during generation, the word-by-word printing kills me; I'd rather wait a bit and receive my answer in one piece.

1

danielbln t1_j7c9mpc wrote

I much prefer to see the tokens as they're generated; it's much better UX, since you can abort the generation if you feel it's not going in the right direction. All my GPT-3 integrations use stream:true and display every word as it comes in.
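A minimal sketch of that kind of streaming integration, using the pre-v1 openai Python library; the model name and prompt here are assumptions, not part of the original comment:

```python
import os
import openai

# Streaming sketch with the pre-v1 openai Python library:
# stream=True yields partial completions as tokens are generated,
# so the caller can print (or abort) mid-generation.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # assumed model
    prompt="Explain the Pareto principle in one sentence.",
    max_tokens=100,
    stream=True,                # receive tokens as they are generated
)

for event in response:
    print(event.choices[0].text, end="", flush=True)
print()
```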

1

arhetorical t1_j70ndxc wrote

Isn't ChatGPT more advanced than the davinci models available through the API? In any case, the point is that if you use it for work, $20 is negligible compared to the time you'll save.

1

2blazen t1_j70ux9o wrote

I thought so too, but I haven't actually noticed any difference, other than the davinci models not having the extensive content filters.

>if you use it for work, $20 is negligible

If my company pays for it, sure; otherwise I'll always prefer the request-based pricing and a nice API I can just call from my terminal.
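The kind of terminal helper being described might look like the following sketch, again using the pre-v1 openai library; the filename ask.py, model name, and token limit are hypothetical:

```python
#!/usr/bin/env python3
"""Tiny command-line helper: `python ask.py "your prompt here"`."""
import os
import sys
import openai

# Minimal pay-per-request completion call with the pre-v1 openai library.
openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = " ".join(sys.argv[1:]) or sys.stdin.read()

response = openai.Completion.create(
    model="text-davinci-003",   # assumed model
    prompt=prompt,
    max_tokens=256,             # assumed limit
)
print(response.choices[0].text.strip())
```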

1