Submitted by Kaarssteun t3_zz3lwt in singularity

To those with even a slight grasp of LLMs: you might have noticed ChatGPT isn't that big of a deal architecturally speaking. It's an updated version of GPT - GPT-3.5 - fine-tuned on conversational data with RLHF (reinforcement learning from human feedback).
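To give a feel for the RLHF part: the core of it is a reward model trained on human preference pairs. A minimal sketch of that preference loss (illustrative only, not OpenAI's code; numbers are made up):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log(sigmoid(r_chosen - r_rejected)): the pairwise (Bradley-Terry style)
    loss is small when the reward model already ranks the human-preferred
    reply above the rejected one, large when it has them backwards."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model agrees with the human label -> low loss
low = preference_loss(2.0, -1.0)
# Reward model prefers the rejected reply -> high loss
high = preference_loss(-1.0, 2.0)
```

The policy (the chat model) is then tuned to maximize this learned reward, which is what makes the bot feel "aligned" with what people actually wanted.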

Anyone could have had this functionality - a smart chatbot capable of slicing a big chunk off your workload - with a little prompt engineering in OpenAI's playground.

No source for this one, but if I recall correctly, ChatGPT wasn't that big of a project - understandable, given it's not much more than an easy-to-use, pre-prompted interface to GPT-3.5. OpenAI likely did not expect this kind of reaction from the general public, given that their three previous big language models were certainly not talked about on the streets. ChatGPT arriving in the familiar format of a simple chat interface wholly dictated its success.

ChatGPT is officially a research preview - one which subsequently exploded. Instead of collecting human feedback at little extra computational cost, they now face hordes of people sucking the FLOPS out of their vaults for puny tasks, expecting this to remain readily available and free - while the costs for OpenAI are "eye-watering".

OpenAI cannot shut this thing down anymore; the cat's out of the bag. This is of course exciting from an r/singularity user's perspective; Google is scrambling to cling to the reins of every internet user, and AI awareness is higher than it has ever been.

I just can't imagine this was the optimal outcome for OpenAI!




jdmcnair t1_j29hpnz wrote

For all of the FLOPS people are sucking down, OpenAI is getting a fucking massive boost in that RLHF you mention. It may not be paying for itself yet, but it's more than worth the investment for the real-world human training context they're getting.

And when they do decide to close down the public preview and go for a subscription model, lots of people will go for it, because they've already proven out how clearly useful it is.


ftc1234 t1_j2a4gwv wrote

This. The thing that's harder to build than good AI is an early-mover advantage. OpenAI is starting to see that now.


SoylentRox t1_j2adgyv wrote

Yep. I want a premium tier where I can make as many queries as I want and get immediate responses with no cooldowns. I would expect a monthly plan where I get a certain number of queries included and can buy more.

In a few years I would expect my employer to pay for the subscription but in the immediate future I'm happy to do so. I don't ask it to write anything I can't write but it saves all this time.


terrabi t1_j2axjua wrote

I hope it won't be a subscription but pay-as-you-go, just like it already is with gpt3 and dall-e.


AdminsBurnInAFire t1_j2dlkbr wrote

Holy shit do you guys have infinite money? Do you not understand how awful a subscription economy is? You will own nothing and rent forever.


SoylentRox t1_j2ebbo1 wrote

That's fine. Owning stuff is inefficient. I would rather own shares of stocks and rent everything else.


AdminsBurnInAFire t1_j2ecy9p wrote

That’s how you become a digital serf. Owning is always the smart choice.


SoylentRox t1_j2ei4bm wrote

No, it makes you a digital elite.

If you own stock but rent your phone, car, and home, you can move whenever you want and always have the latest car and phone. You benefit from the extra technology.

While I don't actually rent my car or phone, as I don't need either to be the absolute latest, I do rent software, since anything but the most recent version is useless to me.

For AI models it's the same idea.

I have hundreds of thousands, soon to be over 1m in stock. As much 'equity' as an extremely lucky homeowner.


AdminsBurnInAFire t1_j2ej85m wrote

No, the digital elite all have their possessions secured with a purchase, often multiple purchases, because they’re not foolish.

What you do not own, can always be taken from you. You don’t need to worry (too much) about your software being taken from you but you do need to worry about your house being taken from you. The only argument for renting that can be taken seriously is convenience and security always trumps convenience. The same thing for stocks, if Wall Street fucks up one day and says your stocks are worth nothing, what can you do? Meanwhile if the bank comes for your house, you have a bill of ownership protecting your rights.


SoylentRox t1_j2ek52w wrote

>What you do not own, can always be taken from you. You don’t need to worry (too much) about your software being taken from you but you do need to worry about your house being taken from you.

This is not a problem if you have money. Just go rent something else. Also if your landlord decides to go through the eviction procedure, there is no ASSET for you to lose.

If you own a house, and a judge decides to order it seized in a civil action (like a divorce or lawsuit), or your corrupt HOA makes up some fines of arbitrary scale and then sues you and seizes it if you can't pay, you lose the EQUITY.

I'd rather have all my assets in stock, and borrow against it if I have a need for money fast when the market is low.


PanicV2 t1_j2eiweq wrote

So what's your plan then? You want to buy your own?

I'm pretty confident a subscription is cheaper than the alternative here, unless you are FAANG.


sharkymcstevenson2 t1_j2cjujb wrote

Alfred has this.


SoylentRox t1_j2cjzfa wrote

Is this the same model that was refined via RLHF? The base GPT-3 is not that.


sharkymcstevenson2 t1_j2ckgav wrote

Alfred is based on GPT-3, with a model fine-tuned to be as similar as possible to ChatGPT, since the ChatGPT API isn't available yet (hoping for a public release soon).


Spacebetweenthenoise t1_j2aoo0y wrote

Another subscription??? Please not.


stevenbrown375 t1_j2b467b wrote

On the flip side, I'd pay $200 a month because it's very useful in my line of work. It's saved me days.


visarga t1_j2bhslk wrote

If you're after the capability and not the chat interface, you can already use text-davinci-003 through the playground or API.
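For the curious, here's roughly what the request body for that looks like (a sketch only; parameter names follow OpenAI's completions API as of this writing, and the prompt/values are illustrative):

```python
# Build the JSON payload you'd POST to OpenAI's /v1/completions endpoint
# to get ChatGPT-like behaviour out of text-davinci-003.
payload = {
    "model": "text-davinci-003",
    "prompt": (
        "You are a helpful assistant.\n\n"
        "User: Summarize RLHF in one sentence.\n"
        "Assistant:"
    ),
    "max_tokens": 256,       # cap on the reply length
    "temperature": 0.7,      # some creativity, not fully deterministic
    "stop": ["User:"],       # keep the model from writing the user's next turn
}
```

Send that with your API key in the Authorization header and you get back a completion; the pre-prompt plus stop sequence is basically the "chat interface" trick.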


stevenbrown375 t1_j2bkt4p wrote

I’m actually after the chat I guess; I’m in marketing.


karaburmication t1_j2bs95a wrote

Mind explaining briefly what you use it for exactly?


stevenbrown375 t1_j2bt7cj wrote

A few things off the top of my head:

  • Writing criteria and methodologies for marketing studies
  • Converting table data into prose
  • Copywriting (duh)
  • Creative brainstorming
  • Project planning and basic guidance
  • A file-naming-standards widget I’m building in Excel, and potentially in PowerApps
  • Building a style guide
  • Writing go-to-market plans
  • JavaScript expressions for Adobe After Effects
  • Presentation planning

-ZeroRelevance- t1_j2bzq04 wrote

If you didn’t know, you can actually train a fine-tuned model through the playground if you want; you just need to supply the training set and pay a bit more, which may be tricky depending on your resources.
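The training set is just a JSONL file of prompt/completion pairs. A tiny sketch of that format (conventions per OpenAI's fine-tuning guide at the time; the examples themselves are made up):

```python
import json

# Each line is one training example; the "->" separator and the leading
# space / trailing newline in completions follow OpenAI's suggested
# conventions for the legacy fine-tuning format.
examples = [
    {"prompt": "Write a tagline for a coffee shop. ->",
     "completion": " Brewed for people who read the label.\n"},
    {"prompt": "Write a tagline for a gym. ->",
     "completion": " Strong opinions, stronger deadlifts.\n"},
]
jsonl = "\n".join(json.dumps(e) for e in examples)
```

You'd upload the resulting file and kick off a fine-tune job against a base model; after that you query your custom model the same way as any other.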


stevenbrown375 t1_j2c122l wrote

Good to know. We have data scientists here that could implement something like this but they’re working on stuff that’s way too specific to train a model on good marketing practices just for my little department. All in all though, chatGPT has been really great as-is. I feel like I have a new work buddy, and I’m ravenously consuming every GPT-4 rumor I can find.


10GigabitCheese t1_j2bjn5f wrote

One of the few subscriptions that would significantly add value to your day job. Plus, if it had access to the live internet, it would save hours of researching basic tasks you’ve never done before but don’t know the correct jargon to google.

It’s like a personal assistant or private tutor.


drizel t1_j2als12 wrote

I'll definitely be one of those subscribers given a GPT-4 version.


visarga t1_j2bhlqz wrote

Not just human preferences, but also task distribution. They can fine-tune the model specifically on these tasks to make it even better.


SoylentRox t1_j2ckmtj wrote

And there's a bunch of obvious automated training it could do to be specifically better at software coding.

It could complete all the challenges on sites like LeetCode and CodeSignal, learning from its mistakes.

It could be given challenges to take an existing program and make it run faster, learning from a timing analysis.

It could take existing programs and be asked to fix the bugs so it passes a unit test.

It could be asked to write a unit test that makes an existing program fail.

And so on. Ultimately there are millions of separate tasks where the machine can get objective feedback on how well it did, so it can refine its skills to be above human.
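The core loop is simple enough to sketch. Here's a toy version of the unit-test case (purely illustrative, nothing like OpenAI's actual pipeline): candidate programs get run against an automatic check, and only the ones that pass become positive training signal.

```python
def passes_unit_test(program: str) -> bool:
    """Objective feedback: run the candidate program and check it
    against a fixed unit test (here: add(2, 3) must equal 5)."""
    env: dict = {}
    try:
        exec(program, env)            # "run" the candidate code
        return env["add"](2, 3) == 5  # the unit test
    except Exception:
        return False                  # crashes count as failures

candidates = [
    "def add(a, b): return a - b",    # buggy attempt
    "def add(a, b): return a + b",    # correct attempt
]

# Keep only candidates that earned objective positive feedback.
accepted = [c for c in candidates if passes_unit_test(c)]
```

Scale the same idea across millions of problems (with timing analysis for the "make it faster" tasks, failing tests for the "break it" tasks) and the model gets a reward signal no human labeler has to provide.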


Educational-Nobody47 t1_j2ddsyt wrote

It is extremely likely, based on your point above, that their investor meetings have gone completely nuts. There have to be so many funds at the door right now it's ridiculous, people trying very hard to get in. DoorDash isn't profitable; it runs on investor funds against a future return. ChatGPT honestly could stay free, because it's an infinite flow of data coming into their coffers.

Sam Altman is on record saying "We have a soft promise to our investors that one day when we create AGI or something close to it we will ask it to help us monetize and pay investors back".


hauntedhivezzz t1_j299yeo wrote

Umm, the optimal outcome was a viral hit / free marketing, which would lead to an excited user base who would then pay for their product.


throwaway_almost t1_j29ujl7 wrote

Wait what am I missing? After a few attempts I was told I need to pay for the usage already now… is that not the same for everyone?


Kaarssteun OP t1_j29a9ad wrote

That's the thing - this going viral is costing them millions. There is no product for them to sell given people now expect this service to be free.


gantork t1_j29czbf wrote

Nah, they could easily shut it down or start charging. DALLE-2 was also free at the start. I'm pretty sure they have this under control.


dietcheese t1_j2b8p7p wrote

They will start charging, and instead of spending millions they will make millions every month.

These people are not stupid - they have major backers funding and advising them.


hauntedhivezzz t1_j29cfjw wrote

They’ve already sold it, it will be integrated into Bing next year, and while this cost may be a lot for a small startup, it’s a drop in the bucket for the company paying for it, Microsoft


Kaarssteun OP t1_j29cl0a wrote

Isn't that just a rumor so far? I love that Microsoft is working with OpenAI so closely, but has that been confirmed?


Kaarssteun OP t1_j29curp wrote

As I said, I know and love that they work together closely, but that's not confirming ChatGPT will be integrated into Bing.


hauntedhivezzz t1_j29d4o2 wrote

Sure, not confirmed - implied. But the premise that this is a disaster because its outsized success is costing them is just short-sighted.


Equivalent-Ice-7274 t1_j29n998 wrote

The ChatGPT app costs money, and they can easily place ads within it, and around it.


slashd t1_j2dhgav wrote

Finally a good reason to use Bing instead of Google 😂


blueSGL t1_j29d1ee wrote

> There is no product for them to sell given people now expect this service to be free.

I don't get the argument.

If they want to yoink it and put it behind a paywall where you pay for tokens they could do that today.

If people still want to use it they pay or stop using it.

This has happened before. (look at Dalle2)


treedmt t1_j29ty59 wrote

That would be awesome for the free competitors though.


blueSGL t1_j29uv5o wrote

> for the free competitors

Who are the free competitors?


treedmt t1_j29v2zd wrote

LUCI for one. Not exactly chat format but generative single turn question answering. Http://


blueSGL t1_j29xsiu wrote

but that's not what ChatGPT is offering.

Anywhere that is able to do the sorts of things ChatGPT does will be in a 'loss leader' phase to begin with to attract customers - or offering [x] free tokens per month, or some other marketing trick.

Until inference cost is lower than the cash generated via advertising all services will be losing money, at that point it's either start charging or stop the service.

ChatGPT has succeeded in getting the name out. They are losing money by operating (if the training data they are getting from people is worth less than inference costs) so the solution is to start charging money.

Continually running a product that is in the red to prevent competitors products who are also in the red from succeeding seems like poor decision making in the long term.


treedmt t1_j29yope wrote

LUCI is also built on a fine tuned gpt3.5 model, so pretty close to chatgpt in terms of capabilities.

They have a very different monetisation model afaik. They are tokenising the promise of future revenue to monetise, instead of charging customers up front.

> if the training data is worth less than the inference cost.

The thesis is that training data could be worth much more than inference cost, if it is high quality, unique, and targeted to one format (eg. problem:solution or question:answer)

In fact, I believe they’re rolling out “ask-to-earn” very shortly, which will reward users for asking high quality questions and rating the answers, in Luci credits. The focus appears to be solely on accumulating a massive high quality QA database, which will have far more value in the future.

I’m not aware of any rate limits yet but naturally they may be applied to prevent spam etc., however keeping the base model free is core to their data collection strategy.


theRIAA t1_j2ejmws wrote

> so pretty close to chatgpt in terms of capabilities

I was impressed that it could give me generic working one-liners, but that is quite far off from writing a working program with 100+ lines of code in all major languages, like ChatGPT can (effortlessly) do. But thank you for the link, it's still very useful.


Think_Olive_1000 t1_j29bca8 wrote

They've cut the number of requests you can make per hour, which addresses the cost issue somewhat. I think they can plug the hole made by the unexpected influx with their marketing budget. It's got to be one of the most successful tech product launches of all time, with the number of unique new users reaching the million mark within a week of going live.


TheTomatoBoy9 t1_j29s315 wrote

The expectations for it to be free are with the current version. Subsequent versions will easily be marketed as premium and sold through subscriptions.

Then, this doesn't even address the whole business market where expensive licenses can be sold.

Finally, they are bankrolled by Microsoft, among others. Eye watering costs are only eye watering to small startups. It's not much of a problem when the company backing you is sitting on $110 billion in cash.

In the tech world, you can lose money for years if you can sell a good growth story. Especially with backers like Microsoft.


visarga t1_j2bi28f wrote

I expect that within the next 12 months we'll have an open model that can rival ChatGPT and runs on more accessible hardware, like 2-4 GPUs. There's a lot of room to optimise the inference cost. Flan-T5 is a step in that direction.

I think the community trend is to make small, efficient models that rival the original but run on local hardware, in privacy. For now, the efficient versions are only about 50% as good as GPT-3 and ChatGPT.


apinkphoenix t1_j29eg4m wrote

This is nonsense. You are aware that they can shut down their servers at any time they want, right? Even though they describe their costs as “eye-watering” it doesn’t mean they can’t afford it.

As for there being an expectation of being free… lol. This is a very useful tool we’re talking about here. It’s only going to get better with time. They can and will monetise it and you’ll damn well like it!


stevenbrown375 t1_j2b4du2 wrote

They're going to have venture capital coming out of their ears now. This was a home run for them and OP's take is wack.


apinkphoenix t1_j2bwm31 wrote

They’re suffering from success lol. Can’t believe anyone has taken OP’s post seriously.


AdminsBurnInAFire t1_j2dlq2m wrote

They don’t need venture capital. So funny how many people don’t know OpenAI is owned by Microsoft. That’s the only way they can afford the costs of such a large LLM.


stevenbrown375 t1_j2eeocr wrote


Microsoft has equity in OpenAI, but it's not the sole owner; it's largely split between the founders. Information on how the equity is allocated between these players is unavailable to the public, AFAICT.


dietcheese t1_j2b8z1c wrote

Most successful products have an investment phase where their losses top their gains. This is an investment in marketing, and they know exactly what they're doing.


YouGotNieds t1_j29b9y2 wrote

I disagree with most of what you are saying.

First, you can pay for lifetime access to ChatGPT now, and even at its current version I am sure many people would consider buying a pro version of this.

Second, don't forget ChatGPT is version 1 at the moment. It can only go up from here and start being used in finance, accounting, HR, and many more areas of business that we can't even think of yet.

Third, I am sure the data OpenAI is getting from ChatGPT is in itself extremely valuable for the company, as they see more data on the types of tasks people ask for. This gives them an actual idea of how they could create a profitable product that could compete with the likes of Google or Alexa, but just way better.


Lodge1722 t1_j29h0o6 wrote

Any source for the lifetime access? A quick search only led me to their API pricing.


ReadSeparate t1_j29kvft wrote

Yeah I would like to see this as well, I couldn’t find it either


YouGotNieds t1_j29ktp9 wrote

on the android app InstaGPT they offer it


lehcarfugu t1_j29lkby wrote

This isn't through OpenAI, you are paying a third party. OpenAI charges by usage


nebson10 t1_j29nk9c wrote

Is InstaGPT a loophole to get a lifetime subscription to ChatGPT?


Zermelane t1_j29mja2 wrote

Sounds scammy. ChatGPT itself does not have an official API at all, and the research preview is free to use. The other models (which do have official APIs) are charged by usage by OpenAI, so purchasing lifetime access from a third party is... not likely to be a good deal, is the best I can say.


YouGotNieds t1_j29y5ya wrote

Might be actually. The app seemed really legit and I didn't consider buying it but it seemed like a good idea to have a subscription


noop_noob t1_j2a0msq wrote

That one will probably stop working as soon as OpenAI stops offering ChatGPT for free.


sharkymcstevenson2 t1_j2ck5l2 wrote

Yep! The real apps are built on GPT3 rather than reverse engineered ChatGPT hacks


SoylentRox t1_j2aehvs wrote

I don't see how "lifetime access" makes any sense.

(1) Assuming it's to the current model and not future updates, that would be like buying a "lifetime copy" of MS-DOS internal beta 0.7 (whatever they called it back then), or an iphone 1 loaded with a pre-release copy of the OS.

It may work offline for your lifetime, but it's going to be useless compared to what's available within months.

(2) Who's hosting it? GPT-3 is around 175 billion parameters - roughly 700 gigabytes of memory at full 32-bit precision, or about 350 GB at 16-bit. This means about the only thing capable of running it currently is a cluster of 8 Nvidia A100s with 80 GB memory each (640 GB total), and each costs $25,000 and consumes 400 watts of power.

I'm not sure how "fast" it is. If you see ChatGPT typing for 30 seconds, are you drawing 3.2 kilowatts (8 x 400 W) just for your session? I don't think it's that high; the delays are probably because it's servicing other users.
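Back-of-envelope version of that memory math (assuming two bytes per parameter at 16-bit precision and four at 32-bit; real deployments need extra memory for activations on top of the weights):

```python
def weights_gb(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the model weights, in gigabytes."""
    return n_params * bytes_per_param / 1e9

fp16 = weights_gb(175e9, 2)  # 16-bit weights: fits in 8 x 80 GB A100s (640 GB)
fp32 = weights_gb(175e9, 4)  # 32-bit weights: does not fit in that cluster
```

Which is why 16-bit (or lower) precision is basically mandatory for serving models this size.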


dietcheese t1_j2b945a wrote

There is no lifetime access. Where are you getting your information?


lloesche t1_j29b5sv wrote

Idk, they could take it out of research/beta mode, switch to a $30/month subscription, citing the enormous cost associated with providing the service, and nobody would bat an eye.

That is until Google comes out with their own, provided free of charge, injected with what Google does best, ads.


Economy_Variation365 t1_j29bx0h wrote

Agreed. Just the fact that ChatGPT is so helpful with some school assignments means that most high school and college students would be eager to plunk down money for a monthly subscription.


GuyWithLag t1_j2atfmc wrote

Fuck it, I'm an IT pro and the way that ChatGPT can generate corporate boilerplate already saves me hours per month; I'd be willing to pay for a subscription just for that, and I assume I'll find more uses as it improves.


Kaarssteun OP t1_j29bhnn wrote

I do think people would bat an eye if they started charging money - to the layperson, this feels like google, and google has been free forever.


SoylentRox t1_j2cla5j wrote

Imagine if the next best search engine was like an early version of bing and NOTHING else existed.

And nobody was remotely close to releasing anything better. Would you pay for it then?

If OpenAI starts charging for chatGPT, whatcha gonna do? Keep writing shit by hand?

The computational requirements are so expensive that realistically this is going to be a paid service, maybe forever.

I say forever because compute will get much cheaper over time, but the best models will use even more compute and be much smarter. All the elite people will be using top end models, plebs using free models won't have the same resources.


justowen4 t1_j2b0vpm wrote

Ah yes good point, Google ad team is salivating


sharkymcstevenson2 t1_j2cllos wrote

I think OpenAI sees itself as an operating system rather than a direct-to-consumer business; it seems they are encouraging companies/startups to build services on top of OpenAI tech. I don't think they will compete directly with the ecosystem they are trying to build by offering their own service like that, since building an ecosystem is 100x more valuable over time.


Equivalent-Ice-7274 t1_j29o5go wrote

Agreed - Google could add hyperlinks throughout the response text, as well as banner ads above and below, and perhaps even video commercials before you get to see the AI's response. Then they could charge a subscription per employee for companies that want to buy, just like they do with Google Workspace. Google will make mountains of money off of this.


GuyWithLag t1_j2atjfy wrote

>commercials before you get to see the ai’s response

They're not that amateurish.


NoName847 t1_j29he90 wrote

Didn't Sam say on Twitter that all the feedback is amazing and exciting? They can shut it down any minute if it weren't what they want.

If people really throw a tantrum when it starts costing money, I think these entitled, irrational people can be ignored.

edit: "we are learning so much from ChatGPT; it's going to get a lot better, less annoying, and more useful fast." sounds like they love it


AbeWasHereAgain t1_j2an4gh wrote

Just need to integrate ads and it’s game over


Superschlenz t1_j2bpvaj wrote

AFAIK, the lawyers will only shut it down if it doesn't explicitly declare paid ads as such. As long as they don't integrate ads into ChatGPT itself but only show them in the user interface, they should be OK. Of course, there is still the copyright issue if it outputs information from publishers without directing users to their websites.


AbeWasHereAgain t1_j2bq2df wrote

The copyright thing is a big deal. I mean that from a training perspective as well.


Kinexity t1_j29az33 wrote

The only things they did wrong were not collecting user feedback from the beginning and not starting with a slow rollout like they did with DALL-E 2, though that could have hampered the popularity. Other than this, I'm not sure what you're trying to say here, or what else they were supposed to do.


No_Ask_994 t1_j29nq8w wrote

Of course they can shut it down.

If they don't, it's because it's worth it for them.

I expect an assistant over gpt-4 next year, much better, and of course, Pay per use


el_chaquiste t1_j29rvra wrote

We are masses of unpaid beta testers for their system, finding bugs and awkward prompts they need to edit by hand. Definitely worth it for them.

Thanks to that, GPT4 will be far less dumb.


No_Ninja3309_NoNoYes t1_j2a7kfc wrote

Yeah, but how much is it? A million dollars an hour? More? Methinks they're exaggerating to sound cooler than they are.


Glitched-Lies t1_j2artba wrote

It is very much over hyped. That much is true. Nobody expected it to be so incredibly popular. I guess it's just because it codes and lots of people made it popular for no reason.


DukkyDrake t1_j2aztlg wrote

>expecting this to remain readily available and free - while the costs for openai are "eye-watering"

The rest of that quote.


somethingstrang t1_j2b8xgg wrote

RLHF sounds like most of the work, and most of the improvement over the base transformer architecture. Hard to recreate, because I'd imagine tens of thousands of man-hours were involved.


kalydrae t1_j2be8w9 wrote

I have been trying to work out what kind of implementation of this technology you could have on the average home computer. For now, the RAM/GPU required to run GPT-3.5 is beyond the home equipment of even the average power user.

You can make a basic system that does limited tasks for specific inputs, but the chatbot using GPT-3.5's parameter set is too large.

And the emergent properties present in this larger model are the most interesting and useful part of the current progress. So for NOW, OpenAI has a huge market advantage: a live product with a huge existing user base and the compute power to support current throttled usage.

If I were OpenAI, I would be looking at how to launch the paid 'beta' product for generic use, then look at the subset of interactions on the free version to see if there are use cases that could do with additional training inputs to give further enhanced interactions. Some of my nebulous thoughts on potential use cases for custom products people might pay for:

Roleplaying bot - partner with online role playing systems to ingest large amounts of (anonymised) conversational data and have human feedback on training based on the new model.

Developer/infrastructure/IT helper: ingest even more publicly available data sets on q&a forums, open source systems documentation and support forums, GitHub, etc

Private instances of ChatGPT that have a "commercial in confidence" license so that businesses can provide their commercial IP datasets and transform the chat bot into the company knowledge system - all data, processes procedures can be used and accessed in a dynamically linked interactive and proprietary context. (Would also need to conform with country/state privacy laws etc)

Similar private instances provided to academic institutions where all academic and student information, emails and conversations (also anonymised) can be used to train the course and subject matter expert bots that can assist academics to design courseware and students to learn and understand much faster.

I think once we can run our LLM models on home computers, all bets are off. Your fridge might have a bot to tell you the options for dinner. Your wallet will alert you when your expenses are off track from previous months. You will ask your home assistant for a daily plan and it will remind you to take your medicine and prompt you to eat/drink something depending on your current vitals... The next steps are very exciting.

I am sure there are issues with these ideas but I'm very excited to see where this all goes!


chadbarrett t1_j2bq86c wrote

Yesterday I spent a good two hours googling a bunch of shit with total failure. For instance, you would think a 10-year, year-over-year average electricity rate for California would be easy to find, but it wasn't. Instead there was nothing but SEO-dense shit talking vaguely about nothing. Then I asked GPT and got that data in seconds. No idea where or how it found it, but it did.


Kaarssteun OP t1_j2d6qvk wrote

Remember, LLMs are prone to hallucination; ChatGPT can't "find" anything per se, and it's a pathological liar.


jloverich t1_j29eu0p wrote

Fwiw already has an llm similar to chatgpt on their website.


nebson10 t1_j29ndrw wrote

They can shut down the free research preview at any time. If they haven't shut it down yet, it must be to their benefit to keep it open for some reason.


Imaginary_Ad307 t1_j29wohs wrote

As I remember, they are backed by Microsoft, so they can take it and will most certainly make a profit from it in the near future.


Schyte96 t1_j2akuv0 wrote

I am expecting it to go paid in short order TBH. Let's say it's 10 USD/mo like Copilot, that might be the best use of 10 USD in the world.


_z_o t1_j2b2g9v wrote

The more traffic it gets, the more investment money it will be given. Investors expect they'll figure out a profitable model in the future. An AI-based search engine with a small wiki-like answer plus AI-curated links to AI-validated content and products could make Google obsolete overnight. Ad money will flow into it.


no-longer-banned t1_j2bas7o wrote

OpenAI is backed by venture capital. The money doesn’t really matter. If they needed more, there would be a line of investors miles long trying to get in on this company.

Users and data. That’s all they really care about.


theghostecho t1_j2bw6nl wrote

People weren’t talking about it when GPT first came out because they were still busy learning what these models were.


LoneRedWolf24 t1_j2c5hcc wrote

As others have stated, they can shut it down if they choose to and they likely will monetize ChatGPT. However, I don't think it will be long until real competition reveals itself and a free alternative is offered.


Lawjarp2 t1_j2cd8kz wrote

That's why they put limits on it. If it's really useful, people and companies will pay to use it and cover the costs. If it's useless, it will get shut down eventually. I do think it will be useful for companies with GPT-4, and that's why they have released it now: to get more companies ready for it.


Happy-Ad9354 t1_j2cnm7v wrote

Does it forward / save all your queries to OpenAI / in their databases?


leonidganzha t1_j2dhbg7 wrote

OpenAI got a free army of QA testers spending hours making ChatGPT generate offensive and NSFW content, just because it was really fun. So they got a lot of valuable human-in-the-loop data out of this, which will help them develop their LLMs further.


Mementoroid t1_j29ysks wrote

Hopefully one day there'll be an AI that can generate pennies in my account every time I read "the cat is out of the bag" or the good old "the genie is out of the lamp".