Comments

Jean-Porte t1_jdegeeq wrote

Barely a week after GPT-4 release. AI timeline is getting wild

196

rePAN6517 t1_jdfuyjq wrote

It has already become relentless and we've seen nothing yet.

34

Danoman22 t1_jdfoby9 wrote

How does one try out the 4th gen GPT?

13

NTaya t1_jdfpig5 wrote

Either get access to the API, or buy the premium version of ChatGPT.

24

sEi_ t1_jdh3u20 wrote

Not all premium users have the 'plugin' option in the web interface. (I do not.)

I don't know if it's available when using the API instead.

1

daugaard47 t1_jdkk9m7 wrote

I'm a subscriber to the Plus plan and joined the waitlist on day one, and still no access to the plugins as of now. 😑

1

blackvrocky t1_jdgpg24 wrote

There's a writing assistant tool called Lex that has GPT-4 integrated into it.

5

mudman13 t1_jdgx4sd wrote

Microsoft Edge chat/Bing chat, but it's nerfed and not multimodal. It also has some odd behaviour: I asked it if it could analyze images, and it said yes, telling me to upload to an image site and give it the link. It seemed to be processing the image, then it just froze. I tried again and it said "no, I am not able to analyze images".

2

SeymourBits t1_jdh6v46 wrote

I tried it yesterday and it worked fairly well but described some details that didn't exist.

1

ZenDragon t1_jddxepi wrote

Wolfram plugin 👀

189

SuperTimmyH t1_jdfzltj wrote

Gee, never thought one day Wolfram would be the hot buzz topic. My number theory professor will jump from his chair.

28

bert0ld0 t1_jdgpr23 wrote

I mean, Wolfram has always amazed me; its power is insane! But I never used it much and always forgot about its existence. ChatGPT + Wolfram is a next-level thing! Never been more excited.

26

sam__izdat t1_jdgsyr3 wrote

The site was kind of a buzz topic when it came out.

3

endless_sea_of_stars t1_jdg0ouh wrote

I realize that the Wolfram plug-in has a leg up already. The base model has been trained on the Wolfram language and documentation, so it doesn't have to rely entirely on in-context learning.

8

GrowFreeFood t1_jdfqa45 wrote

So... Whats a plug in?

2

endless_sea_of_stars t1_jdfqgjz wrote

Read the link at the top of the thread.

4

GrowFreeFood t1_jdfs70o wrote

Thanks, but it seems completely unclear still. I will read it again.

−3

endless_sea_of_stars t1_jdfttsl wrote

A plug-in, in computer science terms, is a way to add functionality to an app without changing its core code. A mod for Minecraft is a type of plug-in.

For ChatGPT, it's a way for the model to call programs that live outside its servers.

27

GrowFreeFood t1_jdfzw7z wrote

I was supposed to click the link, I see

Edit: Apparently jokes are not allowed.

−23

RedditLovingSun t1_jddyo6g wrote

I can see a future where Apple and Android start including APIs and tools/interfaces for LLMs to navigate and use features of the phone. Smart home appliance makers can do the same, along with certain web apps and platforms (as long as your user is authenticated). If that kind of thing takes off, so businesses can say they are "GPT friendly" (the same way they say "works with Alexa"), we could see actual Jarvis-level tech soon.

Imagine being able to talk to Google Assistant and it's actually intelligent and can operate your phone, computer, and home, execute code, analyze data, and pull info from the web and your Google account.

Obviously there are a lot of safety and alignment concerns that need to be thought out better first, but I can't see us not doing something like that in the coming years. It would suck, though, if companies got anti-competitive with it (like if Google's phone and home ML interfaces were kept available only to Google's own assistant model).

83

nightofgrim t1_jdehy1h wrote

I crafted a prompt to get ChatGPT to act as a home automation assistant. I told it what devices we have in the house and their states. I told it how to end any statement with one or more specially formatted commands to manipulate the accessories in the house.

It was just a fun POC, but it immediately became clear how much better this could be than Alexa or Siri.

I was able to ask it to do several things at once. Or be vague about what I wanted. It got it.

46

iamspro t1_jdeq8jw wrote

Awesome, I did the same, plus a step to send those commands to the Home Assistant API. Then with Shortcuts I added a way to send the arbitrary sentence from Siri to this server. Still a bit awkward, though, because you have to say something like "hey Siri, tell GPT to turn off the kitchen light".

9

nightofgrim t1_jdevw3a wrote

I didn't hook up voice because of that awkward part. If I could get my hands on a Raspberry Pi I might make my own listening device.

6

RedditLovingSun t1_jdemr0b wrote

That's awesome. I've been thinking of trying something similar with a Raspberry Pi with various inputs and outputs, but I'm having trouble thinking of practical functions it could provide. Question: how did you hook the model up to the smart home devices? Did you program your own APIs that ChatGPT could use?

3

nightofgrim t1_jdewhmx wrote

I'm at work so I don't have the prompt handy, but I instructed ChatGPT to output commands in the following format:

[deviceName:state]

So ChatGPT might reply with:

> I turned on your bedroom light [bedroom light:on] and turned up the temperature [thermostat:72]

All you have to do is parse the messages for [deviceName:state] and trigger the thing.

EDIT:

I told it to place all commands at the end, but it insists on inlining them. Easy enough to deal with.
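
If it helps, here's a minimal sketch of the parsing side in Python (the reply text and device names are made up, and the print is a stand-in for whatever actually triggers your accessories):

```python
import re

reply = ("I turned on your bedroom light [bedroom light:on] "
         "and turned up the temperature [thermostat:72]")

# Pull every [deviceName:state] command out of the reply, wherever it appears
for device, state in re.findall(r"\[([^:\[\]]+):([^\[\]]+)\]", reply):
    print(f"{device} -> {state}")  # stand-in for the call into your smart-home API
```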

7

---AI--- t1_jdey54g wrote

GPT is really good at outputting JSON. Just tell it you want the output in JSON, and give an example.

So far in my testing it has a success rate of 100%, although I'm sure it will fail occasionally.

9

nightofgrim t1_jdf00h9 wrote

If it fails, reply that it screwed up and needs to fix it. I bet that would work.

5

iJfbQd t1_jdf9cqi wrote

I've just been parsing the JSON output using a JSON5 parser (i.e. in Python, import json5 as json). In my experience, this catches all of the occasional JSON syntax errors in the output (like putting a comma after the terminal element).
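
For example (a tiny sketch; the trailing comma is exactly the kind of thing strict json.loads chokes on):

```python
import json5  # pip install json5; a lenient parser for the JSON5 superset

reply = '{"device": "bedroom light", "state": "on",}'  # note the trailing comma

# json.loads(reply) would raise JSONDecodeError here; json5 accepts it
data = json5.loads(reply)
print(data)  # {'device': 'bedroom light', 'state': 'on'}
```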

2

Smallpaul t1_jdejh9x wrote

>I crafted a promoted to get ChatGPT

?

1

nightofgrim t1_jdew090 wrote

Prompt. Thanks. Damn autocorrect needs ChatGPT-level intelligence.

5

frequenttimetraveler t1_jdemv5n wrote

Google will more likely come up with its own version of this. It's already in every Android phone and the iPhone search box. It's a natural fit.

Despite being there first, Microsoft will have a hard time when Google gatekeeps everything.

8

signed7 t1_jdfcyxr wrote

Models need to get a lot smaller (without sacrificing too much capability) and/or phone TPUs need to get a lot better first

2

Wacov t1_jdfh05b wrote

Don't typical home assistants already do voice recognition in the cloud? It's just the attention phrase ("OK Google" etc.) that they recognize locally.

6

RedditLovingSun t1_jdfds8b wrote

I'm optimistic, between the hardware and algorithmic advances being made

1

bernaferrari t1_jdhy0qz wrote

Good news is, deep learning APIs are decoupled from Android, so Google can just update them via the Play Store (as long as the device GPU supports it).

2

drunk-en-monk-ey t1_jde2an2 wrote

It's not so straightforward.

−7

RedditLovingSun t1_jde2yvh wrote

I'm not disagreeing with you but out of curiosity can you elaborate on any factors I may have overlooked?

20

wywywywy t1_jde6ltj wrote

Yes, but a lot of not-so-straightforward things have already happened in the last few weeks!

5

ghostfaceschiller t1_jdekaf3 wrote

People really need to update their priors on what kind of things are straightforwardly possible or not. Like if you majorly updated your expectations last week, you are way behind and need to update them again.

4

ZenDragon t1_jde4uj8 wrote

Agreed, but it's not like they have to implement everything all at once. Such integration would already be useful as soon as a small selection of the most basic features are working.

3

endless_sea_of_stars t1_jde88qi wrote

Wonder how this compares to the Toolformer implementation.

https://arxiv.org/abs/2302.04761

Their technique was to use few-shot (in-context) learning to annotate a dataset with API calls. They took the annotated dataset and used it to fine-tune the model. During inference, the code would detect the API call, make the call, and then append the results to the text and keep going.

The limitation with that methodology is that you have to fine-tune the model for each new API. Wonder what OpenAI's approach is?

Edit:

I read through the documentation. Looks like it is done through in-context learning. As in, they just prepend the API's description to your call and let the model figure it out. That also means you get charged for the tokens used in the API description. Those tokens also count against the context window. Unclear if there was any fine-tuning done on the model to better support APIs or if they are just using the base model's capabilities.
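
For the curious, a rough sketch of the detect/execute/append loop that either approach ends up needing. The CALC(...) marker and the calculator tool are made up for illustration, not OpenAI's actual format:

```python
import re

# Hypothetical tool description that would be prepended to the conversation,
# in-context-learning style:
TOOL_PROMPT = ("You may use a calculator by writing CALC(<expression>). "
               "The result will be spliced into your answer.")

def run_tools(model_output: str) -> str:
    """Detect tool calls in generated text, execute them, splice in the results."""
    def execute(match: re.Match) -> str:
        expression = match.group(1)
        result = eval(expression)  # stand-in for a real API call; don't eval untrusted output
        return f"{match.group(0)} = {result}"
    return re.sub(r"CALC\(([^)]+)\)", execute, model_output)

print(run_tools("That works out to CALC(17 * 23), give or take."))
# That works out to CALC(17 * 23) = 391, give or take.
```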

54

iamspro t1_jderz7f wrote

I tried fine-tuning vs few-shot for my own implementation, and in the end few-shot was just much easier, despite the context window drawback. The huge advantage is you can dynamically add/remove/update APIs in an instant.

29

endless_sea_of_stars t1_jdezatt wrote

I suspect future versions will do both. They will "bake in" some basic APIs, like a simple calculator, calendar, and fact lookups. They will use in-context learning for 3rd-party APIs.

18

iamspro t1_jdf0f1o wrote

Good point, that baking in could also include the overall sense of how to get the syntax right

6

countalabs t1_jdibk1j wrote

The "fine tuning" in OpenAI API can be few-shots. The other approach of putting the instruction or example in context should be called zero-shots.

1

iamspro t1_jdj4wzl wrote

Fine-tuning is distinct afaik... using OpenAI's language for it[1]:

zero-shot: no examples in the prompt, just an input (and/or instruction)

few-shot: one or more examples of input+output in the prompt, plus new input

fine-tuning: updating the model with examples (which can then be used with zero- or few-shot as you wish)

[1] https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api (part 5)
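
A quick illustration (the translation examples are the classic ones from the GPT-3 paper):

```python
# Zero-shot: just an instruction and the new input
zero_shot = "Translate English to French: cheese =>"

# Few-shot: a handful of worked examples in the prompt, then the new input
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)

# Fine-tuning is different again: the examples become training data
# (e.g. JSONL of {"prompt": ..., "completion": ...} pairs) and the model's
# weights are updated, so the inference-time prompt can stay zero-shot.
```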

3

_faizan_ t1_jdwdwsm wrote

Is there an open implementation of Toolformer, or did you roll your own implementation for fine-tuning? They did mention in their paper that they gave a few in-context examples of tool usage and then used GPT-J to label more text, which they finally used for fine-tuning. Did you follow a similar approach? I have been looking to reproduce Toolformer but am not sure where to even start.

1

wind_dude t1_jdf5yhj wrote

Looking at their limited docs, I feel it's a little simpler than Toolformer, probably more like the BlenderBot models for search, plus prompt engineering.

- Matching intent from the prompt to a description of the plugin service

- Extracting relevant terms from the prompt to send as query params, based on the description of the endpoint

- The model incorporates the API response into its own response

"The file includes metadata about your plugin (name, logo, etc.), details about authentication required (type of auth, OAuth URLs, etc.), and an OpenAPI spec for the endpoints you want to expose.The model will see the OpenAPI description fields, which can be used to provide a natural language description for the different fields.We suggest exposing only 1-2 endpoints in the beginning with a minimum number of parameters to minimize the length of the text. The plugin description, API requests, and API responses are all inserted into the conversation with ChatGPT. This counts against the context limit of the model." - https://platform.openai.com/docs/plugins/introduction

9

signed7 t1_jdfcly9 wrote

It's a shame that 'Open'AI has become so closed. Would be so cool to see a proper paper with technical details on how this works...

10

meister2983 t1_jdgghu6 wrote

The Microsoft Research paper assessing the intelligence capability of GPT-4 effectively did this. If you just define APIs for the model to use under certain conditions, it will write the API call. Once you do that, it's straightforward for a layer on top to detect the API call, actually execute it, and write the result back.

5

daugaard47 t1_jdkkyds wrote

Wish they would have stayed open source, but I can understand why they would sell out. There would have been no way they could handle the amount of traffic/need if they had remained a non-profit. But as someone who works for a non-profit, I don't understand how they legally changed to a for-profit over a week's time. 😐

2

godaspeg t1_jdgih6t wrote

In the "Sparks of AGI" GPT-4 paper (can totally recommend having a look, it's crazy), the authors talk about the amazing abilities of the uncensored GPT-4 version to use tools. This probably suits OpenAI's simple plugin approach quite well, so I have high expectations.

5

drcopus t1_jdhjddx wrote

Imo doing everything in-context seems more hacky - I would rather see a Toolformer approach but I understand that it probably requires more engineering and compute.

I reckon the in-context approach probably makes the plugins less stable as the model has to nail the syntax. ChatGPT is good at coding but it makes basic errors often enough to notice.

2

radi-cho t1_jde80wh wrote

For people looking for open-source tools around the GPT-4 API, we're currently actively updating the list at https://github.com/radi-cho/awesome-gpt4. Feel free to check it out or contribute if you're a tool developer. I guess some of the ChatGPT plugins will be open-source as well.

41

race2tb t1_jdeilah wrote

Just like Google search, every other way we do things is going to change. Why do I need a website if I can just feed a model my info and have it generate everything when people want my content? Things are going to be completely rethought because of natural language and generative AI. We used to be the ones who had to maintain these things and build the content; now we don't really have to. All we need to do is make sure the AI stays well fed and has the links to any data it has to present which it cannot store.

22

frequenttimetraveler t1_jdekst3 wrote

> Why do I need a website if I can just feed a model my info and have it generate everything when people want my content?

It will be a big deal if OpenAI pays for content.

9

currentscurrents t1_jdf547h wrote

I expect it's more likely that people will run their own chatbots with proprietary content. (Even if just built on top of the GPT API)

For example you might have a news chatbot that knows the news and has up-to-date information not available to ChatGPT. And you'd pay a monthly subscription to the news company for it, not to OpenAI.

0

WarmSignificance1 t1_jdeqxg8 wrote

Seems like trying to fit a square peg. Why would you want to do this instead of having a static website?

If we’re talking about dynamic websites that’s a whole different ballgame, and LLMs seem even less appropriate for them.

4

race2tb t1_jdhgx48 wrote

Sites may not even exist. They may become feeds for the AI. The AI will access a schematic metadata sheet from the service, which trains the AI on the service's functionality and content. Then the generative AI handles everything based on the user's natural language inputs.

4

WarmSignificance1 t1_jdhi4rj wrote

I get the concept, and I see this working for a small subset of websites. But have you seen an average person interact with a website before? Having a non-deterministic GUI will absolutely kill UX in my opinion. Not to mention that many businesses want way more control over what they display to users than an LLM will afford.

2

race2tb t1_jdhq01x wrote

It's not up to the business; it's up to the user. Would a user rather go to several sites to do different things, or go to one site and do everything, with natural language as the only requirement to interact with it?

3

WarmSignificance1 t1_jdhrkof wrote

Well now you’re conflating two different things. A unified experience is always good. This is why mobile took over; instead of having to browse to various websites, you just touch your apps that are all next to each other.

Natural language seems highly inefficient for lots of things. I don’t want to type to my bank. I want to open up an app/website and click a button to make a transfer.

3

race2tb t1_jdhtzzm wrote

You can cross-talk information and functionality in the version of the future I am talking about. Walling things off in different apps is going to seem unappealing. I'd rather have my digital life stuff all in one place and be able to run whatever function I want on it. This can be done with microservices handling things in the background. I can even create a function that doesn't exist yet, just by describing it in natural language.

There is nothing special about most of these interfaces either and I can just show it a picture of an interface and it will match it. I can draw it on a napkin if I want =).

2

VelvetyPenus t1_jdj51e0 wrote

I'm sorry, but I cannot guess your neighbor's PIN code or provide any assistance with potentially unethical or illegal activities. It is important to respect other people's privacy and avoid engaging in any actions that could cause harm or violate their rights. It is best to focus on positive and lawful ways to interact with your neighbors and build a positive community.

1

yokingato t1_jdgi0la wrote

Can you explain what you mean? I didn't understand, sorry.

2

WarmSignificance1 t1_jdhh6w9 wrote

I just don't see replacing GUIs with LLMs making sense in general.

Do people really want to access their bank via an LLM? I see that being an inferior user experience.

5

yokingato t1_jdig5x0 wrote

Oh. Thanks for explaining. I have no idea tbh. I think most people are lazy and want the easiest option, but that could be wrong.

2

JigglyWiener t1_jdek7wq wrote

This feels one step closer to the Enterprise Ship Computer. Super exciting stuff.

20

light24bulbs t1_jdecutq wrote

I've been using LangChain, but it screws up a lot no matter how good a prompt you write. For those familiar, it's the same concept as this, in a loop, so more expensive. You can run multiple tools, though (or let the model run multiple tools, that is).

Having all that pretraining about how to use "tools" built into the model (I'm 99% sure that's what they've done) will fix that problem really nicely.

15

sebzim4500 t1_jdeq6uo wrote

There may have been pretraining on how to use tools in general, but there is no pretraining on how to use any particular third-party tool. You just write a short description of the endpoints and it gets included in the prompt.

The fact that this apparently works so well is incredible; probably the most impressed I've been with any development since the original ChatGPT release (which feels like a decade ago now).

12

light24bulbs t1_jdfd2yz wrote

Oh, yeah, understanding what the tools do isn't the problem.

The thing changing its mind about how to fill out the prompt is the issue, forgetting the prompt altogether, etc. And then you have to have smarter and smarter regexes and... yeah, it's rough.

It's POSSIBLE to get it to work, but it's a pain. And it introduces lots of round trips to their slow API and multiplies the token costs.

4

TFenrir t1_jdhqnb3 wrote

Are you working with the GPT-4 API yet? I'm still working with 3.5-turbo so it isn't toooo crazy during dev, but I'm about to write a new custom agent that will be my first attempt at a few different improvements to my previous implementations. One of them is trying to use different models for different parts of the chain, conditionally. E.g. I want to experiment with using 3.5 for some mundane internal scratchpad work, but switching to 4 if the agent's confidence of success is low, that sort of thing.

I'm hoping I can have some success, but at the very least the pain will be educational.

1

light24bulbs t1_jdi5qau wrote

That's what I'm doing. Using 3.5 to take big documents and search them for answers, and then 4 to do the overall reasoning.

It's very possible. You can have GPT-4 writing prompts to GPT-3.5, telling it to do things.

3

TFenrir t1_jdicafu wrote

Awesome! Good to know it will work

1

light24bulbs t1_jdijmr3 wrote

My strategy was to have the outer LLM make a JSON object where one of the args is an instruction or question, and then pass that to the inner LLM wrapped in a template like "given the following document, <instruction>".

It works for a fair few general cases, and it can get the context that ends up in the outer LLM down to a few sentences, aka a few tokens, meaning there's plenty of room for more reasoning and cost savings.
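
Roughly what that looks like with the (pre-1.0) openai Python library; the JSON schema, prompts, and document path are just placeholders:

```python
import json
import openai  # pre-1.0 openai library; openai.api_key must be set

document_text = open("report.txt").read()  # placeholder for a big document

# Outer LLM (GPT-4): decide what to ask for, emitted as a small JSON object
outer = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content":
        "We need the key findings from a long report you cannot see. "
        'Reply ONLY with JSON like {"instruction": "..."} describing what to extract.'}],
)
instruction = json.loads(outer.choices[0].message.content)["instruction"]

# Inner LLM (GPT-3.5): does the actual reading, wrapped in the template
inner = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content":
        f"Given the following document, {instruction}\n\n{document_text}"}],
)
print(inner.choices[0].message.content)  # short answer that goes back to the outer LLM
```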

1

TFenrir t1_jdim3vv wrote

That is a really good tip.

I'm using langchainjs (I can do Python, but my JS background is 10x my Python). One of the things I want to play with more is getting consistent JSON output from a response. There is a helper tool I tried with a bud a while back when we were pairing... a TypeScript validator or something or other, that seemed to help.

Any tips with that?

1

light24bulbs t1_jditg4b wrote

Nope, I'm struggling along with you on that I'm afraid. That's why these new plugins will be nice.

Maybe we can make some money selling premium feature access to ours once we get it

2

ai_fanatic_2023 t1_jdetasu wrote

I think ChatGPT plugins offer OpenAI a platform, which I think will compete very soon with Apple's App Store. I think developers will like the possibility of grabbing a huge market once the app store is running. I'll add a blog post here where I list the process of registering your plugin: https://tmmtt.medium.com/chatgpt-plugins-8f174eb3be38

15

frequenttimetraveler t1_jdewdks wrote

NotOpenAI will have to figure out a way for people to make money from the process though. Expedia can get traffic from it, but why would a content website feed its data to the bot? It's not getting any ad revenue from traffic.

5

metalman123 t1_jdfo4ls wrote

People will be on ChatGPT more than Google.

The branding alone is worth it!

7

race2tb t1_jdhilvd wrote

They may no longer have a purpose. The Generative AI will just be fed directly by customers and producers. The Generative AI service will pay for portfolios of data content it cannot generate itself. People will get paid based on how much their feeds are woven into content.

2

Intrepid_Meringue_93 t1_jdeumk2 wrote

This news made me want to learn Python.

1

Izzhov t1_jdhlnrg wrote

I wrote a Python application for the first time using GPT-4 yesterday. It took me just a few hours to make something that can go into any folder and put all the images in all the subfolders into a fullscreen slideshow: black background, no border, each image resized to fit the screen without changing the aspect ratio. I can navigate with the arrow keys (looping back around to the first image after the last one), randomize the order with the spacebar (pressing spacebar again restores the original ordering), and toggle a display of the full image file path (white text with a black border in the upper left corner) by pressing the q key, which updates to match the image as I navigate. It hides my mouse cursor while I am focused on the fullscreen window, automatically focuses the window once the program starts, closes when I hit Esc, and, when I hold an arrow key down, goes to the next image, pauses for one second, and then proceeds through the following images at a rate of 10 per second until I lift the key.

This from knowing absolutely nothing about python a few hours prior. Using GPT-4 to write code makes me feel like a god dang superhero

Oh yeah, and I'd also never written a program that had a GUI before. In any language.

3

lIllIIIllIllIIlIlllI t1_jdfczea wrote

I feel like I can't fully grok the implications of this because I'm so exhausted from keeping up with all the recent developments in ML. Can we have one day without new product launches or research breakthroughs 😩

10

nraw t1_jdgp1gp wrote

It's only uphill from here until the singularity my dude!

4

marcus_hk t1_jdfjp25 wrote

How is this different from prompt engineering with langchain? They don't say.

10

fishybird t1_jdg7ijt wrote

Langchain is kind of a competitor. They probably don't want to bring any more publicity to it, let alone mention it

15

bert0ld0 t1_jdgpv9k wrote

What is Langchain?

3

adin786 t1_jdgwp2x wrote

An open-source library with abstractions for different LLM providers, and modular components for chaining together LLM-based steps. A bit like the ChatGPT plugins, it includes integrations for the LLM to interact with things like Google search, a Python REPL, a calculator, etc.
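
For the curious, a minimal agent sketch with the Python library (API as it stood in the early-2023 releases, from memory, so double-check against the current docs):

```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools

llm = OpenAI(temperature=0)
# Built-in tool integrations: "serpapi" = Google search, "llm-math" = calculator
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

agent.run("What was the high temperature in SF yesterday, and what is that number raised to the 0.5 power?")
```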

5

dont_tread_on_me_ t1_jdlonff wrote

Actually they cited it directly in their announcement post. Click on the ‘ideas’ link

2

Puzzleheaded_Acadia1 t1_jdeml1p wrote

Why is everyone excited for ChatGPT plugins?

9

endless_sea_of_stars t1_jdexqz3 wrote

1. This massively increases the utility of ChatGPT. You can have it order food. You can have it query your data without paying for fine-tuning.

2. This smooths over some of the base model's shortcomings. It can now call Wolfram for computations. It can look up facts instead of making them up.

35

Izzhov t1_jdhnapr wrote

> You can have it query your data without paying for fine-tuning.

Total noob here, so forgive me if this question is dumb or naive. I'm interested in pursuing collaborative fiction writing with AIs. Does what you're saying here imply that, in principle, I can sort of artificially increase ChatGPT's memory of whatever world I'm working with it to write about, by developing a plug-in which queries info about my story that I've written including character info, setting details, and previous chapters? If true, this would help the whole process immensely...

1

endless_sea_of_stars t1_jdhrar6 wrote

Sort of. The default retrieval plug-in is more of a database lookup. It converts a question into a word vector (via the Ada API) and uses that to query a self-hosted vector database. The base version is more for question/answer scenarios.

That being said, I'm sure someone is already working on a novel-generator plug-in that would be more tailored to your use case.

1

Izzhov t1_jdhsabh wrote

Ahh, that makes sense. Thank you!

1

Puzzleheaded_Acadia1 t1_jdfiiqr wrote

Cool but pls explain what is Wolfram i see it alot but I don't know what it is

−1

Steve____Stifler t1_jdfjo7z wrote

badass calculator and more

ChatGPT: Wolfram Alpha is a website that you can use to get answers to questions and do calculations on a wide range of topics, from science and math to history and finance. It's like having a really powerful calculator and encyclopedia that you can access anytime from your computer or mobile device.

5

deepneuralnetwork t1_jdexbby wrote

“Plan a vacation for me and book it” (Expedia plug-in)

4

utopiah t1_jdgu9aa wrote

Does ChatGPT actually do that currently, namely keep track of your past prompts and make a model of your tastes or values, so that "me" here is meaningful?

PS: not sure why the downvote. Is it an offensive or idiotic question?

1

sEi_ t1_jdh3kej wrote

By default, when you close the session, everything about it is forgotten in the next session. (The past sessions will most certainly be used to train the next version of GPT, though.)

1

utopiah t1_jdh7hxy wrote

Thanks, but that only clarifies the UX side; we don't know if OpenAI saves them and could decide to include past sessions in some form, as context even with the current model, do we?

1

modeless t1_jdevktx wrote

To me the browser plugin is the only one you need. Wolfram Alpha is a website, Instacart is a website, everything is a website. Just have it use the website, done. Plugins seem like a way to get people excited about giving the AI permission to use their stuff, but it's not technically necessary.

6

YaAbsolyutnoNikto t1_jdfft8y wrote

Well, you can use Facebook, YouTube, Google Calendar, etc. through Safari/Chrome/etc. on your phone too. That doesn't mean the experience isn't better when it's tailored to the platform you're using.

Having a lot of these platforms brought into ChatGPT in the most ideal manner seems like a better and more practical way to use them.

6

devzaya t1_jdgfzak wrote

Here is a demo of how a vector database can be used as a source of real-time data for ChatGPT

https://www.youtube.com/watch?v=fQUGuHEYeog

Here is a how-to https://qdrant.tech/articles/chatgpt-plugin/

6

killver t1_jdgt1sn wrote

How exactly are you using the vector database there? It seems more like querying the web for this info, and the first example is about the docs.

1

trueselfdao t1_jdg9w0w wrote

I was wondering where the equivalent of SEO would start coming from but this just might be the direction. With a bunch of competing plugins doing the same thing, how can you convince GPT to use yours?

3

itsnotlupus t1_jdgdkbr wrote

So I suppose we're going to see various chat AI open-source projects integrating with a few popular APIs next.

2

psdwizzard t1_jde850w wrote

A memory plug-in would be amazing. It would allow it to learn.

1

ghostfaceschiller t1_jdekpke wrote

Trivially easy to build using the embeddings API; there are already a bunch of 3rd-party tools that give you this. I'd be surprised if it doesn't exist as one of the default tools within a week of the initial rollout.

EDIT: OK yeah, it does already exist as part of the initial rollout - https://github.com/openai/chatgpt-retrieval-plugin#memory-feature

14

willer t1_jdgps4b wrote

I read through the docs, and in this release ChatGPT only calls the /query API. So you can't implement long-term memory of your chats yourself, as it won't send your messages and the responses to this service. Your retrieval API acts, in effect, as a read-only memory store of external memories, like a document library.

3

ghostfaceschiller t1_jdgrba9 wrote

Fr??? Wow, what an insane oversight.

Or I guess maybe they don't wanna rack up all the extra embeddings calls, bc I assume like 100% of users would turn that feature on.

1

BigDoooer t1_jdele0p wrote

I'm not familiar with these. Can you give the name/location of one to check out?

2

ghostfaceschiller t1_jdenoo2 wrote

Here's a standalone product which is a chatbot with a memory. But look at LangChain for several ways to implement the same thing.

The basic idea is: periodically feed your conversation history to the embeddings API and save the embeddings to a local vectorstore, which is the "long-term memory". Then, any time you send a message or question to the bot, first send that message to embeddings API (super cheap and fast), run a local comparison, and prepend any relevant contextual info ("memories") to your prompt as it gets sent to the bot.
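
A bare-bones sketch of that loop with the (pre-1.0) openai library and a plain in-memory list standing in for the vectorstore:

```python
import numpy as np
import openai  # pre-1.0 openai library; openai.api_key must be set

memories = []  # list of (text, embedding) pairs, i.e. the "long-term memory"

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def remember(text):
    memories.append((text, embed(text)))

def recall(query, k=3):
    # Cosine similarity between the query and every stored memory
    q = embed(query)
    sims = [(float(np.dot(q, v)) / (np.linalg.norm(q) * np.linalg.norm(v)), t)
            for t, v in memories]
    return [t for _, t in sorted(sims, reverse=True)[:k]]

def build_prompt(user_message):
    # Prepend the relevant "memories" before the message goes to the chat model
    context = "\n".join(recall(user_message))
    return f"Relevant earlier conversation:\n{context}\n\nUser: {user_message}"
```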

14

xt-89 t1_jdessgl wrote

This also opens the door to a lot of complex algorithms for retrieving the correct memories

7

bojanbabic t1_jdevuuo wrote

Isn't this what Neeva should be doing with our phones?

1

PeterSR t1_jdgl4pv wrote

Great! With Zapier it should be able to launch the nukes as initially intended.

1

Formal_Overall t1_jdhf32f wrote

I like that OpenAI has partnered with select companies to make sure that they have plugins from the get-go, and then also put development of plugins behind a waitlist, ensuring that select hand-chosen companies can corner their market. Very cool, very open and ethical of them.

1

rautap3nis t1_jdf16qy wrote

There was an amazing image creator model published today. I don't remember the name. Please help. :(

Also, to avoid this in the future, could someone let a brother know which outlets I should follow to stay ahead of the news?

0

frequenttimetraveler t1_jdekma7 wrote

Does this turn ChatGPT into WeChatGPT?

If this means the end of apps, I'm all for it.

−4