Comments


rya794 t1_jdrypjf wrote

Yea, it would be nice.

But what benefit does any LLM provider gain by implementing/adhering to an open protocol? OpenAI is trying to build a moat around their service, from their perspective plugins are key to establishing a competitive advantage.

I can’t see this happening in reality.

29

ThirdMover t1_jdrzd7f wrote

That depends on how well they can defend their moat. There is a lot of hunger for running LLMs on your own - if not on your own hardware, then at least in software environments you control. People want to see what makes them tick rather than trust "Open"AI's black boxes.

Yeah they have a performance lead but time will tell how well they can stay ahead of the rest of the field trying to catch up.

32

rya794 t1_jds0xqs wrote

I don’t think so, I suspect my argument holds no matter who is running the most advanced LLM. The market leader will never have an incentive to open source their “app store”.

The only way this breaks down is if by some miracle, an open source model takes and maintains the lead.

8

ThirdMover t1_jds1kid wrote

The lead may not always be obvious, and the trade-off for transparency may be worth it. LLMs (or rather "foundation models") will continue to capture more and more areas of competence. If I want one that - for example - serves as the front-end chat bot for my store so that people can ask for product explanations, do I really need the 500 IQ GPT-7 that won two Nobel prizes last year?

I think it's most likely that there will always be black box huge models that form the peak of what is possible with machine intelligence but what people use and interact with in practice will simply be "good enough" smaller and open source models.

20

Dwanyelle t1_jds42hs wrote

Exactly. It's not "what's the most impressive model possible?". It's "what's the most impressive model possible that can run on $1000 or less of hardware?"

15

rya794 t1_jdsev38 wrote

Yea, I agree with this, but I still don't see what advantage the state-of-the-art providers gain by adhering to an open protocol. If anything, doing so would (on the margin) push users toward open source models when they might have been willing to pay for a more advanced model just to access certain plugins.

That being said, I do think that a standardized approach to a plugin ecosystem will arise. I just think it’s silly to expect any of the foundation model providers to participate.

4

alexmin93 t1_jduoxj4 wrote

The problem is not the model but the training dataset. That's the thing that costs millions for OpenAI. Alpaca performs rather poorly mostly because it's trained on GPT-3-generated texts.

1

sweatierorc t1_jdszzh4 wrote

Firefox did, they only lost to another "open-source" project

2

rya794 t1_jdt0dxe wrote

That’s a really good counter argument. You may have moved me over to the other side.

3

AngusDHelloWorld t1_jdtq232 wrote

And not everyone cares about open source. At least for non-technical people, as long as they can get things done, it's good enough for them.

3

beryugyo619 t1_jds9oz8 wrote

Yeah, the only advantage they have seems to be a couple of <500GB sets of model weights in their hands, gained solely by being the first mover, without much else to back it up.

2

mcilrain t1_jds23vc wrote

Once competition kicks in, AIs are going to be accessing every API on the web by default.

5

light24bulbs t1_jds3mdl wrote

What's the underlying approach here? Just prompt engineering right?

I really, really want to apply the Toolformer paper to LLaMA. They're both Facebook systems, so you can bet they've done it.

ToolFormer just seems like SUCH a good and thorough approach. There are quite a few gaps between the paper and building a working example, IMO, but it's clearly doable.

The way Facebook licensed the weights is frustrating me. We should all be passing around Alpaca-trained, GPTQ-quantized, SparseGPT-optimized, LLaMA-derived models by now. Is there some Telegram group I need to be in or something?
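For anyone who hasn't read the paper: the core Toolformer mechanic is the model emitting inline API-call markup, which a wrapper executes and splices back into the text. A minimal sketch of that loop (the tool names and exact markup here are illustrative assumptions, not the paper's verbatim format):

```python
import re

# Hypothetical registry of tools the model may call inline,
# in the spirit of Toolformer's "[Tool(args) -> result]" markup.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

CALL = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(text: str) -> str:
    """Replace each inline [Tool(args)] marker with '[Tool(args) -> result]'."""
    def run(match: re.Match) -> str:
        name, args = match.group(1), match.group(2)
        result = TOOLS[name](args)
        return f"[{name}({args}) -> {result}]"
    return CALL.sub(run, text)

print(execute_tool_calls("The answer is [Calculator(400/8)]."))
# -> The answer is [Calculator(400/8) -> 50.0].
```

In the paper the model is fine-tuned on text that already contains such call sites, so at inference time it learns to emit them where a tool would help; the wrapper above is just the execution half.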

18

endless_sea_of_stars t1_jdskiit wrote

The advantage of in-context learning is that it is trivial to add and remove plug-ins.

Training with the plug-ins is more powerful, but then you can't easily add or remove them. In theory, training against APIs should result in a smaller model, since the main model no longer needs to learn math or trivia.

2

light24bulbs t1_jdsulyn wrote

By "in context learning" i take it you mean zero shot.

Yes, you can hot swap. Id be unsurprised if what Open-AI did is fine tune on how to use plugins in general by giving some examples combined with a little bit of zero-shot primer.

Something trained with Toolformer's technique and then told it can use a new, but similar, plugin is IMO going to generalize way better than something that's never used a plugin before.

2

endless_sea_of_stars t1_jdtdiar wrote

Here is what we know about OpenAI's plug-ins: a compact API description gets prepended to the prompt (in context). Technically it is few-shot, depending on which definitions you use. We don't know what fine-tuning of the model, if any, they did to get plug-ins working.

1

light24bulbs t1_jdtgrjb wrote

Based on how much LangChain struggles to use tools and gets confused by them, I'd bet on fine-tuning. I asked a contact to reveal what they're injecting into the prompt, but it's not public information yet, so I couldn't get it.

3

endless_sea_of_stars t1_jdtik00 wrote

It is mostly public information. The API developer is required to publish a specification document that describes the API. This gets injected into the prompt. They may transform it from JSON to something the model understands better, and they may also inject some other boilerplate text.
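As a rough illustration of that injection step (the spec shape, field names, and wording below are my assumptions - OpenAI's actual boilerplate is not public):

```python
import json

# Hypothetical compact plugin spec, in the spirit of the OpenAPI-style
# documents plugin developers register. Field names are made up for illustration.
plugin_spec = {
    "name": "weather",
    "description": "Look up the current weather for a city.",
    "endpoints": {"GET /weather": {"params": {"city": "string"}}},
}

def build_prompt(user_message: str, specs: list) -> str:
    """Prepend a compact text rendering of each plugin spec to the prompt."""
    header = "\n".join(
        f"Tool `{s['name']}`: {s['description']} API: {json.dumps(s['endpoints'])}"
        for s in specs
    )
    return f"{header}\n\nUser: {user_message}"

print(build_prompt("Is it raining in Paris?", [plugin_spec]))
```

This is also why hot-swapping is trivial in the in-context approach: adding or removing a plugin is just adding or removing an entry from the list, no retraining involved.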

1

light24bulbs t1_jdtiq9w wrote

I'm aware of that part. The wording of the text that's injected is not public. If it were, I'd use it in my LangChain scripts.

Again, I really expect there's fine-tuning; we'll see eventually, maybe.

1

alexmin93 t1_jdup63s wrote

Do you have GPT-4 API access? AFAIK plugins run on GPT-4, which even in its current state is way better at following formal rules. But it's likely that they've indeed fine-tuned it to make decisions about using tools.

1