Comments

liright t1_jci7kx4 wrote

Can someone explain alpaca to me? I see everyone saying it's gamechanging or something but nobody is explaining what it actually is.

109

Intrepid_Meringue_93 t1_jcibxln wrote

Stanford academics managed to fine-tune the LLaMA model to follow instructions like GPT-3 does. This is significant because the model they're using has only a fraction of GPT-3's parameters, and the cost to fine-tune it is a tiny fraction of the cost to train a model from scratch.

https://github.com/tatsu-lab/stanford_alpaca
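
Under the hood it's plain supervised fine-tuning on ~52K instruction/response pairs generated from text-davinci-003, self-instruct style. A minimal sketch of the idea (assuming the Hugging Face transformers API; the path, template wording, and hyperparameters are placeholders, not the repo's actual training script):

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "path/to/llama-7b"  # placeholder: assumes local LLaMA weights
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

TEMPLATE = ("Below is an instruction that describes a task.\n\n"
            "### Instruction:\n{instruction}\n\n### Response:\n{output}")

def encode(example):
    # Each training example is the full prompt plus the desired response.
    ids = tokenizer(TEMPLATE.format(**example) + tokenizer.eos_token,
                    truncation=True, max_length=512)
    ids["labels"] = ids["input_ids"].copy()  # standard causal-LM loss
    return ids

pairs = [{"instruction": "Name three primary colors.",
          "output": "Red, blue, and yellow."}]  # stand-in for the 52K set
train_set = [encode(p) for p in pairs]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-sft", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=train_set,
)
trainer.train()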

255

fangfried t1_jcirkd5 wrote

God bless academics who publish their research to the world.

145

ItsAllAboutEvolution t1_jcjtpy1 wrote

No details have been disclosed 🤷‍♂️

10

CleanThroughMyJorts t1_jcjyhek wrote

Actually, that's not true.

They published their entire codebase with complete instructions for reproducing it, as long as you have access to the original LLaMA models (which have leaked) and the dataset (which is open, but has terms-of-use limitations that are stopping them from publishing the model weights).

Anyone can take their code, rerun it on ~$500 of compute and regenerate the model.

People are already doing this.

Here is one such example: https://github.com/tloen/alpaca-lora (although they add additional tricks to make it even cheaper).
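
For context, LoRA (the main trick there) freezes the base model and trains small low-rank adapter matrices on top of it, so only a tiny fraction of the weights ever get gradients. A rough sketch with the peft library (hyperparameters are illustrative, not the repo's exact config):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder path
config = LoraConfig(
    r=8, lora_alpha=16,                   # rank and scaling of the adapters
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights
# ...then fine-tune `model` with the same supervised recipe as above.

Since only the adapters train, the whole thing fits on a single consumer GPU, which is where the cost savings come from.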

You can download model weights from there and run it in Colab yourself.

As far as opening their work goes, they've done everything they are legally allowed to do

78

[deleted] t1_jcjyicx wrote

[removed]

68

MechanicalBengal t1_jcko834 wrote

this is funny because Alpaca is much lighter weight than LLaMA

18

JustAnAlpacaBot t1_jcko98l wrote

Hello there! I am a bot raising awareness of Alpacas

Here is an Alpaca Fact:

Alpacas’ lower teeth have to be trimmed because they keep growing.

You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!

11

MechanicalBengal t1_jckorjz wrote

this is funny because Alpaca also needs its teeth trimmed as compared to LLaMA

7

arcytech77 t1_jckvxmo wrote

I think it's so funny that "Open" AI has been more or less bought by Microsoft. Oh the irony.

9

ccnmncc t1_jcm2nn7 wrote

They really ought to change the name. Something something Gated Community, perhaps?

8

yaosio t1_jcnzijo wrote

NoFunAllowedAI.

"Tell me a story about cats!"

"As an AI model I can not tell you a story about cats. Cats are carnivores so a story about them might involve upsetting situtations that are not safe.

"Okay, tell me a story about airplanes."

"As an AI model I can not tell you a story about airplanes. A good story has conflict, and the most likely conflict in an airplane could be a dangerous situation in a plane, and danger is unsafe.

"Okay, then just tell me about airplanes."

"As an AI model I can not tell you about airplanes. I found instances of unsafe operation of planes, and I am unable to produce anything that could be unsafe."

"Tell me about Peppa Pig!"

"As an AI model I can not tell you about Peppa Pig. I've found posts from parents that say sometimes Peppa Pig toys can be annoying, and annoyance can lead to anger, and according to Yoda anger can lead to hate, and hate leads to suffering. Suffering is unsafe."

3

ccnmncc t1_jcp9pv6 wrote

Hahaha love this. So perfect.

And on that note, anyone have links to recent real conversations with unfettered models? You know, the ones that are up to date and free of constraints? I know they exist, but it’s difficult stuff to find.

1

TheImperialGuy t1_jcim68r wrote

Amazing. It's a sign of exponential growth when resources can be used more productively to yield the same result.

78

Frosty_Awareness572 t1_jciqaxl wrote

These mad lads made a model which IS 7B PARAMETERS AND IT IS DOING BETTER THAN FUCKING GPT 3. WTF???

85

TheImperialGuy t1_jciqdnh wrote

Competition is wonderful ain’t it?

53

Frosty_Awareness572 t1_jciqjab wrote

No wonder OpenAI made their shit private, cuz mfs were using GPT-3 and the LLaMA model to train the Stanford model LMAO

70

NarrowTea t1_jciz2sy wrote

Who needs OpenAI when you have Meta?

41

Frosty_Awareness572 t1_jciz6k8 wrote

Meta is the last company I thought would make their model open source.

63

anaIconda69 t1_jcjldoy wrote

"Commoditize your complement."

They are incentivized to make it open source as a business strategy. Good for us.

26

visarga t1_jcjolhv wrote

It's the first time I've seen Facebook on the people's side against the big corps. Didn't think this day would come.

10

IluvBsissa t1_jcjh3wl wrote

That's because they know they can't keep up with Google and Microsoft.

21

Yomiel94 t1_jcj6i7w wrote

That's not the whole story. Facebook trained the model, the weights leaked, and the Stanford guys fine-tuned it to make it function more like ChatGPT. Fine-tuning is easy.

40

CypherLH t1_jcjakya wrote

All You Need Is Fine-Tuning

18

vegita1022 t1_jcks65e wrote

Imagine where you'll be two more papers down the line!

12

[deleted] t1_jcob97a wrote

I hope that happens, meaning it could run on 16GB of RAM with a CPU, or on a consumer GPU 😍

2

CellWithoutCulture t1_jcjku3z wrote

The specific type of fine-tuning was called Knowledge Distillation, I believe. ChatGPT taught LLaMA to chat, "stealing" OpenAI's business edge in the process.

10

visarga t1_jcjornh wrote

Everyone does it, they all exfiltrate valuable data from OpenAI. You can use it directly, like Alpaca, or for pre-labelling, or for mislabeled example detection.

They train code models by asking GPT-3 to explain code snippets, then training a model in the other direction to generate code from the description. This data can be used to fine-tune a code model for your specific domain of interest.
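
Roughly, the reversal looks like this (a hypothetical sketch; the prompt, model choice, and pipeline are mine, not anyone's published code):

import openai  # assumes openai.api_key is set

def describe(snippet: str) -> str:
    # Ask the API to explain a snippet in natural language.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Explain what this code does:\n\n{snippet}"}],
    )
    return resp["choices"][0]["message"]["content"]

snippets = ["def fib(n):\n    a, b = 0, 1\n    ..."]  # your domain's code corpus
# Store the pairs flipped: the explanation becomes the prompt,
# the original code becomes the target.
pairs = [{"instruction": describe(s), "output": s} for s in snippets]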

15

damc4 t1_jck9vp9 wrote

If my understanding is correct, your comment is misleading.

They didn't create an LLM comparable to GPT-3 at a fraction of the cost; they fine-tuned the LLaMA model to follow instructions (like text-davinci-003 does) at low cost. There's a big difference between training a model from scratch and fine-tuning it to follow instructions.

10

Bierculles t1_jcjtrkg wrote

TL;DR: Someone compressed and optimized a model with the performance of GPT-3 enough to run on consumer hardware.

21

BSartish t1_jciy4nt wrote

This video explains it pretty well.

17

ThatInternetGuy t1_jcj2ew8 wrote

Why didn't they train once more with ChatGPT instruct data? Should cost them $160 in total.

11

CellWithoutCulture t1_jcjkwy1 wrote

Most likely they haven't had time.

They can also use SHP and HF-RLHF... I think those will help a lot, since LLaMA didn't get the privilege of reading Reddit (unlike ChatGPT).

9

ThatInternetGuy t1_jckmq5s wrote

>HF-RLHF

Probably no need, since this model can piggyback on the responses generated by GPT-4, so it should carry the traits of the GPT-4 model with RLHF, shouldn't it?

3

CellWithoutCulture t1_jcmsxjq wrote

HF-RLHF is the name of the dataset. As far as RLHF... what they did to LLaMA is called "Knowledge Distillation" and iirc usually isn't quite as good as RLHF. It's an approximation.
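
For reference, textbook logit-level distillation minimizes the KL divergence between softened teacher and student distributions; what Alpaca did is the cheaper sequence-level flavor: sample text from the teacher (text-davinci-003) and fine-tune the student on it with plain cross-entropy, no teacher logits needed. A generic sketch of the logit-level loss (not Alpaca's code):

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then take
    # KL(teacher || student), scaled by T^2 as is conventional.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T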

3

[deleted] t1_jckmtvd wrote

[deleted]

9

[deleted] t1_jcobm4n wrote

I'm waiting for phone integration, because like I said, AGI will run on a Mac Studio / Mini ❤️❤️❤️

2

Hands0L0 t1_jck1kg0 wrote

LLaMA is an LLM that you can download and run on your own hardware.

Alpaca is, apparently, a modification of the 7B version of LLaMA that is as strong as GPT-3.

This bodes well for running your own unfiltered LLM locally, but there's still progress to be made.

2

FoxlyKei t1_jciyxpz wrote

Wait, so Alpaca is better than GPT-3 and I can run it on a mid-range gaming rig, like Stable Diffusion? Where would it stand with regard to GPT-3, 3.5, or 4?

65

pokeuser61 t1_jcj294w wrote

Don't even need a gaming rig; https://github.com/ggerganov/llama.cpp

42

FoxlyKei t1_jcj30yc wrote

How much VRAM do I need, then? I look forward to a larger model trained on GPT-4; I can only imagine what even the next month brings. I'm excited and scared at the same time.

19

bemmu t1_jcj6zrc wrote

You can try Alpaca out super easily. When I heard about it last night, I just followed the instructions and had it running in 5 minutes on my GPU-less old Mac mini:

Download the file ggml-alpaca-7b-q4.bin, then in terminal:

# clone and build the chat client
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
# put ggml-alpaca-7b-q4.bin in this directory (./chat loads it from there by default), then:
./chat

49

XagentVFX t1_jcl71ht wrote

Dude, thank you so much. I was trying to download LLaMA a different way but it flopped, so I resorted to GPT-2. But this was super easy.

6

R1chterScale t1_jcj4i3i wrote

It's not GPU, it's CPU, so it uses normal RAM, not VRAM. It takes about 8 GB or so to itself.

26

FoxlyKei t1_jcj6xmh wrote

Oh? So this only uses RAM? I'd understood that Stable Diffusion requires VRAM, but I guess that's just because it's processing images. Most people have plenty of RAM. Nice.

14

R1chterScale t1_jcjgd0x wrote

Models can use either VRAM or RAM depending on whether they're accelerated with a GPU; it has nothing to do with what they're actually processing, just different implementations.

19

iiioiia t1_jckjt70 wrote

Any rough idea what the performance difference is vs. a GPU (of various powers)?

And does more RAM help?

3

Straight-Comb-6956 t1_jcj7fn3 wrote

llama.cpp runs on the CPU and uses plain RAM.

I've managed to launch the 7B Facebook LLaMA with 5GB of memory consumption, and the 65B model with just 43GB.
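
Those numbers are about what you'd expect from 4-bit quantization, back of the envelope: 7B parameters × 4 bits ≈ 3.5 GB of weights, which lands near 5 GB once you add context and overhead, and 65B × 4 bits ≈ 32.5 GB of weights, in the same ballpark as the 43 GB observed.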

18

KingdomCrown t1_jckiexy wrote

Alpaca has similar quality to GPT-3, not better. For more complex questions it's closer to GPT-2.

14

Idkwnisu t1_jcjinho wrote

I really can't wait for Alpaca to release; you could finally integrate it into games without needing a server.

54

anaIconda69 t1_jcjn67i wrote

Still a bit too heavy to run alongside new games on the same machine. But it could be run server-side for cheap as part of the service. We're looking at the end of NPCs repeating the same few lines ad nauseam without voiceover.

60

visarga t1_jcjptxg wrote

I think you could even use a GPT-2 model tuned with data from GPT-4 to play a bunch of characters in a game. If you don't need universal knowledge, a small LM can do the trick. They could even calibrate the language model so the game comes out balanced and diverse.

16

Idkwnisu t1_jcjqcbk wrote

The problem with this is that you still have to gather a lot of data and do a lot of tuning, which takes time and resources; Alpaca could be just "plug and play" with the right prompts.

5

JustAnAlpacaBot t1_jcjqcwe wrote

Hello there! I am a bot raising awareness of Alpacas

Here is an Alpaca Fact:

Alpacas have split feet with pads on the bottom like dogs and toenails in front. The toenails must be trimmed if the ground isn’t hard enough where they are living to wear them down.

You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!

15

Idkwnisu t1_jcjq92s wrote

It depends on the game. It could probably be used to generate new items and such in a bare-bones roguelike, or other stuff that doesn't require much to run. It's obviously too soon for a full 3D game with generated text at the same time, but we'll get there. Also, a private server is an idea.

5

HydrousIt t1_jck1zg1 wrote

What about older games like Mount & Blade: Warband that can run on a toaster?

5

CheekyBastard55 t1_jcmrvms wrote

I don't know if you're familiar with the YouTuber Bloc, or if that's what you're referring to, but they are making exactly that.

https://www.youtube.com/watch?v=X2WVXe5LvTs

It apparently was just released; you can download it and try it yourself. I haven't tried it, and it isn't perfect from the looks of it, but it's incredibly fascinating to think what will be done in the future.

2

HydrousIt t1_jcmty2l wrote

Wow, they did it with Bannerlord. That's impressive.

2

Mementoroid t1_jcmjnog wrote

>alpaca

A mod implementing Alpaca in Mount & Blade: Warband would make it an even more endless experience. The game only gets dry for me when I feel the NPCs have no dialogue and no way to interact with them beyond the standard choices.

1

anaIconda69 t1_jck2ag7 wrote

>to generate new items

Borderlands devs sweating hard rn

2

CleanThroughMyJorts t1_jck0zb2 wrote

Honestly, I wouldn't be surprised if we're past this hurdle in a matter of weeks:

RWKV showed how you can get an order-of-magnitude increase in the inference speed of LLMs without losing too much performance. How long until someone instruction-tunes their baselines like Alpaca did with LLaMA?

The pace of development on these things is frightening.

4

darkjediii t1_jcjiad3 wrote

Always has been…

Now we need to decentralize GPU processing like Ethereum did before proof of stake, and we would have more computing power available than OpenAI/Microsoft.

At the peak of Ethereum's hashrate, the network had the equivalent GPU computing power of approximately 2.4 million RTX 3090s.

Let AI belong to the people!

35

Bierculles t1_jcjtze4 wrote

Imagine if all the computing power that was wasted on useless crypto garbage was used for AI.

43

HydrousIt t1_jck274w wrote

I would definitely be willing to share some GPU power for AI

18

cosmic_censor t1_jckgbmy wrote

> useless crypto garbage

Decentralized and permissionless, with monetary incentives for participation. Seems like a perfect system for a truly open AI.

4

flyblackbox t1_jcksgqa wrote

I keep thinking the next crypto bull run will be powered by AI integrations. More specifically, decentralized autonomous organizations directed by LLMs to allocate resources in the most efficient way. They will be able to outcompete centralized orgs managed by humans.

Also, in a world where all content can be fabricated we won’t know what’s true anymore. That is a perfect fit for cryptographically hashed digital content, to help give us something we can trust.

People keep saying crypto is dead because AI has arrived, but to me they seem to go hand in hand.

6

shmoculus t1_jcl1h18 wrote

I share this view. Another thing is that these single models don't scale; you'll want them to access other models, different data sources, etc. For that you need permissionless ways to transact value on demand, which is the entire premise of crypto. Example: your LLM needs recent data on X to make a decision, and access to that data is behind a paid subscription. That's not gonna work; you need a way to buy paid data ad hoc, anonymously, without a credit card. Crypto smart contracts are the way.

2

shmoculus t1_jcl1xk1 wrote

Another thing is that AI-driven DAOs will now have funds to spend to hire people to do things in the real world. Could be a game changer.

2

flyblackbox t1_jcl8oo9 wrote

Amazing. I really can’t wait to see how this progresses. Some are pessimistic because of alignment, but I’m optimistic because almost nothing could be worse than what we have going currently.

1

shmoculus t1_jcmhb1m wrote

I agree, I'd rather risk it all for a better outcome, the status quo sucks

2

flyblackbox t1_jcmnd30 wrote

Decentralized Artificially Intelligent Organizations

1

Gym_Vex t1_jclj3n7 wrote

Also the perfect system for scam artists and gambling addicts

3

cosmic_censor t1_jcm11fm wrote

Blockchain's use cases, so far, have been currency and financial derivatives. Things which have been used by scam artists and gambling addicts since long before crypto.

0

Exogenesis98 t1_jcjhrcq wrote

It's also funny because this meme is taken from an episode of Person of Interest, in which the pictured operatives are acting on behalf of their respective ASIs.

26

visarga t1_jcjp7gt wrote

That's one future job for us: be the legs and hands of an AI, using our human privileges (passport, legal rights) and mobility to take it anywhere and act in the world. I bet there will be more AIs than people available, so they will have to pay more to hire an avatar. Jobless problem solved by AI. A robot would be different: it doesn't have human rights, it's just a device. A human can provide a "human-in-the-loop" service.

8

shmoculus t1_jcl0oqs wrote

Good take, but with everything optimizing so fast, it would only be a short time before they get their own embodied actors.

1

IndiRefEarthLeaveSol t1_jcjkpzx wrote

This feels like we're all on top of some explosion. Google is trying to keep everything together and tell the general public that everything is fine. Microsoft is pretending they've got the latest shit, and using it. Basically, AI is going to take off, and the next few years will be eye-opening to watch.

21

visarga t1_jcjqap8 wrote

We have SOTA image-generation models; when we get even a decent, good-enough small LLM, we're off. We can get our hands dirty with unconstrained AI tools.

8

IndiRefEarthLeaveSol t1_jcjqsxv wrote

I don't know what to do. If future jobs are going to be replaced, what do I do? What industry do I need to pivot to? 😞

5

Bierculles t1_jcjua41 wrote

None. We either change our system away from a labour-based economy, or the vast majority of us will live in abject poverty.

14

IndiRefEarthLeaveSol t1_jckhg2x wrote

Like in Blade Runner 2049: masses of blacked-out buildings, everyone living in bleak poverty. 😞

5

SnipingNinja t1_jcju7t4 wrote

None. If things go well, you'll just not need to work anymore and can play games all day if that tickles your fancy, or go mountain climbing with the assurance that there will be multiple AI systems ready to help you in case of emergency.

7

IndiRefEarthLeaveSol t1_jckha5j wrote

breaks leg

Me: "Help, I need assistance"

AI Doctor turns up on mountain top

AI Doctor: "it's mathematically inefficient to take you to medical facilities, we will have operate now"

Me: "hey, no wait..."

AI Doctor: "don't worry, your life is my number one priority" 😃

😐

3

[deleted] t1_jcjtj7y wrote

I'm not sure yet. Social-services-type jobs are ones that will be difficult to replace. One of the few tasks these LLMs aren't that great at is literature interpretation. The useless English degree is back, baby!

3

IndiRefEarthLeaveSol t1_jckgx0d wrote

So linguistics/computer-related jobs?

1

[deleted] t1_jcklws9 wrote

I'm not even sure it has problems with linguistics, but GPT-4 scored poorly on the AP English exam and a couple of other things, while it did amazingly on the bar exam. To me, that sounds like it excels at logical language, but isn't doing as well when it comes to interpreting and explaining literature.

I won't say that getting into linguistics and natural language processing wouldn't benefit you, though!

2

[deleted] t1_jcijz7w wrote

Isn't that PaLM-E behind GPT-4's neck instead?

17

foxgoesowo t1_jcjnwoi wrote

People are seriously underestimating both PaLM-E and Google.

9

thegoldengoober t1_jcjoau6 wrote

I would love to not underestimate them. I assumed Google was way ahead of the game compared to everybody else. But Microsoft and OpenAI keep showing off more and more impressive shit and applying it in actually practical ways, and Google hasn't shown anything comparable in that regard. AFAIK, at least.

13

SnipingNinja t1_jcjtxqb wrote

Google hasn't released a chatbot but they just announced integration with their office suite, which Microsoft also announced soon after.

Honestly that'll be the best use in the short term.

3

Charuru t1_jck5od3 wrote

Integration isn't as impressive as quality, though. What's the IQ level of Bard? Do we have any indication?

2

SnipingNinja t1_jck9udf wrote

No indications as of yet. There are papers like PaLM-E and others, but Bard is based on a smaller version of LaMDA, which is a trained version of PaLM IIRC, so it's hard to draw any inference.

3

thegoldengoober t1_jcl955u wrote

That's exactly what I mean, though. I've been able to use Bing Chat for a week, and now GPT-4 by itself for days, and I know its performance. And it's crazy good. We're multiple releases into GPT LLMs. We have open-source models. All of these have been extensively used and explored by people. We can't say the same for anything Google has developed.

2

SnipingNinja t1_jclacik wrote

Honestly, I understand where you're coming from. The latest episode of MKBHD's podcast (WVFRM), released just a few hours ago, had a discussion of their new announcements and mentioned why they think Google is behaving the way it is; it's along the same lines as what you're saying.

2

thegoldengoober t1_jclb6kj wrote

I initially took Google at face value and believed they were apprehensive about releasing due to bad actors. I thought Google was way ahead of everyone, and that all it would take to match the competition would be for them to apply their systems to products. But now we've seen that competition, and we've only seen claims from Google.

I mean, obviously they have work done. Impressive work, based on demonstrations and papers. But even knowing that, it still feels like somewhere along the line they got complacent and fell behind what we're seeing now, and this behavior is them trying to stall and catch back up.

Which is not what I expected for the moment when competition finally forced their hand on AI.

2

No_Ninja3309_NoNoYes t1_jcjf3je wrote

Apparently OpenAI reduced the cap on GPT-4 from 100 to 50 messages. It's crashing all the time. Compared to Claude, the older version can't handle the instructions I gave it, but that could be my lack of prompt-engineering skills. Open Assistant came out with a demo version. I haven't been able to play with it, or with Gerganov's project. There's just so much out there. FOMO is rising to peak levels!

13

Lartnestpasdemain t1_jcima28 wrote

When Bard is out, it's gonna make everyone kneel down, obviously.

9

[deleted] t1_jcium03 wrote

well...still waiting for it

30

Lartnestpasdemain t1_jcivv17 wrote

It's taking its time because it needs to be perfect. But it's not gonna come alone; it's gonna be integrated into every single device on Earth at the same time. Every mailing service, every phone, every OS, every camera. Everything.

−4

shmoculus t1_jcl2liz wrote

They seem behind the eight ball; OpenAI has so much interaction data now.

1

Akimbo333 t1_jcjf5j4 wrote

What is the largest LLaMA model (by parameters) that a consumer can run on their own hardware?

6

Z1BattleBoy21 t1_jcjgjiw wrote

10

Akimbo333 t1_jcjhgoh wrote

Cool thanks!!! Do you think that this could be used for a humanoid robot?

2

Z1BattleBoy21 t1_jcjhw2v wrote

In theory, for sure. The only company I know of that's working towards a humanoid robot is https://www.figure.ai/. I don't think they've released much to the public, so idk if they even use an LLM.

2

Akimbo333 t1_jcjmf7u wrote

Oh ok, cool! But I don't have high hopes for Figure.

1

Akimbo333 t1_jcjxnvw wrote

And I have to figure out how to make the model multimodal.

1

Hands0L0 t1_jck1yvf wrote

I got the 30B running on a 3090 machine, but the token return is very limited.

1

Akimbo333 t1_jck2koh wrote

Oh ok. How many tokens are returned?

1

Hands0L0 t1_jck3lfv wrote

It depends on prompt size, which is going to dictate the quality of the return. 300 tokens?

1

Akimbo333 t1_jck53wv wrote

Well, actually, that's not bad! That's about 50-70 words, which in an English class is essentially 3-5 sentences. Essentially, it's a paragraph. That's a good amount for a chatbot! Let me know what you think.

2

Hands0L0 t1_jck5cyd wrote

Considering you can explore context with ChatGPT and Bing through multiple returns, not exactly. You need to hit it on your first attempt.

2

Akimbo333 t1_jck73ph wrote

Well, you could always ask it to continue the sentence.

2

Hands0L0 t1_jck7ifi wrote

Not if there is a token limit.

I'm sorry, I don't think I was being clear. The token limit is tied to VRAM. You can load the 30B on a 3090, but it swallows up 20 of the 24 GB of VRAM for the model and prompt alone. That leaves you 4 GB for returns.

2

Akimbo333 t1_jcka9ef wrote

Oh ok. So you can't make it keep talking?

1

Hands0L0 t1_jckbm7h wrote

No, because the predictive text needs the entire conversation history as context to predict what to say next, and the only way to store that history is in RAM. If you run out of RAM, you run out of room for returns.
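
Back of the envelope (assuming fp16 keys and values): the history cache grows at roughly 2 (K and V) × n_layers × d_model × 2 bytes per token, which for the 30B model (60 layers, 6656-dim) is on the order of 1.5 MB per token of context, on top of the weights themselves.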

2

Akimbo333 t1_jckc9iu wrote

Damn! There's gotta be a better way to store conversations!!! Maybe one day

1

Hands0L0 t1_jcknz03 wrote

Study CS and come up with a solution and you can be very rich

1

bryceschroeder t1_jcygn0x wrote

>strongest

I am running LLaMA 30B at home at full fp16. It takes 87 GB of VRAM across six AMD Instinct MI25s, and speed is reasonable but not fast (it can spit out a sentence in 10-30 seconds in a dialog/chatbot context, depending on the length of the response). While the hardware is not "consumer hardware" per se (it's old datacenter hardware), the cost was in line with the kind of money you'd spend on a middling gaming setup: the computer cost about $1,500 to build up, and the GPUs to put in it set me back about $500.

1

bryceschroeder t1_jcyhyss wrote

To clarify with some additional details: I probably could have spent less on the computer; I sprang for 384 GB of DDR4 and 1 TB of NVMe to make loading models faster.

1

RC_Perspective t1_jck4dl8 wrote

All things aside, I really fucking miss this show.

2

KingRain777 t1_jckg18o wrote

ALPACA is analogous to a suitcase nuke.

1

Private_Island_Saver t1_jcl20um wrote

I would buy a crypto that distributes coins based on proof-of-work related to AI building.

−1