Submitted by Haghiri75 t3_11wdi8m in deeplearning

Recently, I installed dalai on my MacBook Pro (late 2019, i7 processor and 16 GB of RAM) and I also installed the Alpaca-7B model. Now when I ask it to write a tweet, it writes a Wikipedia article, and it does that pretty much every time 😂

First, should I fine-tune it?

Second, is there any "prompt magic" going on here?

P.S.: Using this one, I got much better results. What's the difference between the two?

Comments

Haghiri75 OP t1_jcxt80d wrote

I guess I found the reason: Dalai quantizes the models, which makes them incredibly fast, but the cost of that quantization is reduced coherence.

Jaffa6 t1_jd03par wrote

That's odd.

Quantisation should take it from, e.g., 32-bit floats to 16-bit floats, but I wouldn't expect it to lose that much coherency at all. Did they say somewhere that that's why?
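For intuition, here's a rough sketch of what weight quantization does, assuming a simple symmetric int8 scheme (names are illustrative; llama.cpp-based tools like Dalai actually use 4-bit block quantization, which rounds more aggressively and loses more precision):

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)

# Round-trip error is bounded by half the scale step: small, but it
# compounds across billions of weights and many layers.
err = np.abs(dequantize(q, s) - w).max()
print(err)
```

The point is that each individual weight only moves a little, so whether the model stays coherent depends on how aggressive the scheme is and how sensitive the model is to it.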

Haghiri75 OP t1_jd32s29 wrote

Apparently I was wrong; the problem isn't only quantization. It's that it isn't Stanford's Alpaca but another Alpaca-like model. That's all I can say for sure.

j-solorzano t1_jd16g7r wrote

Try adjusting the temperature.

Haghiri75 OP t1_jd32z3b wrote

Temperature is just a matter of randomness: raising it helps generate more variation from the same prompt, but coherency is still a problem.
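That matches how temperature works: it just rescales the logits before the softmax, so it changes how flat or peaked the sampling distribution is, not what the model can express. A minimal sketch (NumPy; the function name is illustrative):

```python
import numpy as np

def sample(logits, temperature=1.0, rng=None):
    """Softmax with temperature, then sample one token index."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

logits = [2.0, 1.0, 0.1]
# Low temperature sharpens the distribution (nearly always the argmax);
# high temperature flattens it (more variety, more incoherence risk).
print(sample(logits, temperature=0.1))
print(sample(logits, temperature=2.0))
```

Either way, a model that wasn't trained to follow instructions stays incoherent at any temperature.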

Board_Stock t1_jczmxht wrote

Hello, I've been running alpaca.cpp on my laptop. Have you figured out how to make it remember conversations yet? Sorry if this is a beginner question.

j-solorzano t1_jd16nfb wrote

Language models don't remember conversations by themselves. You'd have to implement a memory and then add retrieved memories to the prompt.

Board_Stock t1_jd1z78z wrote

Yes, that's what I meant. I want to run alpaca.cpp in an API-like way so that it automatically includes the previous conversation along with the new message in the prompt.
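A minimal sketch of that kind of wrapper, assuming a `run_model` callable that shells out to alpaca.cpp (the name and the character-based budget are my own placeholders; a real version would count tokens against the model's context window):

```python
class Conversation:
    """Keeps a rolling transcript and prepends it to each new prompt."""

    def __init__(self, run_model, max_chars=2000):
        self.run_model = run_model   # callable: prompt str -> reply str
        self.history = []            # list of (speaker, text) turns
        self.max_chars = max_chars   # crude stand-in for a token budget

    def build_prompt(self, user_msg):
        """Render the transcript plus the new message as one prompt string.
        Drops the oldest turns when the prompt exceeds the budget."""
        while True:
            lines = [f"{who}: {text}" for who, text in self.history]
            lines.append(f"User: {user_msg}")
            lines.append("Assistant:")
            prompt = "\n".join(lines)
            if len(prompt) <= self.max_chars or not self.history:
                return prompt
            self.history.pop(0)      # forget the oldest turn and retry

    def say(self, user_msg):
        reply = self.run_model(self.build_prompt(user_msg))
        self.history.append(("User", user_msg))
        self.history.append(("Assistant", reply))
        return reply
```

Usage would be something like `conv = Conversation(my_alpaca_call)` followed by repeated `conv.say(...)` calls; the model never actually remembers anything, it just re-reads the transcript each time.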
