
Jaffa6 t1_jd03par wrote

That's odd.

Quantisation should just reduce precision from (e.g.) 32-bit floats to 16-bit floats, but I wouldn't expect it to lose that much coherency at all. Did they say somewhere that that's why?
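For illustration, here's a minimal PyTorch sketch of the fp32 → fp16 cast being described (the tensor here is hypothetical, not the actual model's weights). The round-trip error is tiny relative to the weights themselves, which is why a cast to half precision alone rarely costs much coherence:

```python
import torch

# Hypothetical weight tensor in full precision.
w32 = torch.randn(4, dtype=torch.float32)

# "Quantise" by casting to half precision.
w16 = w32.to(torch.float16)

# Measure the worst-case rounding error of the round trip.
error = (w32 - w16.to(torch.float32)).abs().max()
print(f"fp32: {w32}\nfp16: {w16}\nmax abs error: {error:.2e}")
```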

3

Haghiri75 OP t1_jd32s29 wrote

Apparently I was wrong; the problem isn't only quantization. It's also that the model isn't Stanford's Alpaca but another Alpaca-like model. That's all I can say for certain.

1