lurkinginboston t1_jd0zr7c wrote
Reply to comment by Straight-Comb-6956 in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
I'll assume you are much more knowledgeable than I am in this space. I have a few basic questions that have been bothering me since all the craze around GPT and LLMs started recently.
I managed to get Alpaca working on my end using the above link and got very good results. LLaMA's biggest takeaway was that it can reproduce quality comparable to GPT at a much smaller compute size. If that's the case, why is the output much shorter on LLaMA than what I get from ChatGPT? I would imagine the ChatGPT response is much longer because ... it is just a bigger model? What is the limiting factor keeping us from getting generated responses as long as GPT's?
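From what I can tell, the length is mostly capped by how many tokens the program is asked to generate rather than by model size - alpaca.cpp/llama.cpp take a flag for this (the `-n` / `--n_predict` option, if I'm reading the README right), while ChatGPT just keeps going until it decides it's done. My rough mental model of the loop, as a toy sketch (the function names are made up, not real llama.cpp internals):

```python
import random

# Toy sketch of a generation loop. `toy_next_token` is a made-up stand-in for a
# real transformer forward pass -- none of this is actual llama.cpp code.

def toy_next_token(tokens, vocab_size=32000, eos=2):
    # Pretend the model occasionally emits the end-of-text token on its own.
    return eos if random.random() < 0.05 else random.randrange(3, vocab_size)

def generate(prompt_tokens, n_predict=128, eos=2):
    tokens = list(prompt_tokens)
    for _ in range(n_predict):       # hard cap on how long the reply can get
        nxt = toy_next_token(tokens)
        if nxt == eos:               # the "model" decided it was done
            break
        tokens.append(nxt)
    return tokens

print(len(generate([1, 100, 200], n_predict=64)))  # never more than 3 + 64 tokens
```

So a short answer isn't necessarily the model running out of ability; it may just be the default token budget or an early end-of-text.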
ggml-alpaca-7b-q4.bin is only about 4 gigabytes - I guess that's what the 4-bit quantization of 7 billion parameters means. I'm not sure if it's rumor or fact, but GPT-3 is said to be 175B parameters; if we got hold of the trained GPT model and managed to run 175B locally, would it give us the same results? Would it be possible to retrofit the GPT model into Alpaca.cpp with minor enhancements to get output just like ChatGPT? I have read that to fit a model that size, it requires multiple Nvidia A100s.
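Here's my back-of-the-envelope math on the file sizes (just parameters × bits per weight, ignoring the small overhead in the ggml file):

```python
def approx_size_gb(n_params, bits_per_weight):
    """Rough model size in gigabytes: parameters * bits per weight / 8 bits per byte."""
    return n_params * bits_per_weight / 8 / 1e9

print(approx_size_gb(7e9, 4))     # ~3.5 GB  -> matches the ~4 GB ggml-alpaca-7b-q4.bin
print(approx_size_gb(7e9, 16))    # ~14 GB   -> the original fp16 LLaMA-7B weights
print(approx_size_gb(175e9, 16))  # ~350 GB  -> why a GPT-3-sized model needs multiple 40/80 GB A100s
print(approx_size_gb(175e9, 4))   # ~87.5 GB -> even 4-bit quantized it's far beyond ordinary RAM
```

So the "~4 GB" file lines up with 4 bits × 7B parameters, and the multi-A100 requirement lines up with the full-precision size of a GPT-3-scale model.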
Last question: inference means getting output from an already-trained model. Meta/OpenAI/Stability.ai have the resources to train a model. If my understanding is correct, Alpaca.cpp or https://github.com/ggerganov/llama.cpp are a sort of 'front-end' for these models: they let us provide an input and get an output by running inference with the model. The question I am really trying to ask is: what is so great about llama.cpp? Is it because it's written in C/C++? I know there is a Rust version out, but it uses llama.cpp behind the scenes. Is there any advantage to inference being written in Go or Python?
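To convince myself that the wrapper language is not the main story: as far as I understand, llama.cpp's appeal is that its ggml kernels do the matrix math in hand-optimized C/C++ with 4-bit quantized weights on a plain CPU, with no Python/CUDA stack to install, so a Rust/Go/Python binding around those same kernels runs about as fast. What's slow is doing the math in the high-level language itself - a quick comparison sketch (NumPy standing in here for "optimized native kernels"):

```python
import random
import time

import numpy as np

# The same 256x256 matrix multiply in pure Python vs. NumPy, which dispatches
# to optimized native code -- analogous to how a Python/Go/Rust wrapper around
# llama.cpp just calls its C/C++ ggml kernels for the heavy lifting.
n = 256
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]

t0 = time.time()
c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
t_pure = time.time() - t0

a_np, b_np = np.array(a), np.array(b)
t0 = time.time()
c_np = a_np @ b_np
t_np = time.time() - t0

print(f"pure Python: {t_pure:.2f}s   NumPy/native kernels: {t_np:.4f}s")
```

So the language the front-end is written in matters much less than where the tensor math actually runs.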