
Nezarah t1_jdz1zqc wrote

For specifically personal use and research, and not commercial? LLaMA is a good place to start, and/or Alpaca 7B. Small scale (can run on most hardware locally), and can be LoRA trained and fine-tuned. Also has a decent token limit (I think it’s 2048 or so?).
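
As a rough sketch, LoRA fine-tuning with Hugging Face's PEFT library looks something like this (the checkpoint name and hyperparameters below are just placeholders, not recommendations):

```python
# Minimal LoRA setup sketch using Hugging Face transformers + peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "decapoda-research/llama-7b-hf"  # placeholder LLaMA checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trained
```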

Can have outputs comparable to GPT-3, which can be further enhanced with pre-context training (prepending relevant knowledge to the prompt).

Can add branching functionality through the LangChain library.
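
For example, chaining steps together with LangChain (circa early-2023 API) can look roughly like this; the model id here is just a stand-in for whatever local model you wrap:

```python
# Sketch: compose two prompt steps so one model call feeds the next.
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

# Wrap a local model (placeholder Alpaca checkpoint; loading 7B needs decent hardware).
llm = HuggingFacePipeline.from_model_id(
    model_id="chavinlo/alpaca-native",
    task="text-generation",
)

summarize = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["text"],
        template="Summarize this text:\n{text}",
    ),
)
critique = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["summary"],
        template="List weaknesses of this summary:\n{summary}",
    ),
)

pipeline = SimpleSequentialChain(chains=[summarize, critique])
result = pipeline.run("...some document text...")
```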

6

Nezarah t1_jd297zo wrote

Is this essentially In-Context Learning?

You condense additional knowledge into a prefix to the prompt as “context”, so that the question/input can use that information to produce a more accurate/useful output?
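
Something like this rough sketch (the context and question here are made up):

```python
# In-context prompting sketch: extra knowledge is prepended to the question,
# and the combined string is sent to the model as one input. No weights change.
context = (
    "Company policy: refunds are issued within 14 days of purchase, "
    "provided the item is unused."
)
question = "Can I return an item I bought three weeks ago?"

prompt = (
    "Use the following context to answer the question.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\n"
    "Answer:"
)
```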

1