Nezarah
Nezarah t1_jd4tt7y wrote
Reply to comment by usc-ur in Smarty-GPT: wrapper of prompts/contexts [P] by usc-ur
Ah! I've only just started to dive into the machine learning rabbit hole, so I wasn't sure if I was understanding the term correctly.
I'm keen to check it out.
Nezarah t1_jd297zo wrote
Reply to Smarty-GPT: wrapper of prompts/contexts [P] by usc-ur
Is this essentially In-Context Learning?
You condense additional knowledge into a prefix for the prompt as "context", so the model can use that information to produce a more accurate/useful output for the question/input?
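Roughly this kind of thing is what I mean (just a toy sketch; the function name and strings are placeholders, not anything from Smarty-GPT itself):

```python
# Minimal sketch of the idea: prepend known/retrieved context to the user's
# question so the model can ground its answer in it.
def build_prompt(context: str, question: str) -> str:
    return (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    context="Smarty-GPT wraps prompts with extra context before querying a model.",
    question="What does Smarty-GPT do?",
)
print(prompt)  # this string would then be sent to whatever LLM you are using
```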
Nezarah t1_jdz1zqc wrote
Reply to [D] Small language model suitable for personal-scale pre-training research? by kkimdev
For specifically personal use and research, not commercial? LLaMA is a good place to start, and/or Alpaca 7B. Small scale (can run locally on most hardware), and it can be LoRA-trained and fine-tuned. It also has a high token limit (I think it's 2048 or so?).
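For reference, LoRA fine-tuning with Hugging Face transformers + peft looks roughly like this (the checkpoint path and hyperparameters are placeholders, not a recipe):

```python
# Rough sketch of attaching LoRA adapters to a local LLaMA/Alpaca checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_path = "path/to/llama-7b"  # placeholder: wherever your local weights live
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Low-rank adapters on the attention projections; only these small matrices
# get trained, which is what makes this feasible on consumer hardware.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters actually train
```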
It can produce outputs comparable to GPT-3, which can be further enhanced with pre-context training.
You can add branching functionality through the LangChain library.
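Something along these lines, chaining steps around a local checkpoint (again, the model path is just a placeholder and this is only a rough sketch):

```python
# Quick illustration of chaining prompts around a local model with LangChain.
# HuggingFacePipeline wraps a transformers pipeline, so the same pattern works
# for a local LLaMA/Alpaca checkpoint.
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = HuggingFacePipeline.from_model_id(
    model_id="path/to/llama-7b",  # placeholder for your local weights
    task="text-generation",
)

# First step summarises, second step answers from the summary: a minimal
# two-stage chain to show how steps compose.
summarize = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Summarise the following notes:\n{input}"))
answer = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Using this summary, answer the original question:\n{input}"))

pipeline = SimpleSequentialChain(chains=[summarize, answer])
print(pipeline.run("...your notes here..."))
```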