Submitted by juliensalinas t3_11tqryd in MachineLearning
I just released an instruct version of GPT-J using Stanford Alpaca's dataset. The result of this experiment is very cool and confirms that, when fine-tuned on the right data, GPT-J is a very powerful AI model! You can download the model from the HuggingFace hub: https://huggingface.co/nlpcloud/instruct-gpt-j-fp16
Here is an example:
from transformers import pipeline
import torch

# Load the fp16 model on the first GPU (device=0)
generator = pipeline(model="nlpcloud/instruct-gpt-j-fp16", torch_dtype=torch.float16, device=0)

prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"
print(generator(prompt))
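If you want longer or sampled outputs, you can pass generation arguments straight through the pipeline call. For example (the values here are just illustrative, not tuned recommendations for this model):

print(generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.9))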
More details about this experiment here: https://nlpcloud.com/instruct-version-of-gpt-j-using-stanford-alpaca-dataset.html
I hope it will be useful! Please don't hesitate to share some feedback!
Julien
pitrucha t1_jckiv1q wrote
Any plans to quantize it? I saw that someone managed to do so with the 65B LLaMA and push it from 120 GB down to 30 GB.
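For anyone wanting a smaller memory footprint in the meantime, here is a minimal sketch of 8-bit loading via bitsandbytes (my own example, not from the post, and not necessarily the technique used for the 65B LLaMA result; it assumes bitsandbytes and accelerate are installed):

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nlpcloud/instruct-gpt-j-fp16")
# load_in_8bit quantizes the linear layers to int8 at load time,
# roughly halving memory compared to fp16
model = AutoModelForCausalLM.from_pretrained(
    "nlpcloud/instruct-gpt-j-fp16",
    device_map="auto",
    load_in_8bit=True,
)

prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))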