Submitted by nashcaps2724 t3_117l2vf in deeplearning
hayAbhay t1_j9duoda wrote
Create a Corpus C like this:
<source text from corpus A> <human-generated text from corpus B> . . .
Make sure you add some unique tokens marking the start and end of each example, and the input and output within it.
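A minimal sketch of building such a corpus. The marker strings (`<|start|>`, `<|sep|>`, `<|end|>`) are placeholders I've chosen for illustration; any unique tokens that never occur in your data will do:

```python
# Sketch: assemble Corpus C from parallel (source, human-generated) pairs.
# The marker tokens below are arbitrary placeholders, not prescribed names.
START, SEP, END = "<|start|>", "<|sep|>", "<|end|>"

def format_example(source: str, target: str) -> str:
    """One training example: input, separator, output, end marker."""
    return f"{START}{source}{SEP}{target}{END}"

def build_corpus(pairs):
    """pairs: iterable of (source_text_from_A, human_text_from_B)."""
    return "\n".join(format_example(s, t) for s, t in pairs)
```

The separator lets the tuned model learn where the input stops and the output begins, and the end marker gives it a consistent place to stop.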
Then, fine-tune any pretrained LLM on Corpus C (tuning gpt3 is trivial with ~10-20 lines of code).
For inference, give the tuned model the input and let it complete the output. You can use the "end" marker token as a stop sequence so generation terminates cleanly.
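The inference step above can be sketched as two small helpers. The marker names are the same illustrative placeholders assumed for the corpus; they must match whatever was used during tuning:

```python
# Sketch: prompt the tuned model with input + separator, then truncate the
# completion at the end marker. Marker names are illustrative placeholders.
SEP, END = "<|sep|>", "<|end|>"

def make_prompt(source: str) -> str:
    # The tuned model has learned to continue past SEP with the output.
    return f"<|start|>{source}{SEP}"

def extract_output(completion: str) -> str:
    """Keep only the text generated before the end marker."""
    return completion.split(END, 1)[0].strip()
```

With an API that supports stop sequences, you would pass `END` as the stop token instead of truncating client-side; `extract_output` is the fallback when you can't.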
[Source: trained/tuned several language models including gpt3]
nail_nail t1_j9gkc5w wrote
That means paying for every summarization API call, forever, right? Is there an alternative model one could tune on a couple of high-end NVIDIA cards, like GPT-NeoX?
hayAbhay t1_j9i9nfv wrote
If you have the hardware and a lot of those input-output examples, you can use alternative smaller models in the gpt family.
It should work reasonably well, especially if the variance in the input-output pairs isn't too high. (A lot depends on your dataset here.)
There are definitely tradeoffs in terms of model development, inference, and ongoing maintenance. If the expected API costs aren't too high, I'd strongly recommend gpt3 as a base.
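A rough sketch of the self-hosted route with a smaller GPT-family model via Hugging Face `transformers`. The model name (`gpt2`), hyperparameters, and marker tokens are illustrative assumptions, not a tuned recipe; calling `train()` downloads weights and needs `transformers` and `torch` installed:

```python
# Sketch: fine-tune a small open GPT-family model on Corpus C-style
# examples. Model name, hyperparameters, and markers are placeholders.

def build_examples(pairs, start="<|start|>", sep="<|sep|>", end="<|end|>"):
    """Format (source, target) pairs the same way as the tuning corpus."""
    return [f"{start}{s}{sep}{t}{end}" for s, t in pairs]

def train(pairs, model_name="gpt2", output_dir="tuned-model"):
    # Heavy imports deferred so the formatting helper works without them.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    enc = tokenizer(build_examples(pairs), truncation=True, max_length=512)
    dataset = [{"input_ids": ids} for ids in enc["input_ids"]]

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir,
                               num_train_epochs=3,
                               per_device_train_batch_size=2),
        train_dataset=dataset,
        # mlm=False -> standard causal language-modeling objective
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model(output_dir)
```

Swapping `model_name` for a larger checkpoint (e.g. a GPT-Neo variant) is the same code, just more VRAM.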