machineko

machineko t1_je88wj9 wrote

I'm working on an open source library focused on resource-efficient fine-tuning methods called xTuring: https://github.com/stochasticai/xturing

Here's how you would perform int8 LoRA fine-tuning in three lines:

python: https://github.com/stochasticai/xturing/blob/main/examples/llama/llama_lora_int8.py
colab notebook: https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing

Of course, the Colab still only works with smaller models. In the example above, fine-tuning the 7B model required 9 GB of VRAM.
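For reference, the core of that script boils down to something like this (a rough sketch of the linked example; the exact model key, class names, and dataset path may differ in the current repo):

```python
# Sketch of int8 LoRA fine-tuning with xTuring, based on the linked example.
# The model key "llama_lora_int8" and the dataset path are illustrative and
# may differ in newer versions of the library.
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")      # placeholder path to instruction data
model = BaseModel.create("llama_lora_int8")        # LLaMA 7B with LoRA adapters + int8 weights
model.finetune(dataset=dataset)                    # runs the fine-tuning loop
```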

12

machineko t1_je70llx wrote

Why would you say that fine-tuning is not viable? There are many production use cases of fine-tuning a model using in-house proprietary data.
In fact, if you have the resources, you can do both: fine-tune an existing model (whether supervised or unsupervised) and use it for retrieval-augmented generation.
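As a rough illustration of that second point, here is a minimal retrieval-augmented generation loop on top of an already fine-tuned model. The checkpoint name is a placeholder, and the embedding model is just one common choice; any retriever plus any fine-tuned generator would do:

```python
# Minimal sketch: retrieval-augmented generation around a fine-tuned model.
# "your-org/your-finetuned-model" is a placeholder for your own checkpoint.
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday to Friday, 9am-5pm.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

generator = pipeline("text-generation", model="your-org/your-finetuned-model")

def answer(question: str) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)
    best = docs[int(np.argmax(doc_vecs @ q_vec.T))]  # nearest doc by cosine similarity
    prompt = f"Context: {best}\nQuestion: {question}\nAnswer:"
    return generator(prompt, max_new_tokens=64)[0]["generated_text"]
```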

10

machineko t1_je05orp wrote

I agree. While these giant centralized models are all over the news, there are ways to make smaller models much more efficient (e.g. LoRA, mentioned above). And in the process of working with these techniques, we may well discover new methods and architectures.
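To make the LoRA point concrete: the idea is to freeze the pretrained weight matrix and train only a low-rank update on top of it, so the number of trainable parameters drops dramatically. A bare-bones PyTorch sketch of the idea (not tied to any particular library):

```python
# Bare-bones LoRA linear layer: y = x @ (W + (alpha/r) * B @ A)^T
# The frozen base weight keeps its pretrained values; only A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False               # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```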

We are working on an open-source project focused on making fine-tuning of LLMs simple, fast, and efficient: https://github.com/stochasticai/xturing.

OP, we still have a ton of things we want to try to make fine-tuning faster and more compute/memory efficient, if you are interested in contributing.

6

machineko t1_jbu36nu wrote

How long is your text? If you are working with short sentences, try fine-tuning RoBERTa on your labeled dataset for classification. If you don't have a labeled dataset, you'll need zero- or few-shot learning with a larger model. I'd start with a smaller LLM like GPT-J and play with some prompts on a free playground (you can select GPT-J) until you find something that works well.
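If you go the RoBERTa route, a minimal fine-tuning setup with Hugging Face Transformers looks roughly like this (the two example sentences and labels are placeholders for your own labeled data):

```python
# Sketch: fine-tuning RoBERTa for short-text classification with Hugging Face Transformers.
# The texts/labels below are placeholders; swap in your real labeled dataset.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

texts = ["great product, would buy again", "terrible support experience"]  # placeholder data
labels = [1, 0]
enc = tokenizer(texts, truncation=True, padding=True)

class TinyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-clf", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=TinyDataset(),
)
trainer.train()
```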

1

machineko t1_ja4jubd wrote

Inference acceleration involves trade-offs between model accuracy, latency, and cost, as well as how much money and time you are willing to spend to speed things up. Is your goal real-time inference? Can you tolerate a 2-3% accuracy hit? What compute resource will the model run on? Is it in the cloud, and do you have access to GPUs? For example, certain inference optimization techniques only run on newer and more expensive GPUs.

For a highly scalable, low-latency deployment, you'd probably want to do model compression first. Once you have a compressed model, you can optimize inference further using TensorRT and/or other compilers and kernel libraries. Happy to share more thoughts; feel free to reply here or DM me with more details.
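As one concrete example of that pipeline (paths, shapes, and the checkpoint here are only illustrative): you could start with post-training dynamic quantization in PyTorch for CPU, and export the model to ONNX so a compiler such as TensorRT can apply its own fusion and precision calibration on GPU.

```python
# Sketch: compress a BERT classifier with dynamic int8 quantization, and export
# the fp32 model to ONNX for downstream compilers (e.g. TensorRT). Illustrative only.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", return_dict=False)  # tuple outputs export more cleanly
model.eval()

# Option A: post-training dynamic quantization of all Linear layers (faster CPU inference)
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

# Option B: export the fp32 model to ONNX; TensorRT can then do its own fp16/int8 calibration
input_ids = torch.ones(1, 128, dtype=torch.long)
attention_mask = torch.ones(1, 128, dtype=torch.long)
torch.onnx.export(model, (input_ids, attention_mask), "bert.onnx",
                  input_names=["input_ids", "attention_mask"],
                  output_names=["logits"], opset_version=13)
```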

1

machineko t1_j8b0zyv wrote

Are you interested in reducing the latency or just cutting down the cost? Can you run the workload on GPUs instead?

For BERT-type models, doing some compression and using inference libraries can easily get you a 5-10x speedup. If you're interested, I'd be happy to share more resources on this.
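For context, one common path is exporting the model to ONNX and running it with ONNX Runtime via the Optimum library. A rough sketch below; the checkpoint is just an example and the exact `from_pretrained` arguments vary between Optimum versions:

```python
# Sketch: run a BERT-type classifier through ONNX Runtime using Hugging Face Optimum.
# The checkpoint name is only an example; any sequence-classification model works similarly.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)  # PyTorch -> ONNX
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("The latency dropped noticeably after switching runtimes."))
```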

1

machineko t1_ixzkdbt wrote

AWS Lambda provides serverless, but you do not need serverless to make something scalable, if by scalable you mean going from a single GPU to multiple GPUs as your workload grows.

The simplest method is to containerize your application and use auto-scaling on GCP. You can also auto-scale it on Kubernetes. Alternatively, you can use a service like stochastic.ai, which deploys your model in a container and provides auto-scaling out of the box; you just upload your model and deploy.
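For example, the containerized app can be as simple as wrapping the pipeline in a small HTTP service and letting the platform scale replicas of that container. A generic sketch, not tied to any particular provider; the model id and server settings are illustrative:

```python
# Minimal sketch of a containerized image-generation service you could auto-scale.
# Model id and server details are illustrative; swap in your own pipeline.
import base64, io
import torch
from diffusers import StableDiffusionPipeline
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

class Request(BaseModel):
    prompt: str

@app.post("/generate")
def generate(req: Request):
    image = pipe(req.prompt).images[0]            # PIL image
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return {"image_png_base64": base64.b64encode(buf.getvalue()).decode()}

# Inside the container, run with: uvicorn app:app --host 0.0.0.0 --port 8000
```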

However, I suggest you "accelerate" your inference first. For example, you can use open-source inference engines (see: https://github.com/stochasticai/x-stable-diffusion) to easily accelerate your inference by 2x or more. That means you can generate 2x more images per dollar on public clouds.

1

machineko t1_isdv3te wrote

I agree with this comment. Back when the tools were crappy, it might've been better to build from scratch, but with so many good tools available now (often cheaper and better-performing than what you'd build on your own), you should at least try them. Especially if you are interested in running deep learning.

There are MLOps tools for:
- low latency inference

- training large language models

- explainable ml

and more.

2