
LiquidDinosaurs69 t1_j44wp7w wrote

It’s definitely infeasible to train, or even run inference on, a large language model yourself; you would need many datacenter GPUs. But you could build an application that interfaces with the ChatGPT API (or some other API-accessible LLM), roughly like the sketch below.
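
A minimal sketch of that approach, assuming the `openai` Python package (v1+) is installed and an `OPENAI_API_KEY` environment variable is set; the model name is just an illustrative example:

```python
# Sketch: call a hosted LLM over the OpenAI API instead of self-hosting.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single user prompt to the hosted model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model ID; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize why self-hosting a large language model is hard."))
```

The heavy lifting (weights, GPUs, serving) stays on the provider's side; your application is just a thin client around HTTP calls.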
