Nineshadow t1_j45i8rf wrote

No, 60 problems are not enough, probably not even for fine-tuning. I would also strongly advise against starting from scratch.

The best approach in this case would be to fine-tune a pre-trained LLM that was trained on both natural language and code, something like GPT-Neo with 125M parameters. I'm mentioning the small version because you'll have trouble fitting larger models with billions of parameters in memory!
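A minimal sketch of what that fine-tuning could look like, assuming the Hugging Face `transformers` library and the `EleutherAI/gpt-neo-125M` checkpoint. The prompt format (statement, separator, solution code) and the hyperparameters are my own illustrative choices, not a fixed recipe:

```python
# Fine-tuning sketch: GPT-Neo 125M on (problem statement, solution code) pairs.
# Model name, prompt format, and hyperparameters are assumptions for illustration.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

SEP = "\n### Solution:\n"

def format_example(statement: str, code: str) -> str:
    """Join a problem statement and its solution into one training string."""
    return statement + SEP + code

class PairDataset(Dataset):
    """Wraps (statement, code) pairs as tokenized causal-LM examples."""
    def __init__(self, pairs, tokenizer, max_len=512):
        texts = [format_example(s, c) for s, c in pairs]
        self.enc = tokenizer(texts, truncation=True, max_length=max_len,
                             padding="max_length", return_tensors="pt")

    def __len__(self):
        return self.enc["input_ids"].size(0)

    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        labels = ids.clone()
        # Ignore padding positions in the loss.
        labels[self.enc["attention_mask"][i] == 0] = -100
        return {"input_ids": ids,
                "attention_mask": self.enc["attention_mask"][i],
                "labels": labels}

def main():
    name = "EleutherAI/gpt-neo-125M"
    tokenizer = AutoTokenizer.from_pretrained(name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo defines no pad token
    model = AutoModelForCausalLM.from_pretrained(name)
    # Replace with your own statement/code pairs (~60 in the question).
    pairs = [("Read two integers and print their sum.",
              "a, b = map(int, input().split())\nprint(a + b)")]
    args = TrainingArguments(output_dir="ft-gpt-neo",
                             num_train_epochs=3,
                             per_device_train_batch_size=1,
                             learning_rate=5e-5)
    Trainer(model=model, args=args,
            train_dataset=PairDataset(pairs, tokenizer)).train()

if __name__ == "__main__":
    main()
```

With a dataset this small, expect the model to memorize quickly; a low learning rate, few epochs, and a held-out pair or two to eyeball generations are about all the validation you can do.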

Personally, this is what I used for my Bachelor's thesis, where I made a tool to automatically generate input code from competitive programming statements.