
IntelArtiGen t1_iqu74lu wrote

Transformers like the one in BERT already come with tasks defined to train them without labels. You can use a corpus like Universal Dependencies if you want to predict labels on words / sentences, but you can also take any raw text and set up tasks like "predict hidden words" or "predict the next sentence", the way they're defined here: https://arxiv.org/pdf/1810.04805.pdf (or any other way, as long as it makes sense for the neural network). You can also use OPUS if you want to try translating sentences with the full encoder-decoder architecture of the Transformer.
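
For instance, here's a rough sketch of the "predict hidden words" idea in PyTorch. The 15% masking rate and the [MASK] token come from the BERT paper; the toy whitespace tokenizer and vocabulary are just made up for illustration:

```python
import torch

sentence = "the cat sat on the mat".split()
vocab = {w: i for i, w in enumerate(sorted(set(sentence)), start=1)}
vocab["[MASK]"] = 0
ids = torch.tensor([vocab[w] for w in sentence])

# Pick ~15% of positions, hide them, and keep the originals as targets.
labels = ids.clone()
masked = torch.rand(ids.shape) < 0.15
labels[~masked] = -100            # positions the loss should ignore
inputs = ids.clone()
inputs[masked] = vocab["[MASK]"]
# The model is trained to recover `labels` at the masked positions of `inputs`,
# so any raw text gives you training data for free.
```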

You probably don't need a high-end GPU to train a small transformer on a small corpus. I trained a basic transformer in 30 min with an RTX 2070S on Europarl with just the masked-word-prediction task. If you don't have a GPU it'll be harder, though. I've never tried to train a very small Transformer, so I don't know how they scale, but I guess you could try predicting masked words on ~100 sentences with a very small transformer and train that model on CPU.
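
To give a feel for the scale, here's a rough sketch of a tiny encoder-only model with torch.nn. The sizes are guesses, not what I actually used, and positional encodings are left out for brevity:

```python
import torch
import torch.nn as nn

class TinyMaskedLM(nn.Module):
    """A very small encoder-only transformer for masked word prediction."""
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq, d_model)
        x = self.encoder(x)
        return self.to_vocab(x)        # logits over the vocabulary

model = TinyMaskedLM(vocab_size=1000)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)   # skips unmasked positions
inputs = torch.randint(0, 1000, (8, 12))           # a toy batch of token ids
labels = inputs.clone()                            # pretend every position is a target
loss = loss_fn(model(inputs).reshape(-1, 1000), labels.reshape(-1))
loss.backward()
```

Something this small should fit on an old GPU or even a CPU; the corpus size will matter more than the model.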

If you're only testing the architecture of the transformer and not the embeddings, you can start the model from pretrained embeddings; that should speed up training a lot.
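
Something like this, assuming PyTorch and some pretrained word vectors you've already loaded (the loading itself isn't shown):

```python
import torch
import torch.nn as nn

# Placeholder for pretrained vectors (e.g. word2vec / GloVe), shape (vocab, dim).
pretrained = torch.randn(1000, 128)

# freeze=True keeps the embeddings fixed, so only the transformer layers train.
embed = nn.Embedding.from_pretrained(pretrained, freeze=True)
# Drop this in place of the randomly initialized nn.Embedding in the model.
```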

2

sharp7 OP t1_iqu81jn wrote

Hmm, interesting that it only took you 30 min for Europarl and masked word prediction. Do you have any links to more information about that dataset and task? I'm not familiar with masked word prediction. But that's pretty fast, although I only have an old GTX 1060 6GB; not sure how much worse that is than your RTX 2070.

1

IntelArtiGen t1_iqv7vu7 wrote

The task is described in the paper I linked (3.1, Task #1: Masked LM). Any implementation of BERT should use it, like this one.
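
In short: 15% of the tokens are selected as prediction targets, and of those, 80% are replaced with [MASK], 10% with a random token, and 10% left unchanged. A rough tensor-level sketch of that rule (not taken from any particular implementation):

```python
import torch

def bert_style_mask(ids, mask_id, vocab_size, select_prob=0.15):
    """Select ~15% of tokens as targets; of those,
    80% -> [MASK], 10% -> random token, 10% -> left as-is."""
    ids = ids.clone()
    labels = ids.clone()
    selected = torch.rand(ids.shape) < select_prob
    labels[~selected] = -100                   # only selected positions are scored

    use_mask = selected & (torch.rand(ids.shape) < 0.8)
    ids[use_mask] = mask_id                    # 80% of selected -> [MASK]

    use_random = selected & ~use_mask & (torch.rand(ids.shape) < 0.5)
    ids[use_random] = torch.randint(0, vocab_size, (int(use_random.sum()),))
    return ids, labels                         # the remaining ~10% stay unchanged
```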

2