eyeofthephysics t1_jbhu9d4 wrote

First I would say there exist versions of FinBERT which aren't just tuned for sentiment analysis. There are two groups who developed models they called FinBERT: https://arxiv.org/abs/1908.10063 and https://arxiv.org/abs/2006.08097. The first paper's model is fine-tuned for sentiment analysis, but the second was pre-trained using masked language modelling on general financial text, so that one can be fine-tuned for other tasks.
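If it helps, here's a minimal sketch of how you might load each variant with Hugging Face `transformers`. The Hub checkpoint names `ProsusAI/finbert` and `yiyanghkust/finbert-pretrain` are my assumption about where those two models are published, so double-check them before relying on this:

```python
# Sketch only: the two checkpoint names below are assumptions about how
# the two FinBERT variants are published on the Hugging Face Hub.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    pipeline,
)

# First FinBERT (arXiv:1908.10063): already fine-tuned for
# financial sentiment analysis, so it's usable out of the box.
sentiment = pipeline(
    "text-classification",
    model="ProsusAI/finbert",  # assumed Hub name for the first FinBERT
)
print(sentiment("Shares plunged after the earnings miss."))

# Second FinBERT (arXiv:2006.08097): pre-trained with masked language
# modelling on financial text, so you attach your own head and fine-tune.
tokenizer = AutoTokenizer.from_pretrained("yiyanghkust/finbert-pretrain")
model = AutoModelForSequenceClassification.from_pretrained(
    "yiyanghkust/finbert-pretrain",  # assumed Hub name for the second FinBERT
    num_labels=3,  # replace with your downstream task's label count
)
# ...then train `model` on your task with Trainer or a custom loop.
```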

Since you're interested in text embeddings, you may also be interested in this paper https://arxiv.org/pdf/2111.00526.pdf. The focus of that paper is sentiment analysis, but the general idea of using a sentence-BERT model to get better textual embeddings (as opposed to using vanilla BERT) should hold more generally.
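For a rough sense of the sentence-BERT idea (pooled, fixed-size sentence vectors rather than raw token states from vanilla BERT), here's a sketch with the `sentence-transformers` library; `all-MiniLM-L6-v2` is just a generic example checkpoint, not the model used in that paper:

```python
# Sketch of sentence-level embeddings with sentence-transformers;
# "all-MiniLM-L6-v2" is a generic example checkpoint, not the paper's model.
from sentence_transformers import SentenceTransformer
from numpy import dot
from numpy.linalg import norm

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "The central bank raised interest rates.",
    "Borrowing costs went up after the policy meeting.",
]
embeddings = model.encode(sentences)  # one fixed-size vector per sentence

# Cosine similarity between the two sentence embeddings.
cos = dot(embeddings[0], embeddings[1]) / (norm(embeddings[0]) * norm(embeddings[1]))
print(cos)
```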

2

eyeofthephysics t1_j4f2w85 wrote

>u/IamTimNguyen

Hi Tim, just to add on to your comment: Sho Yaida (one of the co-authors of PDLT) also wrote a paper on the various infinite-width limits of neural nets, https://arxiv.org/abs/2210.04909. He was able to construct a family of infinite-width limits and show that representation learning occurs in some of them (and he also found agreement with Greg's existing work).

1