
nombinoms t1_j6x36k6 wrote

Well, when you consider that every transformer is built on self-attention, which is a type of GNN operating on a fully-connected graph of tokens, I'd say they are getting quite a bit of attention (no pun intended).
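To make the connection concrete, here is a minimal NumPy sketch (my own illustrative code, not from the linked post): self-attention computes a soft, learned adjacency over all token pairs, while a plain GNN layer aggregates over a fixed 0/1 adjacency. Function names (`self_attention`, `gnn_layer`) and the single-head, unmasked setup are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head attention over n tokens: the softmaxed score matrix A
    # is an (n, n) soft adjacency on a fully-connected token graph,
    # and A @ V is the message-passing aggregation step.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

def gnn_layer(X, adj, W):
    # One message-passing step on an explicit graph: each node takes
    # a normalized average of its neighbours' transformed features.
    norm = adj / adj.sum(axis=-1, keepdims=True)
    return norm @ (X @ W)
```

With `adj = np.ones((n, n))` the GNN layer also aggregates over every token pair; self-attention replaces that fixed uniform weighting with input-dependent learned weights.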

4

chaitjo OP t1_j70k5tr wrote

In a sense, yes indeed!

For those who are curious, check out this blogpost from me: Transformers are Graph Neural Networks - https://thegradient.pub/transformers-are-graph-neural-networks/

It explores the connection between Transformer models, such as GPTs and other LLMs for Natural Language Processing, and Graph Neural Networks. It is now one of the top-3 most read articles on The Gradient and is featured in coursework at Cambridge, Stanford, etc.

4

fraktall t1_j6z1m35 wrote

Damn, I had no idea, thx, will now go read papers

3