Submitted by chaitjo t3_10r31eo in MachineLearning

Geometric GNNs are an emerging class of GNNs for spatially embedded graphs in scientific and engineering applications, such as biomolecular structure, materials science, and physical simulation. Notable examples include SchNet, DimeNet, Tensor Field Networks, and E(n) Equivariant GNNs.
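
For readers who want to see what "geometric" message passing looks like in code, here is a minimal sketch of an E(n)-equivariant layer in the spirit of E(n) Equivariant GNNs, assuming PyTorch. The class name `EGNNLayer` and the exact MLP choices are illustrative, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """Illustrative E(n)-equivariant message-passing layer (EGNN-style).

    Messages are built from node features plus the *invariant* squared
    distance, so features h transform invariantly while coordinates x
    update equivariantly under rotations and translations.
    """
    def __init__(self, dim):
        super().__init__()
        self.msg_mlp = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.coord_mlp = nn.Linear(dim, 1)  # scalar weight per edge
        self.upd_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h, x, edge_index):
        src, dst = edge_index                     # edges flow src -> dst
        rel = x[src] - x[dst]                     # relative positions
        dist2 = (rel ** 2).sum(-1, keepdim=True)  # rotation-invariant scalar
        m = self.msg_mlp(torch.cat([h[src], h[dst], dist2], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum messages per node
        # Equivariant coordinate update: scale relative vectors by a scalar
        dx = torch.zeros_like(x).index_add_(0, dst, rel * self.coord_mlp(m))
        return self.upd_mlp(torch.cat([h, agg], dim=-1)), x + dx

# Toy usage: 5 nodes with 16-dim features and 3D coordinates
h, x = torch.randn(5, 16), torch.randn(5, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
h_new, x_new = EGNNLayer(16)(h, x, edge_index)  # shapes: (5, 16), (5, 3)
```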

How powerful are geometric GNNs? How do key design choices influence expressivity, and how can we build maximally powerful ones?

Check out this recent paper for more:

📄 PDF: http://arxiv.org/abs/2301.09308

💻 Code: http://github.com/chaitjo/geometric-gnn-dojo

💡Key findings: https://twitter.com/chaitjo/status/1617812402632019968 

P.S. Are you new to Geometric GNNs, Geometric Deep Learning, PyTorch Geometric, etc.? Want to understand how the theory and equations connect to real code?

Try this Geometric GNN 101 notebook before diving in:
https://github.com/chaitjo/geometric-gnn-dojo/blob/main/geometric_gnn_101.ipynb

Comments

fraktall t1_j6wbugp wrote

I don’t get why graph NNs aren’t attracting more attention tbh

nombinoms t1_j6x36k6 wrote

Well, when you consider that every transformer is based on self-attention, which is a type of GNN, I'd say they're getting quite a bit of attention (no pun intended).
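
(Here's a tiny sketch of that view, assuming PyTorch; the function name is illustrative. Single-head self-attention is message passing on a fully connected graph: every token aggregates messages from all other tokens, with attention weights acting as edge weights.)

```python
import torch

def attention_as_message_passing(h, Wq, Wk, Wv):
    """Single-head self-attention viewed as a GNN layer on a complete
    graph: token i aggregates 'messages' h_j @ Wv from every token j,
    weighted by softmax-normalized attention scores (the edge weights)."""
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / k.shape[-1] ** 0.5  # pairwise scores = dense edges
    alpha = scores.softmax(dim=-1)         # row i: weights over all tokens j
    return alpha @ v                       # weighted sum-aggregation

# Toy usage: 4 "tokens" (nodes) with 8-dim features
n, d = 4, 8
h = torch.randn(n, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = attention_as_message_passing(h, Wq, Wk, Wv)  # shape (4, 8)
```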

chaitjo OP t1_j70k5tr wrote

In a sense, yes indeed!

For those who are curious, check out my blog post, Transformers are Graph Neural Networks: https://thegradient.pub/transformers-are-graph-neural-networks/

It explores the connection between Transformer models, such as GPTs and other LLMs for Natural Language Processing, and Graph Neural Networks. It is now one of the top three most-read articles on The Gradient and is featured in coursework at Cambridge, Stanford, etc.

fraktall t1_j6z1m35 wrote

Damn, I had no idea, thx, will now go read papers
