
MathChief t1_izjarfb wrote

A 1x1 conv is essentially a linear transformation over the channel dimension, as the other redditor suggests, the same as nn.Linear in PyTorch.

What I would add is that in PyTorch the 1x1 conv accepts tensors of shape (B, C, *) by default, for example (B, C, H, W) in 2d, which is convenient for implementation purposes. If you use nn.Linear, the channel dimension has to be permuted to the last position, the linear transformation applied, and the result permuted back. The 1x1 conv, which is essentially a wrapper around a C backend that performs this contraction automatically, does it in a single line, so the code is cleaner and less error prone.
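A minimal sketch of the equivalence (shapes here are arbitrary, chosen just for illustration): copy a 1x1 conv's weights into an nn.Linear and check that the permute-apply-permute route produces the same output.

```python
import torch
import torch.nn as nn

B, C_in, C_out, H, W = 8, 16, 32, 28, 28
x = torch.randn(B, C_in, H, W)

conv = nn.Conv2d(C_in, C_out, kernel_size=1, bias=True)
linear = nn.Linear(C_in, C_out)

# Copy the conv weights into the linear layer so both compute the same map.
# conv.weight has shape (C_out, C_in, 1, 1); drop the trailing 1x1 dims.
with torch.no_grad():
    linear.weight.copy_(conv.weight.view(C_out, C_in))
    linear.bias.copy_(conv.bias)

# 1x1 conv: one line, operates directly on (B, C, H, W).
y_conv = conv(x)

# nn.Linear: channels must be moved to the last dim and back.
y_lin = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

print(torch.allclose(y_conv, y_lin, atol=1e-5))  # True (up to float error)
```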

7

quagg_ t1_izjvgkv wrote

To add onto this point about it being the same as nn.Linear in PyTorch: it is useful in hypernetwork applications where the hypernetwork is conditioned on data (a context set, a partially observed sequence, etc.). Because of that data-conditioning, each sample gets a different main-network MLP, which doesn't inherently allow for batching.

If you still want to parallelize over multiple different MLPs at once, you can use 1x1 convolutions in 1D alongside the "groups" argument to run all of those separate networks at the same time, saving you from sequential processing at the cost of a larger CNN (NumNodes * BatchSize filters) in each convolution layer; see the sketch below.
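A minimal sketch of the trick, assuming the hypernetwork has already produced one weight/bias per sample (the shapes and the random "generated" weights here are stand-ins for illustration): packing the batch along the channel dimension and setting groups=B makes a single F.conv1d call apply sample i's weights only to sample i's input.

```python
import torch
import torch.nn.functional as F

B, in_dim, hidden = 4, 8, 16   # hypothetical sizes
x = torch.randn(B, in_dim)     # one input per sample

# Pretend a hypernetwork produced a separate weight/bias per sample.
# Shapes follow F.conv1d with groups=B: (B*hidden, in_dim, 1) and (B*hidden,).
w = torch.randn(B * hidden, in_dim, 1)
b = torch.randn(B * hidden)

# Stack the batch along the channel dim: (1, B*in_dim, 1).
x_packed = x.reshape(1, B * in_dim, 1)

# groups=B routes input channels [i*in_dim:(i+1)*in_dim] through
# filters [i*hidden:(i+1)*hidden] only, i.e. one MLP layer per sample.
y = F.conv1d(x_packed, w, b, groups=B).reshape(B, hidden)

# Reference: the sequential loop this replaces.
y_ref = torch.stack([
    x[i] @ w[i * hidden:(i + 1) * hidden, :, 0].T
    + b[i * hidden:(i + 1) * hidden]
    for i in range(B)
])
print(torch.allclose(y, y_ref, atol=1e-5))  # True
```

Stacking several such grouped layers with nonlinearities in between gives a full per-sample MLP evaluated in one batched pass.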

4

Mefaso t1_izmyrsw wrote

Oh, that sounds very useful. You don't happen to know of a code example of that?

1

quagg_ t1_izn5ha7 wrote

No 3rd-party ones that I know of, but I have an implementation of my own. Give me a day and I'll set up a repo to share it!

2

quagg_ t1_izr231j wrote

Here's the first (quick) version up! https://github.com/qu-gg/torch-hypernetwork-tutorials

It currently only contains simple examples on MNIST to highlight the implementation structure, but I'll add time-series examples (e.g. neural ODEs) and better README explanations over time. Let me know if you have any questions, or feel free to ask in the Issues.

Thanks!

1