
agent229 t1_iur2oqk wrote

You should be able to have autograd calculate the Jacobian for you in torch or tensorflow. Another thing I’ve done is a Monte Carlo version (sample near the encoding of a data point, propagate through the decoder, inspect changes to the output). Perhaps it would also be useful to use t-SNE to view the embeddings in 2D.
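The Monte Carlo idea above can be sketched as follows. This is a minimal sketch, not the commenter's actual code: the tiny `tanh` decoder is a stand-in for a trained decoder network (in torch you could instead call `torch.autograd.functional.jacobian` on the real decoder). Sampling perturbations around a latent point and fitting the output changes by least squares recovers a local Jacobian estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained decoder (assumption: any latent -> output map works here).
W = rng.standard_normal((10, 2))
def decoder(z):
    return np.tanh(W @ z)

z0 = rng.standard_normal(2)          # encoding of one data point
y0 = decoder(z0)

# Monte Carlo version: sample near the encoding, propagate through the
# decoder, inspect the changes to the output.
eps = 1e-2
deltas = eps * rng.standard_normal((2000, 2))
outputs = np.stack([decoder(z0 + d) for d in deltas])

# Least-squares fit of output changes against latent perturbations
# approximates the local Jacobian d(output)/d(latent), shape (10, 2).
J_mc, *_ = np.linalg.lstsq(deltas, outputs - y0, rcond=None)
J_mc = J_mc.T

# For this toy decoder the analytic Jacobian is available for comparison.
J_true = (1 - np.tanh(W @ z0) ** 2)[:, None] * W
print(np.abs(J_mc - J_true).max())   # small for small eps
```

The least-squares fit over many random directions is more robust than one-sided finite differences in a single direction, at the cost of extra decoder evaluations.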


Dear-Vehicle-3215 OP t1_iuren5w wrote

Yes, I know that I can calculate the Jacobian with autograd, but the problem is that in the paper they use a particular formulation, given that they have a sigmoid non-linearity: https://ibb.co/8xGQkZ6.

Regarding MC and t-SNE, I will investigate.
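The exact formulation from the linked image is not reproduced here, but the general pattern for a sigmoid non-linearity is standard: for a single layer h = sigmoid(Wz + b), the Jacobian has the closed form diag(h * (1 - h)) @ W, which is presumably the kind of expression the paper exploits. A minimal sketch with a hypothetical layer, checked against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical single sigmoid layer standing in for the paper's decoder.
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
z = rng.standard_normal(3)

h = sigmoid(W @ z + b)
# Closed-form Jacobian: dh/dz = diag(h * (1 - h)) @ W
J = (h * (1 - h))[:, None] * W

# Sanity check against central finite differences.
eps = 1e-6
J_fd = np.empty_like(J)
for k in range(3):
    dz = np.zeros(3)
    dz[k] = eps
    J_fd[:, k] = (sigmoid(W @ (z + dz) + b) - sigmoid(W @ (z - dz) + b)) / (2 * eps)

print(np.abs(J - J_fd).max())  # should be tiny
```

For a deep decoder the per-layer closed forms chain together, which is exactly what autograd does internally; the analytic form mainly buys structure you can reason about (e.g. the diagonal sigmoid-derivative factor).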


i-heart-turtles t1_iusf0zy wrote

Generally I think there should be more efficient ways of doing what you want without having to compute the full Jacobian; people do similar things in adversarial robustness, so you can have a look:

https://arxiv.org/abs/1907.02610

https://arxiv.org/abs/1901.08573

I think you should also check the work on evaluating disentanglement. This paper could be useful for you too: https://arxiv.org/abs/1812.06775. For VAE disentanglement, it is better for the Jacobian to be close to orthogonal than merely to have a small norm.
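One way to make the orthogonality point concrete (a hypothetical score, not a metric from the cited papers) is to measure how far J^T J is from the identity: it is zero exactly when the Jacobian's columns are orthonormal, i.e. when each latent direction moves the output independently and with equal magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical score: deviation of J^T J from the identity.
# 0 means the Jacobian columns are exactly orthonormal.
def orthogonality_gap(J):
    JtJ = J.T @ J
    return np.linalg.norm(JtJ - np.eye(J.shape[1]))

# A Jacobian with orthonormal columns scores ~0; a random one does not.
Q, _ = np.linalg.qr(rng.standard_normal((10, 3)))   # orthonormal columns
J_rand = rng.standard_normal((10, 3))

print(orthogonality_gap(Q))       # ~0
print(orthogonality_gap(J_rand))  # clearly > 0
```

Note that a Jacobian can have a tiny norm and still be far from orthogonal (all columns pointing the same way), which is why small norm alone is a weak disentanglement signal.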
