
bluuerp t1_iux2ixx wrote

A neural network reduces a large number of inputs down to a few outputs, and even architectures that don't, like autoencoders, have some kind of bottleneck. Hence they are lossy data compression methods. That is how they learn. By its very nature the process is not reversible. You can't invert a dog/cat output back into the full image... but you can use gradCAM to get estimates, i.e. you can use gradient ascent to reconstruct what the network is looking for. Do that for a bunch of different random-noise starting points and you can estimate which neurons are most responsible for a certain output class.
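
A minimal sketch of the gradient-ascent part (activation maximization, not Grad-CAM itself), assuming PyTorch, some pretrained classifier `model`, and a chosen `target_class` index:

```python
import torch

def visualize_class(model, target_class, steps=200, lr=0.1,
                    input_shape=(1, 3, 224, 224)):
    # Treat the image itself as the thing being optimized:
    # start from random noise and push it toward whatever
    # maximizes the target class score.
    model.eval()
    img = torch.randn(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(img)
        # Maximize the target logit by minimizing its negative.
        loss = -logits[0, target_class]
        loss.backward()
        optimizer.step()

    return img.detach()
```

Run it from several different random seeds, as described above, and compare which features/activations keep showing up for that class.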

3

ojiber OP t1_iux5uw3 wrote

Hi, I had a quick look into gradCam and it seems to be very complex. Do you have any good resources on it?

1