
shellyturnwarm t1_j6hw7yq wrote

In your dataloaders, why do you set persistent_workers to False? And why do you choose 2 for num_workers?

Also, what does self.se stand for in ConvGroup, and what is it doing there?

Finally what is whitening, and what are you trying to achieve with it?

2

tysam_and_co OP t1_j6hxgzk wrote

Hi hi hiya there! Great questions, thanks so much for asking them! :D

For the dataloaders, that dataloading only happens once -- after that, it's just saved on disk as a tensor array in fp16. It's wayyyyy faster for experimentation this way. We only need to load the data once, then we move it to GPU, then we just dynamically slice it on the GPU each time! :D
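For concreteness, here's a minimal sketch of that idea (not the repo's actual code -- the cache filename, the `get_batch` helper, and the batch sizes are made up for illustration, and it assumes a CUDA GPU is available):

```python
# Load CIFAR-10 once, cache it as a single fp16 tensor, then slice batches
# straight off the GPU instead of going through a DataLoader every epoch.
import os
import torch
import torchvision
import torchvision.transforms as T

CACHE = "cifar10_train_fp16.pt"  # hypothetical cache path

if os.path.exists(CACHE):
    images, labels = torch.load(CACHE)
else:
    ds = torchvision.datasets.CIFAR10(root="./data", train=True,
                                      download=True, transform=T.ToTensor())
    # One throwaway pass through a regular loader, then never again.
    loader = torch.utils.data.DataLoader(ds, batch_size=1024, num_workers=2)
    xs, ys = zip(*[(x, y) for x, y in loader])
    images = torch.cat(xs).half()   # N x 3 x 32 x 32, fp16
    labels = torch.cat(ys)
    torch.save((images, labels), CACHE)

# Move the whole dataset to the GPU once.
images, labels = images.cuda(), labels.cuda()

def get_batch(batch_size=512):
    # Dynamic slicing on the GPU: each "batch" is just an indexing op.
    idx = torch.randint(0, images.shape[0], (batch_size,), device=images.device)
    return images[idx], labels[idx]
```

Once the tensors live on the GPU like this, settings like persistent_workers and num_workers only affect that single initial pass, which is why they stop mattering for the actual training loop.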

As for self.se, that used to be a flag for the squeeze_and_excite layers. I think it's redundant now as it's just a default thing -- this is a one-person show and I'm moving a lot of parts fast, so there are oftentimes little extraneous bits and pieces hanging around. I'll try to clean that up on the next pass, very many thanks for pointing that out and asking!
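For anyone curious, a standard squeeze-and-excite block looks roughly like this (a generic sketch, not necessarily the exact layer in the repo -- the `reduction` ratio and the Linear-based gating here are illustrative choices):

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    # Squeeze-and-excite: global-average-pool to a per-channel vector,
    # pass it through a small bottleneck MLP, and use the result to
    # rescale each channel of the feature map.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        w = x.mean(dim=(2, 3))            # squeeze: B x C
        w = torch.relu(self.fc1(w))
        w = torch.sigmoid(self.fc2(w))    # per-channel gates in (0, 1)
        return x * w[:, :, None, None]    # excite: rescale each channel
```

The gist is that it lets the network cheaply reweight channels based on global context, which is why it shows up as an optional add-on inside conv blocks like ConvGroup.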

I'm happy to answer any other questions that you might have! :D

1