
loly0ss t1_jdloewa wrote

Hello everyone,

I had a very ignorant question which I'm trying to find an answer to, but I still couldn't find it.

It's about the deep learning model in supervised segmentation vs. semi-supervised segmentation.

Is the model itself the same in both cases, for example using UNet++ for both, and the only difference comes during training, where we use pseudo-labels for the semi-supervised segmentation?

Or is the model itself different between supervised and semi-supervised segmentation?
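
For concreteness, here is a rough sketch of what I mean by the pseudo-label variant: the model would be the exact same segmentation network used for supervised training, and the confidence threshold and ignore-index handling below are just my illustrative assumptions:

import torch
import torch.nn.functional as F

# `model` is any segmentation network (e.g. UNet++), identical to the
# one used for fully supervised training; only the training step changes.
def semi_supervised_step(model, optimizer, labeled_batch, unlabeled_images, threshold=0.9):
    images, masks = labeled_batch

    # Ordinary supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(images), masks)

    # Pseudo-labels: the model's own confident predictions on unlabeled images.
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_images), dim=1)
        conf, pseudo_masks = probs.max(dim=1)
        pseudo_masks[conf < threshold] = 255  # mask out low-confidence pixels

    unsup_loss = F.cross_entropy(model(unlabeled_images), pseudo_masks, ignore_index=255)

    loss = sup_loss + unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()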

Thank you!


loly0ss t1_j0zjhsb wrote

Hello everyone!

I had a quick question regarding the KL divergence loss: while researching, I have seen numerous different implementations. The two most common are the following two. However, looking at the mathematical equation, I'm not sure whether the mean should be included.

KL_loss = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2)

OR

KL_loss = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2)

KL_loss = torch.mean(KL_loss)
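
For context, the analytic KL between N(mean, sigma^2) and a standard normal is -0.5 * (1 + log(sigma^2) - mean^2 - sigma^2) per latent dimension, so a third pattern I've seen sums over the latent dimension per sample and only then averages over the batch (this assumes mean and sigma have shape (batch, latent_dim), which the snippets above don't make explicit):

import torch

# Assumed shapes for illustration: mean and sigma are (batch, latent_dim).
mean = torch.randn(32, 16)
sigma = torch.rand(32, 16) + 1e-6  # strictly positive standard deviations

# Sum the per-dimension KL terms to get one KL value per sample...
KL_per_sample = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2, dim=1)

# ...then average over the batch so the loss doesn't scale with batch size.
KL_loss = torch.mean(KL_per_sample)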

Thank you!


loly0ss t1_ix4c4ix wrote

Yeah, I've tried with no hidden layers and with 2 hidden layers; still the same. I've also tried ReLU and softmax, but sigmoid was better. It's the MNIST dataset; I'm trying to predict whether the label is 1 or not 1. Since labels of 1 are 10% of the dataset, I reduced the dataset to around 40/60, so 40% are labeled 1 and 60% are not 1, which I encoded as 0.


loly0ss t1_ix40v2u wrote

I have sigmoid in all hidden layers and the output, but it seems the model is only predicting one class. I tried balancing the dataset, changing the learning rate, shuffling the data, and varying the iteration number and the weight initialization, yet it's still wrong :(


loly0ss t1_ix1xo8r wrote

I'm using the sigmoid function for binary classification. However, I'm using it at each layer. For binary classification, is it better to use sigmoid only at the output layer?
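
For reference, the variant I'm asking about would look something like this (layer sizes are arbitrary; ReLU in the hidden layers, sigmoid only at the output, with binary cross-entropy as the loss):

import torch
import torch.nn as nn

# Sigmoid only at the output; ReLU activations in the hidden layers.
model = nn.Sequential(
    nn.Linear(784, 128),  # 28x28 MNIST images, flattened
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid(),  # single probability for "is 1" vs. "not 1"
)
loss_fn = nn.BCELoss()

x = torch.randn(32, 784)                  # dummy batch of flattened images
y = torch.randint(0, 2, (32, 1)).float()  # binary targets
loss = loss_fn(model(x), y)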
