
FakeOuter t1_iwbsmnp wrote

- try triplet loss

- swap the Flatten layer for GlobalMaxPooling2D; it will reduce trainable params ~49x in your case. Fewer params -> lower chance of overfitting. Maybe also place a normalization layer right after the max pool (see the sketch below)
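
A minimal sketch of the swap, assuming a plain tf.keras Sequential model (the conv stack, input shape, and embedding size are placeholders, not your actual architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(64, 3, activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    # layers.Flatten(),            # old: H * W * C inputs feed the next Dense layer
    layers.GlobalMaxPooling2D(),   # new: one value per channel, so only C inputs
    layers.BatchNormalization(),   # optional normalization right after the pooling
    layers.Dense(128),             # embedding head, e.g. for a triplet loss
])
model.summary()

# For the triplet-loss idea, one option is the semi-hard triplet loss from
# tensorflow_addons (labels are class ids; the TF tutorial L2-normalizes embeddings first):
# import tensorflow_addons as tfa
# model.compile(optimizer="adam", loss=tfa.losses.TripletSemiHardLoss())
```

The param savings come from the Dense layer after pooling: it sees C inputs instead of H * W * C, which is where the ~49x (7x7 feature map) comes from.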
