PleaseKillMeNowOkay OP t1_iqqwpem wrote

That's what I thought, but I haven't been able to get the second model to even match the performance of the first one. I tried regularization methods without much success.
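For illustration, the standard options like dropout and weight decay look roughly like this in PyTorch (a minimal sketch; the layer sizes and hyperparameter values are placeholders, not my actual setup):

```python
import torch
import torch.nn as nn

# Placeholder model: sizes and values are illustrative only.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),  # dropout regularization
    nn.Linear(64, 4),
)

# L2 regularization applied via the optimizer's weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```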

1

thebear96 t1_iqqwxur wrote

Is the loss decreasing enough after running for the specified number of epochs? Are you getting a flat tail after convergence?

1

PleaseKillMeNowOkay OP t1_iqqxd7o wrote

Yes, I trained until the validation loss stopped improving, and then some more just to make sure.
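Roughly this kind of early-stopping loop (a sketch, assuming a `model`, data loaders, and `train_one_epoch`/`evaluate` helpers; the patience value is arbitrary):

```python
max_epochs, patience = 200, 10
best_val, bad_epochs = float("inf"), 0

for epoch in range(max_epochs):
    train_one_epoch(model, train_loader)    # assumed helper
    val_loss = evaluate(model, val_loader)  # assumed helper

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # validation loss stopped improving
            break
```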

1

thebear96 t1_iqqxkb4 wrote

That's strange. It could be a data quantity issue; bigger networks typically need more data to perform well.

2

PleaseKillMeNowOkay OP t1_iqqxw6h wrote

I wouldn't necessarily call it a bigger network. The second network has two more output neurons than the first; the rest is the same. I don't know how much difference that makes.
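In code, the difference is only the width of the output layer, something like this (the trunk sizes are placeholders):

```python
import torch.nn as nn

def make_net(n_outputs):
    # Identical trunk for both models; only the head width differs.
    return nn.Sequential(
        nn.Linear(16, 64),  # placeholder sizes
        nn.ReLU(),
        nn.Linear(64, n_outputs),
    )

old_model = make_net(2)  # first network: 2 outputs
new_model = make_net(4)  # second network: 2 extra output neurons
```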

1

thebear96 t1_iqqykoz wrote

That shouldn't make a big difference, but yes, in that case the performance should be worse than the first network's; it's far easier to predict two outputs than four. You could try adding more linear layers and using a lower learning rate to see if the model improves.
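For example, something along these lines (the sizes and learning rate are placeholders):

```python
import torch
import torch.nn as nn

# One extra hidden layer compared to the original trunk (placeholder sizes).
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 64),  # added linear layer
    nn.ReLU(),
    nn.Linear(64, 4),
)

# A lower learning rate than Adam's default of 1e-3.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```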

1

PleaseKillMeNowOkay OP t1_iqqz3lp wrote

I could add more linear layers, and based on my experiments that would probably help. But my intention is to compare my new model with the old one, for which I presume the architectures should be kept as close as possible.

1

thebear96 t1_iqr04o9 wrote

Ideally, yes. In that case the second architecture will perform worse, and you'll have to note that when you compare them. But since it's pretty much expected that the second architecture won't perform as well as the first, I'm not sure there's much use in comparing. It's definitely doable, though.

2