Submitted by Sadness24_7 t3_z3hwj7 in deeplearning
Sadness24_7 OP t1_ixpuuzn wrote
Reply to comment by pornthrowaway42069l in Keras metrics and losses by Sadness24_7
I don't care about the loss, I care about the RMSE metric. When I calculate it for each output, it just doesn't add up to what the training method reports. 🫤
pornthrowaway42069l t1_ixq1g4e wrote
Write a custom function and use it as a metric? Not sure what you mean by "what the training method says", but I think default metrics get summed just like losses.
Sadness24_7 OP t1_iy8qtwh wrote
I did write my own metric based on the examples from Keras. But since it has to go through callbacks and their backend, it works on only one output at a time, meaning both the predictions and the true values are vectors.
What I meant is that when I call model.fit(...), it prints something like this at each epoch:
Epoch 1/500
63/63 [==============================] - 1s 6ms/step - loss: 4171.9570 - root_mean_squared_error: 42.4592 - val_loss: 2544.3647 - val_root_mean_squared_error: 44.4907
where root_mean_squared_error is the following custom metric:
from keras import backend as K

def root_mean_squared_error(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))
which, when called directly, expects data in the form of a vector, meaning this function has to be called for each output separately.
In order to optimize my model properly, I need to understand how the losses/metrics over multiple outputs are combined into the single number shown during training (as above).
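For intuition, here is a standalone NumPy sketch (made-up numbers, not from the model above) of why the per-output RMSEs don't "add up" to the single number: RMSE over all outputs at once takes the square root of the mean over every element, which is not the same as averaging the per-output RMSEs, because the square root is applied after pooling the squared errors.

```python
import numpy as np

# Hypothetical multi-output data: 5 samples, 2 outputs
y_true = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0], [5.0, 10.0]])
y_pred = y_true + np.array([[1.0, -2.0], [0.5, 1.0], [-1.0, 2.0], [0.0, -1.0], [1.5, 0.5]])

def rmse(a, b):
    # Same formula as the Keras-backend metric: sqrt of the mean squared error
    return np.sqrt(np.mean((a - b) ** 2))

# Per-output RMSEs: what you get calling the metric on each output column
rmse_0 = rmse(y_true[:, 0], y_pred[:, 0])
rmse_1 = rmse(y_true[:, 1], y_pred[:, 1])

# Combined RMSE: the metric applied to all elements of both outputs at once
rmse_all = rmse(y_true, y_pred)

# sqrt(mean over everything) != mean of the per-output sqrt values
print(rmse_0, rmse_1, rmse_all, (rmse_0 + rmse_1) / 2)
```

So a single RMSE metric over a multi-output tensor pools all squared errors first, and no simple sum or average of the per-output RMSEs will reproduce it exactly.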
pornthrowaway42069l t1_iy8srkr wrote
Ah, I see. The loss and metric values shown during training are running averages over the batches seen so far in that epoch, not the exact loss/metric at the end of it. I can't find the documentation right now, but I know I've seen it before. This means the training losses/metrics aren't a "good" gauge to compare against, since the early batches were computed with older weights than the final ones.
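A minimal sketch of that accumulation, assuming Mean-style metric state as in tf.keras.metrics.Mean (a running total and count, reset at the start of each epoch; the per-batch values here are made up):

```python
# Hypothetical per-batch metric values within one epoch; they shrink
# because the weights improve batch by batch.
batch_metric_values = [50.0, 40.0, 30.0]

total, count = 0.0, 0  # metric state, reset when the epoch starts
displayed = []         # what the progress bar would show after each batch
for v in batch_metric_values:
    total += v
    count += 1
    displayed.append(total / count)  # running average so far

print(displayed)  # -> [50.0, 45.0, 40.0]
# The epoch-end number (40.0) averages batches computed with different
# weights; the final model on the last batch actually scored 30.0.
```

That's why the epoch-end training metric usually sits above what you'd measure by running the finished model over the data once, and why model.evaluate() (or the validation metrics, which are computed after the epoch with fixed weights) is the better number to compare against.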