Submitted by Sadness24_7 t3_z3hwj7 in deeplearning

Hello there,

I'm again facing the issue of not knowing how metrics and losses are calculated in Keras/TensorFlow.

I have a custom metric (RMSE), and when training a model with 3 outputs (a dense layer) I only get one value. The metric function (written with the K backend) can only handle one output at a time (meaning the ground truths and predicted values are vectors, not arrays), so it has to calculate each output separately. How, then, does it come up with a single value?


This is what my model definition looks like.

from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, Dropout

model = keras.Sequential()
model.add(Input(shape=(len(input_keys),)))
model.add(Dense(units=192, activation='tanh'))
model.add(Dropout(.3))
model.add(Dense(units=192, activation='tanh'))
model.add(Dropout(.3))
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(.3))
model.add(Dense(units=3, activation='linear'))

and compile ...

model.compile(optimizer=opt, loss=loss_fn, metrics=[root_mean_squared_error])


Comments


IshanDandekar t1_ixlugz1 wrote

Hi, if you really want to use RMSE as a metric, here's the link RMSE


Sadness24_7 OP t1_ixpuynf wrote

I would rather use my own RMSE; this one does some weighting, and it just does not make sense to me.


pornthrowaway42069l t1_ixm7q3s wrote

You can specify several losses, or have a multi-output model with a single loss - in both cases Keras will average them out (I think it's unweighted by default, and you can specify weights, but I don't remember 100%).

You can't really have 3 different loss values for a single network - otherwise it won't know how to backpropagate. The best you can do is write a custom loss function that mixes them in a way that makes sense for your problem (you still need to produce a single value at the end), or provide weights (you'd need to look up the API docs for that).
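A minimal NumPy sketch of that mixing step (the per-output weights here are made-up values for illustration, not anything Keras provides): three per-output errors get reduced to the one scalar a loss function must return.

```python
import numpy as np

# Hypothetical per-output weights -- made-up numbers for illustration only.
weights = np.array([1.0, 0.5, 2.0])

y_true = np.array([[1.0, 2.0, 3.0],
                   [2.0, 4.0, 6.0]])
y_pred = np.array([[1.5, 2.0, 2.0],
                   [2.0, 3.0, 6.0]])

# Mean squared error per output column, shape (3,)
per_output_mse = np.mean((y_pred - y_true) ** 2, axis=0)

# Weighted mix down to the single scalar the optimizer needs
loss = np.sum(weights * per_output_mse) / np.sum(weights)
print(loss)
```

With unit weights this reduces to the plain unweighted average Keras uses by default.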


Sadness24_7 OP t1_ixpuuzn wrote

I don't care about the loss, I care about the RMSE metric. When I calculate it for each output, it just does not add up to what the training method reports. 🫤


pornthrowaway42069l t1_ixq1g4e wrote

Write a custom function, and use it as a metric? Not sure what you mean by "what the training method says", but I think default metrics get averaged just like losses.


Sadness24_7 OP t1_iy8qtwh wrote

I did write my own metric based on examples from Keras. But since I have to write it with backend ops, it works only on one output at a time, meaning both predictions and true values are vectors.
What I meant by that is that when I call model.fit(...), at each epoch it tells me something like this:

Epoch 1/500

63/63 [==============================] - 1s 6ms/step - loss: 4171.9570 - root_mean_squared_error: 42.4592 - val_loss: 2544.3647 - val_root_mean_squared_error: 44.4907


where root_mean_squared_error is a custom metric as follows:

from tensorflow.keras import backend as K

def root_mean_squared_error(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))

which, when called directly, wants data in the form of a vector, meaning this function has to be called for each output separately.

In order to better optimize my model, I need to understand how the losses/metrics are calculated so that they result in one number (as shown above during training).
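A NumPy stand-in for the backend ops above may show where the one number comes from: K.mean reduces over every element of the (batch, 3) arrays at once, so no per-output loop is needed, and the single reported value is the square root of the mean of the per-output MSEs, which is not the same as averaging three per-output RMSEs by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.normal(size=(8, 3))  # batch of 8, 3 outputs
y_pred = rng.normal(size=(8, 3))

# What the metric computes on the full (batch, 3) arrays: one scalar.
overall = np.sqrt(np.mean((y_pred - y_true) ** 2))

# Three separate per-output RMSEs (one per column).
per_output = np.sqrt(np.mean((y_pred - y_true) ** 2, axis=0))

# The reported value equals sqrt(mean of per-output MSEs) ...
print(np.isclose(overall, np.sqrt(np.mean(per_output ** 2))))  # True
# ... which in general differs from the plain mean of the three RMSEs,
# which is why hand-averaging the per-output values "does not add up".
```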


pornthrowaway42069l t1_iy8srkr wrote

Ah, I see. During training, the loss and metrics you see are running averages over the batches of the current epoch, not an exact evaluation at the end of it. I can't find the documentation right now, but I know I've seen it before. Since the weights keep updating while that average accumulates, the training-time numbers won't be a "good" gauge to compare with an exact recomputation.
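A toy sketch of that running average, with made-up per-batch losses that fall during an epoch: the epoch-end number in the progress bar is the average over all batches, not the loss under the final weights.

```python
import numpy as np

# Hypothetical per-batch losses that improve within one epoch.
batch_losses = np.array([9.0, 7.0, 5.0, 3.0, 1.0])

# What the progress bar shows after each batch: the average so far.
running = np.cumsum(batch_losses) / np.arange(1, len(batch_losses) + 1)

print(running[-1])       # epoch-end display: 5.0
print(batch_losses[-1])  # loss under the final weights: 1.0
```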
