
PredictorX1 t1_iyzsby0 wrote

For modeling solutions featuring intermediate calculations (such as the hidden layers of multilayer perceptrons), the hope is that what is learned about each target variable might be "shared" with the others. Whether this effect yields a net gain depends on the nature of the data. Outputs in a multiple-output model that is trained iteratively tend to reach their optimum performance at differing numbers of iterations. There is also the logistical benefit of training one larger model rather than several.
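A minimal sketch of the idea, assuming PyTorch (the framework, the `SharedTrunkMLP` name, and the toy targets are illustrative, not from the comment): a single hidden layer is shared by two output heads, both heads are trained jointly, and each head's validation loss is tracked separately because the two outputs may bottom out at different numbers of iterations.

```python
import torch
import torch.nn as nn

# Hypothetical multi-output MLP: one shared ("trunk") hidden layer feeding
# two separate output heads, so representations learned for one target can
# be reused by the other.
class SharedTrunkMLP(nn.Module):
    def __init__(self, n_inputs=10, n_hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
        self.head_a = nn.Linear(n_hidden, 1)  # output for target A
        self.head_b = nn.Linear(n_hidden, 1)  # output for target B

    def forward(self, x):
        h = self.trunk(x)
        return self.head_a(h), self.head_b(h)

# Toy data: two targets derived from the same inputs.
torch.manual_seed(0)
x = torch.randn(400, 10)
y_a = x[:, :1] + 0.1 * torch.randn(400, 1)
y_b = x.sum(dim=1, keepdim=True) + 0.5 * torch.randn(400, 1)
x_tr, x_va = x[:300], x[300:]
ya_tr, ya_va = y_a[:300], y_a[300:]
yb_tr, yb_va = y_b[:300], y_b[300:]

model = SharedTrunkMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = nn.MSELoss()

best_a = (float("inf"), -1)  # (best validation loss, epoch) for head A
best_b = (float("inf"), -1)  # (best validation loss, epoch) for head B

for epoch in range(200):
    opt.zero_grad()
    pa, pb = model(x_tr)
    # Both heads are optimized jointly on a summed loss...
    loss = mse(pa, ya_tr) + mse(pb, yb_tr)
    loss.backward()
    opt.step()

    # ...but each head's validation loss is monitored separately, since the
    # outputs typically reach their optimum at different iteration counts.
    with torch.no_grad():
        va, vb = model(x_va)
        la, lb = mse(va, ya_va).item(), mse(vb, yb_va).item()
        if la < best_a[0]:
            best_a = (la, epoch)
        if lb < best_b[0]:
            best_b = (lb, epoch)

print(f"head A best at epoch {best_a[1]}, head B best at epoch {best_b[1]}")
```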
