
josejo9423 t1_jcpu2pe wrote

I would go with 1, but I wouldn't tune early stopping, just the number of estimators. XGBoost has an option to stop iterating (early stopping) when there is no improvement in the metric on a validation set. If you plot the evaluation metric per iteration, you can see where the model could have stopped, then set the number of estimators just below the point where you think it starts overfitting.
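For reference, a minimal sketch of that workflow with xgboost's scikit-learn wrapper (the dataset, hyperparameter values, and patience of 50 rounds are placeholders; passing `early_stopping_rounds` to the constructor assumes xgboost >= 1.6):

```python
# Sketch: use built-in early stopping to find a good n_estimators value.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Placeholder data; substitute your own train/validation split.
X, y = make_regression(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = XGBRegressor(
    n_estimators=2000,         # deliberately large upper bound
    learning_rate=0.05,
    early_stopping_rounds=50,  # stop if val RMSE stalls for 50 rounds
    eval_metric="rmse",
)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

# best_iteration is where the validation metric stopped improving;
# use it (plus a small margin) as the fixed n_estimators in the final model.
print("best iteration:", model.best_iteration)
```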


EcstaticStruggle t1_jcthdzz wrote

Thanks. This was something I tried earlier. I noticed that using the maximum number of estimators almost always led to the highest cross-validation score. I was worried there would be some overfitting as a result.
