
eigenlaplace t1_iz7dq71 wrote

There are problems where the target is not ideal but noisy, because the rater who labeled it is imperfect.

So if you get 100% accuracy on the test set, you might just be predicting the wrong things, because another, more experienced rater might judge the ground truth to be different from what the first rater said.

This is in fact true for most, if not all, real data, except for toy/procedural datasets where you create the input-output pairs deterministically.
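
As a quick illustration, here is a minimal simulation sketch (binary labels, two independent raters, made-up noise rates): a model that matches one rater's labels perfectly still disagrees noticeably with a second rater and with the underlying truth.

```python
# Minimal sketch: two imperfect raters labeling the same deterministic truth.
# Noise rates and dataset size are illustrative assumptions, not real numbers.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_labels = rng.integers(0, 2, size=n)  # the "ideal" targets

def noisy_rater(labels, flip_prob, rng):
    """Flip each label independently with probability flip_prob."""
    flips = rng.random(len(labels)) < flip_prob
    return np.where(flips, 1 - labels, labels)

rater_1 = noisy_rater(true_labels, flip_prob=0.10, rng=rng)  # less experienced
rater_2 = noisy_rater(true_labels, flip_prob=0.02, rng=rng)  # more experienced

# Pretend the model memorized rater 1's test labels perfectly.
predictions = rater_1.copy()

print("accuracy vs rater 1:", (predictions == rater_1).mean())      # 1.00
print("accuracy vs rater 2:", (predictions == rater_2).mean())      # ~0.88
print("accuracy vs truth:  ", (predictions == true_labels).mean())  # ~0.90
```

So the 100% against rater 1 only tells you that you reproduced that rater's judgments, noise included.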

1