
vwings t1_j0l4q82 wrote

This is not true... There are many other works, e.g. Platt scaling, that also provide calibrated classifiers (I suppose this is what you call "valid"). But conformal prediction does indeed tackle this problem...
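For reference, Platt scaling is just a one-dimensional logistic regression fitted on a classifier's raw scores. A minimal sketch in plain NumPy, using synthetic scores and labels (all names and data here are illustrative, not from any particular library):

```python
# Minimal sketch of Platt scaling on synthetic data.
# Platt scaling fits p(y=1|s) = sigmoid(a*s + b) on a held-out calibration
# set, mapping raw classifier scores s to calibrated probabilities.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_platt(scores, labels, lr=0.1, n_iter=2000):
    """Fit the two Platt parameters (a, b) by gradient descent on log loss."""
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        p = sigmoid(a * scores + b)
        grad = p - labels                    # d(log loss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

rng = np.random.default_rng(0)
scores = rng.normal(0.0, 2.0, 1000)          # hypothetical raw scores
labels = (rng.random(1000) < sigmoid(scores)).astype(float)  # synthetic labels

a, b = fit_platt(scores, labels)
calibrated = sigmoid(a * scores + b)         # calibrated probabilities in (0, 1)
```

In practice one would fit this on a calibration split rather than the training data, exactly as with conformal prediction's calibration set.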

4

bremen79 t1_j0n8wsn wrote

Platt scaling does not come with any guarantee, and in fact it is easy to construct examples where it fails. Conformal prediction methods, on the other hand, under very weak assumptions, would give you on the multiclass problem from the question a set of labels that is guaranteed to contain the true label with a specified probability.
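The set-valued guarantee above can be sketched with split conformal prediction. A minimal illustration, assuming you already have class-probability outputs for a labeled calibration set and for new test points (function and variable names are my own, not from a specific library):

```python
# Sketch of split conformal prediction for multiclass classification.
# Under exchangeability, each returned set contains the true label with
# probability >= 1 - alpha.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.05):
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true label.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample (n + 1) correction.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(cal_scores, level, method="higher")
    # Prediction set: every label whose nonconformity is below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```

Note that the guarantee is marginal (over calibration and test draws) and says nothing about set size: a poorly trained underlying model still yields valid but large, uninformative sets.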

3

vwings t1_j0osaa7 wrote

Now you are completely deviating from the original scope of the discussion. We were discussing which method is more general, but, since you changed the scope, you seem to agree with me on that point.

About "guarantees": it is easy to construct examples where CP fails, too. If the distribution of the new data differs from that of the calibration set, the data are no longer exchangeable and the guarantee is gone.
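A toy illustration of that failure mode (entirely synthetic, with Gaussian nonconformity scores standing in for a real model's): when test scores come from a shifted distribution, the nominal 95% coverage collapses.

```python
# Conformal coverage under distribution shift (synthetic illustration).
# Calibration nonconformity scores ~ N(0, 1). If test scores are drawn
# from the same N(0, 1), the ~95% guarantee holds; if they come from a
# shifted N(2, 1), exchangeability is broken and coverage collapses.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
n = 2000

cal_scores = rng.normal(0.0, 1.0, n)         # exchangeable calibration scores
level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
q = np.quantile(cal_scores, level, method="higher")

iid_test = rng.normal(0.0, 1.0, n)           # same distribution as calibration
shifted_test = rng.normal(2.0, 1.0, n)       # shifted test distribution

iid_cov = (iid_test <= q).mean()             # close to 0.95
shifted_cov = (shifted_test <= q).mean()     # far below 0.95
print(iid_cov, shifted_cov)
```

There are extensions (e.g. weighted conformal prediction for covariate shift) that try to restore validity, but they need the shift to be known or estimable.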

1

Extra_Intro_Version t1_j0lduaf wrote

IIRC, CP extends Platt.

2

vwings t1_j0ljgfr wrote

That's what the CP guys say. :)

I would even say that Platt scaling generalizes CP. Whereas CP focuses on the empirical distribution of the prediction scores only around a particular location in the tails, e.g. at a 5% confidence level, Platt scaling tries to mold the whole empirical distribution into calibrated probabilities -- thus Platt considers the entire range of the score distribution.

4