olmec-akeru OP t1_iy2pewq wrote

Awesome answer! Given a set of assumptions, what would you use?

I've seen some novel approaches to categorical variables on arXiv, but I can't seem to shake the older deep-learning methods for continuous variables.

1

NonOptimized t1_iy2qybl wrote

>Awesome answer! Given a set of assumptions, what would you use?
>
>I've seen some novel approaches to categorical variables on arXiv, but I can't seem to shake the older deep-learning methods for continuous variables.

Could you link some of these novel approaches? I'm quite curious what they might be.

3

BrisklyBrusque t1_iy3s0ha wrote

Cool links. I’ll add “entity embeddings” into the mix. Entity embeddings represent each level of a categorical variable as a learned continuous-valued vector, which lets us skip one-hot encoding entirely.
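For anyone who hasn’t used them, here’s a minimal sketch of the mechanics in PyTorch (the sizes here are invented for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a column with 10 levels, embedded in 4 dimensions.
n_categories, embedding_dim = 10, 4
embedding = nn.Embedding(n_categories, embedding_dim)

# Categories enter as integer ids; no one-hot matrix is ever built.
category_ids = torch.tensor([0, 3, 7])
vectors = embedding(category_ids)

print(vectors.shape)  # torch.Size([3, 4])
```

The embedding table is just another set of model parameters, so the vectors get updated by backpropagation along with the rest of the network.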

6

olmec-akeru OP t1_iy7a1yc wrote

I fear that a category's position on the encoding domain creates a false relationship with categories that sit nearby on that domain.

i.e. if you encode categories at 0.1, 0.2, …, 0.9, you're saying the category encoded to 0.2 is more similar to those at 0.1 and 0.3 than to the one at 0.9. This may not be true.
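To make the worry concrete, a toy sketch (category names and codes invented):

```python
# A scalar encoding imposes an ordering, and with it a distance the
# categories may never have had.
codes = {"red": 0.1, "green": 0.2, "blue": 0.9}

def dist(a, b):
    return abs(codes[a] - codes[b])

print(round(dist("red", "green"), 2))  # 0.1 -> implies red ~ green
print(round(dist("red", "blue"), 2))   # 0.8 -> implies red far from blue
```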

1

BrisklyBrusque t1_iy8wfoa wrote

I freely admit I haven’t looked into the math. But my understanding is that the embeddings are a learned representation. They are not arbitrary; they aim to place categories close to one another on a continuous scale only where the data justifies it.
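A rough sketch of that idea, assuming PyTorch and an invented toy target, just to show the mechanics:

```python
import torch
import torch.nn as nn

# Hedged sketch: a 1-D embedding trained end to end. Sizes and targets
# are invented for illustration.
emb = nn.Embedding(4, 1)   # 4 categories, each mapped to a learned scalar
head = nn.Linear(1, 1)
opt = torch.optim.SGD(list(emb.parameters()) + list(head.parameters()), lr=0.1)

x = torch.tensor([0, 1, 2, 3])               # category ids
y = torch.tensor([[0.], [0.], [1.], [1.]])   # cats 0/1 act alike, 2/3 act alike

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(head(emb(x)), y)
    loss.backward()
    opt.step()

# The learned codes for categories 0 and 1 tend to end up near each
# other, and likewise for 2 and 3, because the data pushed them there,
# not because of any arbitrary initial ordering.
print(emb.weight.detach().flatten())
```

So any closeness in the learned space reflects the training signal rather than whatever order the categories happened to be encoded in at initialization.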

1