Comments


trajo123 t1_jbits7f wrote

Posting assignment questions to reddit.

13

RoboiosMut t1_jbj01s2 wrote

You can ask ChatGPT this type of question.

9

neuralbeans t1_jbiu3io wrote

Yes, if the features include the model's target output, then overfitting would just result in the model outputting that feature as is. Of course this is a useless solution, but the more similar the features are to the output, the less of a problem overfitting will be and the less data you will need to generalise.

6

BamaDane t1_jbjhitr wrote

I’m not sure I understand what your method does. If Y is the output, then you say I should also include Y as an input? And if I manage to design my model so it doesn’t just select the Y input, then I’m not overfitting? It makes sense that this doesn’t overfit, but doesn’t it also mean I am dumbing down my model? Don’t I want my model to preferentially select features that are most similar to the output?

2

neuralbeans t1_jbjizpw wrote

It's a degenerate case, not something anyone should do. If you include Y in your input, then even an overfit model will generalise perfectly, since all it has to do is copy Y. This shows that the input does affect overfitting. In fact, the more similar the input is to the output, the simpler the model can be and thus the less it can overfit.

1
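A minimal sketch of the degenerate case described above, assuming scikit-learn and a toy dataset of my own construction (none of these specifics come from the thread): an unpruned decision tree memorises the training data either way, but once the target is leaked in as a feature, that memorisation no longer hurts generalisation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Hypothetical toy data: y depends weakly on x plus a lot of noise,
# so an unpruned tree will overfit the noise.
rng = np.random.default_rng(0)
x = rng.normal(size=(400, 1))
y = x[:, 0] + rng.normal(scale=1.0, size=400)

# Degenerate feature set: the target itself is leaked in as a column.
X_leaky = np.column_stack([x, y])

for name, X in [("x only", x), ("x + leaked y", X_leaky)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    # No depth limit, so the tree fits the training set (near) perfectly.
    tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
    print(name,
          "train R^2:", round(tree.score(X_tr, y_tr), 3),
          "test R^2:", round(tree.score(X_te, y_te), 3))

# With x only, the memorising tree scores ~1.0 on train but poorly on test.
# With the leaked y column, the same "overfit" tree also scores ~1.0 on test,
# because the feature it memorised is the answer itself.
```

The contrast between the two runs is the point being made: the closer a feature is to the output, the less an overfit model's memorisation costs you at test time.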

Constant-Cranberry29 OP t1_jbiutcs wrote

Can you provide a reference that states that feature engineering can address overfitting?

0

neuralbeans t1_jbixife wrote

3

trajo123 t1_jbjpz72 wrote

Have you done any research at all? What did you find so far?

2

jzaunegger t1_jbjst8q wrote

Here's one paper that I can immediately think of: https://arxiv.org/abs/1409.7495. The authors use a synthetic dataset to select and engineer features of a “real” dataset. Not sure if this is what you are looking for, but it could be a step in the right direction.

2