Lucas_Matheus t1_jbatxn8 wrote
Reply to [R] Analysis of 200+ ML competitions in 2022 by hcarlens
Amazing. This seems like a great way to learn how things are currently being done in ML.
Lucas_Matheus t1_iyxfcvy wrote
To me this seems more related to the early-stopping parameters. The important questions are (see the sketch after this list):

- What's the minimum relative drop in validation loss you accept as an improvement? If it's too high (e.g. 20%), training stops almost immediately; if it's too low (e.g. 0.05%), early stopping may never trigger.
- How often do you validate, and how many checks do you wait before stopping? If you check for early stopping at every validation, a noisy loss can make the decision erratic; if the checks are too far apart, the model may already be overfitting by the time you stop.
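A minimal sketch of how those two knobs interact; the names here (`min_rel_improvement`, `patience`) and the `train_one_interval`/`validate` callables are hypothetical stand-ins for your own training and evaluation code:

```python
def train_with_early_stopping(
    train_one_interval,        # callable: runs one interval of training steps
    validate,                  # callable: returns the current validation loss
    min_rel_improvement=1e-3,  # minimal relative drop in val loss that counts
    patience=5,                # failed checks in a row before stopping
    max_checks=500,            # upper bound on the number of validations
):
    best_loss = float("inf")
    bad_checks = 0
    for check in range(max_checks):
        train_one_interval()
        val_loss = validate()
        # Count this check as progress only if the loss dropped by at
        # least `min_rel_improvement` relative to the best loss so far.
        if val_loss < best_loss * (1.0 - min_rel_improvement):
            best_loss = val_loss
            bad_checks = 0
        else:
            bad_checks += 1
            if bad_checks >= patience:
                print(f"stopping at check {check}: no "
                      f"{min_rel_improvement:.2%} improvement "
                      f"in the last {patience} checks")
                break
    return best_loss
```

A large `min_rel_improvement` or small `patience` makes the loop trigger-happy; the opposite settings make it effectively never stop, which matches the two failure modes above.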
Lucas_Matheus t1_iyf023f wrote
Reply to comment by ykilcher in [D] Paper Explained - CICERO: An AI agent that negotiates, persuades, and cooperates with people (Video) by ykilcher
Oh wow, it's really him, guys 😮
Lucas_Matheus t1_ixswszk wrote
Reply to [D] Paper Explained - CICERO: An AI agent that negotiates, persuades, and cooperates with people (Video) by ykilcher
Is this actually Yannic? I read Meta's blog post about Cicero yesterday. I really like this video series. Will definitely watch it.
Lucas_Matheus t1_jd5j1co wrote
Reply to [D] Simple Questions Thread by AutoModerator
In few-shot learning, are there gradient updates from the examples? If not, what difference do the examples actually make?
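To make the question concrete, here's a toy contrast between the two regimes. A tiny linear model stands in for a real LM, and the "context" averaging is purely illustrative (it is not how prompting works); the point is only where gradients do or don't flow:

```python
import torch

# Toy stand-in: a tiny linear "model" instead of a real language model.
model = torch.nn.Linear(4, 2)
examples = [(torch.randn(4), torch.tensor(0)) for _ in range(3)]

# 1) In-context few-shot: the examples only condition the *input*;
#    there is no loss, no backward pass, and no optimizer step.
with torch.no_grad():
    context = torch.stack([x for x, _ in examples]).mean(dim=0)
    query = torch.randn(4)
    prediction = model(query + context)  # weights unchanged

# 2) Few-shot fine-tuning: the same examples drive gradient updates.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for x, y in examples:
    logits = model(x).unsqueeze(0)                  # shape (1, 2)
    loss = torch.nn.functional.cross_entropy(logits, y.unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                # weights change here
```

In the first case the model's parameters are identical before and after seeing the examples; in the second they are not. That's the distinction I'm asking about.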