Submitted by fedegarzar t3_z9vbw7 in MachineLearning
whatsafrigger t1_iyjaub9 wrote
Reply to comment by picardythird in [R] Statistical vs Deep Learning forecasting methods by fedegarzar
It's so so so important to set up good experiments with solid baselines and comparisons to other methods.
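As a minimal illustration of the kind of baseline comparison being described, here's a sketch of evaluating a seasonal-naive forecast on a synthetic series (all data and numbers here are made up for the example; a real study would compare the trained model's error against this baseline's error on the same split):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily series: weekly seasonality plus noise.
t = np.arange(200)
series = 10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.5, len(t))

# Hold out the last 4 weeks as a test window.
train, test = series[:-28], series[-28:]

# Seasonal-naive baseline: repeat the last observed week of training data.
season = 7
baseline = np.tile(train[-season:], len(test) // season)

mae_baseline = np.mean(np.abs(test - baseline))
print(f"seasonal-naive MAE: {mae_baseline:.3f}")
```

If a complex model can't clearly beat a number like this on the same test window, the fancy architecture isn't earning its keep.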
notdelet t1_iyjhvqd wrote
If you use a flawed evaluation procedure, does a solid baseline do you any good?
Ulfgardleo t1_iylr164 wrote
The "and" in the post you replied to was a logical "and". The best evaluation procedure does not help if you use poor, underperforming baselines.
csreid t1_iykq7xn wrote
And it's sometimes kinda hard to realize you're doing a bad job, especially if your bunk experiments give good results
I didn't have a ton of guidance when I was writing my thesis (so, my first actual research work) and was so disheartened when I realized my excellent groundbreaking results actually came from a bad experimental setup.
Still published tho! ^^jk
Pikalima t1_iylah8s wrote
Sometimes I consider retracting my very first paper because of this.
maxToTheJ t1_iyjw8b8 wrote
A lot of people are effectively doing hyperparameter optimization on their experimental setup itself, tuning it until the results look good enough to get into prestigious conferences.
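A toy simulation of why this inflates results (entirely synthetic numbers, just to illustrate the selection effect): suppose 50 configurations all have identical true skill, and observed scores differ only by evaluation noise. Picking the winner on the test set and reporting that same test score cherry-picks the noise; picking on a validation set and then reporting on test does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 hypothetical configs, all equally good; score differences are pure noise.
val_scores = rng.normal(0.70, 0.02, size=50)
test_scores = rng.normal(0.70, 0.02, size=50)

# Honest protocol: select the config on validation, report its test score.
honest = test_scores[np.argmax(val_scores)]

# "Hyperopt on the benchmark": select and report on the same test scores.
inflated = test_scores.max()

print(f"honest test score:   {honest:.3f}")
print(f"inflated test score: {inflated:.3f}")
```

The inflated number is the maximum of many noisy draws, so it is biased upward even though no config is actually better.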