Tiny_Arugula_5648

Tiny_Arugula_5648 t1_izix4qp wrote

This sub is overloaded with bad data viz, and many of the other problems aren't as obvious as these. It's really easy for untrained people to make bad graphs that look good.

The other big issue is a lack of data skepticism. Even if you know best practices, a viz built on bad data is still a bad data viz.

Unsurprisingly, the posters always get pissed when you explain where they're making their mistakes. They're more interested in getting an upvote than learning the art.

1

Tiny_Arugula_5648 t1_isx67jr wrote

Doubtful you got "bricked" or that Google caught you switching accounts. More likely, TPUs are in high demand and expensive, the Colab service is a best-effort system that hands out unused resources, and there just weren't any TPUs available.

1

Tiny_Arugula_5648 t1_isj2xgz wrote

Well, it does depend on what types of models you want to build and how much data you'll be using, but the general rule of thumb is to go with the most powerful GPU and the largest amount of RAM you can afford. Too little processing power means you'll wait around much longer for everything (training, predicting); with too little RAM, many of the larger models out there, like BERT, might not run at all.
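As a rough illustration of the RAM point (my own back-of-envelope numbers, not the commenter's): standard fp32 training with Adam keeps roughly four copies of each parameter around (the weights, their gradients, and two optimizer moments), i.e. about 16 bytes per parameter, before you even count activations:

```python
def training_vram_gb(n_params: int, bytes_per_param: int = 16) -> float:
    """Rough lower bound on GPU memory for fp32 Adam training.

    Assumes ~4 fp32 copies per parameter (weights, gradients,
    two Adam moments); ignores activations and framework overhead.
    """
    return n_params * bytes_per_param / 1e9

# BERT-base has ~110M parameters:
print(round(training_vram_gb(110_000_000), 2))  # ~1.76 GB before activations
```

Activations usually dominate at larger batch sizes, so treat this as a floor, not an estimate of total usage.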

Or just get a couple of Colab accounts. I get plenty of V100 and even A100 time by switching between different accounts.

4