
sonofmath t1_j7se4mx wrote

Reply to comment by mr_house7 in [D] List of RL Papers by C_l3b

Can't really speak for Hugging Face. It seems to touch on relatively advanced topics and challenging tasks. It certainly looks nice from a practitioner's side, and it is very useful for learning the various tricks needed to make RL work.

Regarding Silver's course, it is indeed a bit outdated, but its focus is more on the basics of RL, whereas Levine focuses on deep RL and assumes a good understanding of the basics.

Now, there are some topics in Silver's course which are a bit outdated (e.g. TD(lambda) with eligibility traces, or linear function approximation) and would be better replaced by topics from more modern courses, typically DQN or AlphaGo (UCL also has a more recent series, which touches on deep RL). But Silver's explanations are very instructive, and it is one of the best-taught university courses I have seen (in general). I would for sure at least watch the first few lectures.

2

mr_house7 t1_j7uza9c wrote

I'm so sorry to bother you again. Just one final question.
Do you know if the Spinning Up algos are worthwhile? Since I'm on Windows, it seems a little more challenging to install on my local machine. Is there an alternative to installing it locally, like Colab?

1

sonofmath t1_j7vehdk wrote

Not really. I think the main strength of the library is that it is designed to make it easy to understand how the algorithms are implemented. At the time, the main alternative was OpenAI/Stable Baselines, where it was quite hard to follow how the algorithms were implemented. On the other hand, the Spinning Up algorithms do not use some of the more advanced tricks that enhance performance.

However, there are better libraries now. In the same spirit, there is CleanRL, which is clean (each algorithm in a single file) but also performant. If you are looking for a modular, easy-to-use library, I would recommend Stable Baselines3.
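To give an idea of what "easy to use" means here, below is a minimal sketch of training PPO with Stable Baselines3. It assumes `stable-baselines3` and `gymnasium` are installed; the environment name and timestep count are just placeholders, not a recommended setup.

```python
# Minimal Stable Baselines3 sketch (assumes: pip install stable-baselines3 gymnasium).
# CartPole-v1 and the hyperparameters are placeholders for illustration only.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)  # other algorithms (A2C, DQN, SAC) follow the same interface
model.learn(total_timesteps=50_000)

# Quick rollout with the trained policy
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

The same few lines work in Colab, which also sidesteps the Windows installation issue.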

2