
UnusualClimberBear t1_iz9ohx8 wrote

TRPO follows the same direction as NPG, but with the maximal step size that still satisfies the quadratic approximation of the KL constraint. I'm not sure what you would like to do better.
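
For concreteness, here is a minimal sketch of that step, assuming you already have the policy gradient and the Fisher matrix in hand (the names `trpo_step`, `grad`, `fisher`, and `delta` are just illustrative, not from any library):

```python
import numpy as np

def trpo_step(grad, fisher, delta=0.01):
    """Natural-gradient direction with TRPO's maximal step size.

    grad   : policy gradient, shape (d,)
    fisher : Fisher information matrix, shape (d, d)
    delta  : KL trust-region radius
    """
    nat_grad = np.linalg.solve(fisher, grad)           # F^{-1} g, the NPG direction
    quad = grad @ nat_grad                              # g^T F^{-1} g
    # Largest beta such that 0.5 * (beta*nat_grad)^T F (beta*nat_grad) <= delta
    step_size = np.sqrt(2.0 * delta / (quad + 1e-8))
    return step_size * nat_grad
```

The full algorithm then backtracks from this maximal step until the exact KL constraint and a surrogate-improvement check are both satisfied.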

Nicolas Le Roux gave a nice talk on RL seen as an optimization problem: https://slideslive.com/38935818/policy-optimization-in-reinforcement-learning-rl-as-blackbox-optimization

3

randomkolmogorov OP t1_iz9z8av wrote

Thank you, this talk is very helpful. I was thinking about the formulation in terms of the natural gradient, but adapting the TRPO approach to my case seems like a good idea.

1

UnusualClimberBear t1_iza00fp wrote

TRPO is often too slow in practice because of that line search, so researchers often prefer PPO, which is faster and still comes with some guarantees on the KL of the state distribution. I'd be curious to hear about your problem if TRPO ends up being the best choice.
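
If it helps, a rough sketch of the PPO clipped surrogate (the function name and the `eps` default are just illustrative, not tied to any particular implementation):

```python
import numpy as np

def ppo_clip_objective(log_prob_new, log_prob_old, advantage, eps=0.2):
    """Clipped surrogate to be maximized; all inputs are per-sample arrays."""
    ratio = np.exp(log_prob_new - log_prob_old)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum keeps the surrogate pessimistic, which is what
    # implicitly limits how far the new policy drifts from the old one
    # without an explicit line search.
    return np.minimum(unclipped, clipped).mean()
```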

1

randomkolmogorov OP t1_iza7hwl wrote

I am not really doing RL but rather aleatoric uncertainty quantification, where I need to optimize over a manifold of functions. My distributions are much more manageable than those in policy gradient methods, so I have a feeling that with some cleverness it might be possible to sidestep a lot of TRPO's complications while using the same ideas from the paper (rough sketch below).
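
For instance, if the distributions were something as simple as diagonal Gaussians (purely hypothetical here), the Fisher matrix is available in closed form, so a KL-constrained natural-gradient step can be taken directly, with no conjugate gradients or backtracking line search:

```python
import numpy as np

def natural_step_diag_gaussian(grad_mu, grad_log_sigma, sigma, delta=0.01):
    """One KL-constrained natural-gradient step for N(mu, diag(sigma^2))."""
    # Closed-form Fisher: diag(1/sigma^2) for mu, 2*I for log sigma.
    nat_mu = grad_mu * sigma**2                  # F^{-1} g for the mean block
    nat_log_sigma = grad_log_sigma / 2.0         # F^{-1} g for the log-sigma block
    quad = grad_mu @ nat_mu + grad_log_sigma @ nat_log_sigma   # g^T F^{-1} g
    scale = np.sqrt(2.0 * delta / (quad + 1e-8))
    return scale * nat_mu, scale * nat_log_sigma
```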

4