Submitted by EmbarrassedFuel t3_10w5f9u in MachineLearning

I have a problem I need to solve that, as far as I can tell, doesn't fit very well into most of the existing RL literature.

Essentially, the task is to create an optimal plan over a time horizon extending a flexible number of steps into the future. The action space is both discrete and continuous: there are multiple distinct actions available, some of which must be given continuous (but constrained) parameters.

In this problem, however, the state of the environment is known ahead of time for all future time steps, and the updated state of the agent after each action can be computed deterministically from the action and the environment state.

Modelling the entire problem as a MILP is not feasible due to the size of the action and state spaces, and we have a very large dataset of agent and environment states to play with. Does anyone have suggestions for papers or models that might be appropriate for this scenario?


Comments


UnusualClimberBear t1_j7lvpz8 wrote

Looks like an optimal control problem rather than an RL one. RL is for situations where no good model is available. If stochasticity is present but you still have a good model once the uncertainty is known, then model predictive control (MPC) is a good way to go.
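
The receding-horizon (MPC) loop would look roughly like the sketch below. To be clear, this is just a minimal toy: `dynamics`, `step_cost`, the action set, and the brute-force inner solver are hypothetical stand-ins for the real model, cost, and optimizer.

```python
from itertools import product

def dynamics(state, action, env):
    # Hypothetical deterministic update of the agent's internal state.
    return state + action

def step_cost(state, action, env):
    # Hypothetical per-step cost of taking `action` in `state` given `env`.
    return abs(state - env) + 0.1 * abs(action)

ACTIONS = [-1.0, 0.0, 1.0]  # toy stand-in for the real hybrid action space

def solve_horizon(state, env_window):
    # Brute-force enumeration over the window; swap in a real solver
    # (sampling, DP, a small MILP, ...) for anything non-toy.
    best_plan, best_cost = None, float("inf")
    for plan in product(ACTIONS, repeat=len(env_window)):
        s, c = state, 0.0
        for a, env in zip(plan, env_window):
            c += step_cost(s, a, env)
            s = dynamics(s, a, env)
        if c < best_cost:
            best_plan, best_cost = plan, c
    return best_plan

def mpc(initial_state, env_states, H=5):
    state, executed = initial_state, []
    for t in range(len(env_states)):
        window = env_states[t:t + H]              # future env is known
        action = solve_horizon(state, window)[0]  # execute first action only
        state = dynamics(state, action, env_states[t])
        executed.append(action)
    return executed
```

The point of the structure is that you only ever commit to the first action of each plan and then re-plan, which keeps the per-step optimization small even over a long total horizon.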


UnusualClimberBear t1_j7opc2r wrote

Also, if your world is deterministic but you cannot build a good model of it, you may be close to the situation of games such as Go, and Monte Carlo tree search algorithms are an option to consider (variants of UCT, with or without function approximation).
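
A bare-bones UCT sketch, assuming a deterministic `step(state, action) -> (next_state, reward)` model and a small discrete action set (both of which are placeholders here):

```python
import math
import random

ACTIONS = [-1, 0, 1]  # toy discrete action set (placeholder)

def step(state, action):
    # Hypothetical deterministic model: returns (next_state, reward).
    return state + action, -abs(state + action)

class Node:
    def __init__(self, state):
        self.state = state
        self.visits = 0
        self.value = 0.0    # accumulates sampled returns (Q estimates)
        self.children = {}  # action -> Node

def uct_select(node, c=1.4):
    # Child maximizing average return plus UCB exploration bonus.
    def score(item):
        child = item[1]
        return (child.value / child.visits
                + c * math.sqrt(math.log(node.visits) / child.visits))
    return max(node.children.items(), key=score)

def rollout(state, depth):
    # Cheap random playout through the deterministic model.
    total = 0.0
    for _ in range(depth):
        state, r = step(state, random.choice(ACTIONS))
        total += r
    return total

def simulate(node, depth):
    # One MCTS iteration; returns the sampled return from `node`.
    node.visits += 1
    if depth == 0:
        return 0.0
    if len(node.children) < len(ACTIONS):  # expand an untried action
        a = random.choice([x for x in ACTIONS if x not in node.children])
        next_state, r = step(node.state, a)
        child = Node(next_state)
        node.children[a] = child
        child.visits += 1
        ret = r + rollout(next_state, depth - 1)
    else:                                  # descend via UCB choice
        a, child = uct_select(node)
        _, r = step(node.state, a)
        ret = r + simulate(child, depth - 1)
    child.value += ret
    return ret

def plan(state, n_sims=500, depth=20):
    root = Node(state)
    for _ in range(n_sims):
        simulate(root, depth)
    # Standard UCT choice: act with the most-visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)
```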


EmbarrassedFuel OP t1_j7p40eo wrote

Oh, also: the model needs to run at inference time in a relatively short period of time on cheap hardware :)


EmbarrassedFuel OP t1_j7p3xc1 wrote

I haven't been able to find anything about optimal control with all of:

  • non-linear dynamics/model
  • non-linear constraints
  • both discrete and continuously parameterized actions in the output space

but in general, discovering relevant papers/techniques in control theory seems to be much harder for some reason.


BasedAcid t1_j7x88ly wrote

Search for the keyword “deterministic MDP.” This is a relatively well-studied area.
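
In particular: since the environment trajectory is known in advance and transitions are deterministic, if your one scalar of agent state can be discretized, the finite-horizon problem collapses to a shortest-path problem solvable exactly by backward dynamic programming. A minimal sketch, where `agent_levels` (the discretized state grid), `dynamics`, and `step_cost` are all hypothetical placeholders, and `dynamics` is assumed to return values on the grid:

```python
def backward_dp(env_states, agent_levels, actions, dynamics, step_cost):
    INF = float("inf")
    # V[s] = minimal cost-to-go from agent state s (terminal cost = 0).
    V = {s: 0.0 for s in agent_levels}
    policy = []
    for t in reversed(range(len(env_states))):
        V_new, pi_t = {}, {}
        for s in agent_levels:
            best_c, best_a = INF, None
            for a in actions:
                s2 = dynamics(s, a, env_states[t])
                if s2 not in V:  # off-grid / infeasible transition
                    continue
                c = step_cost(s, a, env_states[t]) + V[s2]
                if c < best_c:
                    best_c, best_a = c, a
            V_new[s], pi_t[s] = best_c, best_a
        V, policy = V_new, [pi_t] + policy
    return V, policy  # V: cost from t=0; policy[t][s]: optimal action
```

This is cheap at inference time too: the cost is O(T × |states| × |actions|) with no training required.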


jimmymvp t1_j7oybk9 wrote

OK, first off, I'm very curious what the actual problem you're solving is. Can you describe it in a bit more detail or give a link?

If you have a perfect model that's cheap to compute, you can go with sampling approaches; I don't know what your constraints look like, though. If your state/action space is too big, you might want to reduce it somehow by learning an embedding.
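
For instance, a cross-entropy-method-style planner is one common sampling approach: sample whole action sequences, score them with the cheap deterministic model, keep the elites, refit the sampling distribution, repeat. A minimal continuous-only sketch, where `evaluate_plan` is a hypothetical stand-in for the real simulator:

```python
import numpy as np

def evaluate_plan(plan, initial_state, env_states):
    # Hypothetical stand-in: roll `plan` through the deterministic
    # simulator and return the total cost.
    return float(np.sum(plan ** 2))

def cem_plan(initial_state, env_states, horizon,
             n_samples=256, n_elite=25, iters=10):
    # Fit a diagonal Gaussian over action sequences to low-cost samples.
    mu, sigma = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        plans = np.random.randn(n_samples, horizon) * sigma + mu
        costs = np.array([evaluate_plan(p, initial_state, env_states)
                          for p in plans])
        elite = plans[np.argsort(costs)[:n_elite]]  # keep cheapest plans
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu  # approximately cost-minimizing action sequence
```

Note this only needs forward simulation, no gradients through the model.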

Is the model differentiable? I guess it is if you're using a MILP approach.

I guess some combination of MCTS with value-function learning is plausible if your search space is big, as is done with AlphaZero etc. I find the hybrid aspect of it very interesting, though. It sounds like if you want to do amortized search, you need to combine MCTS with search in continuous space (sampling). That should be simple enough with a perfect model. Some ideas from MuZero would probably come in handy.
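
One common way to get the hybrid (discrete + continuous) part into MCTS is progressive widening: each node keeps a small, growing set of sampled parameterized actions rather than an infinite branching factor. A self-contained sketch, where the action names and constants are made up for illustration:

```python
import random

DISCRETE_ACTIONS = ["increase", "decrease", "idle"]  # hypothetical

class Node:
    def __init__(self, state=None):
        self.state = state
        self.visits = 0
        self.children = {}  # (kind, param) -> Node

def sample_hybrid_action():
    kind = random.choice(DISCRETE_ACTIONS)
    # Only some discrete actions carry a continuous parameter.
    param = round(random.uniform(0.0, 1.0), 3) if kind != "idle" else None
    return (kind, param)

def maybe_widen(node, k=1.0, alpha=0.5):
    # Allow a new sampled action only once the node has been visited often
    # enough; this keeps the branching factor finite despite the
    # continuous parameters.
    if len(node.children) < k * max(node.visits, 1) ** alpha:
        action = sample_hybrid_action()
        if action not in node.children:
            node.children[action] = Node()  # state filled in on first visit
```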


EmbarrassedFuel OP t1_j7p519o wrote

Basically, given some predicted environment state going forward for, say, 100 time steps, we need to find a cost-optimal course of action. Although the environment state has been predicted, for the purposes of this task the agent can treat it as deterministic. The agent has one variable of internal state and can take actions to increase or decrease this value based on interactions with the environment. We can then calculate the cost over the given time horizon by simulating the chosen actions at each step, but this simulation is fundamentally sequential and doesn't allow backpropagation of gradients.

>you can go with sampling approaches

What exactly do you mean by this? Something like REINFORCE?

> I guess it is if you're using a MILP approach.

Not sure I follow here, but I'm not using a MILP (as in a mixed-integer linear program). At the moment I'm using a linear programming approximation plus heuristics, which doesn't generalize well.

> some combination of MCTS with value function learning

I think this could work; however, without looking into it, I'm not sure it would be fast enough at inference time in my resource-constrained setting.
