Comments

fogandafterimages t1_izy7w6q wrote

You don't want ML; you want constrained optimization.

18

raz1470 t1_izy8ysa wrote

Agree. Have a look at OR-Tools; I’m sure one of the examples they give is similar to your problem. PuLP and GEKKO are both good too, and will also have examples similar to your problem.
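
For a sense of what such a formulation looks like, here is a minimal sketch using OR-Tools' CP-SAT solver; the skill scores and team count are invented placeholders, not taken from the actual problem:

```python
# Minimal CP-SAT sketch (pip install ortools). Skill scores and team count
# below are invented placeholders, not data from the actual problem.
from ortools.sat.python import cp_model

skills = [70, 55, 90, 40, 65, 80, 30, 75, 60, 85, 50, 45]  # one score per person
n_people, n_teams = len(skills), 3

model = cp_model.CpModel()

# x[p, t] == 1 if person p is assigned to team t
x = {(p, t): model.NewBoolVar(f"x_{p}_{t}")
     for p in range(n_people) for t in range(n_teams)}

# Each person belongs to exactly one team
for p in range(n_people):
    model.Add(sum(x[p, t] for t in range(n_teams)) == 1)

# Teams are (roughly) equal in size
base = n_people // n_teams
for t in range(n_teams):
    size = sum(x[p, t] for p in range(n_people))
    model.Add(size >= base)
    model.Add(size <= base + 1)

# Balance total skill: minimize the strongest-minus-weakest gap
team_skill = []
for t in range(n_teams):
    s = model.NewIntVar(0, sum(skills), f"skill_{t}")
    model.Add(s == sum(skills[p] * x[p, t] for p in range(n_people)))
    team_skill.append(s)
max_s = model.NewIntVar(0, sum(skills), "max_s")
min_s = model.NewIntVar(0, sum(skills), "min_s")
model.AddMaxEquality(max_s, team_skill)
model.AddMinEquality(min_s, team_skill)
model.Minimize(max_s - min_s)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for t in range(n_teams):
        members = [p for p in range(n_people) if solver.Value(x[p, t])]
        print(f"team {t}: {members}, skill {sum(skills[p] for p in members)}")
```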

3

jobeta t1_izy9slt wrote

This is mostly a constrained optimization problem. It could benefit from ML if you need to predict some of the variables you’re optimizing for I guess? How many teams? How big are they? It’s hard to help you without details.

2

Clouwels OP t1_izybpmr wrote

Approximately 150 people split into 11 teams. Person parameters: age, gender, skill index, department - all are numeric. Teams should be roughly equal in size; if a team has fewer members, it should be “stronger” to compensate.

1

Clouwels OP t1_izycir8 wrote

Normally this takes several hours; it's not just about the values but also about experience. That's why I thought it might be possible to use ML.

1

jobeta t1_izyks32 wrote

Here is my 2 cents:

They have a process; it is slow, but it does the job. So start by saving them those several hours and helping them automate their current process. This is valuable (it saves manual labor) and will already be challenging: you will have to sit with them and understand how they do it. If it takes several hours, it is unlikely that they have a deterministic algorithm for it. To create one, you will likely have to have them make a number of decisions, and you can probably help them make those. The outcome should be an algorithm, as simple as possible, that can perform the assignment of associates to teams.
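
As a rough illustration of how simple such a deterministic rule could be (this is not their actual process, and the names and scores are made up), a "snake draft" over a single skill score might look like this:

```python
# A rough sketch (not their actual process): a deterministic "snake draft" that
# hands out people in descending skill order, reversing direction each round so
# no team gets all the top picks. Names and scores are invented placeholders.

def snake_draft(skills: dict[str, float], n_teams: int) -> list[list[str]]:
    teams: list[list[str]] = [[] for _ in range(n_teams)]
    ranked = sorted(skills, key=skills.get, reverse=True)  # strongest first
    for round_no, start in enumerate(range(0, len(ranked), n_teams)):
        chunk = ranked[start:start + n_teams]
        order = range(n_teams) if round_no % 2 == 0 else reversed(range(n_teams))
        for team_idx, person in zip(order, chunk):
            teams[team_idx].append(person)
    return teams

people = {"ana": 92, "ben": 75, "cal": 88, "dia": 60, "eve": 70, "fin": 81}
for i, team in enumerate(snake_draft(people, n_teams=2)):
    print(f"team {i}: {team}, total skill {sum(people[p] for p in team)}")
```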

Once you have solved this problem for them, you can think about ways to improve the assignment. But this opens a very different can of worms. What does better mean? How do I measure that it is better? You will have to define some meaningful metrics (make sure they define them, or that they definitely sign off on them) to be able to compare different assignment algorithms. Because you have so few teams, it will be pretty difficult to design a rigorous experiment that helps you determine whether your new assignment algorithm beats the baseline. You can always come up with some fancy algorithm, but how do you prove it works better? Some associates will say they don't like the new system, some will like it. Who should we believe? Not to mention that you'll want to be able to track teams easily to run some analytics. Chances are you'll have to build tracking for the teams. It might not be worth your time.
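
As an example of the kind of metric that would have to be agreed on (this particular choice is just an illustration, with invented scores), you could score an assignment by how evenly total skill is spread across teams, lower being better:

```python
# One possible "how balanced is this assignment?" metric: the spread (population
# standard deviation) of total team skill. Lower = more balanced. Whether this
# is the right metric is exactly what has to be agreed with the Ops team.
from statistics import pstdev

def balance_score(teams: list[list[str]], skills: dict[str, float]) -> float:
    totals = [sum(skills[p] for p in team) for team in teams]
    return pstdev(totals)

skills = {"ana": 92, "ben": 75, "cal": 88, "dia": 60}   # invented scores
old_assignment = [["ana", "cal"], ["ben", "dia"]]       # totals 180 vs 135
new_assignment = [["ana", "dia"], ["ben", "cal"]]       # totals 152 vs 163
print(balance_score(old_assignment, skills))            # 22.5
print(balance_score(new_assignment, skills))            # 5.5
```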

I've spent months trying to do things like this. The main challenge is that the Ops team wanted something better but never wanted to invest in defining or measuring what better was.

Alternatively, you can keep adding simple constraints to your model to satisfy their intuition, but that's not exactly Machine Learning, so I would try not to get stuck in that position.

Good luck!

2

Clouwels OP t1_izyo4p0 wrote

I've tried to make teams a few times myself, and I've written a silly app that helps me do it. Since I started doing ML, I've wanted to try using it for this case.

Thank you so much for the response, it was very inspiring to me.

1

arg_max t1_izymwa9 wrote

You basically need some kind of value function that estimates how good a given assignment of teams is. For example, if each player has a score between 1 and 100, your value function could simply be the difference between the strongest and the weakest team, which you then minimize. Typically you design this by hand. Then you run a constrained optimization method that makes sure each player gets assigned to exactly one team and probably also takes team size into account. It's not really ML but more of an optimization problem. Though if you really want to, you might try to learn a player score, although it might be hard to collect training data for that.
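
A sketch of that idea under these assumptions (invented scores, the strongest-minus-weakest gap as the hand-designed value function, and a simple swap-based search that preserves team sizes; an exact solver would work just as well):

```python
# A sketch of the idea: a hand-designed value function (strongest-minus-weakest
# total score, smaller is better) plus a simple swap-based local search that
# keeps team sizes fixed. Scores are invented; an exact solver would also work.
import random

def value(teams: list[list[int]], score: dict[int, int]) -> int:
    totals = [sum(score[p] for p in team) for team in teams]
    return max(totals) - min(totals)

def local_search(teams, score, iters=10_000, seed=0):
    rng = random.Random(seed)
    best = value(teams, score)
    for _ in range(iters):
        a, b = rng.sample(range(len(teams)), 2)
        i, j = rng.randrange(len(teams[a])), rng.randrange(len(teams[b]))
        teams[a][i], teams[b][j] = teams[b][j], teams[a][i]  # try swapping two people
        new = value(teams, score)
        if new <= best:
            best = new                                        # keep the swap
        else:
            teams[a][i], teams[b][j] = teams[b][j], teams[a][i]  # undo it
    return teams, best

rng = random.Random(0)
score = {p: rng.randint(1, 100) for p in range(12)}           # player -> skill score
teams = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
teams, gap = local_search(teams, score)
print(f"strongest-vs-weakest gap: {gap}, teams: {teams}")
```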

2

Clouwels OP t1_izyott2 wrote

Thank you, that sounds like the way to go.

1