Tips & Tricks in OR (Operations Research) Practice
- What is Operations Research (OR)?
- A non-exhaustive list of best practices in OR
What is Operations Research (OR), and how does it work? This post highlights a non-exhaustive list of best practices in OR: tips and tricks that may help the OR practitioner get the best out of their work.
Load only necessary data
Usually, data models are far richer than what the optimization actually deals with. There are two ways to handle this: either limit the data model that pulls the information from corporate data silos, or preprocess the data to build the scenario to optimize from a restriction of the available input data.
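The preprocessing option can be sketched as follows. This is an illustrative example, not production code: the field names (`due_date`, `quantity`, etc.) and the horizon filter are assumptions standing in for whatever the real scenario needs.

```python
# Hypothetical example: restrict rich corporate records to the fields and
# rows the optimization scenario actually needs. Field names are illustrative.
def build_scenario(orders, horizon_start, horizon_end):
    """Keep only in-horizon orders, and only the optimization-relevant fields."""
    scenario = []
    for order in orders:
        if horizon_start <= order["due_date"] <= horizon_end:
            scenario.append({
                "id": order["id"],
                "quantity": order["quantity"],
                "due_date": order["due_date"],
            })
    return scenario

orders = [
    {"id": 1, "quantity": 10, "due_date": 5, "customer_address": "...", "sales_rep": "..."},
    {"id": 2, "quantity": 7, "due_date": 42, "customer_address": "...", "sales_rep": "..."},
]
# Only order 1 falls inside the horizon; irrelevant fields are stripped.
scenario = build_scenario(orders, horizon_start=0, horizon_end=30)
```

The optimization engine then only ever sees the lean scenario, which keeps memory usage and model-building time under control.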
More models during prototyping
One can trust an optimization model only by testing it on a set of relevant data. When data comes late, the risk of creating a mathematical model that does not scale stays hidden. That is why we highlighted the urgency of getting relevant data as soon as possible (see §3.1 Data collection). With this in mind, the OR practitioner must quickly reach the point where the complexity of their model can be challenged. For instance, if the model is continuous and linear for most constraints, but one or two specific use cases imply discretization, it is absolutely critical to retrieve or build a data set that allows testing this feature.
Over time, at DecisionBrain we have built our own libraries that can be reused when creating a new product or starting a project. This good practice avoids starting from a blank page. It also progressively adds instances to the common library and improves its robustness, which then benefits all the other projects that use it.
More testing during prototyping
Use all available test environments, such as the CPLEX/CPO command line. This allows the OR practitioner to focus on solver behavior, change settings without changing the code, and analyze the engine's internals, such as the presolved model.
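One simple way to enable this workflow is to export the model in a standard text format that the solver's interactive tools can load directly (for example, LP format, which the CPLEX interactive optimizer can read and then solve with different settings). The tiny model below is purely illustrative:

```python
# Hypothetical example: write a small model in LP format so it can be
# loaded into an interactive solver environment (e.g. "read model.lp"
# then "optimize" in the CPLEX interactive optimizer), letting you change
# settings without touching application code.
lp_model = """\
Minimize
 obj: x + 2 y
Subject To
 c1: x + y >= 3
 c2: x - y <= 2
Bounds
 0 <= x <= 10
 0 <= y <= 10
End
"""

with open("model.lp", "w") as f:
    f.write(lp_model)
```

Exported files like this can also be attached to bug reports, so a solve issue can be reproduced outside the full application.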
Rewrite part of the model
Some models might be mathematically equivalent but the way some constraints are implemented can make a difference. For instance, in a linear problem, avoid getting very long expressions made of tens of thousands of variables. Sometimes, we can see dramatic improvements when switching to a “delta” model. For instance, instead of summing terms from the start of the horizon to the end, you could define a new expression that sums the terms that differ from one period to the next.
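The size difference between the two formulations can be made concrete with a toy inventory-balance example. This is plain Python arithmetic, not solver code; the production and demand figures are made up, and the term counts stand in for the number of variable terms each constraint would carry in the real model.

```python
# Illustrative comparison: building inventory expressions either as full
# sums from the start of the horizon, or as "delta" recursions. Term
# counts show how the model size grows in each case.
T = 5
prod = [4, 3, 5, 2, 6]
dem  = [2, 4, 3, 3, 4]

# "Long sum" form: inventory[t] = sum_{s<=t} (prod[s] - dem[s])
long_terms = 0
inv_long = []
for t in range(T):
    inv_long.append(sum(prod[s] - dem[s] for s in range(t + 1)))
    long_terms += 2 * (t + 1)          # t+1 prod terms and t+1 dem terms

# "Delta" form: inventory[t] = inventory[t-1] + prod[t] - dem[t]
delta_terms = 0
inv_delta = []
for t in range(T):
    prev = inv_delta[t - 1] if t > 0 else 0
    inv_delta.append(prev + prod[t] - dem[t])
    delta_terms += 3                   # previous inventory, prod[t], dem[t]

assert inv_long == inv_delta           # the two forms are equivalent
# long_terms grows quadratically with T; delta_terms grows only linearly.
```

Both formulations define the same quantities, but the delta model keeps every expression short, which typically helps the solver's presolve and memory footprint on long horizons.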
Find a minimal set of conflicts
High-level solvers such as CPLEX/CPO allow the OR practitioner to find a minimal set of conflicting constraints out of the original model. This is very useful to debug mathematical models at an early stage, and also to detect data errors.
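The idea behind such conflict refiners can be sketched with a simplified "deletion filter": repeatedly try dropping a constraint, and keep it only if it is needed to preserve the infeasibility. The sketch below assumes a toy model where each constraint is just a bound interval on a single variable; real solvers (e.g. the CPLEX conflict refiner) work on the actual model and are far more sophisticated.

```python
# Simplified deletion-filter sketch of conflict refinement, on a toy model
# where each constraint is an interval (lo, hi) bounding one variable x.
def feasible(constraints):
    """Feasible iff the intersection of all intervals is non-empty."""
    lo = max(c[0] for c in constraints)
    hi = min(c[1] for c in constraints)
    return lo <= hi

def minimal_conflict(constraints):
    """Try removing each constraint; drop it permanently if the rest
    stays infeasible (i.e. it is not needed to explain the conflict)."""
    core = list(constraints)
    for c in list(core):
        trial = [k for k in core if k is not c]
        if trial and not feasible(trial):
            core = trial
    return core

# (9, 15) clashes with (1, 7) and (2, 8); the refiner isolates one clash.
constraints = [(0, 10), (2, 8), (9, 15), (1, 7)]
print(minimal_conflict(constraints))   # a minimal infeasible subset
```

Presenting the planner with two conflicting constraints instead of a whole infeasible model is usually the fastest way to spot a data error.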
Document your data and mathematical models
Data models usually change in sync with the mathematical model. Both models' descriptions should be properly documented and kept up to date throughout the project and then during its maintenance. Think of the new people who might join the project later and replace the original development team.
Provide a replay mechanism
Once the application is deployed, optimization jobs are executed on the customer's premises or in the cloud. What if a job fails or ends with unexpected results? This behavior needs to be analyzed by the OR developer in the very same context as when the job was launched on the production environment.
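In practice this means capturing, for every job, a self-contained snapshot of everything needed to rerun it. A minimal sketch, assuming a JSON-serializable scenario; the field names and settings are illustrative:

```python
# Hedged sketch of a replay mechanism: capture the input data, the engine
# settings, and the code version in one snapshot that the OR developer can
# reload locally to reproduce a production run. Names are illustrative.
import json

def dump_job_context(path, input_data, engine_settings, code_version):
    snapshot = {
        "code_version": code_version,
        "engine_settings": engine_settings,
        "input_data": input_data,
    }
    with open(path, "w") as f:
        json.dump(snapshot, f, indent=2, sort_keys=True)

def load_job_context(path):
    with open(path) as f:
        return json.load(f)

dump_job_context(
    "job_1234.json",
    input_data={"orders": [{"id": 1, "quantity": 10}]},
    engine_settings={"time_limit_s": 600, "mip_gap": 0.01},
    code_version="2.3.1",
)
snapshot = load_job_context("job_1234.json")
```

Recording the code version alongside the data matters: replaying yesterday's data against today's code is not a faithful reproduction of the incident.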
Build a non-regression benchmark
Build a separate module for non-regression optimization tests. This benchmark will grow its data set throughout the project life cycle. It mainly measures the stability (or the improvement) of the optimization and/or business key performance indicators. Whenever the benchmark shows a quality regression, it should be treated as a high-priority warning that something is probably wrong in the latest code, including third-party library updates.
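The core of such a benchmark is a comparison of current KPI values against a stored baseline, with a tolerance to absorb normal solver noise. An illustrative sketch, assuming "lower is better" cost KPIs and made-up instance names:

```python
# Illustrative non-regression check: compare current KPI values against a
# stored baseline and flag any regression beyond a relative tolerance.
def check_regressions(baseline, current, tolerance=0.01):
    """KPIs are 'lower is better' costs; return the instances whose
    current value exceeds the baseline by more than the tolerance."""
    regressions = []
    for instance, base_value in baseline.items():
        value = current[instance]
        if value > base_value * (1 + tolerance):
            regressions.append((instance, base_value, value))
    return regressions

baseline = {"instance_A": 100.0, "instance_B": 250.0}
current  = {"instance_A": 100.5, "instance_B": 265.0}   # B got ~6% worse
print(check_regressions(baseline, current))
```

Wiring this check into continuous integration turns a silent quality drift into a loud build failure.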
Inject a solution
Provide a tool and an API to load an external solution. This enables us to challenge natural intuitions such as "if I swap the machines of activity 1 and activity 2, I should get a better solution". Such local improvement guesses are quite natural to express but slightly more difficult to implement: they probably imply checking several constraints in cascade. As a consequence, a checker needs to be launched on this new solution, first to assert its feasibility and then to recompute the solution quality indicators, in order to prove or reject the claim that the move actually improves the current solution. Note that this checker can be implemented either:
- Deterministically: we isolate a set of constraints to check (capacities, material flow balance…) and recompute their satisfaction/dissatisfaction manually
- Through the optimization engine: the complete original solution, plus the local modification suggested by the planner, is injected into the solver model as additional "variable = value" constraints. This is the most accurate checker, since the deterministic one may answer "OK" while leaving some unchecked constraints violated
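The deterministic variant can be sketched as follows. Everything here is illustrative (activity names, machine capacities, the cost function): the point is simply that a swap is only accepted after feasibility has been re-checked and the KPI recomputed.

```python
# Deterministic checker sketch for a planner move: swap the machines of
# two activities, re-check machine capacities, then recompute the cost
# KPI before accepting the move. All names and data are illustrative.
def capacity_ok(assignment, durations, capacity):
    """Check that the total load on each machine stays within capacity."""
    load = {}
    for activity, machine in assignment.items():
        load[machine] = load.get(machine, 0) + durations[activity]
    return all(load[m] <= capacity[m] for m in load)

def try_swap(assignment, a1, a2, durations, capacity, cost):
    """Return (accepted, assignment): accept the swap only if the new
    solution is feasible and strictly cheaper than the current one."""
    swapped = dict(assignment)
    swapped[a1], swapped[a2] = assignment[a2], assignment[a1]
    if not capacity_ok(swapped, durations, capacity):
        return False, assignment
    if cost(swapped) >= cost(assignment):
        return False, assignment
    return True, swapped

durations = {"act1": 3, "act2": 5}
capacity = {"m1": 6, "m2": 5}
speed = {"m1": 1.0, "m2": 2.0}   # m2 is twice as slow
cost = lambda a: sum(durations[act] * speed[m] for act, m in a.items())
assignment = {"act1": "m1", "act2": "m2"}
accepted, new_assignment = try_swap(assignment, "act1", "act2",
                                    durations, capacity, cost)
```

The engine-based variant would instead fix every variable to its proposed value in the solver model and let the solver report infeasibility, which catches constraints the deterministic checker does not know about.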
Originally published March 30, 2020, modified March 23, 2023