Tips & Tricks in OR Practice
- What is Operations Research (OR)?
- A non-exhaustive list of best practices in OR
What is Operations Research (OR)?
What is Operations Research (OR) and how does it work? Below is a non-exhaustive list of best practices: tips & tricks that may help the OR practitioner get the best out of their work.
Load only necessary data
Usually, data models are far richer than what the optimization actually deals with. There are two ways to handle this: either limit the data model that pulls information from corporate data silos, or preprocess the data to create the scenario to optimize from a restricted subset of the available input data.
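As a minimal sketch of the second option, assuming input records arrive as dictionaries (the field names "id", "capacity" and "cost" are illustrative, not from any real schema), the scenario can keep only the fields the optimizer needs:

```python
# Sketch: restrict a rich corporate record to the optimizer-relevant fields.
# Field names are illustrative assumptions, not a real data model.
OPTIMIZER_FIELDS = {"id", "capacity", "cost"}

def to_scenario(records):
    """Keep only the fields the optimization model actually uses."""
    return [{k: v for k, v in r.items() if k in OPTIMIZER_FIELDS}
            for r in records]

raw = [{"id": 1, "capacity": 10, "cost": 4.5,
        "address": "12 rue Exemple", "owner": "ops team"}]
print(to_scenario(raw))  # [{'id': 1, 'capacity': 10, 'cost': 4.5}]
```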
More models during prototyping
One can trust an optimization model only by testing it on a set of relevant data. When data comes late, there is a hidden risk of creating a mathematical model that does not scale. That is why we highlighted the importance of getting relevant data as soon as possible (see §3.1 Data collection). With this in mind, the OR practitioner must quickly reach the point where the complexity of the model can be challenged. For instance, if the model is linear and continuous for most constraints except one or two specific use cases that imply discretization, it is absolutely critical to retrieve or build a data set that allows testing this feature.
Over time, at DecisionBrain we have built our own libraries that can be reused when creating a new product or starting a project. This good practice avoids starting from a blank page. It also progressively adds instances to the common library and improves its robustness, which then benefits all the other projects that use it.
More testing during prototyping
Use all available test environments, such as the CPLEX/CPO command line. This allows the OR practitioner to focus on solver behavior, change settings without changing the code, and analyse the behavior of the engine, such as the presolved model.
Rewrite part of the model
Some models may be mathematically equivalent, yet the way some constraints are implemented can make a difference. For instance, in a linear problem, avoid building very long expressions made of tens of thousands of variables. Switching to a “delta” model can sometimes bring dramatic improvements: instead of summing terms from the start of the horizon up to each period, define a new expression that only adds the terms that differ from one period to the next.
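The identity behind the delta reformulation can be illustrated with plain numbers (toy data, independent of any particular solver): the naive form builds one long cumulative sum per period, while the delta form chains each period's value to the previous one, so each expression stays short.

```python
# Illustrative only: the "delta" reformulation replaces per-period
# cumulative sums (up to T terms each) with short recurrences.
x = [3, -1, 4, 1, 5]  # per-period terms (toy data)

# Naive: one long expression per period t, summing from the horizon start.
naive = [sum(x[:t + 1]) for t in range(len(x))]

# Delta: each cumulative value references only the previous one plus x[t].
delta = [x[0]]
for t in range(1, len(x)):
    delta.append(delta[t - 1] + x[t])

assert naive == delta  # same values, far fewer terms per expression
```

In a real model, the same trick applies to decision-variable expressions: a chained constraint `s[t] == s[t-1] + x[t]` keeps the model sparse compared to one dense sum per period.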
Find a minimal set of conflicts
High-level solvers such as CPLEX / CPO allow the OR practitioner to extract a minimal set of conflicts from the original model. This is very useful for debugging mathematical models at an early stage, and it also detects data errors.
Document your data and mathematical models
Data models usually change in sync with the mathematical model. Both models' descriptions should be properly documented and maintained throughout the project and then during its maintenance phase. Think of new team members who might join the project and replace the original development team.
Provide a replay mechanism
Once the application is deployed, optimization jobs are executed on customers’ premises or in the cloud. What if a job fails or ends with unexpected results? This behavior needs to be analyzed by the OR developer with the very same context as when the job was launched in the production environment.
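One simple way to make a job replayable (a sketch under assumptions, not DecisionBrain's actual mechanism) is to serialize the job's inputs and engine settings alongside every run, keyed by a digest that can be logged with the job:

```python
# Hypothetical replay snapshot: persist everything a failed production job
# needs so the OR developer can reload the exact same context offline.
import hashlib
import json

def snapshot(inputs: dict, settings: dict) -> str:
    """Serialize the job context to disk; return a digest for the job log."""
    payload = json.dumps({"inputs": inputs, "settings": settings},
                         sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:12]
    with open(f"replay_{digest}.json", "w") as f:
        f.write(payload)
    return digest

def replay(digest: str):
    """Reload the exact inputs and settings of a recorded run."""
    with open(f"replay_{digest}.json") as f:
        data = json.load(f)
    return data["inputs"], data["settings"]
```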
Build a non-regression benchmark
Build a separate module for non-regression optimization tests. This benchmark's data set will grow throughout the project life cycle. It mainly measures the stability (or the improvement) of the optimization and/or business key performance indicators. Whenever the benchmark shows a drop in quality, it should be treated as a high-priority warning that something is probably wrong in the latest code, including third-party library updates.
Inject a solution
Provide a tool and an API to load an external solution. This makes it possible to challenge natural intuitions such as “if I swap the machines of activity 1 and activity 2, I should get a better solution”. Such local-improvement guesses are quite natural to express but slightly more difficult to implement: they probably imply checking several constraints in cascade. As a consequence, a checker needs to be run on the new solution, first to assert its feasibility and then to recompute the solution quality indicators, to prove or reject the assertion that the move actually improves the current solution. Note that this checker can be implemented either:
- Deterministically: we isolate a set of constraints to check (capacities, material flow balance…) and manually recompute their satisfaction or dissatisfaction
- Through the optimization engine: the complete original solution plus the local modification suggested by the planner is injected into the solver model as additional constraints “variable = value”. This is the most accurate checker, since the first one may answer “OK” while leaving some unchecked constraints infeasible
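A minimal sketch of the first (deterministic) option, with made-up names and toy data: after the planner swaps the machines of two activities, re-verify a capacity constraint before recomputing the quality indicators.

```python
# Deterministic checker sketch (illustrative names, not a real API):
# verify machine capacities still hold after a planner's swap.
def capacity_ok(assignment, durations, capacity):
    """assignment: activity -> machine; durations: activity -> hours;
    capacity: machine -> available hours."""
    load = {}
    for activity, machine in assignment.items():
        load[machine] = load.get(machine, 0) + durations[activity]
    return all(load[m] <= capacity.get(m, 0) for m in load)

assignment = {"a1": "m2", "a2": "m1"}  # after swapping a1 and a2
durations = {"a1": 5, "a2": 3}
capacity = {"m1": 4, "m2": 6}
print(capacity_ok(assignment, durations, capacity))  # True: swap is feasible
```

Only once such checks pass does it make sense to recompute the quality indicators and confirm or reject the planner's intuition.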
Originally published March 30, 2020, modified June 3, 2020