Explaining Solutions of Combinatorial Optimization
What is Combinatorial Optimization and Why Do Its Results Need to Be Explained?
Combinatorial Optimization (CO) drives many of the decision-support tools that we use today, across industries like logistics, finance, and healthcare. At its core, combinatorial optimization is about making the best possible decisions from a finite set of choices while respecting constraints. These problems arise in various domains—scheduling employees, routing delivery trucks, designing efficient supply chains, and more. What makes them challenging is their combinatorial nature: as the number of choices increases, the number of possible solutions grows exponentially. Finding an optimal solution often requires advanced algorithms, such as mixed-integer linear programming, constraint programming, or heuristic and metaheuristic methods.
While these techniques can generate high-quality solutions, their complexity can make it difficult for decision-makers to understand why certain choices are made over others. Even those familiar with the mathematics and algorithms behind CO might find some recommendations surprising or counterintuitive. This can create a gap in understanding, leading to hesitation or even skepticism about implementing the tool’s decisions.
Consider, for example, an organization using a decision-support system to manage its human resources more effectively by solving a combinatorial optimization problem called the Workforce Scheduling and Routing Problem (WSRP): assigning tasks spread across a region to mobile employees and defining a route-and-schedule pair for each of them. These pairs detail when each employee starts and finishes their workday, which tasks they must perform, in what order, at what times, etc. Obviously, the solutions must respect various constraints, such as employees’ working hours, tasks’ time-window restrictions, and skill requirements.
Given the complexity of such a problem, involving both routing and scheduling considerations, it is easy for decision-makers to feel overwhelmed. They might struggle to understand why the suggested solution is the right one. They might have questions about certain aspects of the solution. Imagine for instance that a decision-maker notices that an employee, Anna, is not assigned a task that lies directly along her route between two consecutive tasks that she must perform. From this observation, the decision-maker may wonder: “Why is Anna not handling this task?” There could be several reasons: perhaps Anna lacks the required skills, perhaps she cannot reach the task early enough to perform it while its time-window is still open, or perhaps the heuristic behind the system missed it.
For the designers of CO-based decision-support systems like DecisionBrain, addressing such questions can be a challenge. It requires getting familiar with the decision-makers’ specific data and crafting explanations for answering their concerns – a time-consuming process. Yet, without such answers to their questions, decision-makers may hesitate to use the decision-support system’s results. A promising way to bridge this gap is to develop methods that automatically generate explanations, helping users better understand and trust the solutions provided by CO-based systems.
In this post, we will first introduce a few theoretical elements about explanations from the perspective of social sciences. Then, for those interested in slightly technical aspects, we will give some insights about how we can model and generate two kinds of explanations, contrastive and counterfactual, in a CO context.
Explanations in Social Science and Artificial Intelligence
As described by Miller in [Mil19], an explanation is a process of knowledge transfer between an explainer and an explainee who raises a question about a fact. The explainer makes a diagnosis of causality by determining the causes behind the fact that the explainee is investigating, and the explainee receives a product of this diagnosis from the explainer. Hence, explanations are part of a conversation, i.e., a social interaction in which speakers make several contributions that follow some principles in order to form an interaction of quality. According to Grice in [Gri75], contributions must satisfy, among others, a principle of relation (“Only provide information that is related to the conversation”) and a principle of manner (“Avoid obscurity of expression; avoid ambiguity; be brief (avoid unnecessary prolixity); be orderly”). As a consequence, answering the decision-maker’s question “Why is Anna not handling this task?” by pointing out that the provided solution is mathematically/algorithmically proven optimal is not a satisfying explanation, because such information would probably be obscure to them.
Providing relevant explanations to decision-makers is a challenge. It is a challenge to understand the non-obvious part of a solution that leaves a decision-maker wondering. It is a challenge to find the right compromise between a long, exhaustive explanation detailing very specific arguments point by point and a short explanation relying on abstract arguments. Finally, it is a challenge to use vocabulary adapted to the audience. Still, we would like to share two key ideas that might help anyone who wants to develop techniques for providing explanations in a combinatorial optimization context. More precisely, we describe two kinds of explanations that are frequently used and are therefore essential to anyone who intends to develop such techniques.
The first kind of explanation widely used is the contrastive explanation. As Lipton argues [Lip90], in everyday life, our explanations are often contrastive: they justify something in contrast to something else. Contrastive explanations answer questions, also called contrastive, of the following form: “Why does this observation occur and not that alternative one?”. “This observation” is called the fact and refers to a detail or property that can be observed in the result. “That alternative one” is called the foil and refers to a hypothetical other aspect that the explainee would have expected to observe instead. A contrastive explanation must identify reasons why the fact occurs instead of the foil (because the fact is more relevant, because the foil is impossible, etc.) and communicate them in appropriate language and vocabulary. In our CO context, we will identify reasons related to the feasibility/infeasibility and the quality of the solutions.
Note that the opposition between a fact and a foil may sometimes be implicit in a contrastive question, especially when the question is formulated as a negation. For instance, “Why is Anna not handling this task?”. In such a contrastive question, the opposition between the fact “Anna is not handling this task” and the foil “Anna is handling this task” is implicit. We suggest making the opposition between the fact and the foil explicit as much as possible, as it allows us to understand clearly what foil the explainee expected instead of the fact.
The second kind of explanation often used is the counterfactual explanation. A counterfactual explanation determines and presents a hypothetical alternative input that would have resulted in a different output, such as an output specified by the user [WMR18, MZR21]. An explicit way to ask for such counterfactual explanations is to use questions starting with “how to”, for instance, “How to make Anna handle this task?”. As such, counterfactual explanations are sometimes termed “how-to” explanations. In our CO context, a counterfactual explanation must identify alterations of the instance parameters (for instance, reducing the duration of a task, extending the time-window of another, etc.) that make a decision-maker-specified solution possible.
Note that a counterfactual explanation could also be used to answer a contrastive question: the answer to “why does this fact occur and not this foil?” could indeed take the form “in order to obtain the foil, the input data should have been different in such a way”. In this case, the counterfactual explanation corresponds to identifying a change in the input that would turn the fact mentioned in the question into the foil. However, for simplicity and clarity, we will consider contrastive and counterfactual explanations as two separate kinds of explanations.
In the following sections, we will give some insights about how we can provide contrastive and counterfactual explanations to decision-makers.
Making a Contrastive Explanation
The first step towards building contrastive explanation generation techniques is to identify the different topics that decision-makers may wonder about. In practice, we must identify the contrastive questions that decision-makers may have about a solution and define a list of templates of contrastive questions, such as (a small code sketch after the list illustrates one possible representation of such templates):
a) “Why is <employee>, who is performing <task> and <task> consecutively, not performing <task> in between?”
b) “Why does <employee> not perform <task> in addition to the tasks they already perform, even if it means changing the order of their tasks?”
c) etc.
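To make this concrete, here is a minimal Python sketch of how such question templates could be stored and instantiated with elements of a solution. The template wording comes from the list above; the dictionary structure and the placeholder names are implementation assumptions of ours, not part of any existing tool.

```python
# Illustrative only: one possible way to store contrastive-question templates and
# instantiate them with elements of a solution (the data structure is an assumption).
TEMPLATES = {
    "a": ("Why is {employee}, who is performing {task1} and {task2} consecutively, "
          "not performing {task3} in between?"),
    "b": ("Why does {employee} not perform {task} in addition to the tasks they "
          "already perform, even if it means changing the order of their tasks?"),
}

# Instantiating template a) with the elements of the running example.
question = TEMPLATES["a"].format(employee="Anna", task1="task A",
                                 task2="task B", task3="task C")
print(question)
```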
Suppose that we want to answer the question “Why is Anna, who is performing task A and task B consecutively, not performing task C in between?”, which is based on template a). To answer this question, we transform the current solution into a new one by inserting task C between tasks A and B in Anna’s planning, which can be done with a hand-crafted insertion algorithm running in polynomial time. We then check the feasibility of the new solution, as well as its quality if it is feasible. Three cases may then occur:
1) The new solution is infeasible.
2) The new solution is feasible but its quality is lower than that of the previous one.
3) The new solution is feasible and its quality is better than that of the previous one.
Each case leads to a different explanation.
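To illustrate, here is a simplified Python sketch of this transform-and-classify step. It assumes a single employee’s route, a greedy forward pass to recompute start times, plain dictionaries for durations, time-windows and travel times, and total travel time as the quality measure; it is a sketch under these assumptions, not DecisionBrain’s implementation.

```python
# A simplified sketch of the polynomial-time transformation and case classification.
# Assumed data: `tasks` maps a task name to (duration, (window_open, window_close)),
# `travel` maps a pair of task names to a travel time; all times are in minutes.

def insert_and_classify(route, new_task, after_task, tasks, travel, old_travel_time):
    pos = route.index(after_task) + 1
    new_route = route[:pos] + [new_task] + route[pos:]

    # Greedy forward pass: start every task as early as its time-window allows.
    time, prev, feasible, total_travel = 0, None, True, 0
    for name in new_route:
        duration, (window_open, window_close) = tasks[name]
        if prev is not None:
            total_travel += travel[(prev, name)]
            time += travel[(prev, name)]
        start = max(time, window_open)
        if start > window_close:      # the time-window has closed before arrival
            feasible = False
        time, prev = start + duration, name

    if not feasible:
        return 1                      # case 1): the new solution is infeasible
    if total_travel > old_travel_time:
        return 2                      # case 2): feasible but of lower quality
    return 3                          # case 3): feasible and of better quality

# Running example (times in minutes since midnight, numbers are illustrative):
# inserting task C between tasks A and B in Anna's route.
tasks = {"A": (60, (600, 660)), "B": (45, (780, 900)), "C": (20, (600, 705))}
travel = {("A", "C"): 50, ("C", "B"): 30, ("A", "B"): 30}
print(insert_and_classify(["A", "B"], "C", "A", tasks, travel, old_travel_time=30))
```

With these illustrative numbers, Anna cannot reach task C before 11:50 AM while its time-window closes at 11:45 AM, so the sketch returns case 1).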
In case 1), we must pinpoint the constraints causing the infeasibility of the new solution and express them as text. Is it due to capacity constraints, such as Anna not possessing the necessary skills for task C? Is it due to temporal constraints, such as Anna being unable to arrive at and complete task C before its time-window closes? Etc. For a polynomial-time solution transformation like inserting a task into an employee’s schedule between two consecutive ones, it is relatively straightforward to create a feasibility-check algorithm that inspects groups of constraints one after the other and produces a textual explanation as soon as a group of constraints is violated. For instance, “Anna does not perform task C between tasks A and B because, even by finishing task A at the earliest possible time, at 11:00 AM, she cannot reach task C before 11:50 AM, whereas the time-window of task C closes at 11:45 AM”.
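The following sketch illustrates such a sequential constraint-group check for the insertion of task C. The Task data class, the skill attribute and the function name are our own illustrative assumptions; the time-window reasoning reproduces the example above.

```python
# Illustrative sketch: check constraint groups one after the other and return a
# textual reason as soon as one group is violated by the insertion of task C.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int            # in minutes
    window: tuple            # (opening, closing), in minutes since midnight
    required_skill: str

def fmt(minutes):
    return f"{minutes // 60}:{minutes % 60:02d} AM"   # simplistic, morning times only

def explain_infeasible_insertion(employee_skills, task_a, task_c, task_b,
                                 earliest_finish_a, travel):
    # Constraint group 1: capacity constraints (skills).
    if task_c.required_skill not in employee_skills:
        return (f"Anna does not perform task {task_c.name} because she lacks "
                f"the required skill '{task_c.required_skill}'.")
    # Constraint group 2: temporal constraints (time-windows).
    arrival = earliest_finish_a + travel[(task_a.name, task_c.name)]
    if arrival > task_c.window[1]:
        return (f"Anna does not perform task {task_c.name} between tasks "
                f"{task_a.name} and {task_b.name} because, even by finishing task "
                f"{task_a.name} at the earliest possible time, at {fmt(earliest_finish_a)}, "
                f"she cannot reach task {task_c.name} before {fmt(arrival)}, whereas "
                f"its time-window closes at {fmt(task_c.window[1])}.")
    return None   # no group violated: the insertion is actually feasible

# Reproducing the example: finish A at 11:00 AM, 50 minutes of travel to C,
# and C's time-window closing at 11:45 AM ("plumbing" is an invented skill).
A = Task("A", 60, (600, 660), "plumbing")
C = Task("C", 20, (600, 705), "plumbing")
B = Task("B", 45, (780, 900), "plumbing")
print(explain_infeasible_insertion({"plumbing"}, A, C, B,
                                   earliest_finish_a=660, travel={("A", "C"): 50}))
```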
In case 2), the new solution is feasible but its quality is lower than that of the previous one. The textual explanation is simpler to design here: it consists of pointing out the feasibility of the new solution and commenting on the loss of quality. For instance, “Anna does not perform task C between tasks A and B because, even though she can perform task C, her total travel time would increase by 1.5 hours for a task of 20 minutes, which would make the solution less efficient”.
In case 3), the new solution is feasible and its quality is better than that of the previous one. If the previous solution is optimal, this case cannot occur by definition. But if the previous solution is close to optimal without being optimal, this case may occur: by transforming the solution into a new one, we may discover a better one. Still, this case remains unlikely, as close-to-optimal solutions are usually hard to improve with small transformations. In any case, whenever it does occur, we have to acknowledge that the previous solution was not optimal and that the decision-maker was right to raise the question. For instance, “Indeed, Anna can perform task C between tasks A and B, and it improves the solution quality by increasing working time by 30 minutes while increasing the travel time by only 20 minutes”.
The approach described above for the question “Why is Anna, who is performing task A and task B consecutively, not performing task C in between?” consists in transforming the current solution into a single new solution using a polynomial-time algorithm, determining which of cases 1), 2) or 3) this new solution falls into, and writing the explanation text accordingly. It can be generalized to other contrastive questions. However, depending on the complexity of the contrastive question asked, we may need to adapt the approach in two respects:
- The current solution may need to be transformed into more than one new solution in order to answer the question.
- Computing the solution transformation may require MILP techniques, which do not run in polynomial time.
Suppose now that we want to answer the question “Why does Anna not perform task C in addition to the tasks she already performs, even if it means changing the order of her tasks?”, which follows template b). To address this question, we need to adapt the approach described for the first question in the two respects mentioned above. First, we need to consider the set of solutions obtained by adding task C to Anna’s schedule and reordering her tasks. Among these solutions, we look for the best new solution, which is either the best feasible solution, if one exists, or the closest-to-feasibility solution otherwise. Second, rather than explicitly generating each possible schedule permutation and evaluating each corresponding new solution, we leverage MILP to efficiently explore this solution space. More precisely, we define a MILP that first minimizes infeasibility and then maximizes solution quality. Once the best new solution has been identified, we determine whether it falls into case 1), 2) or 3), and generate the explanation accordingly. A key advantage of this approach is that the MILP formulation is significantly smaller than the original optimization problem, as it focuses only on a restricted subset of solutions. This reduction in problem size enhances computational efficiency, making it feasible to solve even in real-time decision-support scenarios. Additionally, we can further improve MILP performance by warm-starting the solver with Anna’s original schedule as an initial solution. Numerical experiments show that explanations can typically be generated in under 15 seconds, making this approach practical for real-time decision-making.
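For readers curious about what such a restricted MILP might look like, here is a hedged sketch built with the open-source PuLP library and its default CBC solver. The data, the big-M time-propagation constraints, the soft time-windows and the two-stage (lexicographic) solve are simplifying assumptions of ours, not the actual DecisionBrain model.

```python
import itertools
import pulp

# Illustrative data (minutes since midnight): Anna's current tasks A, B, D plus the
# extra task C, with durations, time-windows on start times, and travel times.
tasks = ["A", "B", "C", "D"]
duration = {"A": 60, "B": 45, "C": 20, "D": 30}
window = {"A": (540, 660), "B": (780, 900), "C": (600, 705), "D": (960, 1020)}
nodes = tasks + ["depot"]
travel = {(i, j): 50 if {i, j} == {"A", "C"} else 30
          for i, j in itertools.permutations(nodes, 2)}
day_start, day_end = 480, 1080

model = pulp.LpProblem("resequence_anna", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, j) for i in nodes for j in nodes if i != j],
                          cat="Binary")                        # arc (i, j) is used
t = pulp.LpVariable.dicts("t", tasks, lowBound=day_start, upBound=day_end)  # start times
slack = pulp.LpVariable.dicts("slack", tasks, lowBound=0)      # time-window violations

# Route structure: each node has exactly one successor and one predecessor.
for i in nodes:
    model += pulp.lpSum(x[i, j] for j in nodes if j != i) == 1
    model += pulp.lpSum(x[j, i] for j in nodes if j != i) == 1

# Big-M time propagation along used arcs (this also rules out subtours among tasks).
M = day_end
for i in tasks:
    for j in tasks:
        if i != j:
            model += t[j] >= t[i] + duration[i] + travel[i, j] - M * (1 - x[i, j])

# Soft time-windows: slack measures by how much a start time exceeds its window.
for i in tasks:
    model += t[i] >= window[i][0]
    model += t[i] <= window[i][1] + slack[i]

# Stage 1: minimize total infeasibility (sum of time-window violations).
model += pulp.lpSum(slack[i] for i in tasks)
model.solve(pulp.PULP_CBC_CMD(msg=False))
best_slack = pulp.value(model.objective)

# Stage 2: keep infeasibility at its minimum, then maximize quality by
# minimizing total travel time.
model += pulp.lpSum(slack[i] for i in tasks) <= best_slack + 1e-6
model.setObjective(pulp.lpSum(travel[i, j] * x[i, j]
                              for i in nodes for j in nodes if i != j))
model.solve(pulp.PULP_CBC_CMD(msg=False))
print("minimum infeasibility:", best_slack,
      "| total travel time:", pulp.value(model.objective))
```

In a full system, such a model would also carry the other WSRP constraints (skills, working hours, breaks, etc.) and would be warm-started with Anna’s current schedule.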
To end this section, let’s come back to the decision-maker’s first question, “Why is Anna, who is performing task A and task B consecutively, not performing task C in between?”, and suppose that inserting task C makes the solution infeasible, so that the answer is “Anna does not perform task C between tasks A and B because, even by finishing task A at the earliest possible time, at 11:00 AM, she cannot reach task C before 11:50 AM, whereas the time-window of task C closes at 11:45 AM”. Quite naturally, the decision-maker may wonder what slight changes in the instance data could have allowed Anna to perform task C. This thought leads them to request a counterfactual explanation by asking “How to make Anna, who is performing task A and task B consecutively, perform task C in between?”
Making a Counterfactual Explanation
The first step towards building counterfactual explanation generation techniques is the same as in the contrastive case. What is more, we can even define a list of templates of counterfactual questions by transforming each of the contrastive questions into a counterfactual one, as follows:
a) “How to make <employee>, who is performing <task> and <task> consecutively, perform <task> in between?”
b) “How to make <employee> perform <task> in addition to the tasks they already perform, even if it means changing the order of their tasks?”
c) etc.
Recall that, to answer such questions, we aim to find how some instance parameters can be altered to make the decision-maker’s desired solution possible (for instance, the solution obtained by inserting task C between tasks A and B in Anna’s schedule). Therefore, in addition to the counterfactual question, allowable alteration ranges for some instance parameters must be specified. We can think of two ways to define such alteration ranges. Either alteration ranges are automatically applied to a subset of the instance parameters according to predefined rules, or we let decision-makers decide these ranges themselves. The latter is probably the most relevant, as decision-makers are likely to have knowledge about the data that cannot always be encoded into rules. For instance, they may know from experience that some task durations can reasonably be reduced by 15 minutes because, in practice, these tasks are performed in less time than the instance data says. Or they may know that it is easy to ask a certain client to extend a task’s time-window by 30 minutes in the afternoon.
Now, suppose that the decision-maker’s question is “How to make Anna, who is performing task A and task B consecutively, perform task C in between?”, which follows template a). To address this question, given allowable alteration ranges for some instance parameters, we must explore all possible modifications to the instance data that could enable the desired solution. Instead of testing these alterations one by one and then searching for the best solution in the resulting feasible space, we leverage MILP techniques to efficiently explore both the data modifications and the solution space simultaneously. In this formulation, the instance alterations – such as adjusting task time-windows, modifying travel times, or relaxing scheduling constraints – are represented by decision variables. The objective is to find the best set of modifications by following a multi-objective optimization strategy: first minimizing infeasibility, then minimizing both the number and magnitude of alterations, and finally maximizing the solution’s quality.
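As an illustration, here is a hedged PuLP sketch of such a counterfactual model for the fixed sequence A → C → B, in which only two alterations are allowed: extending task C’s time-window and reducing its duration. All names, numbers and allowable ranges are illustrative assumptions, and the single weighted objective is a simple stand-in for the multi-objective strategy described above.

```python
import pulp

# Illustrative data (minutes since midnight): Anna finishes task A at 11:00 AM,
# travels 50 minutes to task C, whose time-window closes at 11:45 AM and whose
# duration is 20 minutes; task B can start until 3:00 PM after 30 minutes of travel.
finish_a, travel_a_c, travel_c_b = 660, 50, 30
dur_c, close_c = 20, 705
latest_start_b = 900

# Allowable alteration ranges (here, as if specified by the decision-maker).
max_window_extension = 30     # C's time-window may close up to 30 minutes later
max_duration_reduction = 10   # C's duration may be reduced by up to 10 minutes

model = pulp.LpProblem("counterfactual_insert_C", pulp.LpMinimize)
ext = pulp.LpVariable("extend_window_C", 0, max_window_extension)
red = pulp.LpVariable("reduce_duration_C", 0, max_duration_reduction)
use_ext = pulp.LpVariable("use_ext", cat="Binary")   # is this alteration used at all?
use_red = pulp.LpVariable("use_red", cat="Binary")

start_c = finish_a + travel_a_c                      # earliest possible start of C
model += start_c <= close_c + ext                    # C starts before its (extended) window closes
model += start_c + (dur_c - red) + travel_c_b <= latest_start_b  # B remains reachable in time
model += ext <= max_window_extension * use_ext       # link magnitudes to usage indicators
model += red <= max_duration_reduction * use_red

# Weighted stand-in for the lexicographic objective: first few alterations, then small ones.
model += 1000 * (use_ext + use_red) + (ext + red)
model.solve(pulp.PULP_CBC_CMD(msg=False))

if pulp.LpStatus[model.status] == "Optimal":
    if ext.value() > 0:
        print(f"Extend task C's time-window by {ext.value():.0f} minutes.")
    if red.value() > 0:
        print(f"Reduce task C's duration by {red.value():.0f} minutes.")
else:
    print("No alteration within the allowed ranges makes the insertion feasible.")
```

With these numbers, the model finds that extending task C’s time-window by 5 minutes is enough to make the insertion feasible.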
This process can lead to two possible outcomes. In the first case, no feasible solution can be found within the given alteration ranges, meaning that even the most favorable instance adjustments are insufficient to make task C fit within Anna’s schedule. In the second case, at least one set of instance modifications enables a feasible solution where Anna can perform task C between tasks A and B. In this scenario, the decision-maker must evaluate whether the trade-offs – such as increased travel time or adjustments to employee schedules – are acceptable in terms of overall solution quality.
A key advantage of this MILP-based approach is its computational efficiency. While adding decision variables and constraints to control instance modifications increases the problem’s size, the formulation remains relatively small compared to the original optimization problem, as it focuses on a restricted solution space. Additionally, warm-starting the solver with the initial solution improves convergence speed. Numerical experiments show that most of the time counterfactual explanations can be computed in under 15 seconds, making real-time decision support feasible.
In summary, this approach enables real-time counterfactual explanations, helping decision-makers understand not only why a task was not assigned but also how it could become feasible through controlled modifications to the instance data.
Conclusion
In this post, we explored why DecisionBrain is actively investigating the topic of explaining Combinatorial Optimization (CO) results. Drawing from principles in both Social Sciences and Artificial Intelligence, we highlighted two key types of explanations, contrastive and counterfactual explanations, and discussed their relevance in decision-support systems. We then outlined concrete methods for generating such explanations in a CO context, leveraging polynomial-algorithm-based or MILP-based approaches to efficiently analyze and communicate the reasoning behind optimization decisions. Beyond making CO solutions more transparent, explanation techniques also contribute to increasing trust and adoption among decision-makers. By addressing questions like “Why is Anna not performing task C?” or “How to make Anna perform task C?”, they help bridge the gap between algorithmic decision-making and human intuition.
For readers interested in more technical details, we encourage you to explore [LGMO25], [LGMO23], and [Ler23], which delve deeper into explanation methods in combinatorial optimization.
Looking ahead, exciting research directions include integrating AI-driven text generation to automatically craft detailed, context-aware explanations. Combining machine learning and optimization techniques could also enhance the interpretability of CO models, paving the way for more user-friendly and explainable decision-support systems.
References
[Gri75] Grice, H.P., 1975. Logic and conversation. In: Cole, P., Morgan, J.L. (Eds.), Syntax and Semantics, Vol. 3: Speech Acts. Academic Press, pp. 41–58.
[Lip90] Lipton, P., 1990. Contrastive explanation. Royal Institute of Philosophy Supplement 27, 247–266.
[Mil19] Miller, T., 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1–38.
[MZR21] Mohseni, S., Zarei, N., Ragan, E.D., 2021. A multidisciplinary survey and framework for design and evaluation of Explainable AI systems. ACM Transactions on Interactive Intelligent Systems 11(3–4).
[WMR18] Wachter, S., Mittelstadt, B., Russell, C., 2018. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology 31.
[LGMO23] Lerouge, M., Gicquel, C., Mousseau, V., Ouerdane, W., 2023. Counterfactual explanations for Workforce Scheduling and Routing Problems. In Proceedings of the 12th International Conference on Operations Research and Enterprise Systems (ICORES), SCITEPRESS, pp. 50–61.
[LGMO25] Lerouge, M., Gicquel, C., Mousseau, V., Ouerdane, W., 2025. Modeling and generating user-centered contrastive explanations for the Workforce Scheduling and Routing Problem. International Transactions in Operational Research.
[Ler23] Lerouge, M., 2023. Designing and generating user-centered explanations about solutions of a Workforce Scheduling and Routing Problem. Ph.D. thesis, Université Paris-Saclay.
Mathieu Lerouge
Post-Doctoral Researcher, DecisionBrain
About the Author
Mathieu Lerouge is a Post-Doctoral Researcher in Computer Science at the University of Bologna and has been a research partner of DecisionBrain for five years. He obtained a PhD in Computer Science from Université Paris-Saclay. His research bridges disciplines such as Operations Research, Artificial Intelligence, and the Social Sciences. He has published several articles in international journals and conference proceedings and has presented his work at various national and international conferences. You can reach Mathieu at: [email protected]
At DecisionBrain, we deliver AI-driven decision-support solutions that empower organizations to achieve operational excellence by enhancing efficiency and competitiveness. Whether you’re facing simple challenges or complex problems, our modular planning and scheduling optimization solutions for manufacturing, supply chain, logistics, workforce, and maintenance are designed to meet your specific needs. Backed by over 400 person-years of expertise in machine learning, operations research, and mathematical optimization, we deliver tailored decision support systems where standard packaged applications fall short. Contact us to discover how we can support your business!