Prof. Gene Freuder wrote a position paper, “Complete Explanations,” for the Second Workshop on Progress Towards the Holy Grail, to be held on Aug 27, 2018 during CP 2018 in Lille, France. Gene writes: “As AI becomes more ubiquitous there is a renewed interest in computers being able to provide explanations, and the European GDPR provides special impetus.” Gene’s paper concentrates on constraint satisfaction problems (CSPs), but, as I showed in my 2016 paper, a decision model can be considered a CSP, so everything Gene is talking about applies directly to business decision modeling.
No matter what technique we use to find solutions to the problem (constraint propagation, machine learning, or inferential rule engines), automated decisions always require explanations! Gene sets the goal:
“The position taken here is that it can be worthwhile to start with truly complete explanations and abstract and limit from there. The goal is to provide a high-level “big picture” of the problem, in a form readily meaningful to a human user. The hope is that this may, as well, lead to general insights into constraint satisfaction problem structure… We need not just scalable algorithms but effective human-computer interfaces, including visualization tools, that help users grasp the big picture and explore their options.”
For classical CSPs, explanations are difficult when a constraint satisfaction problem is unsolvable. As Gene wrote: “To misquote Tolstoy: Solvable CSPs are all alike; every unsolvable CSP is unsolvable in its own way.” The workshop will include interesting presentations that will hopefully bring progress toward the goal Gene has stated. So, if you can, I highly recommend attending Gene’s workshop.
When we deal with operational decision models, we are usually luckier: they are either solvable, or the conflicting business rules can be found relatively easily. We at OpenRules have always considered decision explanations a key capability of our product. Today we provide two analyzers that help business users explain the behavior of their business decision models:
- Decision Model Analyzer

The latest OpenRules Decision Model Analyzer allows a business user to analyze different decision models to better understand why certain decisions were made. A user may choose a decision model’s goal and a test case, and the analyzer will generate a table of the actually executed rules, showing the values of all involved decision variables at the moment of execution. Real-world experience confirms that such explanations are crucial for authors of complex decision models, helping them build and maintain these models. The Analyzer comes with a collection of predefined decision models and allows business users to add, execute, and analyze their own decision models. It is freely available online without any downloads or registration.
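To illustrate the idea of an execution trace (this is only a minimal sketch in Python, not the actual OpenRules API; the engine, rule names, and loan-approval variables are all hypothetical), an analyzer of this kind can record, for each rule that fires, a snapshot of the decision variables at the moment of execution:

```python
# Hypothetical sketch of an explanation trace: each executed rule is
# recorded together with a snapshot of the decision-variable values.

class ExplainableEngine:
    def __init__(self, rules):
        self.rules = rules   # list of (name, condition, action) triples
        self.trace = []      # (rule name, variable snapshot) pairs

    def run(self, variables):
        for name, condition, action in self.rules:
            if condition(variables):
                action(variables)
                # snapshot the variables right after the rule fires
                self.trace.append((name, dict(variables)))
        return variables

# Toy loan-approval decision model (invented for illustration)
rules = [
    ("HighDebtRatio", lambda v: v["debt"] / v["income"] > 0.4,
     lambda v: v.update(risk="high")),
    ("RejectHighRisk", lambda v: v.get("risk") == "high",
     lambda v: v.update(approved=False)),
]

engine = ExplainableEngine(rules)
result = engine.run({"income": 50000, "debt": 30000})
for rule_name, snapshot in engine.trace:
    print(rule_name, snapshot)
```

The trace doubles as the explanation table: it answers "why was this decision made?" by listing which rules fired and what the variables looked like when they did.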
- What-If Analyzer
The OpenRules What-If Analyzer allows business users to analyze the behavior of their decision models by converting them to CSPs and letting a user graphically activate or deactivate individual business rules (constraints). The analyzer uses constraint propagation to immediately show how these actions change the domains of the affected decision variables. Additionally, a user may find a feasible solution that satisfies all currently active rules, find and navigate through multiple solutions, and even find an optimal solution based on various business objectives. This analyzer is also freely available online.
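The propagation step can be sketched in a few lines of Python (again, a hypothetical toy, not the What-If Analyzer itself; the constraint names, the scoring rules, and the domains are invented): active constraints are applied repeatedly until the variable domains stop shrinking, and toggling a constraint off simply skips it.

```python
# Minimal constraint-propagation sketch: apply the active constraints
# to the variable domains until a fixpoint is reached.

def propagate(domains, constraints, active):
    domains = {v: set(vals) for v, vals in domains.items()}
    changed = True
    while changed:
        changed = False
        for name, prune in constraints.items():
            if not active.get(name):
                continue            # deactivated rule: no pruning
            new_domains = prune(domains)
            if new_domains != domains:
                domains = new_domains
                changed = True
    return domains

# Two toy "business rules" over an applicant's score and loan rate
def min_score(d):
    d = {v: set(vals) for v, vals in d.items()}
    d["score"] = {s for s in d["score"] if s >= 600}
    return d

def rate_follows_score(d):
    d = {v: set(vals) for v, vals in d.items()}
    # the "low" rate is only available while some score >= 700 remains
    if max(d["score"], default=0) < 700:
        d["rate"] = d["rate"] - {"low"}
    return d

constraints = {"MinScore": min_score, "RateFollowsScore": rate_follows_score}
domains = {"score": {550, 620, 680}, "rate": {"low", "standard", "high"}}

# With both rules active, the domains shrink after propagation
print(propagate(domains, constraints,
                {"MinScore": True, "RateFollowsScore": True}))
# With both rules deactivated, the full domains survive
print(propagate(domains, constraints,
                {"MinScore": False, "RateFollowsScore": False}))
```

This is exactly the effect a user sees graphically: deactivating `MinScore` restores the pruned score values, and the rate domain grows back as a consequence.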
P.S. Last year Prof. Freuder was a presenter at DecisionCAMP-2017 in London; his presentation related constraints and rules, with an emphasis on explanations. Don’t miss the next DecisionCAMP-2018 in Luxembourg – the term “explainable decisions” is among the most popular in the event’s program.
P.P.S. I had the honor of working with Gene in Cork (Ireland) for 5 years, and I am really glad to see that he continues to be a visionary.
Jan Purchase published an article, “Better AI Transparency Using Decision Modeling”, in which he describes several possible ways to provide reasonable explanations for the outcomes of machine learning algorithms. I added a comment about our experience using machine learning and business rules in “ensemble” – see http://www.luxmagi.com/2018/08/better-ai-transparency-using-decision-modelling