OpenRules + Apache Spark: 6M Decisions per Second!

Our customers frequently select OpenRules for their decision management needs for two important reasons: 1) ease of use, and 2) performance and scalability. We have large customers who use OpenRules to build very complex decision models capable of handling large payloads; see an example with 17M records.

Recently we received a request to create a decision service capable of handling 1 billion records. Luckily, this large corporation already uses Apache Spark for scalable computing, as do thousands of other companies, including 80% of the Fortune 500.

Within a few days, our team built a POC that put an OpenRules-based decision service inside an Apache Spark application. The performance results were really impressive: the total execution time for 1 billion records was under 7 minutes, averaging 6 million decisions per second! Read more in the new manual “OpenRules-Spark Integration“.
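To give a feel for the integration pattern, here is a minimal sketch of a Spark job that invokes a decision service for every record. It is not the actual POC code: the `DecisionService` class, model name, and input/output paths are hypothetical stand-ins, and the real OpenRules API is described in the “OpenRules-Spark Integration” manual.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.function.MapPartitionsFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkDecisionJob {

    // Stub standing in for an OpenRules decision service; the real API
    // differs -- this only illustrates where the rules plug into Spark.
    static class DecisionService implements Serializable {
        DecisionService(String modelName) { /* load the decision model */ }
        String execute(Row record) {
            // A real service would evaluate the decision model against the record.
            return record.mkString(",");
        }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("OpenRulesDecisions")
                .getOrCreate();

        // Hypothetical input location holding the records to decide on.
        Dataset<Row> records = spark.read().parquet("s3://my-bucket/records/");

        // mapPartitions creates one decision-service instance per partition
        // (not per record), so the per-record cost is just rule execution.
        Dataset<String> decisions = records.mapPartitions(
                (MapPartitionsFunction<Row, String>) partition -> {
                    DecisionService service = new DecisionService("MyDecisionModel");
                    List<String> results = new ArrayList<>();
                    while (partition.hasNext()) {
                        results.add(service.execute(partition.next()));
                    }
                    return results.iterator();
                },
                Encoders.STRING());

        decisions.write().parquet("s3://my-bucket/decisions/");
        spark.stop();
    }
}
```

Because Spark distributes the partitions across executors, throughput scales with the size of the cluster, which is what makes processing 1 billion records in minutes possible.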

When we converted the POC into a real decision service that handles more than 30,000 complex rules, the execution results remained equally impressive.

Big Decision Tables

For years, OpenRules has been among the fastest rule engines. When we moved last year from run-time interpretation to design-time code generation, we, like our colleagues at Red Hat Drools, further improved overall performance and added support for practical decision microservices. As a result, we dramatically reduced start-up time, cut per-transaction latency from 50-100 milliseconds to 5-10 milliseconds, and shrank the memory footprint. These are exactly the results that modern enterprise decision-making systems need.
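To illustrate why code generation pays off (a hedged sketch, not OpenRules’ actual generated code): with run-time interpretation, the engine walks the decision table’s cells on every transaction; with design-time generation, the table is compiled ahead of time into plain Java, so at run time there is nothing left to parse or interpret. The table name and rules below are hypothetical.

```java
// Hypothetical output of design-time code generation for a simple
// decision table "DetermineDiscount". Each rule row becomes an
// ordinary compiled conditional: no table parsing, no reflection,
// which is why start-up is fast and per-transaction time is small.
public final class DetermineDiscount {
    public static double execute(String customerType, double orderTotal) {
        if ("GOLD".equals(customerType) && orderTotal >= 1000)   return 0.15;
        if ("GOLD".equals(customerType))                         return 0.10;
        if ("SILVER".equals(customerType) && orderTotal >= 1000) return 0.05;
        return 0.0; // default rule
    }
}
```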

However, I knew that we have long-time customers who use really big (!) decision tables with 10 and even 30 thousand rules. How could we improve their performance? Continue reading