As models get increasingly powerful, they also get harder to understand, creating potential compliance issues within financial services. Explainable machine learning aims to let you have your cake and eat it too. 🎂

The need to “show your work” is taught in grade school math classes across the country. The ultimate answer is important, but the process of determining the answer is equally critical. The same concept applies in the business world, where we often need to explain why we made a decision. In regulated industries, these decisions and explanations are often scrutinized by regulators to prevent bias and financial risks, so precise explanations are not just important – they’re a requirement.

A good example of this is consumer lending in the US, where laws such as the Equal Credit Opportunity Act (“ECOA”) and its implementing regulation, Regulation B, place specific requirements on lenders, including producing an “Adverse Action Notice” for every rejected application. These notices must include the specific reasons the application was rejected, and it’s not enough to say “Too high risk” or “Did not pass the requirements” – the reasons need to be precise, such as “Excessive obligations in relation to income” or “Delinquent past or present credit obligations with others”.

For these reasons, when financial services companies look to adopt machine learning, “explainability” is often the top impediment. Machine learning techniques produce extremely accurate predictive models, but the resulting algorithms are too complex to visualize or easily explain in plain English. There’s no magic – the trained algorithms are simply “code” – but they contain thousands or millions of individual factors, and the multivariate nature of these models makes them difficult to comprehend. So while these models provide more accurate decisions, they lack the explanation or “show your work” component that’s critical.

Explainable Machine Learning Decisions

Explainable machine learning is a new concept that’s being pioneered by companies like DigiFi. The idea is simple – harness the power of next-generation machine learning while clearly explaining every decision – but the implementation is complicated for the reasons outlined above.

The majority of DigiFi’s customers are regulated financial institutions and, over the past year, we’ve been working hard to make fully explainable machine learning a reality. We’ve approached the problem in a unique way that solves many of the typical challenges by using sophisticated inference analysis that clearly identifies the factors that impacted each decision.

At a high level, our inference analysis works as follows:

  • Step 1: We run the decision data through the machine learning model, producing the initial decision
  • Step 2: We run up to thousands of permutations of the decision data through the model – each time with minor adjustments – to simulate how changes to each input affect the result
  • Step 3: We compare the initial decision with the adjusted decisions, dynamically identifying which variables have the largest positive and negative effects on the decision (a simplified sketch of this process follows)
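
The exact mechanics of our inference analysis are proprietary, but a minimal sketch of this style of perturbation analysis might look like the following. The scikit-learn-style `predict_proba` interface, the per-feature step size, and purely numeric inputs are all simplifying assumptions for illustration, not published DigiFi implementation details:

```python
# Minimal sketch of perturbation-based decision explanation.
# Assumes a scikit-learn-style binary classifier and numeric inputs;
# a production system would run far more permutations than this.
import numpy as np

def explain_decision(model, x, feature_names, step=0.05):
    """Score one applicant, then re-score perturbed copies of the
    input to estimate each feature's impact on the decision."""
    x = np.asarray(x, dtype=float)
    base_score = model.predict_proba(x.reshape(1, -1))[0, 1]

    impacts = {}
    for i, name in enumerate(feature_names):
        for direction in (+1, -1):
            perturbed = x.copy()
            # Nudge the feature up or down by a small relative step
            # (or an absolute step when the value is zero).
            perturbed[i] += direction * step * (abs(x[i]) or 1.0)
            new_score = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
            swing = new_score - base_score
            # Keep the largest swing observed for this feature.
            if abs(swing) > abs(impacts.get(name, 0.0)):
                impacts[name] = swing

    # Rank features by how strongly they move the score.
    ranked = sorted(impacts.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return base_score, ranked
```

Ranking the score swings this way surfaces the variables with the largest positive and negative effects, which can then be mapped to the plain-English reason codes that regulations like ECOA require.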

The output of this process, illustrated in the next section, is a clear set of information that precisely explains the decision and the factors that determined it.

The Results

Here’s the output from a typical, non-explainable machine learning decisioning process: it contains the answer, but it doesn’t attempt to explain it. For some types of decisions this is fine, but regulated businesses will struggle to use this type of decision in a production environment.
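
As an illustration (the field names and values below are hypothetical, not actual DigiFi output), a bare result of this kind might look like:

```python
# Hypothetical example of an opaque decisioning result.
{"application_id": "A-10482", "decision": "Declined", "score": 0.31}
```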

Here’s the output that DigiFi delivers. The decision itself is the same, but we also detail the variables that had the largest positive and negative impact on the result, similar to the reason codes that accompany credit scores such as FICO, which lenders are already familiar with.
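
Again as a hypothetical illustration (DigiFi’s actual output format may differ), an explainable result of this kind might look like:

```python
# Hypothetical example of an explainable decisioning result.
{
    "application_id": "A-10482",
    "decision": "Declined",
    "score": 0.31,
    "negative_factors": [  # largest negative impacts on the result
        "Excessive obligations in relation to income",
        "Delinquent past or present credit obligations with others",
    ],
    "positive_factors": [  # largest positive impacts on the result
        "Length of employment",
    ],
}
```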

Underlying these factors are the detailed results from the simulations we performed and the numeric impact values, which together provide a comprehensive decision audit trail. They can be used to explain each decision and meet compliance and regulatory requirements.

Bottom Line

Predictive machine learning models can drive significant benefits for lenders, including higher approval rates, more accurate decisions and increased automation of previously manual tasks. Financial services companies have faced challenges in adopting machine learning, largely due to compliance concerns, and businesses like DigiFi are working to directly address this challenge with explainable machine learning models. If you’re interested in discussing how our approach to explainability can accelerate your company’s adoption of machine learning, we’d love to chat.


About DigiFi

DigiFi is a technology company that helps businesses make better automated decisions.

Our platform lets businesses easily use automated machine learning and rules management to optimize critical decisions with no coding or technical expertise required. Repetitive work that used to take hours can now be completed in minutes, letting your team focus on what matters most.