Let’s say you’re on the hunt for a bank robber who just pulled off an ingenious heist. So far, no one has been able to crack the case, so you bring in two sources of outside help. The first is a police psychic—you don’t believe he really has extra-sensory perception, but his track record of identifying criminals is excellent, though he only ever explains his cases in terms of the spirit world. The other is Sherlock Holmes—an iconic master of deductive reasoning with a flair for the dramatic reveal.
The two arrive on the scene, and within a short while they’re ready to present their hypotheses. The two accounts of the crime have some overlapping ideas, but they diverge in a few key details. Holmes explains those points of divergence in rational terms with his characteristic flair, and the psychic repeats what the voices from the beyond told him. Which account are you going to act on as you try to apprehend the criminal?
For most of you, we’re guessing the answer is Holmes, and not just because of his reputation. Rather, you probably picked his explanation because he provided a train of reasoning that led to the conclusion. No doubt, that reasoned explanation inspired much more confidence than the psychic’s inscrutable pronouncements. And you’re right to choose the version that can be explained—which raises the question: why do so many supply chain technology users settle for analytics-based advice that offers no rationale for its conclusions?
Black-Box vs. User-Centered AI
With the admittedly whimsical story above, our goal was to illustrate a point about two different approaches to AI: black-box vs. user-centered. Right now, black-box AI—i.e., technology solutions that take in data and put out predictions or suggestions without giving you a glimpse into their inner workings—is the standard in most industries, including manufacturing and supply chain management. That said, explainable AI (or user-centered AI) is starting to gain some notice. What is explainable AI exactly? Well, it’s pretty much what it sounds like: instead of spitting out an answer without any reasoning and expecting planners to accept it as gospel (like our police psychic above), user-centered AI is able to “show its work” and present users with some background on the decision. This might be as simple as displaying a decision tree with the branches pruned down, or it might be something more complicated.
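To make the “show its work” idea concrete, here is a minimal sketch in Python of a tiny rule-based decision procedure—the kind of logic a pruned decision tree encodes—whose prediction function returns not just an answer but the trail of rules that produced it. The feature names and thresholds are invented for illustration, not taken from any real planning system.

```python
# Illustrative sketch: a hand-rolled two-level "decision tree" whose
# predict function returns both a recommendation and the reasoning trail.
# Feature names (demand_forecast, inventory_days) and all thresholds
# are hypothetical.

def predict_with_explanation(order):
    """Return (suggested_action, reasoning_trail) for a hypothetical order."""
    trail = []
    if order["demand_forecast"] > 1000:
        trail.append("demand_forecast > 1000: treat as high-demand item")
        if order["inventory_days"] < 7:
            trail.append("inventory_days < 7: stock is critically low")
            action = "expedite production run"
        else:
            trail.append("inventory_days >= 7: buffer stock is adequate")
            action = "schedule standard production run"
    else:
        trail.append("demand_forecast <= 1000: treat as low-demand item")
        action = "defer to next planning cycle"
    return action, trail

action, why = predict_with_explanation(
    {"demand_forecast": 1500, "inventory_days": 3}
)
print(action)  # expedite production run
for step in why:
    print(" -", step)
```

A black-box system would return only the first element of that pair; the second element—the trail—is what lets a planner sanity-check the recommendation instead of taking it on faith.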
So far, many of the use cases where people are interested in explainable AI involve matters where bias is a concern. Amazon, for instance, famously scrapped a black-box recruitment AI that turned out to be biased against women; similar issues have cropped up in limited or hypothetical use cases in criminal justice. The upshot is that in these situations people are increasingly advocating for AI algorithms that can at least partially show human decision-makers how they arrived at their estimates or suggestions. We’d contend that these considerations have the potential to be just as relevant in a production or supply chain context.
AI in the Supply Chain
Okay, but what’s the salient difference between black-box and explainable AI deployments from the perspective of, say, a production planner trying to optimize her weekly production runs? On some level, the big difference can come down to UI: With a black-box AI solution, the planner might connect the algorithm to a few data sources, plug in some numbers as needed, and sit back while the program comes up with a solution. This solution might present a complete plan—and in an Industry 4.0 context it might even be able to automatically sequence and schedule that plan. This is obviously convenient, but for a planner who might want to consider multiple options, or who might have concerns or constraints that aren’t easy to quantify, it could lead to disruptions. Or, it could lead to low engagement—i.e., the planner won’t want to use the tool and will instead keep creating plans by hand.
With explainable AI, this situation might look very different. Rather than spitting out a complete plan, the AI might show a visualization of the different planning scenarios it considered and their various pros and cons. It could show what it considers to be the optimal plan, along with information about what makes it the best option—which the planner can then evaluate or even tweak with the AI’s help. In this way, if something fishy seems to be happening with your data, it will stand out and you can take action to address it. Likewise, if the AI makes an unorthodox suggestion, you can gain some understanding of why it considers that suggestion optimal. Thus, you can come to the right conclusion faster and with more confidence.
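One simple way to surface that kind of side-by-side reasoning is to score each candidate plan on several criteria and return the per-criterion breakdown alongside the total. The sketch below does exactly that; the plans, criteria, and weights are all invented for illustration, but the pattern shows how an “unorthodox” winner can explain itself.

```python
# Hypothetical sketch: rank candidate production plans and expose the
# per-criterion breakdown behind each score. Negative weights penalize
# costs and risks; all numbers here are invented for illustration.

WEIGHTS = {"throughput": 0.5, "changeover_cost": -0.3, "lateness_risk": -0.2}

def score_plan(plan):
    """Return (total_score, per-criterion breakdown) for one plan."""
    breakdown = {k: WEIGHTS[k] * plan[k] for k in WEIGHTS}
    return sum(breakdown.values()), breakdown

plans = {
    "Plan A": {"throughput": 90, "changeover_cost": 40, "lateness_risk": 10},
    "Plan B": {"throughput": 80, "changeover_cost": 10, "lateness_risk": 5},
}

# Rank plans by total score, best first.
ranked = sorted(plans.items(), key=lambda kv: score_plan(kv[1])[0], reverse=True)
for name, plan in ranked:
    total, parts = score_plan(plan)
    print(f"{name}: total={total:.1f}", parts)
```

In this toy example, Plan B wins despite its lower throughput, and the breakdown shows why: its changeover cost and lateness risk are far lower. That is precisely the kind of counterintuitive-but-justified recommendation a black-box tool would hand over without any supporting rationale.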
Production Planning in the Industry 4.0 Era
In the scenario we sketched out above, the explainable AI was able to act more like an aid to human production planning than a replacement for it. In general, black-box AI algorithms (which might rely on neural nets or similar technology) can perform more complex calculations, but the trade-off is that they only offer planning assistance on their own terms. By contrast, explainable AI gets us much closer to the Industry 4.0 dream of cyber-physical systems. In other words, AI that can act as a kind of trusted advisor for narrowing down the right options has the potential to bring about true “smart factories” that use data to power a combination of automated and manual workflows in planning and deployment.
Ultimately, we’re likely to see more and more scenarios in which systems can make and carry out autonomous decisions within the production chain. At the same time, those decisions will always be in support of human efforts in terms of larger tactics and strategy. At the end of the day, there’s real value to be gained by considering AI technology that takes this fact into account. Industry 4.0 can’t exist without things like artificial intelligence, machine learning, and advanced analytics—but it can’t exist without production planners either.