Because you’ve been doing this so long and you’ve experienced a lot of different orders, your guesses are not far off, and most of your pre-made omelets get eaten—but not all. As you gain more experience, you might get better at predicting the right ratios in advance, because you’d have more information to draw on. But what if you had a database of billions of different omelet orders at your disposal? How much more accurate would your omelet predictions be then? How much more agile would the entire omelet station be?
Nearly two-thirds of supply chain executives think of big data analytics as an important, disruptive technology. And the above example, though a little bit whimsical, should give you some indication why. At every touch point on the value chain, decision-makers are constantly making choices based on limited or anecdotal data; as that changes, supply chain management as a whole is poised to change along with it.
By their very nature, supply chains tend to produce massive amounts of data. Information from vendors about prices and quantities, data collected along production lines, customer data, and traffic and weather information for transit routing all leave a trail of information that, until recently, could only be put to limited use. As big data analytics become more prominent, this is beginning to change, and one of the most significant impacts on supply chain management is in risk management and response. By using machine data collected on your factory floor, for instance, you can feed predictive algorithms that estimate likely machine breakdowns in advance, giving you the chance to schedule maintenance downtime proactively in a way that retains the maximum possible value.
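To make the maintenance example concrete, here is a minimal sketch of the idea in Python. The sensor values, window size, and alert threshold are all invented for illustration; a real system would learn these from historical failure data. The rule is simple: flag a machine for proactive maintenance once a short rolling average of its (normalized) vibration readings drifts above a threshold.

```python
from statistics import mean

def maintenance_alert(readings, window=3, threshold=0.8):
    """Return the index of the first reading at which the rolling
    average over `window` samples exceeds `threshold`, else None."""
    for i in range(window, len(readings) + 1):
        if mean(readings[i - window:i]) > threshold:
            return i - 1  # index of the reading that triggered the alert
    return None

# Hypothetical normalized vibration trace drifting upward before a failure.
trace = [0.2, 0.3, 0.25, 0.5, 0.7, 0.9, 0.95]
print(maintenance_alert(trace))  # alert fires at index 6
```

A production system would replace this threshold rule with a model trained on past breakdowns, but the workflow is the same: sensor stream in, scheduled maintenance window out.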
By the same token, prescriptive analytics processes could hypothetically be trained on all of your production data to uncover areas of waste and inefficiency that would otherwise remain invisible to human planners, both reducing the likelihood of disruptions and speeding up the response when disruptions do occur. In both of these examples, the large amount of data that’s typically collected in an Industry 4.0 environment can be leveraged into operational improvements that lead to more resilient, cost-effective supply chains.
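In its simplest form, surfacing waste from production data can be sketched as ranking production stages by scrap rate. The stage names and unit counts below are invented for illustration; real prescriptive analytics would go further and recommend corrective actions, but the starting point is making the waste visible at all.

```python
def rank_waste(stage_counts):
    """Rank production stages from highest to lowest scrap rate.

    `stage_counts` maps stage name -> (units processed, units scrapped).
    """
    rates = {stage: scrapped / processed
             for stage, (processed, scrapped) in stage_counts.items()}
    return sorted(rates, key=rates.get, reverse=True)

# Hypothetical per-stage counts from a day of production.
stages = {"stamping": (1000, 12), "welding": (990, 31), "paint": (960, 8)}
print(rank_waste(stages))  # welding has the worst scrap rate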
Let’s think back to our omelet stand for a second. How would you have gotten the most value out of a gigantic cache of data if you’d had access to one? Simple: predictive analytics. In order to move the line as quickly as possible, you’d want to forecast customer demand far in advance, so that you could pre-cook a large number of omelets (an accurate prediction of demand would also help you source your ingredients more effectively, bringing you that much closer to a lean culinary supply chain). In an industrial context, the same reasoning applies. The better your forecasts, the leaner and more agile you can be, meaning reduced costs and improved profit margins. Thus, as you accumulate and manage more and more demand data and your big data analytics processes create smarter and more accurate forecasts, you can tailor your production and transport workflows to your data-driven expectations. By taking the guesswork out of your production ratios, you give yourself a chance to reduce the need for buffer stock (both for sourced materials and finished products), potentially saving considerably on storage space and inventory management.
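A toy version of such a demand forecast can be written in a few lines using simple exponential smoothing, one of the most basic forecasting techniques: each new observation nudges the running forecast toward itself by a factor alpha. The daily order counts and the alpha value below are invented for illustration.

```python
def exp_smooth_forecast(history, alpha=0.4):
    """One-step-ahead forecast via simple exponential smoothing."""
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

# Hypothetical daily omelet orders over a week.
orders = [120, 135, 128, 140, 150, 145, 155]
print(round(exp_smooth_forecast(orders)))  # tomorrow's expected demand: 147
```

Real demand planning would use richer models with seasonality and external signals, but the payoff described above is the same: the closer the forecast tracks actual demand, the less buffer stock you need to hold.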
So far, we’ve looked at some specific instances where big data can have a big impact on supply chain management and production efficiency. This may have given you the impression that big data and its associated analytics processes are the kinds of things you use on an ad hoc basis, implementing them when a particular problem or challenge comes around. This section is meant to disabuse you of that notion. For big data to reach its full potential in an Industry 4.0 context, you need to embed it within your existing supply chain operations. This means that collecting, storing, and analyzing data should be an integral function of every piece of IT or OT in use across your entire value stream. Studies have shown that doing so can accelerate order cycles and delivery times, while broadly bolstering supply chain efficiency.
Of course, this is easier said than done. But by creating a digital infrastructure that prioritizes visibility and connectivity, you can build a critical mass of usable data, such that every digital process is able to access mission-critical information in large quantities. In this way, you can work toward a planning environment in which every decision is fundamentally data-driven.