This relatively new concept involves a virtual representation of a physical product. It builds on older types of simulation such as digital manufacturing and computer-aided engineering (CAE) but goes further in two important ways. First, a digital twin models multiple elements and dynamics of the physical product. This is in contrast to previous methods that used a range of simplified models to simulate individual aspects of the product, such as toolpaths with collision avoidance or structural stress. Second, a digital twin is updated with sensor data so that it represents the best current understanding of the physical product. Digital twins may do this using Internet of Things (IoT) sensors within an Industry 4.0 architecture that uses analytics and artificial intelligence to condition data before the digital twin is updated.
So that’s what a digital twin is, but what can it do for you? How do you go about creating one and what can you use it for? This article will reveal all, taking a nuts-and-bolts approach.
First of all, a reality check. Digital twins might not be able to simulate every aspect of a product’s dynamics. For example, Chris O’Connor, head of sales for IBM’s Watson Internet of Things, said in an often-referenced presentation: “It’s an understanding of all of its dynamics. Whether those are electrons that move, or whether it’s the device that’s moving itself.” In reality, however, we do not have computers capable of simulating every electron in a device. There are sophisticated multiphysics models that might, for example, be used to simulate the electromagnetic forces in a motor together with the resulting mechanical stress and thermal loads. However, even these models make simplifications, and they typically run much more slowly than real time. If we were to start trying to simulate every aspect of our products, down to the quantum level, we would need many times the world’s total computing power, as well as vast quantities of energy to power it.
Real digital twins are actually much more like simulations we’re already familiar with. They often model components and sub-systems as black boxes, only considering the inputs and outputs which are significant. A conventional block representation of a system might use analytical equations, derived from first principles, to model the relationship between the inputs and outputs. In a digital twin, this model would serve as a starting point, but the parameters in the equations could be optimised to fit the actual observed outputs, allowing the digital twin to learn from experience operating the real device. Going a step further, machine learning might be used to create a mapping from inputs to outputs which doesn’t use any predefined equation. This approach has been explained clearly using the example of a MATLAB model of a pump (www.is.gd/relahi).
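As a minimal sketch of this calibration idea, the snippet below fits the parameters of a simple black-box pump model to observed sensor data. The linear form (flow proportional to speed) and all of the sample figures are illustrative assumptions, not taken from the MATLAB example cited above:

```python
# Hypothetical example: calibrating a black-box pump model against
# observed data. The linear model flow = a*speed + b and the sample
# readings below are assumptions for illustration only.

def fit_linear(speeds, flows):
    """Ordinary least-squares fit of flow = a*speed + b."""
    n = len(speeds)
    mean_s = sum(speeds) / n
    mean_q = sum(flows) / n
    cov = sum((s - mean_s) * (q - mean_q) for s, q in zip(speeds, flows))
    var = sum((s - mean_s) ** 2 for s in speeds)
    a = cov / var
    b = mean_q - a * mean_s
    return a, b

# A first-principles model might predict a = 0.5; logged sensor data
# suggests wear has reduced it, so the twin re-fits from observations.
speeds = [1000, 1500, 2000, 2500]   # pump speed, rpm
flows = [480, 710, 935, 1160]       # measured flow, l/min
a, b = fit_linear(speeds, flows)
predicted = a * 1800 + b            # expected flow at 1800 rpm
```

The same structure extends naturally to the machine-learning case: the analytical form is dropped entirely and a generic regression model is trained on the logged input/output pairs instead.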
The road from conventional simulation and modelling to digital twins can be thought of as taking place along two axes. One is greater integration of multiple models; the other is increasing degrees of communication between the real and the digital worlds. Traditional multiphysics models already integrate multiple models, while predictive maintenance already updates models of component failure using condition-monitoring data from the real world. Digital twins combine both of these aspects.
Often only limited real-world data is used to update the digital twin models, such as triggers that occur when measurements exceed a tolerance value. This greatly reduces the amount of data transfer and computing time required. The approach can be implemented in two ways. One is to run a simulation multiple times in advance, covering the expected variation in important parameters. This effectively builds a lookup table which can be used to trigger actions when measured parameters reach predefined values. The second, more involved, approach is to actually rerun the simulation if there is a significant divergence from what was expected.
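The first, lookup-table approach can be sketched as follows. Everything here is an assumption for illustration: `limit_model` stands in for an expensive offline simulation, and the temperatures and vibration limits are invented numbers:

```python
# Hypothetical sketch of the lookup-table approach: the simulation is
# run offline over a grid of operating temperatures, and the resulting
# allowable vibration limits are stored. At runtime only a cheap table
# lookup is needed; the full simulation would be rerun only when the
# divergence is significant. All figures are illustrative assumptions.

def limit_model(temp_c):
    """Stand-in for an expensive offline simulation run."""
    return 5.0 - 0.02 * temp_c          # allowable vibration, mm/s

# Offline: precompute the table over the expected operating range.
table = {t: limit_model(t) for t in range(20, 101, 10)}

def check(temp_c, vibration):
    """Runtime check of a measurement against the precomputed table."""
    nearest = min(table, key=lambda t: abs(t - temp_c))
    limit = table[nearest]
    if vibration > limit:
        return "trigger: rerun simulation / raise alert"
    return "ok"
```

For example, `check(52, 4.5)` exceeds the limit stored for the nearest grid point and triggers an action, while `check(25, 3.0)` passes silently; only the trigger events generate data transfer or further computation.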
Digital twins can greatly improve predictive maintenance. Ali Nicholl, an expert in digital twins at data science business Iotics, gave an example from his experience in the rail industry, which he describes as typical of high-value complex products in service. He says: “When we started to look at predictive maintenance in train engines, we knew that there were already carefully simulated models of how these components should perform in different operating environments. But there remained significant variance, in deployment, of performance across seemingly identical products.
“For example, pollen clogs the air filters of an operating train engine, which wreaks havoc with train timetables if trains cannot be scheduled to reach maintenance depots in time to have air filters cleaned or replaced ahead of the next day’s service. It was a logistics nightmare trying to get the trains and people to the right service locations. The solution was to take the existing air filter service life models and create a digital twin that also included a model of the train’s position along the line. This uses dynamic information on which engines are operating, and a highly simplified weather model with just enough information to predict the pollen reaching the filter. We took three key parameters from the weather stations closest to the train lines – the temperature, wind direction and the pollen count.”
The train air filter model requested by Rolls-Royce was accurate enough to optimise scheduled maintenance and to track where trains were predicted to be at the end of each day, avoiding disruption to services. Digital twins are iterative, allowing models to be extended and refined according to operational pressures or requirements. Weather predictions can be added, and satellite data on trackside vegetation species can enhance pollen modelling, which enables future supply chain pressures for replacement filters to be identified further in advance. This would reduce inventory, increase supply chain resilience (timely in a post-COVID world) and improve staff utilisation, according to Nicholl.
Another similar example is predictive maintenance and fault detection models for pumps that are used globally in many different environments and with different fluids. Using a digital twin approach, the model can automatically adjust parameters, learning from its own sensor data. For example, the model might predict a particular flow rate for a given set of input parameters covering temperature and input current. The model can adjust control parameters until the actual flow rate consistently matches the predicted flow rate across a range of operating conditions.
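A toy sketch of that self-adjusting behaviour is shown below. The model form, the efficiency parameter, the gain and all of the numbers are assumptions for illustration; a real twin would use the pump’s actual characteristic curves:

```python
# Illustrative sketch of a self-adjusting model parameter: the twin
# predicts flow from input current, compares the prediction with the
# measured flow, and nudges its efficiency parameter until the two
# consistently agree. All names and figures are assumptions.

def predict_flow(current_a, efficiency):
    """Simplified pump model: flow (l/min) from input current (A)."""
    return 12.0 * current_a * efficiency

def update(efficiency, current_a, measured_flow, gain=0.05):
    """One learning step: move efficiency toward the observed value."""
    error = measured_flow - predict_flow(current_a, efficiency)
    return efficiency + gain * error / (12.0 * current_a)

efficiency = 0.90                     # factory-calibrated starting value
# Repeated field observations: 10 A consistently yields 96 l/min,
# not the 108 l/min the factory calibration would predict.
for current_a, measured in [(10, 96.0)] * 50:
    efficiency = update(efficiency, current_a, measured)
# efficiency converges toward the value implied by the data (0.80)
```

The small gain means a single noisy reading barely moves the parameter, but a consistent discrepancy across many operating points steadily pulls the model toward reality.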
HOW TO MAKE ONE
The key to creating a digital twin is to start simple, with what you already know, and aim to immediately add value to the process. For an established process, you probably already know what the important parameters are. These are the things you’d look at on a dashboard to tell you the health of the operation.
Giving the digital twin algorithms visibility of these parameters is the right place to start. Once the algorithm starts to learn how these can predict the outputs of interest, this becomes an asset that can be easily shared and duplicated across sites in a way that human experience cannot. It is then possible to start thinking about what other parameters might have an influence and adding them in, increasing the complexity of the digital twin in a step-by-step way.