How do artificial intelligence (AI) algorithms work? According to Philipp Wallner at MathWorks, much as humans do: they observe what’s happening and find patterns. Once engineers have heard a bearing fail in a pump a couple of times, they can recognise the telltale noise and vibration just by resting a hand on the casing. Wallner says: “This is what AI would do as well. In the pump example, AI gets continuous vibration data. Then what needs to be done when designing the algorithm is to tell it which is a healthy pump, and which is a pump with a broken bearing or seal leakage. Then, if you have a couple of these datasets for healthy and different unhealthy states identified up front, algorithms can tell in advance when the machine is developing into a state that looks like a bearing is breaking.”
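The workflow Wallner describes — label recordings as healthy or faulty, then learn a decision rule from them — can be sketched in a few lines. This is a minimal pure-Python illustration, not MathWorks code; the synthetic signal generator, the RMS feature, and the single-threshold rule are all assumptions made for the sake of the example:

```python
import math
import random

def rms(signal):
    """Root-mean-square amplitude: a basic vibration health feature."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def make_signal(faulty, n=1000, seed=0):
    """Hypothetical vibration trace: a broken bearing adds periodic impact spikes."""
    rng = random.Random(seed)
    sig = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if faulty:
        for i in range(0, n, 50):  # one impact spike every 50 samples
            sig[i] += 8.0
    return sig

# "Training": learn a decision threshold from labelled healthy/faulty recordings.
healthy = [rms(make_signal(False, seed=s)) for s in range(10)]
faulty = [rms(make_signal(True, seed=s)) for s in range(10)]
threshold = (max(healthy) + min(faulty)) / 2

def diagnose(signal):
    """Classify a new recording against the learned threshold."""
    return "bearing fault suspected" if rms(signal) > threshold else "healthy"
```

A real system would use richer features (spectra, envelope analysis) and a proper classifier, but the principle — labelled examples of each state, then a learned boundary — is the one Wallner outlines.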
As this is a new area for engineering companies, the natural corporate approach is to hire some data scientists to carry out the work. But that’s not enough, he points out. “There are pitfalls, particularly when these companies keep the data scientists in their own world, in their own silo, and keep them away from the engineers who understand what is happening in the machine. These engineers are the domain experts on machinery and processes. If companies are serious about introducing predictive maintenance, they need to bring these people together.”
Once the AI project kicks off, the system will need to be fed with data to grow and develop. Wallner explains how it works: “Predictive maintenance algorithms that contain some kind of statistics or AI, like machine learning, typically require a lot of pre-recorded data to train the algorithm and verify behaviour. Machine operators should have plenty of data on existing machines, which may have been running for years, but actually what is typically missing is failure data. To train an algorithm for detecting failure scenarios, you need some recorded data of these, especially for the more severe events, which don’t happen often.” The lack of such information can ruin the project.
Short of intentionally breaking a system to provide failure data – which is not unheard of, points out the industry manager – an alternative is to fake it. He says: “If you have engineers with domain expertise, they can model the equipment in Simulink or another simulation tool, and then run a failure scenario, and generate synthetic failure data, and use that to train the algorithm. That’s an application that we have seen gaining more traction.”
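The synthetic-data route Wallner mentions can be illustrated with a toy stand-in for a Simulink plant model. Everything here is assumed for illustration — the nominal pressure, the leak model, and the severity grid are invented, not taken from any real equipment:

```python
import random

def simulate_pump(leak_fraction, minutes=60, seed=0):
    """Toy stand-in for a simulation model: outlet pressure drops in
    proportion to an injected seal-leak fraction, plus sensor noise."""
    rng = random.Random(seed)
    nominal = 5.0  # bar; assumed nominal outlet pressure
    return [nominal * (1 - leak_fraction) + rng.gauss(0, 0.05)
            for _ in range(minutes)]

# Generate a labelled synthetic dataset covering failure severities that
# would be too costly or dangerous to produce on the real machine.
dataset = []
for leak in (0.0, 0.1, 0.3, 0.5):
    trace = simulate_pump(leak, seed=int(leak * 10))
    label = "healthy" if leak == 0.0 else f"seal leak {leak:.0%}"
    dataset.append((label, trace))
```

The resulting labelled traces play the role of the missing failure recordings when training the detection algorithm.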
Companies should also consider how they want to deploy the algorithm, once it is up and running. Wallner adds: “It’s nice if someone knows how to pre-process the data and run machine learning on that to identify patterns, but at the end of the day, what you really need is an algorithm embedded into the entire system, a predictive maintenance app that runs 24/7. For that, it’s important to have deployment options. How do you generate C-code or structured text on a PLC or industrial PC, or how do you generate functions that run on the cloud?” he asks.
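The always-on scoring component that would eventually be translated to C code or PLC structured text can be sketched as follows. This is a hypothetical Python sketch, not generated code; the window size and threshold are assumed values that would come from offline training:

```python
import math
from collections import deque

class EdgeMonitor:
    """Minimal sketch of a 24/7 scoring component of the kind a code
    generator would translate to C or structured text: fixed-size state,
    no dynamic allocation in the per-sample path."""

    def __init__(self, window=256, threshold=1.3):
        self.buf = deque(maxlen=window)  # rolling window of recent samples
        self.threshold = threshold       # assumed; learned offline

    def push(self, sample):
        """Feed one vibration sample; return True when an alert fires."""
        self.buf.append(sample)
        if len(self.buf) < self.buf.maxlen:
            return False  # window not yet full
        rms = math.sqrt(sum(x * x for x in self.buf) / len(self.buf))
        return rms > self.threshold
```

Keeping the runtime logic this self-contained is what makes it practical to regenerate the same behaviour for a PLC, an industrial PC, or a cloud function.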
Wallner points out that, compared to a couple of years ago, many more predictive maintenance projects are being implemented now, because of a better understanding of the business case – the way that the company can recoup the expense involved in setup. He offers two use cases. One is plant operators that run equipment 24/7. For them, the business case is to avoid unplanned outages. Preventing even a single event can pay off the entire investment, such is the value of their production (see also box).
Wallner continues: “For companies that build machinery, the business case is in service revenue. An equipment operator might not even notice when the machine is going to break; the machine calls home to the equipment builder, which orders the spare parts and arranges the engineer visit to replace them. I’ve also heard of a next step, moving toward only selling services. Instead of selling a compressor, selling cubic metres of compressed air; not a lift, but lifting hours. Over the next few years, more and more business models will move that way.”
BOX: BUILDING TRUST
Wallner at MathWorks also points out that no matter how sophisticated an AI system is, it is worthless if no-one heeds its advice. AI scepticism is all too common among industrial customers, he says. Explainable AI not only provides an output – such as, ‘this will fail in the next four hours’ – but also extra context: ‘it will fail due to a broken bearing, because I detect spikes in the vibration profile’. “Being more explicit about the reason behind those decisions is a big topic of development now.”
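Pairing a prediction with its evidence can be as simple as reporting which monitored feature crossed its limit. This sketch is illustrative only — the feature names and limits are hypothetical, and real explainable-AI tooling goes much further:

```python
def explain_prediction(features, limits):
    """Return a failure prediction together with the evidence behind it.
    `features` maps hypothetical indicator names (e.g. vibration spikes
    per minute) to current values; `limits` maps the same names to
    alert thresholds."""
    violations = [f"{name} = {value} exceeds limit {limits[name]}"
                  for name, value in features.items()
                  if value > limits.get(name, float("inf"))]
    if not violations:
        return "no failure expected"
    return "failure predicted; evidence: " + "; ".join(violations)
```

Even this crude form of explanation — naming the indicator behind the alarm — addresses the trust gap Wallner describes.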
BOX: CASE STUDY
Packaging and paper manufacturer Mondi Gronau developed a health monitoring and predictive maintenance application using MATLAB. The extrusion and other machines at Mondi’s plant are large and complex, measuring up to 50m long and 15m high. Each machine is controlled by up to five programmable logic controllers (PLCs), which log temperature, pressure, velocity, and other performance parameters from the machine’s sensors. Each machine records 300–400 parameter values every minute, generating 7GB of data daily.
Mondi faced several challenges in using this data for predictive maintenance. First, plant personnel had limited experience with statistical analysis and machine learning. They needed to evaluate a variety of machine learning approaches to identify which produced the most accurate results. They also needed software that presented the results clearly and immediately to machine operators. Lastly, they needed to package this for continuous use in a production environment.
Mondi worked with MathWorks Consulting and Andreas König of the Technical University of Kaiserslautern, Germany, to develop and deploy health monitoring and predictive maintenance software in MATLAB. The Mondi team had previously set up a database to collect data from all the machines in the plant via an ethernet network. They used Database Toolbox to access this database from within MATLAB. Next, the team developed MATLAB scripts to clean the data by removing outliers and invalid values. They developed an application in MATLAB to query the database and present the results graphically. For example, an operator can use the application interface to plot the pressure measured by a particular sensor over a period of minutes, hours, or weeks. To enhance the application, they added statistical process control (SPC) to alert operators to sensor values that are outside normal operating ranges.
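The cleaning and statistical process control steps in that pipeline can be sketched in a few lines. This is a Python stand-in for the MATLAB scripts described, with hypothetical function names and example values; the SPC rule shown is the classic Shewhart mean ± 3 sigma limit:

```python
import statistics

def clean(values, lo=-1e6, hi=1e6):
    """Drop invalid readings (missing values, physically impossible
    magnitudes), as the data-cleaning scripts described above do."""
    return [v for v in values if v is not None and lo <= v <= hi]

def spc_limits(history):
    """Shewhart control limits: mean +/- 3 standard deviations."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(history, reading):
    """Flag a sensor reading outside the normal operating range."""
    lo, hi = spc_limits(history)
    return not (lo <= reading <= hi)
```

An out-of-control flag is what drives the operator alert in the plotting application described above.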
Using Statistics and Machine Learning Toolbox and Deep Learning Toolbox, Mondi and MathWorks consultants evaluated several machine learning techniques. The tests showed that an ensemble of bagged decision trees was the most accurate model for their data. Once deployed, the system is saving the company an estimated €50,000 a year across eight machines.
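To show the idea behind bagged decision trees — train each tree on a bootstrap resample, then predict by majority vote — here is a deliberately tiny pure-Python sketch using one-split trees (stumps). It is not Mondi’s model; the data, stump learner, and ensemble size are all illustrative, and a real project would use a library implementation:

```python
import random
from collections import Counter

def fit_stump(X, y):
    """Fit a one-split decision tree on rows of features with 0/1 labels."""
    best = None
    for j in range(len(X[0])):                    # candidate feature
        for t in sorted({row[j] for row in X}):   # candidate threshold
            for left in (0, 1):                   # class predicted when <= t
                err = sum(1 for row, label in zip(X, y)
                          if (left if row[j] <= t else 1 - left) != label)
                if best is None or err < best[0]:
                    best = (err, j, t, left)
    _, j, t, left = best
    return lambda row: left if row[j] <= t else 1 - left

def fit_bagged_stumps(X, y, n_trees=15, seed=0):
    """Bagging: each stump sees a bootstrap resample; predict by vote."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        trees.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: Counter(tr(row) for tr in trees).most_common(1)[0][0]
```

Averaging many resampled trees reduces the variance of any single tree, which is why this family of models often performs well on noisy sensor data.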
“As a manufacturing company we don’t have data scientists with machine learning expertise, but MathWorks provided the tools and technical knowhow,” says Mondi’s Michael Kohlert.