Put the Power of Machine Learning in the Hands of Operational Experts
Introducing TrendMiner’s MLHub
Machine learning projects have long been the realm of central data science groups, and their use cases for deep analysis of process performance are plentiful. Yet data scientists are overloaded with projects, and operational experts often lack the statistical and programming knowledge needed to put the power of machine learning in their own hands and set up machine learning models. As a result, production sites leave improvement opportunities on the table.
What if companies could put the power of machine learning in the hands of all operational experts? And what if they could improve collaboration between the central group and local sites to solve more complex use cases and exceed business goals?
Both are possible using advanced analytics software. Here is how an integrated notebook environment can democratize machine learning for everyone.
Common Data Science Challenges
Frequently, there is no smooth symbiosis between process experts and central data science groups. In a completely centralized model, one group is responsible for all operational improvements. That group often has the experience and tools needed to handle more complex projects, but its data scientists frequently do not fully understand the processes they are analyzing.
Meanwhile, in a completely decentralized model, there is no central facilitation, and the duty of improving operational performance falls to individual business units, including plant operations. Without central coordination, process experts have no overall strategy for solving more complex challenges, and they often lack analytics expertise.
The best solution is a blended model that uses a federated central group as a facilitator. Individual business units then take responsibility for making continuous improvements. In this environment, centralized groups can deploy machine learning models inside an advanced analytics solution and make them available to everyone in the organization. Simultaneously, data scientists are kept in the loop to help solve the most complex process anomalies.
Resolving Challenges Using TrendMiner
TrendMiner bridges the collaboration gap between operations and central data science groups. The Next Generation Production Client now includes MLHub: a notebook environment for deploying machine learning models that help accelerate business, performance, and sustainability objectives. It puts the power of machine learning in the hands of operational experts.
MLHub reduces the demand on central analytics teams, which drastically improves the adoption rate of data science projects. The integrated environment for deploying machine learning models also fosters efficient collaboration between operations and the central team. With data democratized across the organization, software users can share and reuse machine learning models and notebooks.
All pre-processed time-series data and its contextual operational information is available to data scientists and operational experts. They can use it to create advanced visualizations and machine learning models, which can be deployed quickly into operations for iterative improvements. Unlike most machine learning platforms or artificial intelligence solutions that keep data locked in silos, TrendMiner empowers engineers to address and solve more complex use cases that provide their company with a competitive advantage.
Key Benefits of TrendMiner’s MLHub
The solution adds benefits throughout an analysis. MLHub gives data scientists access to open-source libraries that can be used to strengthen anomaly detection. Models and notebook sections can be shared so analysts can train and deploy them quickly. MLHub also supports scalable compute power thanks to its elastic architecture.
For time-series analytics, data scientists can prepare machine learning projects through searching, filtering, and saving views as input. Calculations and digital tags can be saved in views for further processing. Machine learning model tags also can be added as new digital tags to apply all TrendHub capabilities.
MLHub includes contextual data. Contextual data can be the saved results of process monitoring or information contained in third-party business applications (such as batch run data, computerized maintenance management systems, and laboratory information management systems). These items, which help put operational performance into context, also can be used as input for machine learning models. Conversely, the output of machine learning models can be saved as contextual data for additional insights.
Data scientists and engineers can see notebook cell outputs as tiles in dashboards. They also can generate interactive graphs that can be shown directly on dashboards and display forecasted model outputs on value tiles (such as for predicted maintenance).
Power of Machine Learning in Use: Anomaly Detection
In the process manufacturing industry, operational experts can define machine learning as the application of statistical models to predict important process parameters as a function of one or more independent variables. One such use case is stronger anomaly detection.
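The definition above can be made concrete with a minimal sketch: fitting a least-squares line that predicts one process parameter from one independent variable. The variable names and values here are invented for illustration and are not actual TrendMiner tags.

```python
# Minimal sketch: predict a process parameter (e.g. product temperature)
# as a function of one independent variable (e.g. steam flow) using
# ordinary least squares. All data values are illustrative.

def fit_line(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

# Historical observations: steam flow (t/h) vs. product temperature (degrees C)
steam_flow = [10.0, 12.0, 14.0, 16.0, 18.0]
temperature = [81.0, 85.1, 88.9, 93.2, 96.8]

slope, intercept = fit_line(steam_flow, temperature)
predicted = slope * 15.0 + intercept  # estimate at an unseen operating point
```

In practice the models trained in a notebook environment use richer algorithms and many independent variables, but the core idea is the same: learn a statistical relationship from historical process data, then apply it to new data.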
Some anomalies are much more difficult to detect and predict. In certain situations, for example, an anomaly might occur even when a process appears to be functioning normally. For these more involved cases, data scientists can apply a machine learning technique known as an anomaly detection model. Such models identify rare items, events, or observations that deviate significantly from the rest of the time-series dataset and are known to be outside normal operating behavior. They can be used to set up monitors and alerts for when these deviations occur.
In fact, an anomaly detection model can help prevent a complete plant shutdown. At one chemical plant, fluid from one process periodically would leak into a compressor on another process. When this occurred, the compressor eventually would become damaged. The only way to repair the damaged compressor was to shut down the plant entirely. Engineers already had determined that vibrations were causing the leak. However, they had no way to monitor the problem. Furthermore, once the leak started, operational experts were unable to correct the anomaly before the compressor was damaged.
First, engineers installed vibration sensors near the compressor. They then collected time-series data from those sensors and analyzed it in TrendHub for periods of good behavior. Data scientists then loaded the data into MLHub to train the machine learning model with different types of vibration patterns. Once complete, engineers created a new machine learning model tag from the trained model. They then activated a monitor on the new tag, which could detect irregular vibrations. Finally, they used TrendMiner’s monitoring and alert capabilities to notify key stakeholders. This should allow enough time to intervene before the compressor is damaged. See Figure 1.
Figure 1: This flowchart shows the process of creating an anomaly detection model using MLHub. Operational experts prepare time-series or contextual data before it is loaded into MLHub. Data scientists then select an algorithm, train the model, and deploy it by creating a new machine learning model tag and attaching a monitor to the tag. When the process is running normally, engineers can visualize its progress on a dashboard. When there are deviations, process experts receive an alert in time to intervene.
The new soft sensors helped avoid a complete plant shutdown. When a leak occurred, process experts got an alert with enough time to stop it before it damaged the compressor. The machine learning model tag picked up the kind of vibrations that indicated a problem; engineers would have been unaware of it any other way, because all other process parameters appeared normal.
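The train-then-monitor loop from the compressor story can be sketched in a drastically simplified form: learn the envelope of known-good vibration readings, then alert when a new reading leaves it. A model trained in a notebook environment would be far more sophisticated; the sensor values and the three-sigma threshold here are invented for illustration.

```python
# Simplified sketch of the workflow described above: learn "good
# behavior" from historical vibration data, then flag deviations.
# All readings and the k=3 sigma band are illustrative assumptions.

def train_envelope(good_readings, k=3.0):
    """Learn a mean +/- k-sigma band from known-good vibration data."""
    n = len(good_readings)
    mean = sum(good_readings) / n
    var = sum((r - mean) ** 2 for r in good_readings) / n
    std = var ** 0.5
    return mean - k * std, mean + k * std

def monitor(reading, low, high):
    """Return True (raise an alert) when a reading leaves the envelope."""
    return not (low <= reading <= high)

# Vibration amplitudes (mm/s) from periods of good behavior
good = [2.1, 2.3, 1.9, 2.0, 2.2, 2.1, 2.0, 1.8, 2.2, 2.4]
low, high = train_envelope(good)

# The last reading falls outside the learned envelope and triggers an alert
alerts = [monitor(r, low, high) for r in [2.1, 2.2, 4.7]]
```

The key design point mirrors the article's workflow: training uses only periods of good behavior, so the monitor flags anything unlike normal operation, even failure modes that were never observed during training.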
Companies that use TrendMiner can be sure that improvement projects will not be left on the table. Through its MLHub, the software places machine learning in the hands of every process engineer, delivering:
- Better collaboration between engineers and data scientists,
- More efficient use of central groups and operational experts,
- Advanced anomaly detection and monitoring models, and
- An improved competitive edge.
Deploying machine learning models to analyze, monitor, and predict process behavior has never been easier.