The Hidden Power in Your Process Data
Imagine cutting low-weight customer complaints by 37% overnight, or predicting critical lab values in real time, eliminating hours of costly waiting. These are real results achieved by process engineers wielding the power of advanced data analytics.
In today’s complex manufacturing landscape, the difference between thriving and merely surviving often lies in how effectively you can harness your data. But with terabytes of information flowing through your systems daily, where do you even begin?
This article cuts through the noise, presenting six real-world customer success cases where process engineers tackled critical challenges head-on using innovative data analytics approaches. We’ll explore their journeys – including the critical bottlenecks they faced – and uncover practical learnings you can apply in your own operations.
Unlocking the Potential in Your Data
These case studies demonstrate the transformative power of turning raw process data into actionable insights. Here’s a quick reference guide to the problems solved:
| Solution | Key Result |
| --- | --- |
| Golden batch monitoring | Waste reduction, quality improvement |
| Custom hourly averages | 12% energy cost reduction |
| Automated dosing monitoring | 37% fewer complaints |
| ML-based soft sensor | Real-time quality predictions |
| Visual consumption dashboard | Autonomous optimization |
| Root cause analysis | Preventative action capability |
Case Study 1
A 7-Recipe Success Story from Your Own Data
Background
In our batch process, ensuring optimal operating conditions is critical for maintaining production, product quality, and worker safety. Monitoring process parameters such as a low average temperature, constant pressure, a high end-concentration, and low energy consumption in real time is essential to maintain consistency and quality across batches.
Challenge
Our challenge was to implement a comprehensive system for monitoring operating conditions in a batch process with 7 regularly changing recipes. Over the past month, we have experienced problems with parameters deviating from optimal process conditions, resulting in poor product quality. We needed a solution that would accurately monitor the trend profiles at each step of the process and alert operators when these deviations occurred. The alert should provide actionable insight for operators and supervisors to optimize production processes and minimize product loss.
Solution
1. Visualizing the trends
A list of tags was added to a TrendHub view, including temperature, pressure, concentration, energy consumption, the end-product quality from the laboratory, and the product type information. Loading was fast and let us visualize the last two campaigns, which we then used for the analysis.
2. Identify batches with optimal process conditions
The value-based search was the perfect functionality to search for similar product types, returning a list of all events of one product type in the past.
From there we added the following calculations to the search results:
- Total energy consumption (integral)
- Maximum concentration
- Average pressure
- Average temperature
- End value of end-product quality
The event analytics functionalities allowed us to see all aggregated values in histogram and parallel coordinate plots, from which we could easily refine the results and select 15 optimal batches as a representative sample.
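For intuition, similar per-batch aggregations can be computed outside TrendMiner with pandas. A minimal sketch, assuming each batch is a time-indexed DataFrame with hypothetical column names ('power', 'concentration', 'pressure', 'temperature', 'quality'):

```python
import numpy as np
import pandas as pd

def batch_aggregates(batch: pd.DataFrame) -> dict:
    """Aggregate one batch of time-indexed process data.

    Hypothetical columns assumed: 'power' (kW), 'concentration',
    'pressure', 'temperature', 'quality' (lab result).
    """
    hours = (batch.index - batch.index[0]).total_seconds().to_numpy() / 3600.0
    power = batch["power"].to_numpy()
    # Trapezoidal integral of power over time -> total energy (kWh)
    total_energy = float(np.sum(np.diff(hours) * (power[1:] + power[:-1]) / 2))
    return {
        "total_energy": total_energy,
        "max_concentration": batch["concentration"].max(),
        "avg_pressure": batch["pressure"].mean(),
        "avg_temperature": batch["temperature"].mean(),
        "final_quality": batch["quality"].dropna().iloc[-1],  # end value
    }

# Usage: one row of aggregates per batch, ready for histogram or
# parallel-coordinate plots to pick out the optimal batches.
# summary = pd.DataFrame([batch_aggregates(b) for b in batches])
```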
3. Golden Batch Fingerprint
Utilizing the fingerprint functionality, we captured the minimum and maximum values of the active tags and created a hull curve that represents the optimal process conditions.
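Conceptually, the hull curve is a per-time-step min/max envelope across the selected golden batches. A rough sketch of that idea in pandas, assuming each batch has been resampled onto a common relative time axis:

```python
import pandas as pd

def golden_fingerprint(batches: list[pd.DataFrame], tag: str) -> pd.DataFrame:
    """Build a min/max hull curve for one tag from golden batches.

    Each DataFrame is assumed to share the same relative time index
    (e.g., minutes since batch start).
    """
    aligned = pd.concat([b[tag] for b in batches], axis=1)
    return pd.DataFrame({
        "lower": aligned.min(axis=1),  # hull floor
        "upper": aligned.max(axis=1),  # hull ceiling
    })

def deviates(hull: pd.DataFrame, live: pd.Series) -> pd.Series:
    """Flag time steps where a live batch leaves the hull."""
    return (live < hull["lower"]) | (live > hull["upper"])
```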
4. Fingerprint Monitoring
In the final step, we activated a fingerprint monitor that sends email alerts to the team. Whenever an alert is triggered, the recipient can click the link in the email, see directly which parameter deviated, and initiate a feedback cycle to correct the deviation.
Value
TrendMiner’s real-time monitoring capabilities allowed us to maintain product quality standards by detecting deviations and ensuring consistent performance. As a result, we were able to optimize our production process and maximize throughput. The optimized operations led to significant cost savings by reducing downtime, minimizing energy consumption, and reducing product loss.
Case Study 2
Innovation in Reducing Low-Weight Complaints
Background
The final product dosing process relies on a tank with rotors, maintaining a constant level and rotor speed during production; however, the level drops significantly when changing products. This drop in tank weight slightly reduces the dosage, potentially leading to customer complaints about low product weight.
Operators manually increase the rotor speed during tank emptying to compensate, but monitoring this action is crucial.
Challenge
- Automatically keep track of the number of times and dates when emptying happens, and whether or not the rotor speed was increased, for traceability purposes.
- Create a daily dashboard to be used during operating meetings.
- Only emptying events of more than 15 minutes are relevant.
Solution
- Create two aggregations: the range of the rotor speed and the maximum tank level over the last 20 minutes.
- Run a value-based search for good and bad emptying periods of at least 15 minutes: max level > 60 and current level < 50, split into rotor speed range > 0 (good) versus range = 0 (bad).
- This search finds periods where the tank was full at some point in the last 20 minutes but is now dropping (a recipe change). Events where the rotor speed range is greater than 0 mean that operators raised the speed; if the range equals 0, the value did not change and could therefore cause weight complaints (see the sketch after this list).
- Automatically contextualize those good and bad recipe changes.
- Create a dashboard with counters of good & bad events, trends, and current values of the main variables.
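For illustration, the detection and classification logic above can be sketched in pandas; the tag names ('level', 'rotor_speed') and the one-minute sampling rate are assumptions, not the actual system configuration:

```python
import pandas as pd

def classify_emptying_events(df: pd.DataFrame) -> pd.DataFrame:
    """Detect tank-emptying periods and check the operator response.

    Assumes a DataFrame with a 1-minute DatetimeIndex and hypothetical
    columns 'level' and 'rotor_speed'. Thresholds mirror the search:
    tank above 60 at some point in the last 20 minutes, now below 50.
    """
    max_level_20m = df["level"].rolling("20min").max()
    emptying = (max_level_20m > 60) & (df["level"] < 50)

    # Label consecutive True samples as one event
    event_id = (emptying != emptying.shift()).cumsum()
    events = []
    for _, grp in df[emptying].groupby(event_id[emptying]):
        if grp.index[-1] - grp.index[0] >= pd.Timedelta(minutes=15):
            rotor_range = grp["rotor_speed"].max() - grp["rotor_speed"].min()
            events.append({
                "start": grp.index[0],
                "end": grp.index[-1],
                # range > 0: operators raised the speed (good);
                # range == 0: no compensation, risk of low weight (bad)
                "good": rotor_range > 0,
            })
    return pd.DataFrame(events)
```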
Value
- Detect quality losses, leading to improved product quality
- Reduction of customer complaints: low-weight complaints cut by 37%
- Traceability improvement
- More efficient data analysis and reporting
Case Study 3
Calculations of Hourly Average Consumption Values
Background
In the chemical industry, calculating consumption rates of critical resources such as energy, steam, and electricity, for example on an hourly basis, is important. Accurate calculations enable efficient resource utilization and optimal process control. Continuous monitoring of these consumption rates allows for early detection of potential bottlenecks and facilitates proactive measures to enhance efficiency.
Challenge
The challenge was to calculate consumption both as rolling hourly averages and as fixed hourly values. The derived consumption values then serve as the foundation for monitoring resource utilization, identifying efficiency potentials, and implementing targeted measures for process optimization.
Solution
- Calculating rolling hourly averages
The rolling hourly average consumption is calculated using aggregation in the tag builder menu. We select our tag of interest, choose the average as the operator, set the direction to backward, and specify a one-hour aggregation window. With these settings, we can create and save a new tag, allowing us to retrieve the average consumption of the past hour at any given time.
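The equivalent backward-looking calculation can be sketched in pandas as a one-hour rolling mean; a minimal example with synthetic data:

```python
import pandas as pd

# Hypothetical consumption tag, one sample per minute (synthetic data)
idx = pd.date_range("2024-01-01", periods=180, freq="1min")
consumption = pd.Series(range(180), index=idx, dtype=float)

# Backward-looking one-hour window, matching the tag builder settings
# (operator: average, direction: backward, window: 1 hour)
rolling_hourly_avg = consumption.rolling("1h").mean()
print(rolling_hourly_avg.tail())
```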
- Calculating fixed hourly averages
The fixed hourly average consumption is calculated by creating a new formula in the tag builder menu:
if(and(a<>b,b=c),AGGREGATION_1, if(and(a=b,b<>c),AGGREGATION_2, sqrt(-1))).
For the variable assignment, a, b, and c are all assigned to the TM_hour_(timezone) tag, with a shifted by 1 second (1s) and c shifted by -1s. This shift enables us to precisely capture the transition from one hour to the next. AGGREGATION_1 and AGGREGATION_2 are assigned to the aggregation created in step 1, with AGGREGATION_1 shifted by -1h and AGGREGATION_2 shifted by -1s, since we need the endpoint of the fixed hour to visualize the average consumption for the entire hour. The sqrt(-1) term yields no numeric value, so interpolation is performed between the remaining data points, creating a step for each hour. Saving this formula creates a new tag that displays the average consumption for each fixed hour.
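For comparison, a pandas resample yields the fixed-hour values directly; a minimal sketch of the equivalent calculation, again with synthetic data:

```python
import pandas as pd

# Same hypothetical minute-sampled consumption tag as before
idx = pd.date_range("2024-01-01", periods=180, freq="1min")
consumption = pd.Series(range(180), index=idx, dtype=float)

# Fixed hourly average: one value per clock hour, stamped at the end of
# the hour, mirroring the -1s endpoint shift in the TrendMiner formula
fixed_hourly_avg = consumption.resample("1h", label="right").mean()
print(fixed_hourly_avg)
```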
Value
Two new tags have now been created, representing the hourly consumption as a rolling average and as a fixed value for each hour. These can be utilized to set up monitoring, such as threshold exceedances, or to create (energy) dashboards. Additionally, these values can serve as a basis for making optimizations if needed. Of course, the calculation can be adjusted not only on an hourly basis but also to determine consumption per day.
Case Study 4
Predicting Key Process Values with Soft Sensors
Background
Soft sensors, also known as inferentials, play a crucial role in industrial processes by predicting important lab values or other slow-sampled variables. In many manufacturing environments, the process involves monitoring various parameters that are critical for quality control or process optimization. However, some of these parameters, such as lab values, are slow to sample due to the time required for testing. This delay can hinder real-time decision-making and potentially lead to inefficiencies or quality issues. Soft sensors offer a solution to this challenge by feeding fast-sampled process variables into a machine-learning model. This model can then predict the slow-sampled variables, providing real-time insights without waiting hours for the next lab sample to be tested. Operators can monitor these predictions and make timely adjustments to the process, optimizing operations and ensuring quality standards are met.
Challenge
Our primary challenge is to develop a machine learning model that can accurately predict and mimic a true label tag, specifically a slow-sampled tag such as a lab value. The goal is to create an output tag that closely tracks all future lab values, ensuring that increases in the output tag correspond to increases in the lab results. To achieve this, we need to carefully select the inputs to the machine learning model, ensuring that they are relevant to the prediction task.
One of the key considerations is the independence of the input tags. We need to ensure that the inputs are independent of each other to avoid issues of collinearity, where two or more inputs are highly correlated. Collinearity can lead to instability in the model and make it difficult to interpret the importance of individual features.
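One common way to screen candidate inputs for collinearity is the variance inflation factor (VIF). A minimal numpy sketch, assuming the candidate fast-sampled inputs sit in a DataFrame X (a hypothetical layout):

```python
import numpy as np
import pandas as pd

def vif(X: pd.DataFrame) -> pd.Series:
    """Variance inflation factor per input column.

    Values above roughly 5-10 suggest a column is largely explained
    by the other inputs and is a collinearity risk.
    """
    scores = {}
    for col in X.columns:
        y = X[col].to_numpy()
        # Regress this column on all remaining inputs (least squares)
        A = np.column_stack([X.drop(columns=col).to_numpy(), np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        scores[col] = 1.0 / max(1e-12, 1.0 - r2)
    return pd.Series(scores, name="VIF")
```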
Another challenge is the selection of an appropriate machine learning model and the tuning of hyperparameters. We plan to experiment with different models, such as linear regression, decision trees, and neural networks, to find the best-performing model for our use case. Additionally, we will tune the hyperparameters of the selected model to optimize its performance and ensure that it can accurately predict future lab values.
Overall, the challenge lies in developing a machine-learning model that can effectively predict and mimic the slow-sampled lab values, ensuring that the output tag closely tracks the true lab values and provides valuable insights for process optimization and quality control.
Solution
To address the challenge of predicting slow-sampled variables in real-time, we implement a soft sensor solution using TrendMiner’s functionalities. The first step involves identifying the fast-sampled inputs that can serve as predictors for the slow-sampled variable, such as a lab value. These inputs are selected based on their correlation and relevance to the target variables.
Next, we use TrendMiner’s machine learning capabilities in MLHub to train a predictive model using the selected inputs. The model is trained to accurately predict the slow-sampled variables based on the fast-sampled inputs, taking into account the process dynamics and interdependencies between variables.
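As an illustration only, and not TrendMiner's internal MLHub implementation, the core training step could look like this scikit-learn sketch, assuming the fast-sampled inputs have been aligned to the lab timestamps:

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_soft_sensor(X: pd.DataFrame, y: pd.Series):
    """Fit a regularized linear soft sensor on historical data.

    X: fast-sampled input tags aligned to the lab timestamps.
    y: the slow-sampled lab values (the true labels).
    """
    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    # Time-ordered cross-validation avoids leaking future data
    scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5),
                             scoring="r2")
    model.fit(X, y)
    return model, scores.mean()

# Usage: model.predict(latest_fast_samples) yields the real-time estimate
```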
Once the model is trained and validated, it is deployed in TrendHub, where it continuously updates based on the fast-sampled inputs and provides real-time predictions for the slow-sampled variable. Operators are able to access these predictions through a user-friendly interface, allowing them to monitor the process and make adjustments as needed to optimize operations and ensure quality standards are met.
By implementing this soft sensor solution, we can overcome the limitations of traditional sampling methods and improve our ability to predict and control critical process parameters in real time.
Value
The implementation of the soft sensor solution using TrendMiner’s functionalities can result in significant value for operations teams. Firstly, the real-time prediction of slow-sampled variables, such as lab values, can enable operations and engineering to make proactive decisions and adjustments to the process, leading to improved efficiency and quality. By eliminating the need to wait for lab samples to be tested, operations can experience reduced delays and downtimes, resulting in increased productivity.
Furthermore, the predictive insights provided by the soft sensor can optimize processes and reduce waste. Operators can quickly identify and address potential issues before they escalate, leading to cost savings and improved overall performance. Additionally, the user-friendly interface of the soft sensor solution makes it easy for operators to access and interpret the predictions, empowering them to take informed actions in real time.
Overall, the implementation of the soft sensor solution can enhance operational efficiency, improve product quality, and reduce costs, demonstrating the value of leveraging advanced analytics in industrial processes.
Case Study 5
Driving Cost Savings in Resource Consumption
Background
This use case shows how to report the daily specific consumption of a resource and compare values from day to day. At this point, we have no way of comparing or keeping track of the specific consumption of a resource. A tag for this would be a big help for process and resource optimization, and it would also bring economic value. The daily specific consumption will be reported as a spike at the end of each day. As a starting point, we have a tag containing our specific consumption over the last 24 hours. With an overview of the spikes for all days, we can optimize our process.
Challenge
1. Get a spike at the end of each day with the total consumption of the last 24 hours.
2. Get the values of different days to compare them.
Solution
- Creating daily consumption spikes
The daily consumption spikes tag is created in the formula tag builder. We use an if statement that checks day1 = day2, where day1 is the current time tag (day) and day2 is the same tag delayed by one minute. When the check is true, the formula returns 0; when it is false (i.e., at 23:59), it returns the specific consumption of the last day.
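A rough pandas equivalent of this spike tag, assuming the existing 24-hour specific-consumption tag is available as a time-indexed series:

```python
import pandas as pd

def daily_spikes(consumption_24h: pd.Series) -> pd.Series:
    """Mimic the spike tag: 0 everywhere except the last sample of each
    day, which carries that day's specific consumption.

    `consumption_24h` is assumed to be the existing tag holding the
    specific consumption of the last 24 hours, on a DatetimeIndex.
    """
    spikes = pd.Series(0.0, index=consumption_24h.index)
    # Last timestamp of each calendar day (23:59 for minute-sampled data)
    day_ends = consumption_24h.groupby(consumption_24h.index.date).apply(
        lambda s: s.index[-1]
    )
    spikes.loc[day_ends.values] = consumption_24h.loc[day_ends.values]
    return spikes
```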
- Comparing consumption of different days
The most straightforward way of making the comparison of the specific consumption is by creating a view containing the spikes of the wanted period. This way a visual comparison can be made to see how the specific consumption of resources evolves over time.
Results and value
Keeping better track of consumption reduces the likelihood of overconsumption and fosters a more sustainable approach. This proactive management not only prevents waste but also streamlines resource allocation, ultimately leading to more efficient processes and reduced environmental impact. By optimizing consumption habits from the outset, teams can contribute to a more responsible and mindful utilization of resources.
Case Study 6
Crystallization Issues Root Cause Analysis
Background
This use case is centered on a mixing tank, where the streams of two distinct products are joined. The intermediate products are continuously added to the mixing tank until it reaches a certain level. At that point, a sample is taken for analysis, and the tank is emptied into a larger storage tank.
Challenge
Over a few weeks, there were crystallization issues with one of the two products. This undesired crystallization interferes with sensors, causing unreliable measurements that can in turn create quality issues as the mixture is processed further.
We needed to find the root cause of these crystallization issues.
Solution
- Visualizing the problem
The composition of the samples from the mixing tank was displayed in TrendMiner. Here we could clearly see the concentration of the solvent product decreasing, which is an obvious cause of the other product crystallizing. The question thus becomes: why are we receiving less solvent from our production process?
- Hypothesis generation
Since there were no obvious hypotheses to check, we resorted to the Cross-Correlations functionality in TrendMiner, which searches all tags in TrendMiner for correlations. Using this functionality on the slowly decreasing trend of solvent concentration yielded a significant correlation with a drop in level in a tank elsewhere in our process.
- Hypothesis investigation
Zooming out in TrendMiner, we noticed that the level in this particular tank first sharply increased and then slowly decreased. This matches the addition of an external intermediate product to the process, which is then mixed into the internally produced product in this tank. As this external product is processed and the level in the tank drops back to normal, the solvent concentration downstream in our process decreases. This points to the addition of external intermediate product as a cause of the process issues. Looking at historical data from when an external intermediate product was added, the same issues occurred there, confirming this hypothesis.
- Uncovering the failure mechanism
Further investigation in TrendMiner revealed the mechanism by which the external intermediate product influenced the solvent stream: a higher recycle flow was implemented upstream in the process to deal with the different product compositions. However, this increased recycle also impacted other properties, which eventually led to a different split ratio of solvent and other components. As less solvent eventually reaches our mixing tank, crystallization issues start to occur.
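For intuition, the kind of lagged correlation the Cross-Correlations functionality surfaces can be sketched in pandas, assuming two regularly sampled, aligned series such as the solvent concentration and the upstream tank level:

```python
import pandas as pd

def best_lag_correlation(a: pd.Series, b: pd.Series, max_lag: int = 120):
    """Return the shift of `b` (in samples) with the strongest
    correlation against `a`, plus the correlation itself."""
    candidates = []
    for lag in range(-max_lag, max_lag + 1):
        r = a.corr(b.shift(lag))  # NaN pairs are dropped automatically
        if pd.notna(r):
            candidates.append((lag, r))
    return max(candidates, key=lambda t: abs(t[1]))
```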
Value
A clear root cause and its mechanism have been uncovered using TrendMiner. Many of the steps required to come to a full understanding of this issue would not have been possible in other tools.
Now that the cause of the issues is known, corrective action can be taken whenever external intermediate product is added to the process.
Conclusion
As these cases clearly demonstrate, the difference between costly setbacks and streamlined success often lies in the ability to predict issues before they escalate and make informed decisions in real time. TrendMiner empowers your team to do just that, turning your process data into a competitive edge that drives your business forward.
Are you ready to uncover similar hidden potential in your operations?
Take the Next Step
Start your Free Trial today or reach out to discover how TrendMiner can optimize your manufacturing processes and lead you to continuous improvement and innovation.