Smarter access to analytics for process engineers


How pattern recognition software leverages historian data to automate information gathering for plant engineers and managers.

In today’s information age, data is everywhere. Plant and operations managers receive vast amounts of both structured and unstructured data every day. But how can they manage this data, distill it into usable information and improve operational efficiency? In this article we explain how process data can be accessed quickly and affordably as useful information that improves performance.

Predicting process performance today

In order to run a plant smoothly, process engineers and operators need to be able to accurately predict process performance or the outcome of a batch process, while eliminating false positives. Accurately predicting process events that will likely happen in a plant or facility requires accurate process historian or time-series search tools and the ability to apply meaning to the patterns identified within the process data.

Historians serve as a repository for process data from many systems, making them a good source for advanced analytics. However, process historian tools are not ideal for automating data analysis or search queries: they are ‘write’ optimized, not ‘read/analytics’ optimized. Finding a relevant historical event and building the process context is usually a laborious, time-consuming task. Historians require a great deal of manual interpretation and manipulation, typically limited to backward-looking trends or exporting raw data to Microsoft Excel. The tools used to visualize and interpret process data are typically trending applications, reports and dashboards. These can be helpful, but are not particularly good at predicting outcomes.

Predictive analytics & data science

Predictive analytics, a relatively new dimension of analytics tools, can provide valuable insights into what will happen in the future based on historical data, both structured and unstructured. Many predictive analytics tools take an enterprise-wide approach and require sophisticated distributed computing platforms such as Hadoop or SAP HANA. These are powerful and useful for many analytics applications, but represent a more complex approach to managing both plant and enterprise data. Companies that use this enterprise data management approach must often employ specialized data scientists to help organize and cleanse the data. In addition, data scientists are not as intimately familiar with the process as engineers and operators are, which limits their ability to achieve the best results.

Limitations of data modelling software

Many of these advanced tools are perceived as engineering-intensive “black boxes” in which the user knows only the inputs and the expected outcome, without any insight into how the result was determined. Understandably, for many operational and asset-related issues, this approach is too expensive and time-consuming, and it requires a highly skilled data scientist. This is why many vendors target only the 1 percent of most critical assets, ignoring many other opportunities for process improvement.

“There is an immediate need to search time-series data and analyze that data in context with the annotations made by both engineers and operators to be able to make faster, higher quality process decisions. If users want to predict process degradation or an asset or equipment failure, they need to look beyond time series and historian data tools and be able to search, learn by experimentation and detect patterns in the vast pool of data that already exists in their plant.”

Peter Reynolds, Senior Consultant, ARC Advisory Group

Analyzing big data without a data scientist

A level of operational intelligence and understanding of the process data are required to improve plant performance and overall efficiency. Process engineers and other personnel must be able to search time series data over a specific time frame and visualize all related plant events quickly and efficiently. The time series data is generated by the process control and automation systems, lab systems and other plant systems and may include annotations and observations made by operators and engineers.

Today’s industrial process data analytics solutions take a different approach, leveraging unique multi-dimensional search capabilities. This approach combines the ability to visualize process historian time-series data, overlay similar matched historical patterns and provide context from data captured by engineers and operators. The ideal pattern recognition solution integrates quickly and easily with the plant historian database archives and provides a scalable architecture to communicate with available enterprise distributed computing platforms.
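The idea of providing context from operator and engineer annotations can be illustrated with a small sketch. All names here (the annotation tuples, the `contextualize` helper, the matched windows) are hypothetical, invented for illustration only; they are not part of any particular vendor’s product.

```python
# A minimal sketch of attaching operator/engineer context to search results:
# given time windows returned by a pattern search, collect any annotations
# that were logged during each window.

annotations = [
    (12, "operator", "switched feed pump to B"),
    (45, "engineer", "catalyst change completed"),
    (78, "operator", "high vibration noted on compressor"),
]

def contextualize(windows, notes):
    """For each matched (start, end) window, collect overlapping notes."""
    out = []
    for start, end in windows:
        hits = [(ts, who, text) for ts, who, text in notes if start <= ts < end]
        out.append({"window": (start, end), "annotations": hits})
    return out

matched_windows = [(10, 20), (70, 90)]  # e.g. produced by a pattern search
results = contextualize(matched_windows, annotations)
```

The point of the sketch is that the same timestamps key both the time-series data and the human observations, so search results can carry their process context automatically.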

These new solutions use “pattern-based discovery and predictive process analytics” targeting the average user. They are typically easy to deploy (“plug and play”), delivering immediate value, with no data modeling or data scientist required. They are designed to provide targeted capabilities rather than all the bells and whistles of larger systems that require in-depth education and training. By combining search across structured time-series process data with data captured by operators and other subject matter experts, users can predict more precisely what is occurring, or what will likely occur, within their continuous and batch industrial processes. This “self-service analytics” software puts the power into the hands of the process experts, the engineers and operators who can best identify and annotate areas for improvement.

Searching for trends

Unlike traditional historian desktop tools, pattern recognition and machine learning algorithms let users search process trends for specific events or detect process anomalies. Much like the music app Shazam, self-service analytics works by identifying significant patterns in the data, its “high-energy content”, and matching them to similar patterns in its database, rather than trying to match each note of a song.
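The core of this kind of pattern matching can be sketched in a few lines. The sketch below assumes a simple z-normalized Euclidean distance as the similarity measure; commercial tools use far more sophisticated indexing and matching, so this is an illustration of the principle, not any product’s algorithm.

```python
# A minimal sketch of pattern-based search over historian time-series data:
# slide a query pattern over the series and rank windows by shape similarity.
import math

def znorm(seq):
    """Z-normalize a window so matches are shape-based, not level-based."""
    mean = sum(seq) / len(seq)
    var = sum((x - mean) ** 2 for x in seq) / len(seq)
    std = math.sqrt(var) or 1.0  # guard against flat (constant) windows
    return [(x - mean) / std for x in seq]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_similar(series, query, top_k=3):
    """Slide the query pattern over the series and rank candidate windows."""
    q = znorm(query)
    m = len(query)
    scored = []
    for start in range(len(series) - m + 1):
        window = znorm(series[start:start + m])
        scored.append((distance(window, q), start))
    scored.sort()
    return scored[:top_k]  # (distance, start-index) pairs, best first

# Example: find where a ramp-up pattern occurs in a recorded signal.
signal = [5, 5, 1, 2, 3, 5, 5]
matches = find_similar(signal, query=[1, 2, 3], top_k=1)  # best match at index 2
```

Z-normalizing each window is what makes the search “Shazam-like”: the match is on the shape of the excursion, not its absolute level, so the same process signature can be found at different operating points.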

These technologies form the critical basis of the new systems’ technology stack because they make use of the existing historian databases and add a data layer that uses a column store to index the time-series data. These next-generation systems also work well with leading process historian suppliers, including OSIsoft, AspenTech and Yokogawa. Typically, they are designed to be simple to install and deploy via a virtual machine (VM) without impacting the existing historian infrastructure.
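The column-store idea can be sketched as follows. The tag name and class below are hypothetical, and real historian index layers are far more elaborate; the sketch only shows why storing each tag’s timestamps and values as separate sorted arrays makes time-range reads fast.

```python
# A minimal sketch of a column-store layer over historian data: each tag's
# timestamps and values live in separate append-only arrays, so a time-range
# query touches only the columns it needs and locates bounds by binary search.
import bisect

class TagColumn:
    def __init__(self):
        self.timestamps = []  # kept sorted, append-only like a historian
        self.values = []

    def append(self, ts, value):
        self.timestamps.append(ts)
        self.values.append(value)

    def range(self, start, end):
        """Return values recorded in [start, end) via binary search."""
        lo = bisect.bisect_left(self.timestamps, start)
        hi = bisect.bisect_left(self.timestamps, end)
        return self.values[lo:hi]

store = {"FIC-101.PV": TagColumn()}  # hypothetical flow-controller tag
for ts, v in [(0, 20.1), (10, 20.4), (20, 21.0), (30, 22.3)]:
    store["FIC-101.PV"].append(ts, v)

recent = store["FIC-101.PV"].range(10, 30)
```

Because the index layer only reads from the historian archives, it can be deployed alongside the existing infrastructure without changing how data is written.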

Give your engineers smarter access to insights

The technology landscape for manufacturers and other industrial organizations is rapidly changing. To remain competitive, companies must use analytics tools to uncover areas for efficiency improvements.

If you’d like to know more about the measurable benefits that self-service analytics can offer, please contact us.


More information?

ARC webinar Industrial Process Analytics

Deep dive into the impact of digital transformation on process data, industry challenges to adoption and the key role of predictive analytics in avoiding unplanned downtime with Peter Reynolds of the ARC Advisory Group.