Self-Service Continuous Improvement 4.0

Utilizing Self-Service Analytics to fuel Continuous Improvement Projects

Six Sigma is not just one of the most well-known methods for transforming entire organizations into data-driven, continuously improving companies; it is also a philosophy that should be ingrained in everyone's day-to-day thinking. At the heart of Six Sigma is the Define, Measure, Analyze, Improve, Control (DMAIC) cycle, which takes center stage in achieving organization-wide continuous improvement. Even if you do not follow the Six Sigma method, the DMAIC cycle is a valuable approach for any continuous, data-driven improvement project.


DMAIC is a good start – but the tooling does not deliver in the age of Big Data.

At first glance, this structured approach seems like a perfect fit for continuous data-driven improvement within the organization and therefore facilitates strong involvement of the Six Sigma philosophy in daily operations. Our current world of big data, however, brings a plethora of real-time data streams as well as a widespread IT landscape. The tools and methods used within the DMAIC cycle prove to have limitations when dealing with this reality, both for Six Sigma stakeholders at plant level, the Local Process Experts (LPEs), and for those at central level, the Central Process Experts (CPEs).

If you examine an improvement project through the eyes of an LPE Six Sigma stakeholder (most likely at Green Belt level), the following struggles occur:

  • Priority-driven environments resulting in time constraints
  • Tools offering no live connection to the data and no interaction with the existing IT landscape, leading to tedious data gathering
  • Tools that are aimed at experts and rely on statistical methods, requiring “under the hood” knowledge to use and interpret easily
  • Blocked projects due to a lack of hypothesis-generation capabilities that could surface new insights hidden in the process data

But LPE Six Sigma stakeholders are not the only ones who experience struggles during projects. For CPE Six Sigma stakeholders (most likely at Black Belt level), improvement projects come with their own set of difficulties:

  • Lack of plant-specific process knowledge needed to interpret data from multiple plants easily
  • Overload of projects pushed from the plants
  • Long communication cycles with plant personnel
  • Tools offering no live connection to the data and no interaction with the existing IT landscape, including no way to compare different plants at a global level, leading to tedious data gathering
  • Blocked projects due to a lack of hypothesis-generation capabilities that could surface new insights hidden in the process data
  • Difficulties in closing the loop of the project so that improvements are actually implemented

These difficulties from both perspectives make it clear that, as well suited as the structure of the DMAIC cycle is for data-driven analysis, the tooling is not up to the challenges of this day and age of big data. The impacts for the organization and everyone involved in continuous improvement projects are:

  • Underutilization of the process expertise of LPEs
  • Missed improvement opportunities
  • Long project cycles
  • Lack of smooth collaboration between plant-level and central-level stakeholders

All of the above points not only create frustration for the stakeholders involved but also translate directly into financial losses for the organization.

This shows a clear need for a common way of analyzing process data shared by LPEs and CPEs: one that lowers the threshold for starting an improvement project, significantly empowers LPEs to apply their deep process expertise, and enables CPEs to integrate easily with expert tooling when needed.

Self-Service Analytics and DMAIC – A perfect match to jump-start continuous improvement.

One way to avoid the problematic situations above is to employ self-service analytics.

Self-service analytics is a new approach to providing industrial process data analytics for various stakeholders throughout the organization. The approach combines the elements necessary to visualize a process historian’s time-series data, overlay similar matched historical patterns, and enrich them with data captured by engineers and operators.

A form of “process fingerprinting” then provides operations and engineering with greater process insights to optimize the process and/or predict unfavorable process conditions based on predefined baselines. Furthermore, unlike traditional approaches, performing this analysis doesn’t require the skill set of a data scientist or data expert since the user is always presented with interpretable results in an iterative fashion.
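As a purely illustrative sketch (not a description of any specific product), the pattern matching behind such fingerprinting can be thought of along these lines: a reference window of a historian tag is compared against that tag's history using a z-normalized sliding-window distance, and the closest historical matches are returned so they can be overlaid on the current situation. The tag data is assumed to have been exported or read elsewhere.

```python
# Minimal sketch of "overlay similar matched historical patterns".
# Assumes the tag history is already available as a 1-D NumPy array;
# any historian export (CSV, OPC UA read, etc.) would do as a source.
import numpy as np

def znorm(x):
    """Z-normalize a window so matching is shape-based, not level-based."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def find_similar_windows(history, reference, top_k=5):
    """Slide the reference pattern over the history and return the start
    indices and distances of the top_k most similar windows."""
    m = len(reference)
    ref = znorm(np.asarray(reference, dtype=float))
    dists = []
    for start in range(len(history) - m + 1):
        window = znorm(np.asarray(history[start:start + m], dtype=float))
        dists.append(np.linalg.norm(window - ref))
    order = np.argsort(dists)
    return [(int(i), float(dists[i])) for i in order[:top_k]]

# Example: which past periods look like the most recent 60 samples?
tag = np.sin(np.linspace(0, 50, 2000)) + 0.1 * np.random.randn(2000)
matches = find_similar_windows(tag, tag[-60:])
print(matches)  # the trivial self-match comes first, then the real lookalikes
```

In a real platform this search would run server-side over the full historian, but the principle of comparing normalized shapes rather than absolute values stays the same.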

Key elements of this approach include:

  • A system that brings together deep knowledge of process operations and data analytics techniques, minimizing the need for specialized data scientists/data experts or complex, engineering-intensive data modeling, and capturing human intelligence as machine intelligence to gain value from operational data already collected.
  • A model-free, predictive, process-analytics (discovery, diagnostic and predictive) tool that complements and augments, rather than replaces, existing historian data architectures.
  • A system that supports cost-efficient virtualized deployment and that is “plug-and-play” within the available infrastructure, yet also has the ability to evolve into a fully scalable component of corporate Big Data initiatives and environments.

Based on these key elements, self-service analytics is exactly what is lacking in the current DMAIC framework to get everyone involved and enabled to contribute towards continuous, data-driven improvement of the organization.

This holds whether a project is small in scope or has a longer-term focus; working through the DMAIC framework always comes with analytics needs. Examples range from reducing the flooding of a particular column, typically tackled by Six Sigma stakeholders among the LPEs, to achieving a five percent increase in column utilization across peroxide plants, a long-term project usually conducted by CPEs with a Six Sigma background.

So how exactly does the DMAIC framework with its analytics needs match up with what self-service analytics has to offer? Let’s take a look. 

Define

In the Define phase, the important points are defining and scoping the problem/project and prioritizing based on a cause/effect relationship. Reaching those goals comes with specific analytics needs. To examine an issue, one important requirement is quick access to various data sources. Solutions in the self-service analytics space should therefore offer live connections to the various data sources encountered in the process industry ecosystem so that all the necessary data can be used as needed.

The first exploration of a problem is often very visually driven, which means that visual analytics plays an important role. A good self-service platform provides enticing visuals that are easy to interpret, relevant to the process industry, and fast to create.

As everyone who works with data in any capacity knows, analyzing the cause/effect relationship behind a symptomatic upset starts with rigorous data preparation. It is centrally important that self-service analytics offers an easy way to answer the question “Which process conditions are representative of the symptom at hand?”. Only then is an unbiased view of the problem possible. To retrieve all relevant occurrences that can be considered similar, the process data needs to be made searchable, including context information from the various roles involved in production. Only then can a clear picture of the problem/project emerge based on the complete history of the data.
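To make this concrete, here is a hedged sketch of what a value- and context-based search over historian data could look like; the tag names, the annotation structure, and the one-hour matching tolerance are assumptions made purely for the example.

```python
# Hypothetical sketch: searching historian data for periods that are
# representative of a symptom, combining value conditions with operator
# annotations. Tag names and the annotation format are illustrative only.
import pandas as pd

# Historian export: one row per timestamp, one column per tag.
df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=8, freq="h"),
    "column_dP_mbar": [12, 14, 35, 38, 15, 13, 40, 41],
    "reflux_ratio":   [2.1, 2.0, 2.6, 2.7, 2.1, 2.0, 2.8, 2.9],
}).set_index("timestamp")

# Context captured by operators/engineers on top of the data.
annotations = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01 02:00", "2023-01-01 06:00"]),
    "note": ["suspected flooding", "flooding confirmed, reduced feed"],
}).set_index("timestamp")

# Value-based symptom definition: high differential pressure.
symptom = df[df["column_dP_mbar"] > 30]

# Keep only hits that have a flooding-related annotation within one hour.
hits = pd.merge_asof(symptom.sort_index(), annotations.sort_index(),
                     left_index=True, right_index=True,
                     direction="nearest", tolerance=pd.Timedelta("1h"))
print(hits[hits["note"].str.contains("flooding", na=False)])
```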

Since prioritization is usually about business impact, it may be necessary to calculate relevant KPIs if they are not already present in the historian. Here, an easy-to-use formula editor enables engineers and operators to create those KPIs and make them available for any further analytics.
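As a hedged illustration of such a derived KPI (the tag names and the design-capacity figure are assumptions for the example), a formula editor essentially turns an expression like the one below into a new tag that can be trended and analyzed like any other:

```python
# Illustrative only: deriving business-relevant KPIs from raw historian
# tags so they can be trended and analyzed like any other tag.
import pandas as pd

df = pd.DataFrame({
    "feed_flow_t_per_h": [92.0, 95.5, 88.0, 97.0],   # measured throughput
    "steam_flow_t_per_h": [11.0, 11.4, 10.6, 11.9],  # utility consumption
})

DESIGN_CAPACITY_T_PER_H = 100.0  # assumed design figure for the example

# Column utilization in percent of design capacity.
df["utilization_pct"] = 100.0 * df["feed_flow_t_per_h"] / DESIGN_CAPACITY_T_PER_H
# Specific steam consumption per tonne of feed.
df["steam_per_tonne"] = df["steam_flow_t_per_h"] / df["feed_flow_t_per_h"]
print(df.round(2))
```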

Measure

Now that the problem/project at hand is defined, scoped, and prioritized, a baseline performance, as well as objectives to reach relative to that baseline, needs to be established. Doing so clearly requires access to all kinds of tags relevant to production, which means that, as in the Define phase, live connections to various data sources are crucial for a self-service analytics platform at this point.

To define an accurate baseline performance, either expert knowledge is needed to assess similar situations or context information must be provided on top of the data. Again, a search engine for process data that includes context information plays a central role for self-service analytics solutions in this phase. It gives LPEs and CPEs the power to quickly retrieve the situations that matter, utilizing the complete relevant history of the recorded data together with the captured process expertise, and to store a baseline performance for the situation at hand. In general, this step also requires that the measurement systems feeding the data historian are continuously checked in the field.
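A minimal sketch of how a stored baseline could be represented, assuming the relevant reference periods have already been retrieved by such a search: the baseline is simply a set of per-tag statistics persisted so that the Improve and Control phases can monitor against it. The tag names and the choice of a mean plus/minus three standard deviations band are assumptions for illustration.

```python
# Sketch: turning retrieved reference periods into a stored baseline.
import json
import numpy as np

reference_periods = {
    "column_dP_mbar": np.array([12.1, 13.0, 12.7, 12.4, 13.2]),
    "reflux_ratio":   np.array([2.05, 2.10, 2.08, 2.02, 2.11]),
}

# Per-tag statistics of the "good" periods become the baseline band.
baseline = {
    tag: {"mean": float(v.mean()),
          "lower": float(v.mean() - 3 * v.std()),
          "upper": float(v.mean() + 3 * v.std())}
    for tag, v in reference_periods.items()
}

with open("baseline_column_ops.json", "w") as f:
    json.dump(baseline, f, indent=2)
print(baseline)
```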

Analyze

As the Analyze phase is all about determining the root cause(s) that contribute towards achieving the project’s objective, the focus here is on the analytics needs in the diagnostic space. The stakeholders involved either want to check predefined hypotheses or need to generate new insights from the data history. Since processes in the process industry are non-Markovian, with causes and effects often separated by residence times and transport delays, allowing for time shifts is vital for both hypothesis testing and hypothesis generation.

The former typically happens within a small, predefined set of tags, whereas the latter needs to be conducted across a large, possibly undefined, number of tags. A self-service analytics platform that really wants to make a difference in this crucial phase of the DMAIC framework not only needs to provide a live connection to all relevant tags from various sources but also needs to present the stakeholder with interpretable results in an iterative way. This is a very important aspect, since full control empowers the various stakeholders to use the whole power of analytics: not only to test already known hypotheses quickly without statistical knowledge but also to generate new insights using advanced analytics. The results should not be a black box; they should be easy to grasp and therefore trusted and acted upon.
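The role of time shifts can be illustrated with a small, hedged sketch: a lagged correlation scan between a candidate cause tag and an effect tag, which is one simple way (among many a platform might use) to test a hypothesis when the effect trails the cause by an unknown delay. The synthetic data and the 30-sample lag range are assumptions for the example.

```python
# Illustrative hypothesis check with time shifts: scan lags between a
# candidate cause tag and the effect tag and report the strongest one.
import numpy as np

rng = np.random.default_rng(0)
cause = rng.normal(size=500)
effect = np.roll(cause, 12) + 0.3 * rng.normal(size=500)  # effect trails by 12 samples

def best_lag(cause, effect, max_lag=30):
    """Return the lag (in samples) with the highest absolute correlation
    between cause[t] and effect[t + lag], plus all scanned scores."""
    scores = {}
    for lag in range(max_lag + 1):
        c = cause[: len(cause) - lag]
        e = effect[lag:]
        scores[lag] = float(np.corrcoef(c, e)[0, 1])
    return max(scores, key=lambda k: abs(scores[k])), scores

lag, scores = best_lag(cause, effect)
print(lag, round(scores[lag], 2))  # expected: lag 12 with a high correlation
```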

Improve

During the Improve phase, the focus is on testing and confirming a solution. An example is altering the control concept of a column to see whether the previously occurring flooding behavior is eliminated. In the process industry, this is usually done while production is running. The analytics needs arising from this phase are concentrated in online data monitoring, optionally with a comparison against a stored baseline performance. A well-suited self-service analytics platform therefore needs to offer easy configuration of real-time performance monitors in which a stored baseline of any kind can be used if needed. This enables the various stakeholders to check whether the performed action or change in the process yielded the desired results, at which point the Control phase can begin.
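Continuing the earlier, assumed baseline example, a hedged sketch of such a real-time check could be as simple as comparing each fresh sample from the live connection against the stored baseline band and reporting any tags that leave it:

```python
# Sketch of a simple real-time performance check against the stored
# baseline from the Measure phase (file name and band are assumptions).
import json

with open("baseline_column_ops.json") as f:
    baseline = json.load(f)

def check_sample(sample):
    """Compare one incoming set of tag values against the baseline band
    and return the tags that fall outside it."""
    violations = []
    for tag, value in sample.items():
        band = baseline.get(tag)
        if band and not (band["lower"] <= value <= band["upper"]):
            violations.append((tag, value, band["lower"], band["upper"]))
    return violations

# Example: one fresh sample arriving over the live connection.
print(check_sample({"column_dP_mbar": 34.0, "reflux_ratio": 2.07}))
```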

Control

In the Control phase, the benefits of a successful project can be reaped continuously for years to come. This holds true on a local as well as on a global scale. The analytics needs are similar to those of the Improve phase: online data monitoring, including a baseline if needed. An additional need in this phase is the ability to monitor against signatures of early indicators identified in the Analyze phase, so that not just early detection is ensured but a preventive warning is issued.

This allows the upset to be avoided altogether and is of the greatest value to the plant as well as to the organization. It means that a self-service analytics solution, in addition to enabling easily configurable performance monitors that include previously stored performance baselines, needs to provide the ability to mark signature patterns and make them available for real-time monitoring. This builds a strong foundation for following through on any gained insights, based on a low threshold for adoption and shared trust in the results obtained.
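As a final hedged sketch tying back to the earlier examples: a marked signature of an early indicator can be watched on live data by comparing a rolling window of the tag against the stored pattern and issuing a preventive warning when the shapes become sufficiently similar. The signature values, window length, and threshold here are assumptions for illustration.

```python
# Illustrative sketch: watching a live tag for a stored early-indicator
# signature and issuing a preventive warning when it appears.
from collections import deque
import numpy as np

signature = np.array([0.0, 0.2, 0.5, 1.0, 1.6, 2.3])  # marked in the Analyze phase
THRESHOLD = 0.5                                        # tuning parameter
window = deque(maxlen=len(signature))

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def on_new_value(value):
    """Called for every new historian value of the early-indicator tag."""
    window.append(value)
    if len(window) < len(signature):
        return
    dist = np.linalg.norm(znorm(np.array(window)) - znorm(signature))
    if dist < THRESHOLD:
        print(f"Preventive warning: early-indicator signature matched (d={dist:.2f})")

# Example stream: a ramp that resembles the signature triggers the warning.
for v in [5.0, 5.1, 5.0, 5.2, 5.7, 6.2, 6.8, 7.5]:
    on_new_value(v)
```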

Benefits & Conclusion

The DMAIC cycle is a good way of providing structure for data-driven continuous improvement but lacks the tooling to support an organizational change. Self-service analytics is a perfect solution to involve everyone (LPEs/CPEs) according to their primary skill set. In addition, applying self-service analytics within the DMAIC cycle significantly shortens project times while lowering the threshold for starting a project. Crucial differentiators are:

  • (Live) Connectivity
  • Integration with specialist tooling
  • Intuitive and iterative work approaches
  • Search engine for process data to enable connecting the knowledge of the process experts to the data in no time
  • Generation of new insights into problems from process data history
  • Creation of an overarching knowledge base
  • Increased collaborative work between teams
  • Closing the loop with performance monitoring

Fully embracing the power of self-service analytics should lead to an organizational change towards data-driven continuous process improvement, with no opportunity left on the table.