Four practical use cases for Industrial DataOps
John Harrington
is the Chief Product Officer of HighByte, focused on defining the company’s business and product strategy. His areas of responsibility include product management, customer success, partner success, and go-to-market strategy. John is passionate about delivering technology that improves productivity and safety in manufacturing and industrial environments. John received a Bachelor of Science in Mechanical Engineering from Worcester Polytechnic Institute and a Master of Business Administration from Babson College.
Most manufacturing companies realize the benefits of leveraging industrial data to improve production and save costs, but they remain challenged as to how to scale up their pilots and small-scale tests to the plant-wide, multi-plant, or enterprise level. There are many reasons for this, including the time and cost of integration projects, the fear of exposing operational systems to cyber threats, and a lack of skilled human resources.
At the root of all of these problems is the difficulty of integrating data streams across applications in a multi-system and multi-vendor environment, which has required some degree of custom coding and scripting. Standardizing data models, flows, and networks is hard work. Unlike an office environment with its handful of systems and databases, a typical factory can have hundreds of data sources distributed across machine controls, PLCs, sensors, servers, databases, SCADA systems, and historians—just to name a few.
Industrial DataOps provides a new approach to data integration and management. It provides a software environment for data documentation, governance, and security from the most granular level of a machine in a factory, up to the line, plant, or enterprise level. Industrial DataOps offers a separate data abstraction layer, or hub, to securely collect data in standard data models for distribution across on-premises and cloud-based applications.
These four use cases illustrate how Industrial DataOps can integrate your role-based operational systems with your business IT systems as well as those of outside vendors such as machine builders and service providers.
1. “I need to accelerate and scale an analytics deployment.”
Let’s say you have several injection molding lines and want to run analytics comparing 20 data points from each line to measure KPIs, calculate OEE, and optimize performance across the fleet. The problem is that the machinery was purchased decades apart from different vendors. Likewise, the controls are from various vendors and have been modified and customized over the years—like the databases they connect with.
Despite efforts to standardize and integrate critical aspects of this infrastructure, the context and data structures vary. Even if they all use pressure, temperature and optical sensors, the vendors, technologies, communication protocols and even units of measure vary.
Instead of embarking on a costly and downtime-inducing rip-and-replace project, or writing custom code to massage the data, process or controls engineers can connect their machines’ OPC UA tags to standard information models in an Industrial DataOps hub. The hub runs on a variety of platforms at the edge, from a single-board IoT gateway, Raspberry Pi, industrial switch, or any Linux device up through Windows 10 and Windows Server platforms. For scalability, isolation, and security, hubs can be installed at the machine, line, or facility level.
Now, those injection molding machines have streamlined, optimized data that Operations Technology (OT) can easily hand off to local systems on the edge network as well as to data scientists who rely on cloud-based systems for Artificial Intelligence (AI) and other advanced analytics. Data models are fully contextualized and standardized before they reach the cloud, rather than just dumped there, so data scientists—who spend upwards of 80 percent of their time cleaning and preparing data—can get right to work doing actual data science. Bandwidth and cloud storage are reduced, and analytics deployment time is greatly accelerated.
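To make the idea concrete, here is a minimal sketch of this kind of normalization in Python. The model fields, tag names, and per-vendor mappings are illustrative assumptions, not HighByte's actual modeling API: the point is simply that each legacy machine keeps its own tags and units while analytics sees one uniform model.

```python
from dataclasses import dataclass

# Hypothetical standard model for an injection molding machine.
# Field names and units are assumptions for illustration only.
@dataclass
class InjectionMolderModel:
    machine_id: str
    barrel_temp_c: float          # model always stores degrees Celsius
    injection_pressure_bar: float # model always stores bar

def f_to_c(temp_f: float) -> float:
    return (temp_f - 32.0) * 5.0 / 9.0

def psi_to_bar(pressure_psi: float) -> float:
    return pressure_psi * 0.0689476

# Each legacy machine exposes different tag names and units;
# a per-machine mapping normalizes them into the shared model.
def map_vendor_a(tags: dict) -> InjectionMolderModel:
    # Vendor A reports Fahrenheit and psi under uppercase tag names.
    return InjectionMolderModel(
        machine_id=tags["MACH_ID"],
        barrel_temp_c=f_to_c(tags["BARREL_TEMP_F"]),
        injection_pressure_bar=psi_to_bar(tags["INJ_PRESS_PSI"]),
    )

def map_vendor_b(tags: dict) -> InjectionMolderModel:
    # Vendor B already reports metric units.
    return InjectionMolderModel(
        machine_id=tags["id"],
        barrel_temp_c=tags["barrel_temperature"],
        injection_pressure_bar=tags["pressure"],
    )

# Two machines, two vendors, one uniform output for analytics.
line_1 = map_vendor_a({"MACH_ID": "IM-01", "BARREL_TEMP_F": 446.0,
                       "INJ_PRESS_PSI": 1500.0})
line_2 = map_vendor_b({"id": "IM-02", "barrel_temperature": 230.0,
                       "pressure": 103.4})
print(line_1.barrel_temp_c)  # 230.0
```

In a real hub the mappings are configured rather than coded, but the effect is the same: downstream consumers never see vendor-specific tags or units.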
2. “I need remote facility visibility and want to perform multi-site analysis.”
In industries such as pulp & paper, data flows from multiple sites vary broadly from “wet” continuous processes to hybrid batch and discrete packaging processes. The same goes for industries such as specialty chemicals and food & beverage.
To meet the challenge of integrating data from multiple systems across multiple plants—and also overcome the difficulty of recruiting and maintaining technical support teams at each site—many companies maintain a centralized, corporate Engineering and IT team. This team needs access to data to monitor, maintain, and optimize assets to meet their enterprise-wide goals.
To achieve this level of performance analysis, the corporate group defines uniform models and sends them to the distributed plants, which can then install them in an edge-native Industrial DataOps hub.
Engineers map their local data points to the standard models as systems are modified or added. If a new plant is acquired, data can be easily mapped to the models as well. As a result, the company avoids the downtime caused by traditional/legacy methods or rip-and-replace projects.
Now, operational users are able to populate data models and make connections without writing a single line of code, and data scientists receive uniform, high-quality data. Analytics cycle time accelerates, and enterprise-level digital transformation gains the momentum it has previously lacked.
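The corporate-model workflow described above can be sketched in a few lines. The schema format, attribute names, and tag paths below are hypothetical; the sketch shows only the core pattern, where headquarters publishes one standard model and each plant supplies just a local tag mapping, which can be checked for completeness before it goes live.

```python
# Hypothetical corporate-standard model: attribute name -> expected type.
STANDARD_MODEL = {"reel_speed_mpm": float, "moisture_pct": float}

def validate_mapping(model: dict, site_mapping: dict) -> list:
    """Return the model attributes a site mapping fails to cover."""
    return [attr for attr in model if attr not in site_mapping]

# Plant A maps its local historian tags to the corporate model.
plant_a = {"reel_speed_mpm": "PM1/REEL_SPD", "moisture_pct": "PM1/MOIST"}
# A newly acquired plant has not yet mapped its moisture sensor.
plant_b = {"reel_speed_mpm": "LINE3.SPEED"}

print(validate_mapping(STANDARD_MODEL, plant_a))  # []
print(validate_mapping(STANDARD_MODEL, plant_b))  # ['moisture_pct']
```

Because only the mapping changes from site to site, a new acquisition is onboarded by filling in one table rather than rewriting integrations.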
3. “I need to distribute industrial data to multiple business systems.”
Manufacturing companies need data to flow not just vertically from real-time systems to the front office, but across facilities and enterprises. These systems include SCADA, MES, ERP, laboratory/quality systems, asset/maintenance systems, cyber-threat monitoring systems, custom databases, dashboards, spreadsheet applications, and of course the IIoT infrastructure that enables analytics, machine learning, and AI investigations.
For decades, integration has been achieved through APIs and custom scripting/coding from application to application, instead of to a unified environment through which all data sources flow. This approach to APIs buries the code inside applications, making integrations hard to maintain. Inevitable changes to products, automation, and business systems can “break” integrations, resulting in undetected bad or missing data for weeks or even months.
Industrial DataOps prevents such breakdowns from occurring because integrations no longer hide in custom code between applications; they are all maintained through a solution that provides a common abstraction layer. With systems connected through a single integration hub, OT gains an agile environment to proactively manage and distribute data.
Now companies have a faster, easier, and more robust way to establish and maintain their many integrations with a solution that provides data visibility, maintenance, documentation, governance, and security.
4. “I need to securely provide customers with data from my machines.”
Machine builders face the continual challenge of reducing the time and cost of developing, integrating, and maintaining their machinery. Agility and flexibility are key to preventing project time and cost overruns. This is especially critical when they must integrate their systems with upstream/downstream assets such as conveyors, robots, drives, and other assets (perhaps including the customer’s HMI/SCADA system) at the customer’s facility. This integration traditionally involved customizing machine code or ladder logic programming to support a customer’s systems or telling the customer to “figure it out”.
Now, with Industrial DataOps, the vendor can standardize their programming and perform the required customizations through information modeling in the hub. When the customer needs “these five data points to go to the SCADA system, these 10 to my MES, and these 7 to the Cloud for corporate,” they are easily able to define the models and route the information.
Beyond the customer site, Industrial DataOps connects machine builders with their manufacturing customers’ hubs and, in turn, their HMI/SCADA, MES and other systems. The ability to provide remote monitoring, diagnostic, OEE, and predictive maintenance services at any scale brings added value to the supplier-customer relationship.
When information is in a standard form that’s easy to maintain, all parties gain broader integration, whether at a single site, multiple sites, or remotely through communications with corporate organizations or external partners.
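A rough Python sketch of the routing scenario quoted above follows. The destination names, data-point names, and routing-table shape are assumptions made for illustration; in a real hub this configuration lives in the tool, not in code. The idea is that one declarative table decides which modeled points fan out to SCADA, MES, or the cloud.

```python
from collections import defaultdict

# Hypothetical routing table: destination system -> modeled data points.
# Mirrors the customer's "5 to SCADA, 10 to MES, 7 to the Cloud" request
# in miniature; names are illustrative assumptions.
ROUTES = {
    "SCADA": ["spindle_speed", "cycle_time", "alarm_state"],
    "MES":   ["part_count", "scrap_count"],
    "CLOUD": ["cycle_time", "energy_kwh"],
}

def fan_out(payload: dict) -> dict:
    """Split one machine payload into per-destination messages."""
    outbound = defaultdict(dict)
    for system, points in ROUTES.items():
        for point in points:
            if point in payload:
                outbound[system][point] = payload[point]
    return dict(outbound)

reading = {"spindle_speed": 1200, "cycle_time": 41.7,
           "part_count": 854, "energy_kwh": 3.2}
messages = fan_out(reading)
print(sorted(messages["CLOUD"]))  # ['cycle_time', 'energy_kwh']
```

Because routing is declared in one place rather than buried in each machine's ladder logic, redirecting a data point to a new system is a configuration change, not a reprogramming effort.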
Industrial DataOps provides an abstraction layer for independent management of information streams that frees organizations from relying on any one vendor’s platform. Just as importantly, it allows OT to make changes to assets, controls logic, and systems without “breaking” integrations or interrupting the existing operational infrastructure.
Wrap Up
These four use cases illustrate a new way of thinking about integrating data connections, models, and flows at the machine, line, plant, and enterprise level using Industrial DataOps.
To learn how this approach can serve your specific organization, please watch this short video introducing Industrial DataOps and the HighByte Intelligence Hub or request a live demo.
Get started today!
Join the free trial program to get hands-on access to all the features and functionality within HighByte Intelligence Hub and start testing the software in your unique environment.