Manage the form and delivery of modeled data to consuming applications using a simple graphical interface in the Intelligence Hub.
Pipelines provide the ability to better curate and optimize data payloads for specific applications using a simple but flexible graphical user interface.
Build out stages in a pipeline to buffer, transform, or format data flows to optimize delivery for consuming application and device nodes. Easily adapt and reuse Pipelines instead of maintaining disparate and overlapping data models.
Make modeled, contextualized data more accessible to more nodes.
When connections are established for an integration, they are created with a specific intent in mind, influencing how data is modeled and moved over time. The original intent might have been to simply consume source data and then move it as-is into a target system. Or perhaps the intent was to blend data with additional context or transform it into some logical structure. But as new target systems are introduced to consume this data, the project scope likely needs to evolve due to the unique requirements or limitations of how these target systems consume data.
Sometimes, it may be the model itself. Other times, it may be the presentation and delivery of the model.
In practice, context is introduced as source data traverses organizational hierarchy and systems of record. This data and context, which is governed by a model, needs to be consumable by a wide range of devices and applications. Not only can HighByte Intelligence Hub ingest and model data with context, but it can flexibly adapt its presentation and delivery for target systems. Pipelines retain the semantics of a model while transforming the presentation and delivery to the unique needs of the systems consuming it. To summarize, Pipelines make modeled, contextual data more accessible to more nodes.
Break Up Complex Payloads
Some target systems have limitations in how they process data. They may need to consume data in a curated form, or they may be incapable of parsing or filtering structured data. With Pipelines, Intelligence Hub can dynamically break up objects and arrays and discard unnecessary elements to facilitate easy consumption by application nodes. Instead of constructing integrations arbitrarily, Pipelines employ metadata to dynamically curate the presentation of data to the needs of the systems consuming it.
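As a rough illustration of what a breakup stage does, the JavaScript sketch below splits an array payload into individual records and keeps only the fields a target needs. This is a generic example for clarity, not the Intelligence Hub's actual stage API; the payload shape and field names are invented.

```javascript
// Illustrative sketch only; not the Intelligence Hub's actual stage API.
// Splits an array payload into individual records and drops elements the
// target system cannot use.
function breakupPayload(payload, keepFields) {
  return payload.records.map(record => {
    const out = {};
    for (const field of keepFields) {
      if (field in record) out[field] = record[field];
    }
    return out;
  });
}

// Example: a modeled payload carrying extra elements the target does not need.
const payload = {
  asset: "Press01",
  records: [
    { timestamp: "2024-01-01T00:00:00Z", temp: 71.2, raw: [0, 1, 2] },
    { timestamp: "2024-01-01T00:00:05Z", temp: 71.5, raw: [3, 4, 5] }
  ]
};

const messages = breakupPayload(payload, ["timestamp", "temp"]);
// Each element of messages is now a small record the target can consume directly.
```

In a real pipeline the fields to keep would be driven by metadata rather than hard-coded, which is what lets one pipeline serve many targets.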
Buffer Data
Beyond the presentation of data to target systems, one must also consider the delivery of data. Some systems consume records in batches. Others reside on constrained or variable-cost infrastructure that would benefit from ingesting data at a specific cadence. Pipelines in the Intelligence Hub can buffer the delivery of industrial data based on time or record count. This enables efficient consumption by target systems regardless of how source systems produce and transmit industrial data.
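The buffer-by-time-or-count behavior can be sketched in a few lines of JavaScript. This is a simplified model of the concept, not the Hub's buffer stage itself; the class name and thresholds are invented, and for brevity the time check runs only when a record arrives rather than on a timer.

```javascript
// Illustrative sketch only; not the Intelligence Hub's actual buffer stage.
// Collects records and flushes a batch when either a record-count threshold
// or a time window is reached, whichever comes first.
class RecordBuffer {
  constructor(maxCount, maxAgeMs, onFlush) {
    this.maxCount = maxCount;
    this.maxAgeMs = maxAgeMs;
    this.onFlush = onFlush;   // callback that delivers a batch to the target
    this.batch = [];
    this.openedAt = null;
  }
  push(record, nowMs = Date.now()) {
    if (this.batch.length === 0) this.openedAt = nowMs;
    this.batch.push(record);
    const full = this.batch.length >= this.maxCount;
    const aged = nowMs - this.openedAt >= this.maxAgeMs;
    if (full || aged) this.flush();
  }
  flush() {
    if (this.batch.length === 0) return;
    this.onFlush(this.batch);
    this.batch = [];
    this.openedAt = null;
  }
}

// Flush every 3 records, or after 5 seconds, whichever comes first.
const batches = [];
const buffer = new RecordBuffer(3, 5000, batch => batches.push(batch));
[1, 2, 3, 4].forEach(v => buffer.push({ value: v }));
// batches now holds one batch of three records; the fourth waits in the buffer.
```

The point of the pattern is that delivery cadence is decoupled from production cadence: sources can emit records one at a time while the target still receives efficiently sized batches.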
Publish Into a Single Payload
Some systems are unable to consume structured data. Instead, they consume data as “flat” lists of name-value pairs. With Pipelines, the Intelligence Hub can “flatten” data structures into a single payload for target systems while preserving model context within the topic or attribute names.
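A minimal JavaScript sketch of flattening, assuming a hypothetical model shape and a "/" separator for generated names; this illustrates the idea of preserving model context in attribute names, not the Hub's actual flatten stage.

```javascript
// Illustrative sketch only; not the Intelligence Hub's actual flatten stage.
// Collapses a nested model into a single level of name-value pairs,
// preserving the model hierarchy in the generated attribute names.
function flatten(obj, prefix = "") {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const name = prefix ? `${prefix}/${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(out, flatten(value, name));
    } else {
      out[name] = value;
    }
  }
  return out;
}

const model = { line: "A", press: { temp: 71.2, state: { running: true } } };
const flat = flatten(model);
// flat: { "line": "A", "press/temp": 71.2, "press/state/running": true }
```

Even though the structure is gone, a consumer can still tell that `press/state/running` belongs to the press's state, which is what "preserving model context" means here.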
Use HighByte Intelligence Hub to complete the following tasks:
✓ Transform and buffer modeled data for the unique data consumption needs of target systems
✓ Employ metadata and logic in sequenced stages to dynamically shape the presentation and delivery of data
✓ Compose custom transformation stages with JavaScript expressions to satisfy advanced use cases
✓ Manage and monitor data processing stages from the graphical Pipelines builder interface
Ready to see more?
Curious to learn more and see a live demo of HighByte Intelligence Hub? In this demo, we will show you how to deploy HighByte Intelligence Hub at the Edge to access, model, transform, and securely flow industrial data to and from your IT applications without writing or maintaining code.