
HighByte Case Study: Moving from Batches to Streaming

Oct 09 | On-Demand


About this event

In this session from DataOps Day Boston, David Garrison shares how National Grid modernized its data pipeline architecture using HighByte Intelligence Hub and Snowflake to move from batch to real-time data processing.

He walks through the company’s evolution from legacy ETL processes—limited to hourly updates and 60,000 assets—to a scalable, streaming pipeline capable of processing data from hundreds of thousands of devices with less than 15-minute latency. David highlights both the business and technical lessons learned throughout the project, including managing upstream bottlenecks, implementing change data capture (CDC), and aligning business units around a shared data strategy.

Key takeaways include:
  • How streaming data with HighByte and Snowflake reduced latency and improved scalability
  • The business impact of moving from batch ETL to real-time data for forecasting and operations
  • Lessons learned in addressing upstream system constraints and driving organizational change
  • The importance of early planning for contextualization, CDC, and multi-team collaboration
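Change data capture is central to the shift David describes. As a rough illustration only, and not National Grid's actual implementation, the sketch below shows one common way to wire up CDC on the Snowflake side: a stream on a raw landing table plus a scheduled task that merges only changed rows into a curated table. The connection details and the DEVICE_READINGS table, stream, and task names are all hypothetical.

    # Minimal CDC sketch using Snowflake streams and tasks via the Python connector.
    # All credentials, table, stream, and task names below are hypothetical.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",        # hypothetical account identifier
        user="pipeline_user",
        password="********",
        warehouse="ANALYTICS_WH",
        database="GRID_DATA",
        schema="RAW",
    )
    cur = conn.cursor()

    # 1. A stream records inserts, updates, and deletes on the landing table since
    #    it was last consumed -- Snowflake's built-in change data capture mechanism.
    cur.execute("""
        CREATE OR REPLACE STREAM DEVICE_READINGS_STREAM
            ON TABLE RAW.DEVICE_READINGS
    """)

    # 2. A task wakes up every 5 minutes but only runs when the stream actually
    #    holds change rows, then merges them into the curated table.
    cur.execute("""
        CREATE OR REPLACE TASK MERGE_DEVICE_READINGS
            WAREHOUSE = ANALYTICS_WH
            SCHEDULE = '5 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('RAW.DEVICE_READINGS_STREAM')
        AS
        MERGE INTO CURATED.DEVICE_READINGS t
        USING RAW.DEVICE_READINGS_STREAM s
            ON t.DEVICE_ID = s.DEVICE_ID AND t.READING_TS = s.READING_TS
        WHEN MATCHED THEN UPDATE SET t.READING_VALUE = s.READING_VALUE
        WHEN NOT MATCHED THEN
            INSERT (DEVICE_ID, READING_TS, READING_VALUE)
            VALUES (s.DEVICE_ID, s.READING_TS, s.READING_VALUE)
    """)

    # 3. Tasks are created suspended; resume to start the schedule.
    cur.execute("ALTER TASK MERGE_DEVICE_READINGS RESUME")

    cur.close()
    conn.close()

In a setup along these lines, HighByte Intelligence Hub would land device data continuously into the raw table, while the stream and task keep the curated table current within the task's polling interval.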