Oct 09 | On-Demand
In this session from DataOps Day Boston, David Garrison shares how National Grid modernized its data pipeline architecture with HighByte Intelligence Hub and Snowflake, moving from batch to real-time data processing.
He walks through the company’s evolution from legacy ETL processes, which were limited to hourly updates and 60,000 assets, to a scalable streaming pipeline capable of processing data from hundreds of thousands of devices with less than 15 minutes of latency. David highlights the business and technical lessons learned throughout the project, including managing upstream bottlenecks, implementing change data capture (CDC), and aligning business units around a shared data strategy.