Release Notes HighByte Intelligence Hub Version 4.2 Beta
Caution: The HighByte Intelligence Hub version 4.2 beta is intended to demonstrate new features, gather feedback, and adjust those features for the release. It is not intended for production use, and we provide no guarantee that work done in the beta will be carried over to the release.
New Features:
  • Added support for an embedded Model Context Protocol (MCP) Server, configurable under the REST Data Server settings. The server exposes Pipelines that use the 'API Trigger' as callable tools for AI Agents (an illustrative client sketch follows this list).
  • Added support for retrieving the defined parameters of an API Callable Pipeline via the REST Data Server. These definitions help users and external agents understand the requirements for interacting with a specific pipeline (an illustrative request follows this list).
  • Added support for backing up Intelligence Hub configuration to Git. The intelligencehub-deployment.json file stored in Git contains all configuration used by the hub. This feature is enabled in Settings. Added command line export to manually create intelligencehub-deployment.json.
  • Added support for defining Git repositories in the deployment settings files. Deployment fragments can reference multiple different repositories in order to build deployment configurations from one or more files stored in Git.
  • Added a new start-up option that uses the deployment file specified in an environment variable. This deployment settings file can be used to generate hub configuration from several JSON files stored on the local file system or in Git repositories.
  • Added an AWS Bedrock connector.
  • Added an OpenAI connector.
  • Added support to OpenAI and Bedrock connectors for setting the endpoint, allowing customers to host LLMs on local or custom endpoints.
  • Added support for reading files from S3.
  • Added support for filtering the S3 List inputs by update time.
  • Added functionality to list the files in an S3 bucket to support iterative searching and conditional processing. Results return the associated name, type, and metadata (an illustrative boto3 sketch follows this list).
  • Added Input support to the Kafka connector.
  • Added Output support for the Snowflake SQL connector.
  • Added Change Data Capture (CDC) support for Oracle SQL.
  • Added Keep Alive options for SQL connections.
  • Enhanced Snowflake Streaming outputs to support the ability to update tables.
  • Added a new TimescaleDB connector, including the ability to create and write to hypertables and standard relational tables (an illustrative SQL sketch follows this list).
  • Added OpenTelemetry support for publishing JVM metrics.
  • Added typed parameters to Inputs and Instances. Input and Instance templates will be migrated to rely on these new parameters where applicable.
  • Added new Reference field that supports drag and drop to simplify passing parameters to parameterized references. This is available in Instance attributes with “Reference” type expressions and the new Read stage. Existing instance attribute “Reference” types will be migrated to the new reference format.
  • Added support for uploading certificates and private keys in multiple formats, including PEM (file and text-based) and PKCS12, with additional support for encrypted PKCS8 keys (an illustrative sketch of these formats follows this list).
  • Added new command-line option to set and rotate the password used to encrypt the PKCS12 file.
  • Added support for uploading certificate chains alongside private keys, enabling customers to validate entire certificate chains rather than just the root CA.
  • Added reference drag and drop support to Namespaces for updating existing nodes.
  • Added the ability to reorder Namespace nodes.
  • Added side-by-side before/after event value diff to Pipeline Replay mode.
  • Pipeline stage errors now persist, allowing administrators to easily view, track, and dismiss pipeline and stage errors without searching logs.
  • Added a new Read stage to read each reference independently and process it through the pipeline, improving the experience around reusing pipelines for multiple inputs and instances. Renamed existing read stage to “Merge Read” and migrated existing projects to use Merge Read.
  • Added a new Error Handler feature to Pipelines: trigger and stage failures are sent to a specified try/catch pipeline. Each error message sent to the global try/catch pipeline contains the stage type that errored, so users can build special handling behavior for specific stage types. For example, a try/catch pipeline can persist values that Write stages failed to write out.
  • Added persist flag to On Change stage to remember on change state between restarts.
  • Added support for limiting the size of the PI System and AspenTech IP.21 agent log files. By default, logging is capped at three 100 MB files.
  • Enhanced PI point read functionality to support retrieval of additional metadata. This allows customers to access extended metadata beyond the default set when needed.
  • Enhanced the Filter stage to allow building complex filters and filtering beyond the top level of objects. Added four new filter types: Starts With, Ends With, Contains, and REGEX (an illustrative sketch follows this list).
  • Added support for reading FLOAT and DOUBLE_PRECISION data types using the Oracle SQL connector.
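
The embedded MCP server can be exercised with any MCP-capable client. Below is a minimal Python sketch of the JSON-RPC 2.0 exchange the protocol defines; the URL, port, tool name, and arguments are all assumptions for illustration, and a real MCP session also begins with an 'initialize' handshake, omitted here for brevity.

    import requests

    # Hypothetical endpoint; the actual path served by the Intelligence Hub's
    # embedded MCP server is not specified in these notes.
    MCP_URL = "https://hub.example.com:8885/mcp"

    def rpc(method, params, id_=1):
        # One MCP JSON-RPC 2.0 request/response round trip over HTTP.
        resp = requests.post(
            MCP_URL,
            json={"jsonrpc": "2.0", "id": id_, "method": method, "params": params},
            headers={"Accept": "application/json, text/event-stream"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # List the exposed tools -- one per pipeline configured with the API Trigger.
    print(rpc("tools/list", {}))

    # Call a tool; "LinePerformance" is a made-up pipeline name.
    print(rpc("tools/call", {"name": "LinePerformance", "arguments": {"line": "A"}}))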
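For the parameter-definition feature, the sketch below shows how a client might retrieve an API Callable Pipeline's parameters; the base URL, route, credentials, and response shape are assumptions, not documented REST Data Server routes.

    import requests

    # Base URL and route are illustrative placeholders.
    BASE = "https://hub.example.com:8885/data/v1"

    resp = requests.get(
        f"{BASE}/pipelines/LinePerformance/parameters",
        auth=("hubuser", "password"),
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed shape: a list of {"name": ..., "type": ..., "required": ...} objects.
    for param in resp.json():
        print(param["name"], param.get("type"), param.get("required"))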
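The S3 listing and update-time filtering behavior can be sketched stand-alone with boto3; the bucket, prefix, and cutoff below are placeholders, and this is a conceptual equivalent rather than the connector's implementation.

    from datetime import datetime, timezone

    import boto3

    s3 = boto3.client("s3")
    cutoff = datetime(2025, 1, 1, tzinfo=timezone.utc)  # placeholder cutoff

    # Enumerate objects page by page and keep only recently updated files.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="plant-data", Prefix="line-a/"):
        for obj in page.get("Contents", []):
            if obj["LastModified"] >= cutoff:  # filter by update time
                print(obj["Key"], obj["Size"], obj["LastModified"])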
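The TimescaleDB connector's hypertable support corresponds to plain SQL that can also be issued directly; the sketch below uses psycopg2 with placeholder connection, table, and column names. create_hypertable() is TimescaleDB's own function, not a Hub API.

    import psycopg2

    # Placeholder connection string and schema.
    conn = psycopg2.connect("dbname=metrics user=hub password=secret host=tsdb")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS sensor_data (
                time  TIMESTAMPTZ NOT NULL,
                tag   TEXT        NOT NULL,
                value DOUBLE PRECISION
            );
        """)
        # Convert the relational table into a time-partitioned hypertable.
        cur.execute(
            "SELECT create_hypertable('sensor_data', 'time', if_not_exists => TRUE);"
        )
        cur.execute(
            "INSERT INTO sensor_data (time, tag, value) VALUES (now(), %s, %s);",
            ("line-a.temperature", 72.4),
        )
    conn.close()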
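The certificate formats named above (PEM, PKCS12, encrypted PKCS8, and chains alongside private keys) can be illustrated with Python's 'cryptography' package; file names and passwords are placeholders, and this shows the formats themselves rather than the Hub's upload mechanism.

    from cryptography.hazmat.primitives.serialization import (
        load_pem_private_key,
        pkcs12,
    )

    # PEM private key; for an encrypted PKCS8 PEM, supply the passphrase
    # (use password=None for an unencrypted key).
    with open("client-key.pem", "rb") as f:
        key = load_pem_private_key(f.read(), password=b"changeit")

    # PKCS12 bundle: private key, leaf certificate, and the remaining chain.
    with open("client.p12", "rb") as f:
        key, cert, chain = pkcs12.load_key_and_certificates(f.read(), b"changeit")

    # Validating the entire chain (not just the root CA) means every
    # intermediate certificate in 'chain' is checked as well.
    print(cert.subject, len(chain), "additional certificate(s) in the chain")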
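The four new Filter stage types behave like the standard string and regex predicates sketched below; the event payload and field names are made up, and the point is that the match can now target values below the top level of the object.

    import re

    # Made-up event payload; "machine" is nested below the top level.
    event = {"machine": {"name": "Press-12", "status": "running"}}
    name = event["machine"]["name"]

    print(name.startswith("Press"))             # Starts With
    print(name.endswith("-12"))                 # Ends With
    print("ess" in name)                        # Contains
    print(bool(re.match(r"Press-\d+$", name)))  # REGEX
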
Fixes:
  • Fixed an issue in the UNS Client that allowed publishing to topics that contained invalid characters.
  • Fixed an issue where failed bulk deletes of models would show “Invalid Name” instead of the model’s name.
  • Fixed an issue in the instance creation stepper when navigating back while an attribute default was set to invalid JSON.
  • Fixed a rare issue where bulk deleting users could result in the removal of the root user.
  • Fixed an issue where SQL output writes were attempted even if the incoming payload didn't match any columns in the table definition.
  • Fixed an issue with the Sparkplug 'Include Properties' feature not working.
  • Fixed the error message generated when attempting to write an array of simple values to SQL outputs.
  • Adjusted Postgres column data types.
  • Added support for case sensitivity in Oracle tables.
  • Improved the error message for reference resolution errors to include the reference that could not be resolved.
  • Improved the error message when attempting to import a project with an invalid file format.
  • Improved the error message when the reference cannot be resolved during connection initialization to include the name of the missing system secret or variable.

Security Patch Updates:

Web Application

  • CVE-2025-43864: Defect that could lead to cache poisoning and impact application availability.
  • CVE-2025-27789: Defect that would generate inefficient code when compiling certain regular expressions with capture groups.
  • CVE-2025-27152: Requests were vulnerable to possible SSRF and credential leakage when using an absolute URL.

REST Client and S3 Table Connections

  • CVE-2025-27820: Bug in PSL validation logic that could impact domain checks, cookie management, and host name verification.

Parquet Connector and S3 Tables

  • CVE-2025-30065 & CVE-2025-46762: Schema parsing bugs in the parquet-avro module that could allow bad actors to execute arbitrary code.