Added support for Model Context Protocol (MCP). The MCP Server can be enabled under the REST Data Server settings. The server exposes Pipelines as tools for AI Agents.
Added support for managing Intelligence Hub projects with Git. This includes the ability to back up a project to a Git repository manually and automatically, as well as the ability to deploy an Intelligence Hub that pulls its configuration from one or more Git repositories.
Added OpenTelemetry (OTel) support for publishing JVM metrics, event logs, and Pipeline and Connection statistics.
Added the ability to use LLMs to create Instances and contextualize data. Users can now manually create and map a single Instance and then use it as an example for an LLM to find other similar Instances in an OPC UA server or other address space.
Added LLM connectors, including OpenAI, Azure OpenAI, Google Gemini, and Amazon Bedrock. These can be used with cloud and locally hosted LLMs.
Added new connector to Databricks with the ability to write files to Databricks Volumes.
Added new connector to TimescaleDB, including the ability to create and write to hypertables and standard relational tables.
Added Amazon S3 Inputs for reading and listing files in S3.
Added Input support for the Apache Kafka Connector.
Added Output support for the Snowflake SQL Connector.
Added Change Data Capture (CDC) support for the Oracle Database Connector.
Enhanced Snowflake Streaming outputs to support updating tables.
Added new feature to send Pipeline errors to another Pipeline for generic error handling. This allows for building a common error handling pipeline to perform custom logic, like opening an internal support ticket.
Added support for setting the HTTP response status code of a Callable Pipeline when invoked via the REST Data Server or MCP Server.
Added support to the Amazon S3 Connector to specify custom URLs for connecting to S3-compatible services like MinIO.
Added support for configuring a custom Proxy Endpoint in the Amazon S3 Connector.
Added Connection Keep-alive settings for SQL connections.
Added a new Read stage in Pipelines to support reading from a single source and outputting the result of the read. The prior Read stage has been changed to a Merge Read.
Added a persist flag to the Pipeline On Change stage to retain state between restarts.
Changed parameters for Inputs, Instances, and Pipelines to be explicitly defined with data types and default values. Parameters are now separate from template settings and are passed as a JSON object when used in a reference; for example, Connection.opc.tag(address=xyz) is now Connection.opc.tag({"address":"xyz"}). When referencing a source that accepts parameters, the UI now shows the parameters and allows users to easily set them.
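As a rough illustration of the syntax change (the helper below is hypothetical and not a product API), the legacy key=value parameter form maps onto the new JSON-object form like this:

```python
import json

def to_json_params(legacy: str) -> str:
    # Hypothetical helper (not part of the product): converts a legacy
    # key=value parameter list into the 4.2 JSON-object form.
    pairs = [p.split("=", 1) for p in legacy.split(",") if p]
    return json.dumps({k.strip(): v.strip() for k, v in pairs})

# 4.1 reference: Connection.opc.tag(address=xyz)
# 4.2 reference: Connection.opc.tag({"address": "xyz"})
print(to_json_params("address=xyz"))  # {"address": "xyz"}
```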
Changed Callable Pipelines from being an option on the Pipeline to the new API Trigger and Callable Trigger. The API Trigger exposes the Pipeline to the REST Data Server and MCP Server. The Callable Trigger allows the Pipeline to be called internally from a Namespace or Sub-Pipeline Stage. Both triggers can also define parameters.
Added support for uploading certificates and private keys in multiple formats, including PEM (file and text-based) and PKCS12, with additional support for encrypted PKCS8 keys.
Added new command-line option to set and rotate the password used to encrypt the PKCS12 file.
Added the ability to drag & drop sources on existing Namespace nodes to update the node reference.
Added the ability to reorder Namespace nodes.
Updated the Pipeline Replay UI to allow for side-by-side event comparisons like Pipeline Debug.
Pipeline stage errors now persist, allowing administrators to easily view, track, and dismiss pipeline and stage errors without searching logs.
Added connection and output level statistics for completedWrites and writePerSecond.
Added support for limiting the size of the PI System and Aspen IP.21 agent log files. By default, logs are limited to three 100 MB files.
Enhanced PI Point Read and Point Browse inputs to support retrieval of additional point metadata.
Enhanced the Pipeline Filter Stage to allow for building complex filters like starts with, ends with, and contains.
Added support for reading Float and Double data types using the Oracle Database Connector.
Added support for file inputs to read an absolute path.
Added support for passing parameters to OPC UA numeric identifiers.
Added support for using system and environment variables to configure the port field in OPC UA, SQL, Modbus, MQTT, and Sparkplug connections.
Fixes:
Fixed an issue with the Sparkplug 'Include Properties' feature not returning properties.
Fixed an issue where a Namespace with multiple nodes that reference the same source would result in null values on read.
Fixed an issue in the UNS Client that allowed publishing to topics that contained invalid characters.
Fixed an issue where failed bulk deletes of models would show “Invalid Name” instead of the model’s name.
Fixed an issue during Instance creation that would prevent saving if you tried to navigate back/forward while a default attribute was set to invalid JSON.
Fixed a rare issue where bulk deleting users could result in the removal of the root user.
Improved the error message when SQL output writes were attempted and failed due to the attribute names in the payload not matching any columns in the table definition.
Improved the error message when attempting to write an array of simple values to a SQL output. Simple values (e.g., Integers) are not supported.
Adjusted Postgres column data types. The following changes were made: int8 and uint8 are now int2, uint16 is now int4, uint32 is now int8, uint64 is now numeric, and real32 is now float32.
Added support for case sensitivity in Oracle tables.
Improved the error message for output reference resolution errors (e.g., topic: {{this.topic}}) when the reference cannot be replaced. The error now includes the reference name.
Improved the error message when attempting to import a project with an invalid file format.
Improved the error message in cases where a connection setting uses a reference (e.g., {{System.OPCPort}}) that is invalid or does not exist. The error now includes the name of the missing secret or variable.
Fixed an issue that would show a stale connection status after a connection was saved.
Fixed an issue where connections would lose references to system secrets during synchronization.
Changed the “waitTime” statistic to no longer include the execution time of the Pipeline.
Added validation to prevent the creation of a root Namespace Node named “HighByte” due to a conflict with the internal namespace.
Breaking Changes:
If a project is using a System Secret or Variable in a Connection, and the setting is hidden in the UI (i.e., the feature the setting is under is disabled), the Connection will still try to resolve the reference when the Connection is used. If the reference is invalid (i.e., the System Secret does not exist), the Connection will fail with an error. To work around this, clear the reference in the UI. In version 4.1 and older, the reference would be ignored.
In versions 4.1 and earlier, leading or trailing spaces in parameter values were automatically trimmed unless the value was quoted. For example, {{Connection.opc.tag(id= 4)}} used in an address like plant.plc.group{{this.id}}.tag would become plant.plc.group4.tag. Starting in version 4.2, spaces are preserved. This change can cause unexpected behavior or read errors if parameter values include unintended spaces. This can also be caused by template expansion that includes rogue spaces. For example, a template id=1,2,3, 4,5 would result in the parameter " 4". To prevent issues, check and clean up parameter values and templates to remove rogue spaces. If you encounter a problem and can’t resolve it, please contact support.
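The behavior change above can be sketched in a few lines. This is an illustration only, not product code; trim=True mimics the 4.1 behavior (unquoted values auto-trimmed) and trim=False mimics 4.2 (spaces preserved):

```python
def resolve_address(template: str, value: str, trim: bool) -> str:
    # Substitute a parameter value into an address template.
    # trim=True  -> 4.1: leading/trailing spaces are stripped.
    # trim=False -> 4.2: spaces are preserved as-is.
    return template.replace("{{this.id}}", value.strip() if trim else value)

template = "plant.plc.group{{this.id}}.tag"
print(resolve_address(template, " 4", trim=True))   # plant.plc.group4.tag
print(resolve_address(template, " 4", trim=False))  # plant.plc.group 4.tag
```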
If a Pipeline manually builds parameter keys and values (e.g., stage.setMetadata("myParams", "param1=1,param2=2")) and passes these to a source using a Read Stage (e.g., {{Connection.opc.tag({{event.metadata.myParams}})}}), this will need to be manually refactored. The source must be re-added to the Pipeline Merge Read Stage and the parameters must be mapped using the new method, where param1=event.metadata.param1 and param2=event.metadata.param2.
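The shape of this refactor, sketched with a hypothetical helper (not a product API): a packed parameter string is split into individual entries that are then mapped one-by-one in the Merge Read Stage.

```python
def split_params(packed: str) -> dict:
    # Hypothetical helper: turn a packed "param1=1,param2=2" string into
    # individual entries, each mapped separately in 4.2
    # (e.g., param1 = event.metadata.param1).
    return dict(p.split("=", 1) for p in packed.split(",") if p)

print(split_params("param1=1,param2=2"))  # {'param1': '1', 'param2': '2'}
```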
Security Patch Updates:
UI/Frontend
CVE-2025-43864: Defect that could lead to cache poisoning and impact application availability.
CVE-2025-27789: Defect that would generate inefficient code when compiling certain regular expressions with capture groups.
CVE-2025-27152: Requests vulnerable to possible SSRF and credential leakage when using an absolute URL.
REST Client and S3 Table Connections
CVE-2025-27820: Bug in PSL validation logic that could impact domain checks, cookie management, and host name verification.
Parquet Connector and S3 Tables
CVE-2025-30065 & CVE-2025-46762: Schema parsing bugs in parquet-avro module that could allow bad actors to execute arbitrary code.
REST Data Server and Configuration API
CVE-2025-48734: Improper Access Control vulnerability in Apache Commons.
Kafka Connection
CVE-2025-27817: Addresses bug that could cause Kafka clients to expose arbitrary data from disk.