Version 2.3.9 | Aug 21, 2025

Overview

ZBrain Builder 2.3.9 offers enhanced monitoring, retrieval, and automation capabilities, providing greater transparency and efficiency. Users can now filter the Runs table by one or more flow names, track tokens, cost, and credits at both the step and run levels, and configure integration input sources, such as Salesforce, with a persistent object dropdown. The evaluation framework adds instant monitoring from retrieval history, manual test inputs, and automatic log refresh. Guardrails gain threshold-based input filtering with GuardAI, while RAG introduces Gemini Embedding 001 for improved retrieval accuracy. Agent crews now include HTTP and Deep Research tools for autonomous API operations and advanced research. Together, these updates boost control, precision, and operational agility across the ZBrain Builder platform.

ZBrain Builder 2.3.9 release overview

Flows & Pieces

  • Multi-select filter by Flow name in the Runs table
    What it delivers: Enables users to quickly isolate execution logs for one or more flows

  • Persistent Object dropdown for Salesforce in Agent Input Source setup, displaying integration-specific objects when available or a clear “No objects available” message when none are found
    What it delivers: Ensures seamless and consistent configuration and immediate visibility of the selected object

  • Tracks token usage and cost for each executed model within Agent Activity
    What it delivers: Provides granular visibility into resource consumption for each executed model step in a Flow

  • Tracks and displays token consumption and cost per run for all Flow executions, within the Runs and Logs tables
    What it delivers: Provides clear visibility into resource usage and the associated cost of each Flow execution

  • Tracks and displays the total credit cost of each ZBrain Flow run, along with step-wise credit usage for every piece in the execution
    What it delivers: Provides full transparency into resource consumption by showing the total and per-step credit costs in the run summary

Evaluation framework

  • Ability to monitor a specific retrieval test query directly from the History table in the Retrieval Testing step of the knowledge base
    What it delivers: Enables users to verify how the retrieved chunks behave under monitoring metrics and configurations

  • Adds a “Manually input test value” toggle in the Test panel of Evaluation Settings for flexible test execution
    What it delivers: Gives users flexible control over monitoring tests by enabling evaluations with either manually entered messages or live system outputs

  • Enables auto refresh of Monitor logs every 30 minutes
    What it delivers: Provides near-real-time visibility into monitoring activity

Guardrails

  • Enables configuration of the Input Checking guardrail with category-specific thresholds for unsafe content
    What it delivers: Enables apps to detect and block harmful inputs by adjusting filtering sensitivity via sliders

RAG

  • Adds Gemini Embedding 001 as a selectable embedding model when creating a Knowledge Base with the Vector Store option
    What it delivers: Enables users to leverage Gemini’s embedding strengths for more accurate and context-aware retrieval

Agents

  • Adds the HTTP and Deep Research tools to the default toolset available during Agent Crew creation
    What it delivers: Streamlines complex workflows by enabling agents to autonomously select and utilize tools at runtime, enhancing response depth, operational efficiency, and contextual accuracy

New features

Flows & Pieces

Filter Runs by Flow name

Users can now filter the Runs table by selecting one or more Flow names, making it easy to focus on the executions that matter for debugging, audits, and reporting. The Filter by Flow dropdown lists all available flows and supports multi-select: users can select multiple flows and deselect a flow by clicking its name again. Selected items appear as chips/tags above the table, and a count appears next to the filter label. The Runs table updates in real time to show only matching rows. Clicking outside the dropdown closes it and displays the number of selected flows as tags next to the filter label. Clear Filters removes all selections and restores the full run list.
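As an illustration, the multi-select behavior described above can be sketched as follows. The `Run` structure, field names, and sample data are hypothetical, not ZBrain's API; only the select/deselect and filtering logic mirrors the release note.

```python
from dataclasses import dataclass

@dataclass
class Run:
    flow_name: str   # illustrative fields, not ZBrain's schema
    status: str

def toggle_flow(selected: set, flow_name: str) -> set:
    """Clicking a flow name selects it; clicking it again deselects it."""
    return selected - {flow_name} if flow_name in selected else selected | {flow_name}

def filter_runs(runs, selected):
    """With nothing selected (Clear Filters), the full run list is shown."""
    if not selected:
        return list(runs)
    return [r for r in runs if r.flow_name in selected]
```

The empty-selection case models the Clear Filters action: an empty filter set restores every run rather than hiding them all.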

Navigation

Flows → Runs → Filter by Flow

Key outcomes

  • Enables faster triage by narrowing results to specific flows.

  • Reduces noise for audits and reviews.

  • Streamlines reporting without leaving the Runs screen.

  • Allows clear, easily interpretable context via selection chips/tags.

Per-activity token usage and cost tracking in Agent Activity

ZBrain Builder now introduces detailed tracking of token usage and cost for each executed model within the Agent Activity view. This enhancement provides users with transparent insights into the exact resource consumption for each executed model step, enabling informed optimization and budget control.

When viewing an Agent run, users can drill down into the Agent Activity screen and see the "Tokens Used" and "Cost ($)" metrics displayed directly beneath each model execution. These figures are dynamically calculated based on actual execution data and will adjust accordingly as actions are performed during the run.
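The roll-up behind those figures can be sketched roughly as below. The step data and per-1K-token prices are hypothetical, not ZBrain's actual accounting; the point is only that per-model cost derives from tokens consumed and the model's rate.

```python
# Hypothetical per-step data; model names and prices are made up.
steps = [
    {"model": "model-a", "tokens": 1250, "price_per_1k": 0.010},
    {"model": "model-b", "tokens": 4800, "price_per_1k": 0.0006},
]

def step_cost(step):
    # cost = tokens consumed / 1000 * price per 1K tokens
    return step["tokens"] / 1000 * step["price_per_1k"]

# The figures shown beneath each model execution in Agent Activity
for s in steps:
    print(f'{s["model"]}: Tokens Used = {s["tokens"]}, Cost ($) = {step_cost(s):.4f}')

run_cost = sum(step_cost(s) for s in steps)  # total for the run
```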

Navigation

Agents → Agent Dashboard → Select Specific Agent Run → Agent Activity → Select Model

Key outcomes

  • Pinpoints the exact cost contribution of each model, improving transparency.

  • Identifies high-cost steps and adjusts prompts, models, or execution strategy to reduce expenses.

  • Monitors spending in real time at the most detailed level, ensuring cost efficiency without compromising performance.

  • Uses precise cost and usage insights to select models or flow designs that maximize value.

Token consumption & cost tracking in Flow executions

ZBrain Builder introduces two new columns, Tokens Used and Cost, in the Runs table and the Logs table. This capability enables users to track the exact token usage and dollar cost of each execution, providing deeper visibility into the resource consumption of Flows.
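One use of the new columns is quick triage, as sketched below: sort the run rows by Cost so the most expensive executions surface first. The field names and values are illustrative, not ZBrain's export schema.

```python
# Hypothetical Runs-table rows with the new Tokens Used / Cost data
runs = [
    {"flow": "Invoice Sync",    "tokens_used": 900,   "cost": 0.012},
    {"flow": "Lead Enrichment", "tokens_used": 15000, "cost": 0.210},
    {"flow": "Daily Digest",    "tokens_used": 3400,  "cost": 0.045},
]

# Sort descending by cost to spot anomalies or excessive usage
by_cost = sorted(runs, key=lambda r: r["cost"], reverse=True)
most_expensive = by_cost[0]["flow"]
```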

Navigation path

  • For Runs table - Flows → Runs

  • For Logs table - Flows → Flow Name → Logs

Key outcomes

  • Helps identify high-cost flows or executions, allowing for optimization of prompts, steps, or models.

  • Sorting by tokens or cost enables quick detection of anomalies or excessive usage.

  • Surfacing tokens and cost directly in the Runs and Logs tables ensures ease of navigation and data interpretation.

Credit cost tracking for Flow executions

ZBrain Builder introduces comprehensive credit cost tracking for every Flow execution, enabling full transparency into resource consumption. Users can now view the total duration, total credit cost, token usage, fixed cost, and step-wise credit breakdown for each piece in a Flow run—whether executed as an agent or invoked flow. The credit and cost data are integrated into the Flow run and execution logs, providing consistent visibility into usage across the platform.
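The credit roll-up can be sketched as below: the run total is the fixed cost plus the sum of per-piece credit costs. Piece names and credit values here are made up for illustration, not ZBrain's pricing.

```python
# Hypothetical per-piece credit costs for one Flow run
pieces = [
    ("Webhook Trigger", 0),
    ("LLM Prompt",      12),
    ("Code Step",       2),
    ("Slack Notify",    1),
]
fixed_cost = 1  # illustrative fixed cost for the run

total_credits = fixed_cost + sum(credits for _, credits in pieces)

# Step-wise breakdown plus the total, as shown in the run summary
for name, credits in pieces:
    print(f"{name}: {credits} credits")
print(f"Total: {total_credits} credits")
```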

Navigation path

Flows → Build

Key outcomes

  • Identifies which steps or pieces are consuming the most credits, helping optimize flows for cost efficiency.

  • Shows clear cost separation when Flows are triggered within an agent, enabling more accurate billing and tracking.

  • Utilizes detailed consumption metrics to forecast expenses and allocate resources efficiently and accurately.

Evaluation framework

Monitor retrieval test queries from History table

ZBrain Builder introduces a ‘Monitor’ button within the History table of Retrieval Testing. It enables users to initiate monitoring for a specific test query directly from the history view, streamlining the process and reducing the number of steps required for navigation. By preloading the query and retrieved chunks into the Monitor Logs view, users can quickly configure and apply relevant metrics without leaving the context of their retrieval test results.

Navigation

Knowledge → Select Knowledge Base → Retrieval Testing

Key outcomes

  • Starts monitoring directly from the retrieval test results, eliminating the need to navigate to a separate monitor setup screen and saving time.

  • Provides context-aware monitoring by automatically loading query details and retrieved chunks.

  • Displays only applicable metrics for KB retrieval testing, reducing setup complexity.

  • Disables the Monitor button when no chunks are retrieved, preventing wasted setup effort.

Manual test input for event monitoring

ZBrain Builder introduces a Test panel within the Evaluation Settings of Event Monitoring, enabling users to choose between providing their own manual test input or using system-generated LLM outputs during evaluation tests. This enhancement offers greater flexibility in simulating test scenarios across agents, apps, and reasoning configurations. When the “Manually input test value” toggle is enabled, users can supply a custom reference message for comparison against the LLM output; when disabled, the system triggers the evaluation using only the system-generated LLM output and metrics. The behavior differs by entity type: agents require manual input by default, whereas apps and reasoning start with manual input disabled.
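The toggle logic described above can be sketched roughly as follows. The function names and request shape are hypothetical, not ZBrain's API; only the entity-type defaults and the manual-versus-system behavior mirror the release note.

```python
def manual_input_default(entity_type: str) -> bool:
    """Agents require manual input by default; apps and reasoning do not."""
    return entity_type == "agent"

def build_evaluation_request(entity_type, llm_output, manual_message=None):
    """With manual input on, the reference message is compared against the
    LLM output; with it off, only the system-generated output is evaluated."""
    use_manual = manual_message is not None or manual_input_default(entity_type)
    if use_manual and manual_message is None:
        raise ValueError("a manual test message is required for this entity type")
    request = {"output": llm_output}
    if use_manual:
        request["reference"] = manual_message
    return request
```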

Navigation path: Monitor → Select Event → Event Settings → Test

Key outcomes

  • Detects and addresses monitoring issues before deploying to production, ensuring improved quality assurance.

Automatic monitor logs refresh

ZBrain Builder introduces an automated refresh mechanism for monitoring logs, ensuring that users always view the most recent monitoring data without needing to reload the view manually. The system automatically updates logs every 30 minutes in both the global monitor logs and agent-specific monitor logs views. A screen loader animation appears during each refresh to indicate activity, and the manual refresh option remains available for on-demand updates.
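The cadence can be sketched with a re-arming timer, as below. `fetch_monitor_logs` is a stand-in for whatever pulls the latest logs; ZBrain performs the refresh in the background of the UI, so this only illustrates the 30-minute interval, not the actual implementation.

```python
import threading

REFRESH_INTERVAL_S = 30 * 60  # refresh every 30 minutes

def schedule_refresh(fetch_monitor_logs):
    """Run fetch_monitor_logs every 30 minutes until the timer is cancelled."""
    def tick():
        fetch_monitor_logs()                   # show loader, pull fresh logs
        schedule_refresh(fetch_monitor_logs)   # re-arm for the next cycle
    timer = threading.Timer(REFRESH_INTERVAL_S, tick)
    timer.daemon = True                        # don't block process exit
    timer.start()
    return timer
```

A manual refresh simply calls `fetch_monitor_logs` directly, which is why the on-demand option coexists with the timer.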

Navigation path:

Monitor

Monitor → Agent-specific entity → Monitor Logs

Key outcomes

  • Eliminates the risk of making decisions based on old log information.

  • Improves workflow continuity, as automatic updates occur in the background without disrupting ongoing interactions.

Guardrails

Customizable Input Checking guardrail

ZBrain Builder enables users to add and customize input validation rules for their applications, ensuring that harmful or policy-violating content is detected and managed before it reaches downstream processing. This input checking guardrail operates at the application input level, providing users with precise control over which types of unsafe content to block and at what sensitivity level.

When Input Checking is enabled under the Add Guardrails section, users can configure parameters for four predefined unsafe content categories: Dangerous Content, Harassment, Hate, and Sexual Explicit, each with adjustable thresholds. The configuration interface features intuitive sliders that represent the “Block None,” “Block Few,” and “Block Most” settings, allowing users to fine-tune the filter.
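The threshold mechanics can be sketched as below: each category's slider setting maps to a score cutoff, and an input is blocked when any category's unsafe-content score reaches its cutoff. The cutoff values and scores are illustrative, not GuardAI's actual scale.

```python
# Hypothetical cutoffs: "Block None" is set above any possible score
CUTOFFS = {"Block None": 1.01, "Block Few": 0.8, "Block Most": 0.4}

def is_blocked(scores, settings):
    """scores: category -> unsafe probability; settings: category -> slider level."""
    return any(scores.get(category, 0.0) >= CUTOFFS[level]
               for category, level in settings.items())

# One slider per predefined category, as in the Input Checking UI
settings = {
    "Dangerous Content": "Block Most",
    "Harassment":        "Block Few",
    "Hate":              "Block Most",
    "Sexual Explicit":   "Block None",
}
```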

Navigation path

Apps → Create New or Select existing app → Configure your Bot → Add Guardrails → Input Checking

Key outcomes

  • Provides granular content control through the ability to adjust filtering thresholds per category, enabling alignment with organizational content policies.

  • Improves safety and compliance by blocking unsafe inputs before they can trigger harmful or non-compliant responses.

  • Allows flexible moderation, with the option to target specific content types or apply a comprehensive filter across all categories.

Agents

HTTP and Deep Research tools added to the default toolset for Agent Crew creation

ZBrain Builder introduces the HTTP and Deep Research tools as part of the default tools available during Agent Crew creation. These tools, configured during crew setup, enable agents to autonomously perform API-driven actions and in-depth analytical research based on user queries, and are automatically invoked whenever relevant conditions are met, without requiring manual tool selection at runtime. Each HTTP request configuration supports a request name, a description explaining its purpose, HTTP method selection (GET, POST, PUT, DELETE), and fields for the URL, headers, and body. Configuration options also allow users to define the model, summary type, temperature, and other search parameters.
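An HTTP request definition with those fields might look like the sketch below. The keys, URL, and template placeholders are hypothetical, not ZBrain's actual configuration schema.

```python
# Illustrative HTTP-tool request definition; all values are made up
http_request = {
    "name": "Create CRM Lead",
    "description": "POSTs a new lead when the user asks to log a contact",
    "method": "POST",  # one of GET, POST, PUT, DELETE
    "url": "https://api.example.com/leads",
    "headers": {"Authorization": "Bearer <token>",
                "Content-Type": "application/json"},
    "body": {"name": "{{lead_name}}", "source": "agent-crew"},
}
```

The description field matters here: since agents invoke the tool autonomously, it is what tells the agent when the request is relevant.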

Navigation path

Agents → Create New → Create Agent Crew → Define Crew Structure → Agent Tools → Default Tools

Key outcomes

  • Provides enhanced autonomy to agents, enabling them to intelligently decide when and how to use HTTP calls or conduct in-depth research without user prompts.

  • Eliminates the need for external integrations for these functions, reducing dependency and latency.

  • Simplifies workflows: single-point configuration during crew creation ensures seamless integration into the agent’s reasoning loop.

RAG

Gemini Embedding 001 model for vector store knowledge bases

ZBrain Builder introduces Gemini Embedding 001 as a new embedding model option, available exclusively when creating knowledge bases with the vector store RAG definition. This enhancement expands model choice, allowing users to leverage Google Gemini’s advanced embedding capabilities for improved semantic understanding and context-aware retrieval.
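For context, the sketch below shows why the embedding model choice affects retrieval quality: the query and each chunk are embedded as vectors, and chunks are ranked by cosine similarity. The vectors here are made up for illustration and stand in for whichever model the knowledge base is configured with (for example, Gemini Embedding 001).

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up vectors standing in for embedding-model output
query_vec = [0.9, 0.1, 0.2]
chunk_vecs = {
    "refund policy": [0.88, 0.15, 0.25],
    "office hours":  [0.10, 0.90, 0.30],
}

# Retrieval ranks chunks by similarity to the query embedding
best_chunk = max(chunk_vecs, key=lambda c: cosine(query_vec, chunk_vecs[c]))
```

A model with stronger semantic understanding places related query and chunk texts closer together in this vector space, which is what makes retrieval more accurate.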

Navigation path

Knowledge → Knowledge Base → Data Refinement Tuning → Embedding Model drop-down

Key outcomes

  • Gemini’s embedding model enhances semantic matching, enabling the delivery of more contextually relevant and precise responses.

Improvements

Persistent Object dropdown in agent input source setup

This update enhances the agent configuration process by ensuring users have consistent and predictable information about integration objects when configuring integrations such as Salesforce. If objects are available, the dropdown lists all objects fetched from the integration, allowing the user to select one; the chosen object is then displayed in the Object section. Even if no objects are returned, the dropdown remains visible with a clear “No objects available” message, eliminating confusion and improving configuration clarity. This ensures that integration object selection is transparent and intuitive, reducing setup errors and keeping the user informed throughout the configuration process.
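The fallback behavior can be sketched as below: the control always renders, listing fetched objects when present and the placeholder message otherwise. The returned view-state dictionary is hypothetical, not ZBrain's UI model.

```python
def render_object_dropdown(fetched_objects):
    """The dropdown is always visible; an empty fetch shows a clear message."""
    if fetched_objects:
        return {"visible": True, "options": list(fetched_objects),
                "placeholder": None}
    return {"visible": True, "options": [],
            "placeholder": "No objects available"}
```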

Navigation

Agent → Agent Details → Add an Input Source → Search → Salesforce object

Key outcomes

  • Improves user confidence by ensuring the Object field is always visible, avoiding uncertainty about missing elements.

  • Saves setup time and reduces errors by providing a consistent interface that clearly indicates whether integration objects are available, eliminating unnecessary troubleshooting.

  • Enhances usability and clarity when working with integrations like Salesforce or similar data sources.
