Version 2.3.1 | May 30, 2025
ZBrain v2.3.1 delivers significant enhancements across agent orchestration, knowledge base configuration, and evaluation monitoring. This release introduces agent crew, a multi-agent orchestration feature, knowledge graph support for enriched Knowledge Base creation, and real-time evaluation monitoring for greater insight into agent performance. Additional enhancements include streamlined data migration, a more immersive experience for use case discovery, and expanded admin configuration capabilities for company-specific setup within the Center of Intelligence (CoI). Together, these updates strengthen ZBrain’s commitment to enabling secure, scalable, and intelligent AI adoption across the enterprise.
ZBrain introduces agent crew, enabling users to build, configure, and manage a group of agents that work collaboratively to perform multi-step tasks.
Highlights:
Framework support:
ZBrain offers flexibility in multi-agent orchestration through two advanced frameworks:
LangGraph: A stateful, graph-based orchestration engine ideal for building structured, multi-agent workflows with clear interaction logic and memory-aware paths.
Mastra: A lightweight orchestration framework optimized for reactive, event-driven agent interactions, suitable for high-speed execution and dynamic task routing.
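To make the orchestration idea concrete, here is a minimal, framework-agnostic sketch of stateful, graph-based agent coordination in plain Python. It illustrates the pattern LangGraph-style engines use (agents as nodes over a shared state, edges defining interaction flow); it is not ZBrain's or LangGraph's actual implementation, and all names are illustrative.

```python
# Minimal sketch of graph-based agent orchestration (illustrative only).
# Each "agent" is a function that reads and updates a shared state dict;
# edges define which agent runs next.
from typing import Callable, Dict

State = Dict[str, str]

class AgentGraph:
    def __init__(self) -> None:
        self.nodes: Dict[str, Callable[[State], State]] = {}
        self.edges: Dict[str, str] = {}

    def add_node(self, name: str, fn: Callable[[State], State]) -> None:
        self.nodes[name] = fn

    def add_edge(self, src: str, dst: str) -> None:
        self.edges[src] = dst

    def run(self, start: str, state: State) -> State:
        node = start
        while node is not None:
            state = self.nodes[node](state)  # agent transforms shared state
            node = self.edges.get(node)      # follow edge to the next agent
        return state

# Two cooperating agents with defined roles: a researcher drafts, a reviewer approves.
def researcher(state: State) -> State:
    state["draft"] = f"Findings on: {state['task']}"
    return state

def reviewer(state: State) -> State:
    state["final"] = state["draft"] + " (reviewed)"
    return state

graph = AgentGraph()
graph.add_node("researcher", researcher)
graph.add_node("reviewer", reviewer)
graph.add_edge("researcher", "reviewer")

result = graph.run("researcher", {"task": "Q2 churn analysis"})
```

A production engine adds persistence, branching, and memory-aware paths on top of this basic node-and-edge loop.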
Model compatibility:
Compatible with a wide range of leading large language models, including: OpenAI, Google Gemini, Claude AI, Groq, Meta AI, and custom models
Agent management:
Add agents from an existing agent library or create new agents with defined roles and responsibilities
Define instructions per agent and assign behavioral logic
Visually connect agents to design the interaction flow
Custom tool integration:
Attach tools using an integrated code editor
Manage tools under "My tools" and associate them with agents
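As a rough sketch of what a custom tool might look like, the snippet below pairs a plain function with the metadata an agent runtime could use to decide when to invoke it. The name/description/parameters schema is an assumption for illustration, not ZBrain's actual tool API.

```python
# Hypothetical custom tool: a function plus descriptive metadata.
# The TOOL_SPEC schema is illustrative, not ZBrain's tool contract.
def currency_convert(amount: float, rate: float) -> float:
    """Convert an amount using a fixed exchange rate."""
    return round(amount * rate, 2)

TOOL_SPEC = {
    "name": "currency_convert",
    "description": "Convert an amount using a fixed exchange rate.",
    "parameters": {"amount": "number", "rate": "number"},
    "fn": currency_convert,
}

# An agent runtime would look the tool up by name and call it with parsed arguments.
converted = TOOL_SPEC["fn"](100.0, 0.92)
```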
MCP server configuration:
Add and configure MCP servers for backend processing
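For orientation, the sketch below shows a server entry in the shape commonly used by Model Context Protocol clients (a command plus arguments for launching the server as a subprocess). The server name, package, and path are hypothetical, and ZBrain's exact configuration schema may differ.

```python
# Illustrative MCP server configuration following the common MCP client
# convention; names and paths are hypothetical, not ZBrain's schema.
mcp_servers = {
    "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data/shared"],
    },
}
```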
Agent crew dashboard:
Upload and track input data
View output responses and execution sequences
Access performance metrics, logs, and agent activity traceability in a centralized view
This update enhances multi-agent development, making it more scalable, traceable, and production-ready across various enterprise use cases.
ZBrain now allows users to simulate flow behavior with sample input data, enhancing testing precision during agent and flow design.
Key capabilities:
When creating or editing any output step in the flow, simply switch on “Use input sample for testing” to reveal additional sample data options.
Sample data sources
Text – Manually paste or type raw text to simulate user input.
File – Select a file to extract text content automatically into the flow’s test context.
URL – Provide a web link (URL) so that the flow can fetch and parse content from that location.
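The three sources above can be thought of as resolving to a single test string, as in this hedged sketch (the function name and behavior are assumptions for illustration; the URL branch is stubbed to stay offline rather than showing ZBrain's actual fetching logic):

```python
# Sketch: resolve a sample-data source (text / file / url) to one test string.
# Illustrative only; not ZBrain's API.
import pathlib
import tempfile

def resolve_sample(source: str, value: str) -> str:
    if source == "text":
        return value                            # raw text used as-is
    if source == "file":
        return pathlib.Path(value).read_text()  # extract file contents
    if source == "url":
        # A real flow would fetch and parse the page; stubbed here.
        return f"<parsed text fetched from {value}>"
    raise ValueError(f"unknown source: {source}")

# Simulate the "File" source with a temporary text file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("invoice total: $420")
    path = f.name

file_sample = resolve_sample("file", path)
text_sample = resolve_sample("text", "hello world")
```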
Dedicated “Generate sample data” tab in Catch webhook
Within this tab, users can:
View and edit the raw sample content (JSON format)
Download the generated Output.json for offline inspection or reuse
Reference the sample data in subsequent flow steps by connecting to the trigger.input.content fields
Automatic content extraction
File upload: When a file is uploaded (PDF, image, etc.), the system extracts textual content and populates the content field in Output.json.
URL fetching: For URLs, the flow retrieves and parses the page’s text automatically, storing it under the same content key.
Use in downstream steps
Sample data is made available throughout the flow. For example:
In the “Catch webhook” step, users can map content directly into the processing logic.
Any node that consumes input can reference this preloaded sample without requiring an external data source during development.
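The mapping described above can be sketched as follows. The payload shape mirrors the Output.json structure described earlier (a content key under the trigger input); the dotted-path helper is an assumption for illustration, not ZBrain's internal mechanism.

```python
# Sketch: a downstream step referencing generated sample data.
# Payload shape and helper are illustrative assumptions.
import json

output_json = json.loads("""{
  "trigger": {"input": {"content": "Extracted text from the uploaded PDF."}}
}""")

def get_path(payload: dict, dotted: str):
    """Resolve a dotted reference like 'trigger.input.content'."""
    node = payload
    for key in dotted.split("."):
        node = node[key]
    return node

# A downstream node maps the sample content into its processing logic.
mapped = get_path(output_json, "trigger.input.content")
```

During development, any consuming node can read this preloaded sample instead of calling an external data source.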
Users can now build knowledge bases (KBs) using two distinct retrieval methodologies, offering more flexibility in document retrieval and knowledge extraction.
Vector embeddings: Used to perform similarity-based retrieval, enabling fast and scalable search across large datasets using dense vector representations. Well-suited for high-volume, unstructured data environments requiring flexible, semantic search.
Knowledge graph: Used to represent relationships between data nodes, allowing context-aware retrieval that captures connections and hierarchies within the knowledge base. Ideal for use cases where the structure and interrelations of information are critical.
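Similarity-based retrieval reduces to comparing dense vectors, as in this toy sketch (3-dimensional hand-written vectors stand in for model-generated embeddings; the document names are invented):

```python
# Toy illustration of similarity-based retrieval over dense vectors.
# Real KBs use high-dimensional, model-generated embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

kb = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
}
query = [0.8, 0.2, 0.0]

# Return the document whose embedding is most similar to the query.
best = max(kb, key=lambda doc: cosine(query, kb[doc]))
```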
Retrieval strategy configuration: Choose from multiple retrieval modes:
Local - Employs targeted keyword retrieval to deliver specific, context-dependent information about particular entities.
Global - Provides comprehensive relationship-based information to understand connections and broader conceptual frameworks.
Hybrid - Integrates both entity-specific and relationship-based retrieval approaches to deliver detailed information with broader contextual understanding.
Mix - Integrates parallel knowledge graph and vector search capabilities with temporal metadata for comprehensive multi-dimensional analysis.
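Conceptually, the modes above differ in which index they consult. This hedged sketch contrasts entity-specific (Local), relationship-based (Global), and combined (Hybrid) retrieval over a tiny in-memory KB; it illustrates the distinction only and is not ZBrain's retrieval implementation.

```python
# Conceptual contrast of retrieval modes over a tiny in-memory KB
# (illustrative only; data and function names are invented).
docs = {
    "d1": "Acme acquired Beta Corp in 2024.",
    "d2": "Beta Corp builds warehouse robots.",
}
# Knowledge-graph edges: (subject, relation, object)
edges = [("Acme", "acquired", "Beta Corp"), ("Beta Corp", "builds", "robots")]

def local_retrieve(entity):
    """Local: targeted keyword retrieval about a specific entity."""
    return [d for d, text in docs.items() if entity in text]

def global_retrieve(entity):
    """Global: relationship-based retrieval over the knowledge graph."""
    return [e for e in edges if entity in (e[0], e[2])]

def hybrid_retrieve(entity):
    """Hybrid: combine entity-specific documents with graph relationships."""
    return {"docs": local_retrieve(entity), "relations": global_retrieve(entity)}

result = hybrid_retrieve("Beta Corp")
```

The Mix mode extends this further by running graph and vector search in parallel and layering in temporal metadata.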
Embedding model selection: Select from supported embedding models:
text-embedding-3-large
text-embedding-ada-002
text-embedding-3-small
Knowledge visualization: Visualize the KB structure using the knowledge graph in the document review step. For knowledge graph-based KBs, node sources are displayed as unique IDs rather than original document names.
The monitoring module now supports detailed, session-level visibility into agent execution and performance.
Capabilities:
Session-level tracking of agent executions
Prompt and response inspection
Logging of execution time and I/O flow
Structured logs for issue identification and auditability
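As an illustration of the kind of structured, session-level record such monitoring produces, here is a hedged sketch; the field names (session_id, execution_ms, etc.) are assumptions, not ZBrain's actual log schema.

```python
# Sketch of a structured, session-level agent execution log entry.
# Field names are illustrative assumptions, not ZBrain's schema.
import json

def log_agent_execution(session_id, prompt, response, started, finished):
    record = {
        "session_id": session_id,
        "prompt": prompt,
        "response": response,
        "execution_ms": round((finished - started) * 1000, 1),
    }
    return json.dumps(record)  # one JSON line per execution, for audit trails

entry = log_agent_execution(
    "sess-42", "Summarize the report", "Summary: ...", started=0.0, finished=0.25
)
parsed = json.loads(entry)
```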
Supported file types for agent input: .DOCX, .TXT, .JSON, .PDF
A new admin-facing interface under “Configure your company” allows centralized setup and management of company metadata.
Includes:
Company name and profile details
Vertical and business function mappings
Use case priority
Goals & Objectives (G&O)
Integration: Company metadata is automatically linked when a new use case or opportunity is created and is sent with the first interaction.
Users can now upload files directly into the discovery report chat interface for enriched context and dynamic exploration.
Highlights:
File attachment icon embedded in chat input bar
File picker supports multiple uploads per session
Supported formats: .pdf, .docx, .txt, .xlsx, .png, .jpg
Max total size per request: 128 MB
ZBrain now supports in-chat rendering of diagrams and charts to enhance the understanding of use case logic and outcomes. These visuals are conditionally loaded based on the use case content and render cleanly without any distortion, ensuring a seamless and informative user experience.
Visual types supported:
HTML-based flowcharts
Apache ECharts visualizations
New capabilities allow users to address incomplete data imports by identifying missing links and uploading dependent files. This improves data continuity and simplifies onboarding for complex datasets.