ZBrain
The ZBrain piece in ZBrain Flow provides direct access to ZBrain's advanced AI capabilities and knowledge management features. This component lets you leverage AI models, search knowledge bases, query applications, and run automated agents within your workflows. With the ZBrain integration, you can enhance your automations with intelligent decision-making, natural language processing, information retrieval, and autonomous task execution, bringing AI-powered capabilities to every part of your workflow.
How to Use the ZBrain Piece in ZBrain Flow?
Step 1: Select ZBrain as Your Connection
Click on the '+' button in the Flow and search for ZBrain.
Select ZBrain.
Decide on the action you need, then select it. ZBrain Flow provides several options:
Knowledge Base Search – Search and retrieve information from your knowledge bases.
Query App – Send queries to your ZBrain applications.
Ask AI Model – Directly interact with AI models.
Run Agent – Execute autonomous agents to perform complex tasks.
App Previous Conversations – Retrieve query sessions and conversation history for a specific app within ZBrain.
Update Knowledge Base – Modify the details of an existing knowledge base in ZBrain.
Update App – Edit key properties of an existing ZBrain app.
How to Search a Knowledge Base?
Step 1: Configure API Connection
If you haven't connected ZBrain yet, click on the ‘API Key’ field, then select ‘Create connection.’
In the popup window that appears:
Enter a ‘Connection Name’ to identify this connection.
Paste your ‘API Key’ from your ZBrain account (found in Settings > My Account).
Click ‘Save’ to create the connection.
If already connected, you'll see your ZBrain account name with a ‘Reconnect’ option if needed.
Step 2: Select Knowledge Base
From the ‘Knowledge Bases’ dropdown, select which knowledge base you want to search. You can choose from all knowledge bases available in your ZBrain account.
Step 3: Enter Search Query
In the ‘Query’ field, enter the search terms or questions to look for in your knowledge base. This can be a direct question, keywords, or a specific phrase.
Step 4: Set Maximum Size
Use the ‘Max Size (tokens)’ field to limit the amount of content returned. The default is 2000 tokens.
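The four steps above can be sketched programmatically as a single request description. This is a minimal illustration only: the endpoint URL, auth header, and field names below are assumptions for the sketch, not the documented ZBrain API.

```python
# Hypothetical sketch only: the URL, header, and field names are
# illustrative assumptions, not the documented ZBrain API.
import json

def build_kb_search_request(api_key, knowledge_base_id, query, max_tokens=2000):
    """Bundle the values from Steps 1-4 into one request description."""
    return {
        "url": "https://api.zbrain.example/v1/knowledge-bases/search",  # assumed
        "headers": {"Authorization": f"Bearer {api_key}"},              # assumed
        "body": {
            "knowledgeBaseId": knowledge_base_id,  # Step 2: chosen knowledge base
            "query": query,                        # Step 3: question or keywords
            "maxTokens": max_tokens,               # Step 4: defaults to 2000
        },
    }

request = build_kb_search_request("zb-api-key", "kb-42", "What is the refund policy?")
print(json.dumps(request["body"], sort_keys=True))
```

Note how the token limit from Step 4 defaults to 2000 when not supplied, matching the field's default in the Flow editor.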
How to Query ZBrain Apps?
Step 1: Configure API Connection
To connect your ZBrain account, refer to Step 1 in the "How to Search a Knowledge Base?" section.
Step 2: Select App
From the ‘Apps’ dropdown, select which ZBrain application you want to query. This displays all apps available in your ZBrain account.
Step 3: Enter Query
In the ‘Query’ field, enter the question or prompt you want to send to the selected app. Frame your query according to what the app is designed to answer.
Step 4: Specify Conversation ID (Optional)
If you want to maintain context from previous interactions, enter a ‘Conversation ID’. Leave this blank for a new conversation without prior context.
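The optional conversation ID in Step 4 is what chains queries into one session. A small sketch of that behavior, with field names as illustrative assumptions rather than the ZBrain schema:

```python
# Hypothetical sketch: field names are assumptions for illustration.
def build_app_query(app_id, query, conversation_id=None):
    """Step 4: include a conversation ID only when continuing a session."""
    body = {"appId": app_id, "query": query}
    if conversation_id:  # blank/None -> a new conversation, no prior context
        body["conversationId"] = conversation_id
    return body

first = build_app_query("app-1", "Summarize last week's tickets")
follow_up = build_app_query("app-1", "Which of those are still open?",
                            conversation_id="conv-123")
print("conversationId" in first, "conversationId" in follow_up)  # prints: False True
```

The first query omits the ID and starts fresh; the follow-up passes one so the app can resolve "those" against the earlier exchange.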
How to Ask an AI Model?
Step 1: Configure API Connection
To connect your ZBrain account, refer to Step 1 in the "How to Search a Knowledge Base?" section.
Step 2: Select AI Model
From the ‘Model’ dropdown, choose which AI model you want to interact with. Different models may have different capabilities and specialties.
ZBrain supports a wide range of powerful LLMs to cater to diverse enterprise needs. Below is the complete list:
OpenAI Models
GPT-3.5 Series
Model
Key functionalities
GPT-3.5 Turbo
General-purpose conversational AI
Good balance of performance and cost
4K token context window
Suitable for chatbots, content generation, and simple queries
GPT-3.5 Turbo 16K
Extended 16K token context window
Same capabilities as GPT-3.5 Turbo
Better for processing longer documents
GPT-3.5 Turbo 1106
November 2023 update with improved instruction following
Better JSON mode
Reduced hallucinations
16K context window
GPT-3.5 Turbo 0125
January 2024 update
Enhanced reasoning abilities
Improved instruction following
16K context window
GPT-4 Series
Model
Key functionalities
GPT-4
Advanced reasoning capabilities
Strong performance on complex tasks
Better at understanding nuance
8K token context window
Multi-modal capabilities (with vision)
GPT-4 0125 Preview
January 2024 update
Improved reasoning and instruction following
Improved factuality
128K token context window
GPT-4 1106 Preview
November 2023 update
Better instruction following
Improved JSON mode
128K token context window
GPT-4 0613
June 2023 update
Reduced hallucinations
Better system message understanding
8K token context window
GPT-4 Turbo
More cost-effective than standard GPT-4
Faster response times
Knowledge updated through April 2023
128K token context window
GPT-4 Turbo Preview
Preview version with the latest improvements
Experimental features
128K token context window
GPT-4o
Optimized version with near GPT-4 performance
Significantly faster response times
Cost-effective
128K token context window
Multi-modal capabilities
GPT-4o Mini
Smaller, more efficient version of GPT-4o
Balance of performance and cost
128K token context window
ChatGPT-4o Latest
Consumer-facing implementation of GPT-4o
Optimized for conversational use cases
Includes the latest model updates
GPT-4.1 Series
Model
Key functionalities
GPT-4.1
Next-generation full-size model
Advanced reasoning and coding abilities
Improved factuality and reduced hallucinations
128K context window
GPT-4.1 Mini
Smaller version of GPT-4.1
Better performance-to-cost ratio
Suitable for most enterprise applications
128K context window
GPT-4.1 Nano
Highly efficient, compact model
Fast inference speeds
Good for deployment in resource-constrained environments
32K context window
GPT-4.5 Series
Model
Key functionalities
GPT-4.5 Preview
Experimental preview of next-generation capabilities
Advanced reasoning and planning
Enhanced creative abilities
256K context window
Anthropic Models
Model
Key functionalities
Claude 3 Haiku
Fastest and most compact Claude 3 model
Efficient for high-volume applications
Good balance of speed and intelligence
200K token context window
Claude 3.5 Sonnet
Mid-range model with advanced capabilities
Strong reasoning and instruction-following
Excellent document analysis and summarization
200K token context window
Meta Models
Model
Key functionalities
Llama 3-8B-Instruct
Open-weight 8 billion parameter model
Good performance for model size
8K context window
Suitable for deployment on edge devices
Llama 3-70B-Instruct
Large 70 billion parameter model
Strong performance across tasks
8K context window
Good for complex reasoning tasks
Meta/Llama 3-2-3B-Instruct-v1.0
Compact 3B parameter model
Optimized for efficiency
Good performance for size
Suitable for mobile and edge applications
Google Models (Gemini)
Model
Key functionalities
Gemini 1.5 Pro
Advanced multimodal reasoning
Strong performance across text, code, and vision tasks
1 million token context window
Excellent for complex multi-step tasks
Gemini 1.5 Flash
Faster, more efficient version of Gemini 1.5
Good performance-to-cost ratio
1 million token context window
Gemini 2.0 Flash
The latest generation efficient model
Improved reasoning and instruction following
Enhanced multimodal capabilities
1 million token context window
Gemini 2.0 Flash 001
Updated version of Gemini 2.0 Flash
Improved performance and reliability
1 million token context window
Gemini 2.0 Flash Exp
Experimental version with the latest features
Advanced capabilities being tested
1 million token context window
Gemini 2.5 Pro Preview 03-25
Preview of next-generation capabilities
Enhanced reasoning and planning
Superior multimodal understanding
2 million token context window
Gemini 2.5 Pro Exp 03-25 Free
Experimental free version
Similar capabilities to the preview version
2 million token context window
Mistral AI Models
Model
Key functionalities
Mistral Large
High-performance generalist model
Excellent reasoning capabilities
Strong at following complex instructions
32K token context window
Mistral Large 2411
November 2024 update
Improved factuality and reasoning
Enhanced instruction following
32K token context window
Pixtral Large 2411
Multimodal version with vision capabilities
Strong image understanding and reasoning
November 2024 update
32K token context window
OpenAI O Series
Model
Key functionalities
O1
Flagship model with strong reasoning
Excellent at complex problem-solving
Superior instruction following
128K token context window
O1 Preview
Preview version with the latest capabilities
Experimental features
128K token context window
O1 Mini
Smaller, more efficient version
Good balance of performance and cost
32K token context window
O3 Mini
Next generation compact model
Advanced capabilities in a smaller package
Improved reasoning and problem-solving
32K token context window
Step 3: Set System Instructions
In the ‘System Instructions’ field, define the AI's behavior and context. The default is "You are a helpful assistant," but you can customize this for specific roles.
Step 4: Enter Your Prompt
In the ‘Prompt’ field, enter the question, instruction, or content for the AI to respond to.
Step 5: Add Images (Optional)
If your model supports image analysis, you can add ‘Image URLs’ by clicking "Add Item".
Step 6: Adjust Model Parameters
Temperature: Control randomness (0-2). Lower values for more deterministic responses.
Maximum Tokens: Set the length limit for the generated response.
Response format: Choose between "Text" or other available formats.
Top P: Adjust nucleus sampling parameter (alternative to temperature).
Frequency penalty: Control repetition of phrases (-2 to 2).
Presence penalty: Control topic diversity (-2 to 2).
Messages: Set the number of message exchanges to include.
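Before a run, it can help to sanity-check the parameter values against the ranges documented above. The sketch below is a hypothetical pre-flight validator, not part of ZBrain itself:

```python
# Hypothetical sketch: validates the documented parameter ranges before a call.
def validate_model_params(temperature=1.0, top_p=1.0,
                          frequency_penalty=0.0, presence_penalty=0.0,
                          max_tokens=512):
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if not 0 <= top_p <= 1:
        raise ValueError("top_p is a probability mass between 0 and 1")
    if not -2 <= frequency_penalty <= 2:
        raise ValueError("frequency_penalty must be between -2 and 2")
    if not -2 <= presence_penalty <= 2:
        raise ValueError("presence_penalty must be between -2 and 2")
    if max_tokens < 1:
        raise ValueError("max_tokens must be positive")
    return {"temperature": temperature, "top_p": top_p,
            "frequency_penalty": frequency_penalty,
            "presence_penalty": presence_penalty, "max_tokens": max_tokens}

# A low temperature biases the model toward deterministic completions.
params = validate_model_params(temperature=0.2, max_tokens=256)
print(params["temperature"])  # prints: 0.2
```

As a rule of thumb, adjust either Temperature or Top P for a given use case rather than both at once.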
How to Run an Agent?
Step 1: Configure API Connection
To connect your ZBrain account, refer to Step 1 in the "How to Search a Knowledge Base?" section.
Step 2: Select Agent
From the ‘Agents’ dropdown, select which ZBrain agent you want to execute. This displays all agents available in your ZBrain account.
Step 3: Specify URL (Optional)
If your agent needs to interact with a specific web resource, enter the URL. Leave blank if not required for your agent's operation.
Step 4: Provide Input
In the ‘Input’ field, enter any data or parameters the agent needs to perform its task. The format will depend on what your specific agent expects.
How to Fetch Previous Conversations of an App?
Step 1: Configure API Connection
Enter the API key of an existing ZBrain connection. If you have not created one, refer to Step 1 in the "How to Search a Knowledge Base?" section for instructions.
Step 2: Select App
Choose the desired app from the dropdown. This will load the relevant app data from your ZBrain account.
Step 3: Select Session
Select the session you would like to view. This displays previous query sessions associated with the selected app.
Step 4: Enter Limit
Specify the number of recent conversations you want to retrieve by entering a limit. This helps narrow down the results based on your requirement.
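The limit in Step 4 effectively keeps only the most recent entries of the session's history. A small sketch of that selection; the record shape below is an illustrative assumption:

```python
# Hypothetical sketch: how a Step 4 limit narrows a session's history.
# The conversation record shape is an illustrative assumption.
def latest_conversations(conversations, limit):
    """Return the most recent `limit` entries, newest first."""
    ordered = sorted(conversations, key=lambda c: c["timestamp"], reverse=True)
    return ordered[:limit]

history = [
    {"id": "c1", "timestamp": 1},
    {"id": "c2", "timestamp": 3},
    {"id": "c3", "timestamp": 2},
]
print([c["id"] for c in latest_conversations(history, 2)])  # prints: ['c2', 'c3']
```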
How to Update a Knowledge Base?
Step 1: Configure API Connection
Enter the API key of an existing ZBrain connection. If you don’t have one, refer to Step 1 in the "How to Search a Knowledge Base?" section to create a new connection.
Step 2: Select Knowledge Base
From the dropdown, choose the specific knowledge base you want to update.
Step 3: Enter Title
Provide the new title for the knowledge base entry you are updating. This will help identify the entry within the knowledge base.
Step 4: Enter Content
In the content field, input the new or updated content for the knowledge base entry. This will overwrite any existing content associated with your selected knowledge base.
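Because Step 4 overwrites rather than appends, the update is a replacement of the entry's content. A minimal sketch of that semantics; the field names are illustrative assumptions, not the ZBrain schema:

```python
# Hypothetical sketch of the overwrite behaviour described in Step 4.
def update_knowledge_base(kb, title, content):
    """Field names here are illustrative assumptions, not the ZBrain schema."""
    kb = dict(kb)            # work on a copy, leave the original untouched
    kb["title"] = title      # Step 3: new title
    kb["content"] = content  # Step 4: replaces the existing content outright
    return kb

before = {"id": "kb-7", "title": "Old title", "content": "Old content"}
after = update_knowledge_base(before, "Refund policy", "Refunds within 30 days.")
print(after["content"])  # prints: Refunds within 30 days.
```

If you need to preserve the old content, retrieve it first (for example via Knowledge Base Search) and merge it into the new text before updating.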
How to Update an App?
Step 1: Configure API Connection
Enter the API key of an existing ZBrain connection. If you don’t have one, refer to Step 1 in the "How to Search a Knowledge Base?" section to create a new connection.
Step 2: Select App
Choose the app you want to update from the available list in your ZBrain account.
Step 3: Enter App Name
Provide the new name for your app or modify the existing name if needed.
Step 4: Enter Description
Update the app description to reflect any changes or provide additional details.
Step 5: Select the Model
Choose the model that your app will utilize. Select from the available options based on your app’s requirements.
Step 6: Enter the Temperature
Input the temperature value for the app's model settings. The temperature controls the randomness of the model's responses (higher values make the output more random).
Step 7: Enter the Context Max Token
Set the maximum token limit for the app’s input context. This defines the amount of data the model can consider while generating responses.
Step 8: Enter the Response Max Token
Specify the maximum token limit for the app's output response. This determines the length of the model's generated output.
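The context limit (Step 7) and response limit (Step 8) share the model's overall window, so it is worth checking their sum before saving. A hypothetical sanity check; the 128000 total below is an assumed model window for illustration, not a ZBrain constant:

```python
# Hypothetical sketch: sanity check for the token limits set in Steps 7-8.
# The 128000 default is an assumed model window, not a ZBrain constant.
def check_token_budget(context_max_tokens, response_max_tokens, model_window=128000):
    """The input context and the generated response share one model window."""
    total = context_max_tokens + response_max_tokens
    if total > model_window:
        raise ValueError(
            f"context + response tokens ({total}) exceed the model window"
        )
    return total

print(check_token_budget(100000, 4000))  # prints: 104000
```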