How to leverage knowledge base(s) for app creation
After you have chosen knowledge base as your orchestration method in the App Details page, click the 'Next' button at the bottom right corner to proceed to the Configure Bot page, where you will connect and manage your knowledge sources.
1. Configure Bot
This section lets you refine your bot settings by integrating the chosen orchestration method and configuring key parameters to match your unique requirements. These customizations ensure the bot functions seamlessly within your workflow, delivering an efficient, personalized, and purpose-driven experience.
Connecting and managing knowledge base(s) in your app
Click the ‘Add’ button to choose one or multiple knowledge bases you wish to connect to your app


You can use the search function to quickly find specific knowledge bases from the list. After selecting the knowledge base(s), click ‘Add’ to connect them.
The connected knowledge base(s) will immediately appear in the table with their names. To enable the Advanced Reasoning feature, tick the checkbox and choose a schema from the available options for the connected knowledge base to conduct detailed data queries.

To remove a connected knowledge base, slide the bar to the right and click the trash icon to delete it.

Establishing system instructions
Define specific instructions or guidelines that the app must follow during user interactions.
Methods to add instructions
Manually – Type or edit the text directly in the System Instructions box.
Generate – Click ‘Generate’, provide a short prompt, and let the LLM create the instruction or provide a draft of the instruction that the LLM will refine.


Library – Click ‘Library’ to browse and import one of your pre-built prompt templates from ZBrain’s prompt library.
Click ‘Add from Prompt Library’, choose the desired prompt from the list, and then click ‘Use’ to apply it.

Check the 'Import Prompt Settings' box to automatically include the prompt text along with its associated settings, such as model type, temperature, top-p, and max token limits. This helps reduce manual setup, maintain consistency across applications, and streamline prompt reuse.

Writing guidelines (Applicable to all methods)
Be specific and detailed– Clearly define how the bot should behave, including any specific instructions or constraints it must follow.
Use natural, conversational language – Avoid unnecessary jargon unless the use case requires it.
Incorporate context awareness – Instruct the bot to refer to previous turns or stored data so replies remain relevant and personalized.
Customizing application settings and parameters
Users can modify the application settings to customize default configurations according to their preferences.
Accessing bot settings
Navigate to the bot settings to view current configurations, including the selected model, temperature setting for response variability, and maximum token limit.

Click ‘View all settings’ to access additional parameters and their preset settings:
Top P: Also known as nucleus sampling, controls response diversity by considering only the top probability mass tokens.
Presence penalty: Adjusts the likelihood of introducing new topics by discouraging repeated words.
Frequency penalty: Reduces the probability of repetitive phrases by penalizing frequent token occurrences.
Context max token: Defines the limit for contextual memory, determining how much past conversation is retained.
Response max token: Specifies the maximum token count for each AI-generated response.
Model: The specific AI model used for generating responses.
Guardrails: Safety constraints applied to AI-generated responses to ensure appropriate content.
Static input: Automatically appends predefined text to every prompt sent to the LLM. This input serves as consistent context or instruction, helping shape responses. Useful for maintaining tone, domain relevance, or specific behavior across interactions.
Rerank: Improves search result relevance by prioritizing the most relevant responses.
Follow-up conversation: Enables the model to recall past interactions within the same session.
Source: Allows the model to summarize documents as part of its responses.

Editing model parameters
Click on the pencil icon to edit settings.
Select the preferred model.
Adjust parameters using the provided controls:
Click ‘Load Presets’ and select Creative, Balanced, or Precise.
Load Presets provides predefined configuration settings optimized for different response styles:
Creative: Generates more imaginative and diverse responses.
Balanced: Maintains a mix of creativity and precision.
Precise: Focuses on accuracy and concise responses.

Use the slider to configure each parameter based on your preferred output behavior.
Refer to the definitions above for guidance on how each parameter affects responses.

Enable or disable features as needed:
Reranking model: Enhances the relevance of search results.
Follow-up conversation: Allows the model to remember previous conversations within a session.
Guardrails (Optional): Applies safety constraints to AI responses.
Static input (Optional): Adds static text for consistent context.
Source (Optional): Allows the model to summarize documents in responses.

Saving and reverting settings
Update settings: Click to save customized configurations for future use.
Default settings: Click to revert to default configurations.
Advanced manual configuration
Users can also make manual changes by clicking the ‘Edit’ button in the manual configure box.
This option is applicable only for advanced users.
Proceeding to the next step
Click ‘Next’ to move forward to the set appearance page.
Adding guardrails
Guardrails act as the app’s built-in protective layer, ensuring every interaction stays compliant, secure, and aligned with the organization's standards. Enabling them proactively helps prevent policy violations and jailbreak attempts before they ever reach production. To add a guardrail, follow the steps below:
Click ‘+ Add a guardrail parameter’.
A right-hand side panel titled Add Guardrail Parameter appears.

Toggle on one or both options:
Input Checking – screens every user prompt for policy violations (jailbreak detection, hate, self-harm, disallowed content) and blocks or sanitizes the request/response.
Jailbreak Detection – looks for prompt-injection patterns that try to override your system instructions and blocks or sanitizes the request/response.
Once guardrails are enabled, they appear in the main panel with a lock icon, indicating that they are active. You can reopen the panel at any time to disable or re-enable them.

Note: The ‘+ Add a guardrail parameter’ button remains avaiable so you can attach future guardrail types as they become available in ZBrain.
2. Set appearance

You can customize the visual and interactive aspects of your app by configuring the following:
Welcome message: Add a welcome message that users will see when they first access the app.
App name and description: Provide a name and a brief description of your app.

Sample questions: Add up to nine sample questions to guide users on how to interact with the bot. You can also generate these questions automatically.
Upload logo and app theme: Enhance your app’s branding by uploading a logo and selecting a theme.
Bot name and icons: Customize the bot's name and icons to match your app’s branding.
After clicking ‘Done,’ your app will be created, and you'll be directed to an overview page to manage various aspects of your application. Alternatively, click ‘I will do later’ to set the appearance at a later time.
Last updated