How to create a prompt

Creating a prompt in ZBrain Prompt Manager is straightforward and lets you steer your AI applications' and agents' responses with precision. Follow the steps below to create and configure a prompt effectively:

Step 1: Initiate prompt creation

  • Click ‘Prompts’ and navigate to the right side of the Prompt Manager interface.

  • Click the ‘Create’ button to open the prompt configuration page and start creating your new prompt.

Step 2: Configure prompt details

In the prompt panel:

Provide a clear title

  • Enter a title: Give your prompt a clear, descriptive name. Click the ✏️ (pencil) icon to edit the title.

Select the provider and model

  • Choose the appropriate AI provider (e.g., OpenAI, Claude AI) from the dropdown menu.

  • Select the specific model you want to use (e.g., GPT-4, Claude 2).

  • Click the settings (⚙️) icon next to the model selection to configure parameters such as temperature, max tokens, and Top-P (see the sketch below).
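
For reference, these settings correspond to the standard sampling controls that most chat-model APIs expose. The snippet below is a minimal, illustrative sketch of such a configuration; the field names and values are assumptions for illustration, not ZBrain's internal schema.

```python
# Illustrative sampling configuration (assumed names/values, not ZBrain's schema).
model_settings = {
    "provider": "OpenAI",   # AI provider chosen from the dropdown
    "model": "gpt-4",       # specific model that will run the prompt
    "temperature": 0.2,     # lower = more deterministic, higher = more creative
    "max_tokens": 512,      # upper bound on the length of the generated response
    "top_p": 0.9,           # nucleus sampling: sample from the top 90% probability mass
}
```

Lower temperature and Top-P values generally make outputs more repeatable, which is often preferable when you plan to evaluate prompt outputs automatically.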

Step 3: Configure the prompt

Static prompt type: System (predefined)

System is the default, static prompt type. The system message defines instructions that guide the model's behavior and tone throughout the session.

To define the prompt, follow the steps below:

Add instructions

You have two options for entering the prompt instructions that will guide the model:

Option 1: Manual entry

  • Type the prompt instructions directly into the text field.

  • These instructions can include detailed tasks, specific formatting rules, tone guidelines, etc.

Option 2: Auto-generate using LLM

  • Click the ‘Generate’ button to automatically generate instructions using a large language model.

  • A dialog box titled ‘What would you want to update?’ will appear.

Within this box, you can:

  • Create a new prompt: Describe the task you want the AI app/agent to perform or the output you expect (e.g., “Summarize long documents in bullet points”).

  • Optimize an existing prompt: Paste an existing prompt you'd like to refine or improve using AI suggestions.

  • After entering your input or requirements, click ‘Create’ to generate the prompt.

  • Review and update the generated instructions as needed.

Define input variables

Variables allow you to dynamically insert values into prompts at runtime. This approach supports reusability and flexibility in prompt behavior.

Option 1: Add variables using the interface

  1. Click ‘+ Add’ to define variables.

  2. Select from the following predefined variable types:

    • App:

      • ID, Name, Description

    • Flow:

      • ID, Name

    • User:

      • ID, Name, Email

    • Time and date:

      • Current date and time, Day, Timezone

    • Session:

      • ID, query, context, feedback, input files

    • Knowledge base:

      • Choose from the knowledge base(s) you have previously created to use in this prompt

Option 2: Add dynamic placeholders in text

  • Use double curly brackets {{}} to add dynamic placeholders directly into your prompt.

  • These placeholders are populated with actual values at runtime.

Example:

"Summarize the following content: {{content}}"

  • The {{content}} variable will be replaced by the actual input content when the prompt is executed.

This enables the same prompt to be reused across different scenarios.

Note: The variables you define using {{}} will automatically appear in the right-side panel, where you can input the values before running the prompt.
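
To make the runtime behavior concrete, here is a minimal sketch of how double-curly-bracket placeholders are typically resolved. It is illustrative only; ZBrain performs this substitution for you when the prompt runs.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its runtime value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda match: str(variables.get(match.group(1), match.group(0))),
        template,
    )

template = "Summarize the following content: {{content}}"
print(render_prompt(template, {"content": "Quarterly revenue grew 12 percent year over year."}))
# -> Summarize the following content: Quarterly revenue grew 12 percent year over year.
```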

Add evaluators for your prompts

The right-hand panel includes an ‘Evaluators’ section. Evaluators help you assess the quality and performance of your prompt outputs by providing automated scoring and analysis.

  1. Click ‘+ Add Evaluator’ to access the evaluator selection menu

  2. Choose the appropriate evaluators based on your assessment needs

  3. Configure the selected evaluators according to your requirements

Types of evaluators

  1. LLM-based evaluators

These evaluators use large language models to assess content quality and accuracy:

Response relevancy

  • Measures how well the AI's response addresses the user's query

  • Evaluates whether the output stays on topic and provides pertinent information

Faithfulness

  • Assesses whether the response accurately reflects the source material or knowledge base

  2. Non-LLM metrics

These evaluators use traditional computational methods for objective measurement:

Health check

  • Determines whether an entity (e.g., app or agent) is operational

  • Verifies the ability to respond correctly to requests
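
As a rough illustration, a health check can be as simple as calling an endpoint and confirming it answers successfully within a timeout. The endpoint URL below is hypothetical; ZBrain runs its own checks against the app or agent you configure.

```python
import urllib.request

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except Exception:
        return False

# Hypothetical endpoint used only for illustration.
print(is_healthy("https://example.com/health"))
```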

Exact match

  • Determines if the output exactly matches the expected results

  • Ideal for scenarios requiring precise, predetermined responses

F1 score

  • Combines precision and recall to measure overall accuracy

  • Particularly useful for classification tasks or information extraction

  • Provides a balanced view of performance across different aspects
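
As a rough sketch, a token-level F1 score balances precision (how much of the output is correct) against recall (how much of the expected answer is covered). The implementation below is illustrative and may differ from ZBrain's exact scoring.

```python
from collections import Counter

def token_f1(expected: str, actual: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    expected_tokens = expected.lower().split()
    actual_tokens = actual.lower().split()
    overlap = sum((Counter(expected_tokens) & Counter(actual_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(actual_tokens)
    recall = overlap / len(expected_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Paris is the capital of France", "The capital of France is Paris"))  # 1.0
```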

Levenshtein similarity

  • Compares the expected and actual output to see how similar they are

  • Calculates the minimum number of character changes needed to transform one string into another

  • Helpful for assessing responses that should be similar but may have minor variations
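
To make this concrete, here is a minimal sketch of Levenshtein similarity: the edit distance is computed first, then normalized to a 0–1 score. The normalization used by ZBrain may differ.

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def levenshtein_similarity(expected: str, actual: str) -> float:
    """Normalize edit distance to a 0-1 similarity score."""
    longest = max(len(expected), len(actual)) or 1
    return 1 - levenshtein_distance(expected, actual) / longest

print(levenshtein_similarity("colour", "color"))  # ~0.83
```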

Rouge-L score

  • Evaluates the longest common subsequence between the reference and the generated text

  • Commonly used for summarization and text generation tasks

  • Measures structural similarity and content overlap
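
For intuition, ROUGE-L is built on the longest common subsequence (LCS) between the reference and the generated text, as in this rough sketch; production implementations likewise combine LCS-based precision and recall into an F-measure.

```python
def lcs_length(reference: list[str], candidate: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(candidate) + 1) for _ in range(len(reference) + 1)]
    for i, ref_tok in enumerate(reference, start=1):
        for j, cand_tok in enumerate(candidate, start=1):
            if ref_tok == cand_tok:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(reference: str, candidate: str) -> float:
    """ROUGE-L F-measure based on LCS precision and recall."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref), lcs / len(cand)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("the cat sat on the mat", "the cat lay on the mat"))  # ~0.83
```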

  3. LLM-as-judge evaluators

These evaluators use LLMs to assess subjective qualities of responses:

Creativity

  • Evaluates the originality and innovative aspects of responses

  • Assesses whether outputs demonstrate creative thinking

  • Valuable for content generation and brainstorming applications

Helpfulness

  • Measures how useful and actionable the response is for the user

  • Considers practical value and problem-solving effectiveness

  • Important for customer service and assistance applications

Clarity

  • Assesses the clarity, comprehensibility, and structure of the response

  • Evaluates readability and communication effectiveness

  • Essential for educational content and user-facing applications
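
To illustrate the idea, an LLM-as-judge evaluator typically sends the response to a model together with a scoring rubric. The rubric and scale below are hypothetical examples, not ZBrain's built-in prompts.

```python
# Hypothetical judge rubric; ZBrain's built-in evaluators use their own prompts.
JUDGE_PROMPT = """You are an impartial evaluator.
Rate the CLARITY of the response below on a scale from 1 (very unclear) to 5 (very clear).
Consider structure, readability, and how easy the response is to act on.

User query: {query}
Response: {response}

Return only the numeric score."""

def build_judge_prompt(query: str, response: str) -> str:
    """Fill the rubric with the query/response pair to be judged."""
    return JUDGE_PROMPT.format(query=query, response=response)

print(build_judge_prompt(
    "How do I reset my password?",
    "Go to Settings > Security, then click 'Reset password'.",
))
```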

Choosing evaluators

  • Multiple evaluators: Add as many evaluators as needed to assess your prompt performance comprehensively

  • Complementary metrics: Combine different types of evaluators (LLM-based, non-LLM, and LLM-as-judge) for comprehensive evaluation

  • Task-specific choice: Select evaluators that align with your specific use case and quality requirements

Step 4: Add roles

  • Click ‘+ Prompt’ to define the different parts of your prompt (such as System, User, or Assistant). Select the prompt type based on your use case (see the sketch after this list):

    • System: Sets the behavior, tone, and rules for the assistant. It provides high-level instructions or context that guide how the model should respond.

    • Assistant: Represents the model’s response to the user input. It generates answers based on the user prompt and the system instructions.

    • User: Represents the user's query, request, or input. It is the prompt the assistant is expected to respond to.
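
Conceptually, these parts combine into the familiar chat-message structure that most chat models accept. The example below is a generic illustration; the exact request format depends on the provider and model you selected in Step 2.

```python
# Generic chat-message structure; the exact schema depends on the selected provider.
messages = [
    {"role": "system",    "content": "You are a concise assistant. Answer in bullet points."},
    {"role": "user",      "content": "Summarize the following content: {{content}}"},
    {"role": "assistant", "content": "- Key point 1\n- Key point 2"},  # example model turn
]
```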

Step 5: Test the prompt

Testing is crucial to ensure the prompt works as intended and yields the desired outputs.

Enter a test query

  • Input sample data or a query that the AI should respond to. This query should represent real-world input that your AI will handle.

  • Click ‘Run Prompt’ to execute the configured prompt and generate output based on the provided input.

Review the response

  • Analyze the output.

  • View the evaluation summary showing how many evaluators passed (e.g., "1/2 passed")

  • Each evaluator will display its specific score with color-coded indicators:

    • Green: High scores indicating good performance

    • Orange/Yellow: Medium scores suggesting room for improvement

    • Red: Low scores indicating areas needing attention

  • If needed, refine:

    • Instructions

    • Model settings

    • Roles or variables

Iterate and improve

  • Test multiple scenarios or input variations to ensure the prompt works consistently.

  • Based on the testing results, return to the configuration panel to make further adjustments.

  • After making changes, run the prompt again and retest until you achieve the desired performance.

Step 6: Publish the prompt

After configuring all necessary settings:

  1. Review your prompt instructions, variables, and roles.

  2. Click the ‘Publish’ button to save and activate your prompt.

  3. The prompt is now ready for use in your apps.

By following these steps, you can easily create high-quality, structured prompts that align with your application’s goals and ensure consistency across AI interactions.
