Azure OpenAI
Azure OpenAI in ZBrain Flow is an integration with Microsoft Azure’s OpenAI service that enables advanced natural language processing and text generation. It leverages GPT-based models to handle tasks such as drafting text, summarizing content, and answering questions, offering powerful conversational and language understanding capabilities within your workflows.
How to Integrate Azure OpenAI with ZBrain Flow?
Click the “+” Button in the Flow
Open your ZBrain Flow and select the plus sign (+) to add a new step.
Search for “Azure OpenAI”
Type “Azure OpenAI” in the search bar to view the available tasks.
Choose the Desired Task
Select the Ask GPT action to interact with Azure OpenAI’s GPT-based language models.
Ask GPT
Pose any question or provide any prompt to the AI model, and receive a context-aware, generated response. This can be used for tasks such as drafting emails, generating summaries, or brainstorming ideas.
How to Configure the "Ask GPT" Action with Azure OpenAI in ZBrain Flow?
Step 1: Add the “Ask GPT” Step in ZBrain Flow
Insert a New Step
In your flow, click the + button to add a new step.
Search for “Azure OpenAI”
Type “Azure OpenAI” in the search bar and select Ask GPT from the available options.
Step 2: Create an Azure OpenAI Connection
Enter Connection Details
Connection Name: Give your connection a recognizable name (e.g., “Azure OpenAI”).
Endpoint: Provide the endpoint URL for your Azure OpenAI resource (e.g., https://<resource-name>.openai.azure.com).
API Key: Paste the API key retrieved from your Azure Portal.
Save the Connection
Click Save to finalize the connection settings.
Your new Azure OpenAI connection is now ready to use.
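For reference, the two connection fields map directly onto the values an Azure OpenAI client expects. Below is a minimal sketch using the official openai Python package; the resource name, API key, and API version are placeholders, and ZBrain Flow manages the connection internally once it is saved, so the code is only illustrative.

```python
# Illustration of what the saved connection represents; not ZBrain Flow internals.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<resource-name>.openai.azure.com",  # Endpoint field
    api_key="<your-azure-openai-api-key>",                      # API Key field
    api_version="2024-02-01",                                   # example API version
)
```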
Step 3: Configure the “Ask GPT” Action
Deployment Name
Enter the name of your model deployment, i.e., the deployment ID you created for your GPT-based model in Azure.
Question
Type in the prompt or question you want GPT to answer. This can be dynamic, using data from previous steps or a static text prompt.
Temperature
Controls the “creativity” of the model’s output. A lower temperature yields more focused results; a higher temperature produces more varied responses.
Maximum Tokens
Sets the maximum length of the generated answer. The default is often 2048, but you can adjust it based on your needs and the model’s token limits.
Top P
An alternative to temperature for controlling sampling (nucleus sampling). A lower value focuses on the highest-probability tokens, while a higher value broadens the range of possible outputs.
Frequency Penalty
Adjusts the model’s tendency to repeat lines or phrases; positive values discourage verbatim repetition. Ranges from -2.0 to +2.0.
Presence Penalty
Encourages the model to introduce new topics; positive values make the model more likely to move beyond what has already been said. Ranges from -2.0 to +2.0.
Memory Key (Optional)
Assign a key to store conversation history if you want to maintain context across multiple steps in your flow.
Roles (Optional)
Add or edit role-based messages (e.g., “system,” “assistant,” “user”) to guide the conversation. This is especially useful for advanced prompt engineering or multi-turn conversations.
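To make the fields above concrete, here is a hedged sketch of the kind of chat-completion request they describe, reusing the client from the connection sketch in Step 2. The deployment name, messages, and parameter values are illustrative examples rather than ZBrain defaults.

```python
# Example request combining the "Ask GPT" fields; all values are illustrative.
response = client.chat.completions.create(
    model="my-gpt-deployment",   # Deployment Name: the deployment ID from Azure
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},            # Roles (optional)
        # Prior turns can be replayed here when a Memory Key stores conversation history.
        {"role": "user", "content": "Draft a short follow-up email to a client."},  # Question
    ],
    temperature=0.7,        # lower = more focused, higher = more varied
    max_tokens=2048,        # Maximum Tokens: caps the length of the answer
    top_p=1.0,              # Top P: nucleus-sampling cutoff
    frequency_penalty=0.0,  # -2.0 to +2.0: positive values discourage repetition
    presence_penalty=0.0,   # -2.0 to +2.0: positive values encourage new topics
)

print(response.choices[0].message.content)
```

Lowering temperature or top_p in this sketch mirrors what lowering those fields in the step does: responses become more focused and repeatable.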
Step 4: Save and Test
Save the Configuration
Once you’ve filled in all required fields, click Save or Done to confirm.
Run the Flow
Test your flow to confirm the Ask GPT step is communicating properly with Azure OpenAI.
Check the output to ensure it matches your expected response.
Review and Iterate
If the response needs adjustment, tweak parameters such as Temperature or Top P.
For more context, update the Roles or Question fields.