How to configure a monitoring event for apps
Step 1: Access the monitoring configuration
To set up monitoring for an application:
Access the app session:
Navigate to the Apps page
Click on your desired application
Go to the query history section of the app
Select a specific user session from the list to monitor (the list shows the session ID, user info, and prompt count)

Review the conversation:
View the session details and chat history. You can configure monitoring events at the query level; for instance, in a conversation containing multiple queries, each query can have its own monitoring event.
Examine the response options (copy, conversation logs, edit annotation reply, and feedback)

Access conversation logs:
Click 'Conversation Log' to see the interaction details of a specific query
Review status, time, and token usage
Check the input, output, and metadata
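For orientation, the sketch below shows the kind of fields a conversation log entry bundles together. It is a hypothetical illustration in Python; the field names are illustrative and do not reflect ZBrain's actual schema.

```python
# Hypothetical illustration of the fields surfaced in a conversation log entry.
# Field names and values are examples only, not ZBrain's actual schema.
conversation_log_entry = {
    "status": "completed",                  # outcome of the query
    "time": "2024-05-01T10:32:07Z",         # when the query was processed
    "token_usage": {
        "prompt_tokens": 412,
        "completion_tokens": 186,
        "total_tokens": 598,
    },
    "input": "What is our refund policy for annual plans?",       # monitored input
    "output": "Annual plans can be refunded within 30 days ...",  # monitored output
    "metadata": {"session_id": "abc-123", "user": "jane.doe"},
}
```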

Enable monitoring:
Click the ‘Monitor’ button in the overview tab
Click ‘Configure now’ when prompted with ‘Added for monitoring’

Step 2: Configure event settings
You will be redirected to the Events > Monitor page. In the Status column (the last column), click ‘Configure’ to open the event settings page. On the event settings screen:


Review entity information
Entity name: The name of your application
Entity type: The type of entity being monitored (e.g., App)
Verify monitored content
Monitored input: The query or prompt being evaluated
Monitored output: The response being assessed
Set evaluation frequency
Click the dropdown menu under "Frequency of evaluation."
Select the desired interval (Hourly, Every 30 minutes, Every 6 hours, Daily, Weekly, or Monthly)
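Taken together, the entity information, monitored content, and evaluation frequency form a small configuration. The snippet below is a hypothetical sketch of those settings; the keys and values are illustrative and do not represent an actual ZBrain configuration format or API.

```python
# Hypothetical sketch of the event settings reviewed above.
# Keys and values are illustrative; this is not ZBrain's configuration format.
event_settings = {
    "entity_name": "customer-support-app",  # name of your application
    "entity_type": "App",                   # type of entity being monitored
    "monitored_input": "query",             # the query or prompt being evaluated
    "monitored_output": "response",         # the response being assessed
    "evaluation_frequency": "Hourly",       # Hourly, Every 30 minutes, Every 6 hours, Daily, Weekly, or Monthly
}
```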

Configure evaluation conditions
Click ‘Add metric’ in the Evaluation Conditions section
Select a metric type from the available options:
LLM-based
Non-LLM
Performance
LLM-as-a-judge

Choose the evaluation method (is less than, is greater than, or equals to)
Set the threshold value (0.1 to 5.0)
Click ‘Add’ to save the metric

Set the "Mark evaluation as" dropdown to fail or success

Configure notifications

Toggle the ‘Send Notification’ option to enable alerting for this monitoring event.

Click ‘+ Add a Flow from the Flow Library’ to open the ‘Add a Flow’ panel, which lets you configure notification flows through a dual-tabbed layout.
The side panel includes two tabs:
Default: Contains predefined ZBrain template Flows.
Custom: Displays custom-built Flows you have created.
Choose a Flow from either the Default or Custom tab based on your notification needs.
To prevent duplication, the same Flow cannot be added across multiple notifications; Flows that have already been assigned are unavailable for selection, ensuring each notification uses a unique Flow.

In the panel, search for the desired notification flow and select it.
You can configure multiple notification flows for a single monitoring event, allowing you to send alerts simultaneously across various channels.
The system prevents adding the same flow more than once to avoid redundant alerts or conflicts.
When the evaluation trigger conditions are met, all attached flows are executed in parallel.
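The behavior described above, rejecting duplicate Flows and running all attached Flows in parallel when the trigger condition is met, can be sketched as follows. The flow names and helper functions are hypothetical and only illustrate the behavior.

```python
# Hypothetical sketch of the notification behavior described above:
# duplicate Flows are rejected, and all attached Flows run in parallel on a trigger.
from concurrent.futures import ThreadPoolExecutor

attached_flows = []

def attach_flow(flow_name):
    # The same Flow cannot be added more than once to this event.
    if flow_name in attached_flows:
        raise ValueError(f"Flow '{flow_name}' is already attached to this event")
    attached_flows.append(flow_name)

def send_notification(flow_name):
    # Placeholder for executing a notification Flow (e.g., Slack or email).
    return f"{flow_name} executed"

def on_evaluation_failed():
    # All attached Flows are executed in parallel when the event fails.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(send_notification, attached_flows))

attach_flow("slack-alert")
attach_flow("email-alert")
print(on_evaluation_failed())
```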

You can edit linked notification flows directly from the Event Settings screen, making it quick to update or troubleshoot notification logic without losing your place in the monitoring configuration.
Click the pencil icon beside the selected Flow.
You will be redirected to the Flow page, where you can modify the logic.

Click the Play ▶️ button to run a delivery test.

If the flow succeeds, a confirmation message appears: "Flow Succeeded".
If the flow fails, inline error messages will be displayed, along with a link to Edit Flow for troubleshooting.

Note: Users cannot update the event settings until a valid notification flow passes the delivery test. Once the flow passes the delivery test, notifications will be sent via the chosen communication channel whenever the associated event fails.
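One way to read this note is that the ‘Update’ action is gated on a successful delivery test. The snippet below is a hypothetical sketch of that gate, not ZBrain's actual implementation.

```python
# Hypothetical sketch of the rule in the note above: event settings can only be
# updated once the attached notification flow has passed its delivery test.
def update_event_settings(settings, delivery_test_passed):
    if not delivery_test_passed:
        return "Blocked: run the delivery test and fix the flow before updating"
    return f"Event settings updated for {settings['entity_name']}"

print(update_event_settings({"entity_name": "customer-support-app"}, delivery_test_passed=False))
print(update_event_settings({"entity_name": "customer-support-app"}, delivery_test_passed=True))
```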
Test Evaluation Settings in event monitoring
You can test the evaluation settings configuration from the Test Evaluation Settings panel on the Event Monitoring Settings page. This panel lets you choose between supplying your own manual test input and relying on system-generated LLM outputs when running evaluation tests, giving you the flexibility to simulate different test scenarios across Agents, Apps, and Reasoning configurations.
Note: For Apps and Reasoning, manual input is optional and turned off by default.
To test the settings:
Navigate to Monitor and select an event.
Click Event Settings, which opens the Event Monitoring Settings page.
Once you configure all settings, click the Test button to open the Test Evaluation Settings panel on the right.
Toggle Manually input test value to ON. A Test Message input field will appear.
When manual input is ON, the evaluation compares the LLM output with your custom message using the configured metrics. Enter your custom reference message for comparison.
When manual input is OFF, the evaluation runs only against the system-generated LLM output and metrics (see the sketch after these steps).

Click the ‘Reset’ button to clear the input if needed.
Run the test and review the results.
Click ‘Update’ to save your configuration and activate your monitoring event.
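The manual-input toggle in the test steps above effectively chooses the reference that the evaluation compares against. The sketch below illustrates that behavior with a hypothetical run_evaluation_test helper; it is not a ZBrain function.

```python
# Hypothetical sketch of how the manual-input toggle changes what is evaluated.
# run_evaluation_test is illustrative only, not a ZBrain function.
def run_evaluation_test(llm_output, manual_input_on, test_message=None):
    if manual_input_on:
        # Manual input ON: compare the LLM output against your custom test message.
        return {"mode": "manual", "output": llm_output, "reference": test_message}
    # Manual input OFF: evaluate only the system-generated LLM output.
    return {"mode": "system", "output": llm_output, "reference": None}

print(run_evaluation_test("The refund window is 30 days.", manual_input_on=True,
                          test_message="Refunds are allowed within 30 days."))
```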