
Analytics show how users interact with your published agents. Use these insights to improve agent quality and demonstrate value.

The Analytics Page

Open any agent and go to the Analytics section. You’ll see metrics for the selected time period.

Time Periods

Select a period to analyze:
  • Today - Current day
  • Last 7 days - Past week
  • Last 30 days - Past month
  • Last 90 days - Past quarter
  • Custom - Specific date range

Key Metrics

Users & Engagement

| Metric | Description |
| --- | --- |
| Active Users | Unique users who chatted with the agent |
| Conversations | Total chat sessions started |
| Messages | Total messages exchanged (user-facing only; excludes tool-calling iterations) |
| LLM Calls | Total LLM invocations (includes tool-calling and async tasks) |
| Avg Messages/Conversation | Depth of conversations (shown as a subtitle) |

Quality

| Metric | Description |
| --- | --- |
| Average Rating | User feedback score (1-5) |
| Ratings Count | Number of ratings received |
| Rating Distribution | Breakdown by star level |
| Resolution Rate | Percentage of resolved queries |

Performance

| Metric | Description |
| --- | --- |
| P50 Response Time | Median response latency |
| P95 Response Time | 95th percentile latency |
| Error Count | Failed requests |
| Error Rate | Percentage of failed requests |
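
For reference, P50 and P95 are plain percentiles over response latencies. A minimal sketch of the computation (illustrative only; the platform serves these figures from pre-aggregated rows, as described under How It Works):

```typescript
// Nearest-rank percentile over raw latency samples (illustrative only).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return 0;
  const sorted = [...samples].sort((a, b) => a - b);
  // Index of the p-th percentile in the sorted array
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

const latenciesMs = [120, 340, 95, 1800, 240, 410, 150];
console.log(percentile(latenciesMs, 50)); // P50: median latency (240)
console.log(percentile(latenciesMs, 95)); // P95: tail latency (1800)
```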

Cost

| Metric | Description |
| --- | --- |
| Input Tokens | Tokens from user messages |
| Output Tokens | Tokens in agent responses |
| Total Cost | Estimated API cost |
| Carbon (kgCO2eq) | Estimated carbon footprint |
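
Total Cost is derived from token counts and per-token pricing. A minimal sketch, assuming placeholder per-1K-token rates (the real rates depend on the model and are not documented here):

```typescript
// Placeholder USD rates per 1K tokens -- assumptions, not actual pricing.
const PRICE_PER_1K = { input: 0.0005, output: 0.0015 };

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1000) * PRICE_PER_1K.input +
    (outputTokens / 1000) * PRICE_PER_1K.output
  );
}

console.log(estimateCost(120_000, 45_000).toFixed(4)); // "0.1275"
```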

Trend Charts

Visualize metrics over time:
  • Users trend - Growth or decline in user base
  • Conversations trend - Usage patterns by day
  • Messages trend - Engagement over time

Tool Usage

See which capabilities are used most:
| Tool | Calls | Success Rate |
| --- | --- | --- |
| Knowledge Search | 1,234 | 98% |
| Web Search | 567 | 95% |
| Calendar | 89 | 100% |
Use this to identify:
  • Underutilized capabilities (consider removing)
  • High-failure tools (investigate issues)
  • Most valuable integrations

Model Usage

If your agent uses multiple models, a breakdown is displayed showing:
  • Model name - Each model used
  • Calls - Number of LLM invocations per model
  • Tokens - Total tokens consumed per model
  • Cost - Estimated cost per model

Interpreting Results

Healthy Signs

  • Steady or growing active users
  • Ratings above 4.0
  • Low error rates
  • Healthy average messages per conversation (users engage beyond a single exchange)

Warning Signs

  • Declining active users
  • Ratings below 3.5
  • Increasing error rates
  • Very low messages per conversation (users giving up)

Action Items

Based on analytics, consider:
| Observation | Possible Action |
| --- | --- |
| Low engagement | Improve the welcome message, add suggested prompts |
| High errors | Check tool configurations, review logs |
| Poor ratings | Review feedback, improve instructions |
| Slow responses | Consider a faster model or simplify tools |
| Low tool usage | Update instructions to use tools more |

How It Works

Pipeline Overview

 ES events                    agent_metrics (Mongo)              API response
 ─────────                    ────────────────────              ────────────

 agents.conversations.created ─┐                                ┌─ summary (cached)
 analytics.agent.rated ────────┼─→ Bulk Aggregator ─→ hourly ───┤
 analytics.llm.completion ─────┘      (cron)          daily     └─ series[]
                                                      summary
Analytics data flows through three stages:
  1. Event emission — As agents are used, Elasticsearch events are emitted by agent-factory and llm-gateway
  2. Aggregation — A scheduled job reads these events and writes pre-aggregated rows into the agent_metrics collection (see the sketch after this list)
  3. Read — The analytics endpoint reads from agent_metrics, self-heals if needed, and returns the result
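
A minimal sketch of stage 2, assuming a hypothetical row shape and database name (only the agent_metrics collection name and the hourly/daily pre-aggregation come from this page):

```typescript
import { MongoClient } from "mongodb";

// Assumed row shape for a pre-aggregated bucket; actual fields may differ.
interface AgentMetricsRow {
  agentId: string;
  interval: "hourly" | "daily";
  bucketStart: Date;
  conversations: number;
  messages: number;
  llmCalls: number;
}

async function writeBucket(mongoUrl: string, row: AgentMetricsRow) {
  const client = await MongoClient.connect(mongoUrl);
  try {
    // Upsert so a cron re-run (or self-healing) overwrites the same
    // bucket instead of duplicating it.
    await client
      .db("analytics") // assumed database name
      .collection<AgentMetricsRow>("agent_metrics")
      .updateOne(
        { agentId: row.agentId, interval: row.interval, bucketStart: row.bucketStart },
        { $set: row },
        { upsert: true }
      );
  } finally {
    await client.close();
  }
}
```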

Event Sources

| Event | Emitted by | Contains |
| --- | --- | --- |
| agents.conversations.created | agent-factory | User ID, conversation ID, agent ID |
| analytics.agent.rated | agent-factory | Rating value (1–5) |
| analytics.llm.completion | llm-gateway | Model, token counts, cost, duration, tool names, call type |
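
As a rough guide, the payloads might look like the TypeScript shapes below; the field names are assumptions inferred from the table above, not the actual event schema:

```typescript
// Assumed payload shapes -- inferred from the table, not the real schema.
interface ConversationCreatedEvent {
  type: "agents.conversations.created";
  userId: string;
  conversationId: string;
  agentId: string;
}

interface AgentRatedEvent {
  type: "analytics.agent.rated";
  rating: 1 | 2 | 3 | 4 | 5;
}

interface LlmCompletionEvent {
  type: "analytics.llm.completion";
  model: string;
  inputTokens: number;
  outputTokens: number;
  cost: number;
  durationMs: number;
  toolNames: string[];
  callType: string; // e.g. user-facing message vs. tool-calling iteration
}
```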

Caching

Summary metrics are cached for 1 hour in the agent_metrics collection. With a warm cache, response time drops from ~1.5 s to ~100 ms.
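
Conceptually, the read path behaves like a read-through cache with a 1-hour TTL. A minimal sketch, with all names illustrative:

```typescript
// Read-through cache with a 1-hour TTL; the real implementation stores
// the cached summary in agent_metrics, not in memory.
interface CachedSummary {
  computedAt: Date;
  data: Record<string, number>;
}

const ONE_HOUR_MS = 60 * 60 * 1000;
const cache = new Map<string, CachedSummary>();

async function computeSummary(agentId: string): Promise<CachedSummary> {
  // Placeholder for the ~1.5 s aggregation over agent_metrics rows.
  return { computedAt: new Date(), data: { conversations: 0 } };
}

async function getSummary(agentId: string): Promise<CachedSummary> {
  const cached = cache.get(agentId);
  if (cached && Date.now() - cached.computedAt.getTime() < ONE_HOUR_MS) {
    return cached; // warm-cache path (~100 ms per the numbers above)
  }
  const fresh = await computeSummary(agentId);
  cache.set(agentId, fresh);
  return fresh;
}
```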

Self-Healing

The analytics endpoint performs two checks (see the sketch after this list):
  • Current interval refresh: The current in-progress hour or day is re-aggregated if stale (15-minute threshold for hourly, 1-hour for daily)
  • Gap detection: If fewer intervals exist than expected (e.g., cron missed a run), a full re-aggregation is triggered
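
A hedged sketch of both checks; the thresholds come from this page, while the function and variable names are made up:

```typescript
// Staleness thresholds per the docs: 15 minutes for the current hour,
// 1 hour for the current day.
const STALE_MS = { hourly: 15 * 60 * 1000, daily: 60 * 60 * 1000 } as const;

function currentIntervalIsStale(
  interval: keyof typeof STALE_MS,
  lastAggregatedAt: Date
): boolean {
  return Date.now() - lastAggregatedAt.getTime() > STALE_MS[interval];
}

function hasGap(expectedIntervals: number, actualIntervals: number): boolean {
  // Fewer stored intervals than expected (e.g. a missed cron run)
  // triggers a full re-aggregation.
  return actualIntervals < expectedIntervals;
}
```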

Scheduled Jobs

| Job | Schedule | Description |
| --- | --- | --- |
| Aggregate metrics | `30 1 * * *` (daily at 01:30 UTC) | Aggregates daily metrics for all active agents |
| Cleanup old metrics | `0 3 * * *` (daily at 03:00 UTC) | Deletes metric rows older than 30 days |
| Refresh agent counters | `*/15 * * * *` (every 15 min) | Updates per-agent conversation and message counters |
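
For illustration, registering the same three schedules with node-cron would look like this; the platform's actual scheduler may differ:

```typescript
import cron from "node-cron";

// Stubs standing in for the real jobs described in the table above.
async function aggregateDailyMetrics(): Promise<void> {}
async function cleanupOldMetrics(maxAgeDays: number): Promise<void> {}
async function refreshAgentCounters(): Promise<void> {}

cron.schedule("30 1 * * *", () => { void aggregateDailyMetrics(); }, { timezone: "Etc/UTC" }); // 01:30 UTC daily
cron.schedule("0 3 * * *", () => { void cleanupOldMetrics(30); }, { timezone: "Etc/UTC" });    // 03:00 UTC daily
cron.schedule("*/15 * * * *", () => { void refreshAgentCounters(); });                         // every 15 minutes
```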

Refreshing Data

Analytics are cached for performance. To see the latest data:
  1. Click the Refresh button
  2. Wait for metrics to update
Analytics may have a delay of up to 15 minutes for very recent activity.
You can also force a re-aggregation via the API, as shown below: POST /v1/agents/:agent_id/refresh-metrics with an optional hours_back parameter (max 168 hours = 1 week).
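
For example, triggering a refresh from TypeScript (the base URL and auth header are assumptions; the endpoint path and hours_back parameter come from this page):

```typescript
// Base URL and bearer-token auth are assumptions; the endpoint path and
// hours_back parameter (max 168) are documented above.
async function refreshMetrics(agentId: string): Promise<void> {
  const res = await fetch(
    `https://api.example.com/v1/agents/${agentId}/refresh-metrics`, // assumed host
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer <token>", // assumed auth scheme
      },
      body: JSON.stringify({ hours_back: 24 }), // re-aggregate the last 24 hours
    }
  );
  if (!res.ok) throw new Error(`Refresh failed: ${res.status}`);
}
```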

Exporting Data

Export analytics for reporting:
  1. Select your time period
  2. Click Export CSV
  3. Download the file

Privacy Considerations

Analytics are aggregated and anonymized:
  • Individual conversations are not exposed
  • User identities are not revealed in metrics
  • Only owners and admins see analytics
For detailed conversation review, use Insights with appropriate permissions.

Next Steps

  • Improve based on feedback - Use insights to refine your agent’s instructions
  • Insights - Deep dive into conversations and user feedback