Monitoring FAQs

Q: What aspects of LLM usage does the Monitoring feature track?

A: Monitoring tracks API calls, token usage, request trends, and response times, and supports custom metric tracking.
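
The exact tracking API depends on your LLMOps deployment, but the idea can be sketched with a minimal in-memory tracker. All names here (`MetricsTracker`, `record`, the feature labels) are hypothetical, not the product's actual SDK:

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class MetricsTracker:
    """Toy per-feature tracker for LLM usage (illustrative sketch only)."""
    calls: dict = field(default_factory=lambda: defaultdict(int))
    tokens: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, feature: str, prompt_tokens: int, completion_tokens: int) -> None:
        # One API call plus its token usage, bucketed by feature name.
        self.calls[feature] += 1
        self.tokens[feature] += prompt_tokens + completion_tokens

    def summary(self) -> dict:
        return {f: {"calls": self.calls[f], "tokens": self.tokens[f]} for f in self.calls}


tracker = MetricsTracker()
tracker.record("search", prompt_tokens=120, completion_tokens=80)
tracker.record("search", prompt_tokens=90, completion_tokens=60)
tracker.record("summarize", prompt_tokens=400, completion_tokens=150)
```

Bucketing by feature name is what enables views like "total requests by feature" on the dashboard.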

Q: How can I visualize the monitoring data?

A: The Monitoring Dashboard provides various charts and graphs, including daily request trends, total requests by feature, and token usage breakdowns.

Q: What is the Validation Flow in the Monitoring feature?

A: The Validation Flow is a customizable sequence of checks to ensure LLM outputs are safe, relevant, and high-quality. It includes validators for bias, toxic language, PII detection, and more.
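
Conceptually, a validation flow is a chain of checks run over each output, collecting any failures. The sketch below is a simplified stand-in (the regex PII check and word list are toy examples, not the product's actual validators):

```python
import re


def no_pii(text: str):
    # Flag simple email-address patterns as PII (illustrative only).
    ok = re.search(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", text) is None
    return ok, "" if ok else "PII detected"


def no_banned_words(text: str):
    banned = {"idiot", "stupid"}  # toy stand-in for a toxicity model
    ok = not any(w in text.lower() for w in banned)
    return ok, "" if ok else "toxic language"


def run_validation_flow(text: str, validators):
    """Run each validator in order and collect every failure message."""
    failures = [msg for v in validators for ok, msg in [v(text)] if not ok]
    return len(failures) == 0, failures


passed, failures = run_validation_flow(
    "Contact me at jane@example.com", [no_pii, no_banned_words]
)
```

Because the sequence is customizable, you can add, remove, or reorder validators per use case.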

Q: Can I set up alerts based on monitoring data?

A: Yes, you can configure alerts for various metrics, such as high token usage, increased error rates, or specific validation failures.
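
An alert rule is essentially a metric name paired with a threshold. A minimal sketch of that evaluation logic (the rule shape and metric names are assumptions, not the product's configuration format):

```python
def check_alerts(metrics: dict, rules: dict) -> list:
    """Return the names of alert rules whose metric exceeds its threshold."""
    return [name for name, (metric, limit) in rules.items() if metrics.get(metric, 0) > limit]


rules = {
    "high_token_usage": ("tokens_last_hour", 50_000),
    "elevated_error_rate": ("error_rate_pct", 5.0),
}
fired = check_alerts({"tokens_last_hour": 72_000, "error_rate_pct": 1.2}, rules)
```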

Q: How granular is the monitoring data?

A: Monitoring provides data at various levels, from overall system performance down to individual API calls and their associated metrics.

Q: Can I export monitoring data for external analysis?

A: Yes, LLMOps allows you to export monitoring data in various formats for further analysis or integration with other tools.
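
For example, exported request metrics typically arrive as tabular records. This sketch shows serializing such records to CSV for a spreadsheet or BI tool (the field names and values are invented for illustration):

```python
import csv
import io

# Hypothetical exported monitoring records, one row per feature per day.
records = [
    {"date": "2024-06-01", "feature": "search", "requests": 1240, "tokens": 85210},
    {"date": "2024-06-01", "feature": "summarize", "requests": 310, "tokens": 142877},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "feature", "requests", "tokens"])
writer.writeheader()
writer.writerows(records)
csv_text = buf.getvalue()
```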

Q: How does the Monitoring feature help with compliance?

A: It provides audit logs, tracks data access, and includes validators to ensure outputs comply with predefined rules and regulations.

Q: Can I customize the Monitoring Dashboard?

A: Yes, you can create custom dashboards with the metrics and visualizations most relevant to your use case.

Q: How far back does the monitoring data go?

A: LLMOps typically retains detailed monitoring data for 30 days, with aggregated data available for longer periods. Retention periods may be customizable based on your plan.

Q: How can I use monitoring data to optimize my LLM applications?

A: You can analyze usage patterns, identify performance bottlenecks, and apply insights from the Validation Flow to improve prompt design and model fine-tuning strategies.