
“Observing AI agents requires visibility not only into model calls but into the full chain of reasoning, tools, and code paths they activate, so devops can quickly identify hallucinations, broken steps, or unsafe actions,” says Shahar Azulay, CEO and co-founder of Groundcover. “Real-time performance metrics like token usage, latency, and throughput must sit alongside traditional telemetry to detect degradation early and manage the real cost profile of AI in production. And because agents increasingly execute code and access sensitive data, teams need security-focused observability that inspects payloads, validates integrations like MCP, and confirms that every action an agent takes is both authorized and expected.”
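The idea of putting token usage and latency alongside traditional telemetry can be sketched in a few lines. The following is a minimal, hypothetical helper (not any vendor's actual API): it wraps each model or tool call an agent makes, records latency and token counts per step, and exposes a running cost total.

```python
import time
from dataclasses import dataclass


@dataclass
class StepMetrics:
    """Telemetry for one agent step: tokens consumed and wall-clock latency."""
    step: str
    input_tokens: int
    output_tokens: int
    latency_ms: float


class AgentTracer:
    """Collects per-step metrics so token cost and latency sit alongside
    traditional telemetry. Illustrative sketch only; a real deployment would
    export these as spans/metrics to an observability backend."""

    def __init__(self):
        self.steps: list[StepMetrics] = []

    def record(self, step, fn, *args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)  # the model or tool call being observed
        latency_ms = (time.perf_counter() - start) * 1000
        self.steps.append(StepMetrics(
            step=step,
            input_tokens=result.get("input_tokens", 0),
            output_tokens=result.get("output_tokens", 0),
            latency_ms=latency_ms,
        ))
        return result

    def total_tokens(self) -> int:
        return sum(s.input_tokens + s.output_tokens for s in self.steps)


# Usage with a stubbed model call (a real agent would call an LLM here):
def fake_llm(prompt):
    return {"text": "ok", "input_tokens": 12, "output_tokens": 5}


tracer = AgentTracer()
tracer.record("plan", fake_llm, "Summarize the incident")
print(tracer.total_tokens())  # 17
```

Tracking tokens per step, rather than per request, is what makes it possible to spot which part of an agent's chain is driving cost or degrading in latency.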
4. Ensure AI agent observability addresses risk management
Organizations will realize greater business value and ROI as they scale AI agents across operational workflows. The implication is that AI agent observability becomes a fundamental part of the organization's risk management strategy.
“Make sure that observability of agents extends into tool use: what data sources they access, and how they interact with APIs,” says Graham Neray, co-founder and CEO of Oso. “You should not only be monitoring the actions agents are taking, but also categorizing risk levels of different actions and alerting on any anomalies in agentic actions.”
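Categorizing action risk and alerting on anomalies might look like the sketch below. The action names, risk tiers, and allow-list are all hypothetical; the point is the pattern: classify every action an agent attempts, treat unknown actions as high risk, and flag anything outside the agent's expected behavior.

```python
# Hypothetical risk tiers for agent actions; names are illustrative.
RISK_LEVELS = {
    "read_docs": "low",
    "query_database": "medium",
    "execute_code": "high",
    "delete_records": "high",
}


def classify(action: str) -> str:
    # Unknown actions default to high risk until a human reviews them.
    return RISK_LEVELS.get(action, "high")


def alert_on_anomalies(actions, allowed):
    """Flag any action outside the agent's expected (allowed) set,
    plus any high-risk action, for review."""
    alerts = []
    for action in actions:
        level = classify(action)
        if action not in allowed or level == "high":
            alerts.append((action, level))
    return alerts


# An agent expected only to read docs and query a database:
alerts = alert_on_anomalies(
    ["read_docs", "execute_code", "exfiltrate_data"],
    allowed={"read_docs", "query_database"},
)
print(alerts)  # [('execute_code', 'high'), ('exfiltrate_data', 'high')]
```

In production this classification would feed the same alerting pipeline as the rest of the telemetry, so an unexpected high-risk action pages a human rather than sitting in a log.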

