
Nikhil Mungel, head of AI at Cribl, recommends several design principles:
- Validate access rights as early as possible in the inference pipeline. If unauthorized data reaches the context stage, there’s a high chance it will surface in the agent’s output (see the sketch after this list).
- Maintain immutable audit logs of all agent actions and the corresponding human approvals.
- Use guardrails and adversarial testing to ensure agents stay within their intended scope.
- Develop a collection of narrowly scoped agents that collaborate; this is often safer and more reliable than a single broad-purpose agent, which may be easier for an adversary to mislead.
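A minimal sketch of the first two principles in Python. The ACL structure, document shape, and function names here are illustrative assumptions, not Cribl's implementation; the point is that documents are filtered against the caller's entitlements before they ever reach the context window, and that every action lands in an append-only, hash-chained log.

```python
import hashlib
import json
import time


def validate_access(user_id: str, documents: list[dict], acl: dict[str, set[str]]) -> list[dict]:
    """Drop any document the caller is not entitled to *before* it reaches the context stage."""
    allowed = acl.get(user_id, set())
    return [d for d in documents if d["resource_id"] in allowed]


class AuditLog:
    """Append-only log; each entry is hash-chained to the previous one so tampering is detectable."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64

    def record(self, action: str, detail: dict, approved_by: str | None = None) -> None:
        entry = {
            "ts": time.time(),
            "action": action,
            "detail": detail,
            "approved_by": approved_by,  # the corresponding human approval, if any
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._entries.append(entry)


# Usage: filter first, log everything, then build the prompt context.
acl = {"alice": {"doc-1"}}
docs = [{"resource_id": "doc-1", "text": "quarterly plan"},
        {"resource_id": "doc-2", "text": "restricted salaries"}]
log = AuditLog()
context = validate_access("alice", docs, acl)
log.record("context_built", {"user": "alice", "docs": [d["resource_id"] for d in context]})
assert all(d["resource_id"] != "doc-2" for d in context)  # doc-2 never reaches the prompt
```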
Pranava Adduri, CTO and co-founder of Bedrock Data, adds these design principles to ensure agents behave predictably:
- Programmatic logic is tested.
- Prompts are stable against defined evals (a minimal gating harness is sketched after this list).
- The systems agents draw context from are continuously validated as trustworthy.
- Agents are mapped to a data bill of materials and to connected MCP or A2A systems.
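A minimal sketch of what "stable against defined evals" can mean in practice: a prompt revision only ships if it still passes a fixed eval set. The eval cases, the `run_agent` stub, and the pass threshold are all illustrative assumptions; in a real pipeline the harness would call the actual model and run in CI.

```python
# Hypothetical eval harness: a prompt change only ships if the defined evals still pass.
EVALS = [
    {"input": "Summarize: revenue grew 12% in Q3.", "must_contain": "12%"},
    {"input": "What is our refund window?", "must_contain": "30 days"},
]

PASS_THRESHOLD = 1.0  # assumed policy: every defined eval must pass


def run_agent(prompt_template: str, user_input: str) -> str:
    """Stand-in for the real model call; it echoes the input so the harness is runnable."""
    return f"{user_input} (refund window: 30 days)"


def eval_prompt(prompt_template: str) -> float:
    """Return the fraction of defined evals the prompt passes."""
    passed = sum(
        1 for case in EVALS
        if case["must_contain"] in run_agent(prompt_template, case["input"])
    )
    return passed / len(EVALS)


score = eval_prompt("You are a helpful support agent. {input}")
if score < PASS_THRESHOLD:
    raise SystemExit(f"prompt regression: eval pass rate {score:.0%}")
print(f"prompt stable: {score:.0%} of defined evals pass")
```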
According to Chris Mahl, CEO of Pryon, if your agent can’t remember what it learned yesterday, it isn’t ready for production. “One critical criterion that’s often overlooked is the agent’s memory architecture. Your system must have proper multi-tier caching, including query cache, embedding cache, and response cache, so it actually learns from usage. Without conversation preservation and cross-session context retention, your agent basically has amnesia, which kills data quality and user trust. Test whether the agent maintains semantic relationships across sessions, recalls relevant context from previous interactions, and handles memory constraints gracefully.”
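A minimal sketch of the multi-tier memory Mahl describes. The cache keys, capacities, and class names are assumptions made for illustration; a production system would back the embedding tier with a vector store and persist session history to shared storage rather than an in-process dict.

```python
from collections import OrderedDict


class LRUCache:
    """Tiny LRU cache; eviction of the oldest entries is one simple way to handle memory constraints."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None

    def put(self, key, value) -> None:
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used


class AgentMemory:
    """Three cache tiers plus cross-session history, mirroring Mahl's criteria."""

    def __init__(self) -> None:
        self.query_cache = LRUCache(1024)      # normalized query -> retrieved documents
        self.embedding_cache = LRUCache(4096)  # text -> embedding vector
        self.response_cache = LRUCache(1024)   # (query, context hash) -> final answer
        self.sessions: dict[str, list[str]] = {}  # conversation preservation

    def remember(self, session_id: str, turn: str) -> None:
        self.sessions.setdefault(session_id, []).append(turn)

    def recall(self, session_id: str, last_n: int = 5) -> list[str]:
        """Cross-session context retention: prior turns feed the next prompt."""
        return self.sessions.get(session_id, [])[-last_n:]


memory = AgentMemory()
memory.remember("user-42", "User prefers answers with citations.")
print(memory.recall("user-42"))  # context survives into the next session's prompt
```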

