
If you want reliable agents, you need to apply the same rigor to their memory that you apply to your transaction logs (a code sketch follows this list):
- Sanitization: Don’t just append every user interaction to the history. Scrub sensitive data and junk before anything is persisted.
- Access control: Ensure the agent’s “memory” respects the same row-level security (RLS) policies as your application database. An agent shouldn’t “know” about Q4 financial projections just because it ingested a PDF that the user isn’t allowed to see.
- Ephemeral state: Don’t let agents remember forever. Long contexts increase the surface area for hallucinations. Wipe the slate clean often.
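Here’s a minimal Python sketch of what those three guardrails might look like in an agent’s memory layer. The `MemoryStore` class, `sanitize` helper, and `owner_ids` tagging are hypothetical illustrations of the pattern, not any particular framework’s API:

```python
import re
import time

# Sanitization: scrub obvious PII before anything is persisted.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def sanitize(text: str) -> str:
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

class MemoryStore:
    """Hypothetical agent memory with RLS-style filtering and TTL expiry."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl_seconds = ttl_seconds  # ephemeral state: memories expire
        self._entries: list[dict] = []

    def write(self, text: str, owner_ids: set[str]) -> None:
        # Store only sanitized text, tagged with who is allowed to read it.
        self._entries.append({
            "text": sanitize(text),
            "owner_ids": owner_ids,  # mirrors row-level security
            "created_at": time.monotonic(),
        })

    def read(self, user_id: str) -> list[str]:
        # Access control: return only rows this user is entitled to see,
        # and silently drop anything past its TTL.
        now = time.monotonic()
        self._entries = [e for e in self._entries
                         if now - e["created_at"] < self.ttl_seconds]
        return [e["text"] for e in self._entries if user_id in e["owner_ids"]]

# Usage: the agent never "remembers" the CFO's PDF for other users.
store = MemoryStore(ttl_seconds=1800)
store.write("Q4 projections from cfo@example.com: revenue up 12%",
            owner_ids={"cfo"})
print(store.read("cfo"))     # sees the (sanitized) memory
print(store.read("intern"))  # sees nothing
```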
My Oracle colleague Richmond Alake calls this emerging discipline “memory engineering” and, as I’ve covered before, frames it as the successor to prompt or context engineering. You can’t make an agent more reliable just by stuffing more tokens into its context window. Instead, you must create a “data-to-memory pipeline that intentionally transforms raw data into structured, durable memories: short term, long term, shared, and so on.”
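To make that framing concrete, here’s a hypothetical sketch of such a pipeline: raw events are distilled and routed into tiered stores rather than appended to one giant context. The tier names and routing rules are illustrative assumptions, not a published design:

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    SHORT_TERM = "short_term"  # current-session scratchpad
    LONG_TERM = "long_term"    # durable facts about the user or task
    SHARED = "shared"          # knowledge visible to other agents

@dataclass
class Memory:
    summary: str
    tier: Tier

@dataclass
class MemoryPipeline:
    """Hypothetical data-to-memory pipeline: raw data in, structured memories out."""
    stores: dict = field(default_factory=lambda: {t: [] for t in Tier})

    def ingest(self, raw: str) -> Memory:
        # Transform, don't append: distill raw data into a compact summary
        # (a real pipeline would summarize; truncation stands in for that here)...
        summary = raw.strip()[:200]
        # ...then intentionally route it to a memory tier.
        if raw.startswith("FACT:"):
            tier = Tier.LONG_TERM
        elif raw.startswith("TEAM:"):
            tier = Tier.SHARED
        else:
            tier = Tier.SHORT_TERM
        memory = Memory(summary=summary, tier=tier)
        self.stores[tier].append(memory)
        return memory

pipeline = MemoryPipeline()
pipeline.ingest("FACT: customer prefers invoices in EUR")
pipeline.ingest("scratch note: retry the API call")
print([m.tier.value for t in Tier for m in pipeline.stores[t]])
```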
The rebellion against robot drivel
Finally, we need to talk about the user. One reason Breunig cites for the failure of internal agent pilots is that employees simply don’t like using them. A big part of this is what I call the rebellion against robot drivel. When we try to replace human workflows with fully autonomous agents, we often end up with verbose, hedging, soulless text, and it’s increasingly obvious to the recipient that AI wrote it, not you. And if you can’t be bothered to write it, why should they bother to read it?

