
Flatten your architecture
Most AI adoption today is happening on architectures never designed for this level of volatility, says Richard Copeland, CEO of cloud services provider Leaseweb. “Everyone wants the magic of AI, but the moment they scale it, they’re confronted with the messy reality of data gravity, latency budgets, and storage economics,” he adds. “Teams are trying to secure endpoints, expand pipelines, add GPUs, and increase bandwidth, but none of that stops the operational chaos if the foundation beneath it isn’t intentionally resilient.”
You’ll almost certainly need more storage to support AI, and not just for training sets, he points out. “You’re storing embeddings, vector indexes, model checkpoints, agent logs, synthetic datasets, and the agents themselves are producing new data every second,” he says. So spend the time to work out how much of that you actually need to store, where, and for how long.
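To make that sizing exercise concrete, here is a minimal back-of-the-envelope sketch. The growth rates and retention windows are illustrative assumptions, not benchmarks from any real deployment; the point is that once artifacts expire as fast as they are produced, the footprint plateaus at roughly rate times retention.

```python
# Hypothetical sizing sketch for AI-generated artifacts.
# All figures below are illustrative assumptions, not measured values.

DAILY_GROWTH_GB = {
    "embeddings": 20,
    "vector_indexes": 5,
    "model_checkpoints": 150,
    "agent_logs": 40,
    "synthetic_datasets": 60,
}

RETENTION_DAYS = {
    "embeddings": 90,
    "vector_indexes": 30,
    "model_checkpoints": 14,
    "agent_logs": 7,
    "synthetic_datasets": 180,
}

def steady_state_gb(growth: dict, retention: dict) -> dict:
    """Storage plateaus once expiry keeps pace with production:
    steady-state footprint per class = daily rate x retention days."""
    return {k: growth[k] * retention[k] for k in growth}

footprint = steady_state_gb(DAILY_GROWTH_GB, RETENTION_DAYS)
total_tb = sum(footprint.values()) / 1024
print(f"Steady-state footprint: {total_tb:.1f} TB")
```

Even with modest per-class rates, the totals add up quickly, which is why deciding retention per artifact class up front matters more than raw capacity.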
But designing for continuity means treating resilience as a design principle, not an insurance policy. Organizations that stay ahead are flattening architectures, pushing compute closer to data, automating lifecycle policies, and building environments where AI pipelines can fail over without anyone breaking a sweat, says Copeland.

