They Were About to Buy the Wrong Switches. Nobody Had Told Them.
My camera died halfway through the recording. The prep doesn't always happen. And somehow, the most useful moments almost always come from exactly that kind of unguarded space. This one's about the server refresh where a team nearly bought the wrong switches — and nobody had told them.
Behind the Research: WWT Data Center Priorities — Priority 1: Infrastructure Refresh
My primary camera died halfway through the recording. I had a backup, so we salvaged the audio, but I spent most of the conversation staring slightly left of where I should have been. Professionally embarrassing. Probably not the last time.
I'm telling you that because it sets the right tone for this post. These conversations aren't polished. The cameras break. The prep doesn't always happen. People show up without having read the research paper.
And somehow, the most useful moments almost always come from exactly that kind of unguarded space.
This episode was recorded with Eric D'Lallo and Nathan Litz, both principal architects at WWT. Eric owns the compute side of the house. Nathan owns data center networking. They're direct peers who get pulled into the same accounts from different angles — and watching how they describe the gap between those two vantage points is where this one got interesting.
The 25-Gig Problem Nobody Saw Coming
Nathan told me about a customer in the energy sector. The network team had done solid homework. They'd researched switches, built out an architecture, and come into their engagement with WWT ready to make decisions. Good network engineers. Good instincts. The only issue: the switches they'd chosen topped out at 25-gig connectivity.
Nathan asked them a question partway through the conversation: "What plugs into your switches?"
There wasn't much visibility into that. They had an idea, but not a complete picture. So Nathan sent them off with homework: go talk to your compute team. Find out what their new servers actually need.
They came back a week later with news. The compute team was actively refreshing their servers too — and they were standardizing at 100-gig. Every server. Nobody had mentioned it to the network team.
"So the network team now had a very clear idea: we need to begin looking at different switches."
That's not a technical story. That's an organizational story. Two teams running parallel refresh projects, budgets both approved, decisions almost finalized — and they came within a couple of meetings of buying two sets of infrastructure that couldn't talk to each other at the speeds they actually needed.
The Rack That Took Down the Data Center
Eric told me about a customer who had approved a rack full of modern compute. Everything looked fine on paper. They plugged it in.
"They popped some circuits and their data center went down because facilities was not considered."
He said that like it was a cautionary tale he's told before. Because it is. Modern compute draws significantly more power than the gear it replaced. That's true even without GPUs in the conversation.
Add an AI initiative and the numbers get steep fast. Nathan mentioned 15 to 20 kilowatts per rack as common now — numbers nobody was planning for just five years ago.
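The arithmetic behind "they popped some circuits" is worth seeing once. Here's a back-of-the-envelope sketch — the 208 V three-phase feed and the 30 A breaker are my own illustrative assumptions, not numbers from the episode, and the 80% figure is the common continuous-load derate from the US electrical code:

```python
import math

def three_phase_amps(kw, volts=208.0, power_factor=1.0):
    """Line current drawn by a three-phase load of the given kW."""
    return kw * 1000 / (volts * math.sqrt(3) * power_factor)

def circuit_capacity_kw(breaker_amps, volts=208.0):
    """Continuous kW a three-phase breaker can carry at the usual 80% derate."""
    return 0.8 * breaker_amps * volts * math.sqrt(3) / 1000

# A legacy 30 A, 208 V three-phase rack feed carries roughly:
print(f"{circuit_capacity_kw(30):.1f} kW continuous")

# A rack at the low end of the 15-20 kW range Nathan mentioned draws:
print(f"{three_phase_amps(15):.0f} A")
```

A 30 A feed tops out around 8.6 kW continuous, while a 15 kW rack pulls roughly 42 A — which is exactly how a fully approved rack of modern compute trips breakers the moment it's plugged in.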
The facilities team — the people who know the power draw, the amperage, the floor weight limits — those people often don't make it to the early refresh conversations. Eric made a point of saying they have to now. "Now's the time to have that relationship with that person."
He also told me about a customer where leadership had quietly stood up an AI team. Nobody in IT knew. "Sometimes, you know, we found that leadership has actually put an AI team in and no one knows about it because we're just so dang big." The people planning the infrastructure refresh had no idea the requirements were about to change underneath them.
Layer Zero Wasn't Invited to the Meeting
Nathan went to the Supercomputing Conference this past year, hosted locally in St. Louis. He went expecting to see the usual mix of compute and network vendors. What he found was that a large portion of the floor was HVAC companies — giant cooling units, power infrastructure, the mechanical systems that hold a modern data center together.
"I had never seen them represented in such a showcase and important way."
That's the shift. What used to be someone else's problem — the floor load, the airflow, the cooling capacity — is now a technical decision that can stop a refresh cold or force a pause mid-project. Nathan called it "layer zero." Everything rides on it. It just wasn't always part of the refresh conversation.
Water cooling is coming. It's not hypothetical anymore. Eric pointed out that retrofitting a traditional data center for liquid cooling — whether direct-to-chip or rear-door heat exchangers — is going to be a disruption that a lot of organizations haven't planned for. "In the next five years, it's gonna be very interesting."
The Taxes Problem
About halfway through, I made a confession that felt honest in the moment: I'm like some of these customers. Refresh cycle comes up, and my instinct is to say just do what we did last time. Same way I handle taxes. Just file the same return, please.
Both Eric and Nathan recognized it immediately. And their answer was essentially: that instinct is understandable, but the world changed, and the last five years changed it a lot faster than the five before that.
"The solution of five years ago, coming up for a refresh, is not the solution today."
That's Eric. And he wasn't being alarmist — he was being accurate. The consolidation ratios, the power requirements, the AI readiness questions, the architecture shifts Nathan described from two-tier legacy to spine-leaf VXLAN — these aren't optional conversations you can defer until the next cycle. They're part of the refresh conversation now.
What the Video Covers
The episode goes into how WWT's overlay team approaches a refresh engagement — starting broad to surface hidden requirements, using a methodology called a market scan to evaluate infrastructure needs without OEM bias, and working toward a solution that accounts for what the customer can't yet see about their own environment. Eric and Nathan also get into what "future-proofing" actually means when the spine of your network is going to be in place for a decade.
But the real message is simpler: a server refresh is never just a server refresh.
Where I Landed
The story that stuck with me most wasn't the power outage or the AI team nobody knew about. It was Nathan's energy sector customer and the 25-gig switches they almost bought.
Because that story doesn't require a dramatic failure. No circuits popped. No data center went down. It was just two teams, both doing good work, both making reasonable decisions — and nobody had thought to ask what the other team was building.
Eric said it as plainly as I've heard it said: "it's never just a compute refresh. Compute sits in the middle of all of this functioning infrastructure." That's the insight I'll carry out of this one. The question isn't whether you need to refresh. It's whether the people making those decisions are talking to each other.
This is part of the work I do with World Wide Technology's research team. More at explainerds.net.
Resources:
- Read the full research paper: https://www.wwt.com/wwt-research/it-infrastructure-modernization-priorities-for-2026
- Watch on the WWT platform: Visibility Before Velocity: Rethinking Infrastructure Refresh