Thomas Burns, CTO for media & entertainment at Dell Technologies, says 2025 has been an interesting year, thanks to the implications of artificial intelligence.
Photo: Dell's PowerEdge XE9680 rack server
“People are both afraid of AI, especially generative AI, and of the whole industry changing in terms of the way content is being created and distributed. We’re just trying to keep up between the AI PCs, the faster PowerScale storage, and the new generation of PowerEdge servers. One way to think of it is, we’re Nvidia’s sales arm. Whatever they do, it’s to fit in a Dell server, and we are in the business of engineering solutions around that. The new GPUs from Nvidia, like the RTX 6000 Pro Blackwell Server Edition, are really unique because they have AI cores as well as video cores, so all of these software-defined television or other types of applications can take advantage of all the 10-bit video processing on the GPU for live media, as well as doing AI things to that live media.”
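The pattern Burns describes boils down to keeping live frames resident on one device for both kinds of work: a video transform and an AI pass on the same GPU, with no round trip in between. Here is a minimal sketch of that idea, assuming PyTorch; the per-channel gain and the tiny model are placeholders for illustration, not anything Dell or Nvidia ships.

```python
import torch

# Runs on GPU if one is present; the idea needs CUDA, but the sketch
# degrades gracefully to CPU so it stays runnable anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder "AI core" work: a tiny conv layer standing in for a real
# enhancement or analysis network.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device).eval()

def process_live_frame(frame_10bit: torch.Tensor) -> torch.Tensor:
    """Apply a video transform and an AI pass to the same device-resident frame.

    frame_10bit: HxWx3 int16 tensor holding 10-bit video samples (0..1023).
    """
    # "Video core" work: normalize 10-bit samples to float, on-device.
    x = frame_10bit.to(device).permute(2, 0, 1).float() / 1023.0
    # Simple per-channel gain as a stand-in for a color-space transform.
    x = (x * 1.02).clamp(0.0, 1.0)
    # "AI core" work: inference on the same frame, no trip back to the CPU.
    with torch.no_grad():
        y = model(x.unsqueeze(0)).squeeze(0)
    return (y.clamp(0.0, 1.0) * 1023.0).to(torch.int16).cpu()

# A synthetic 1080p 10-bit frame; a real pipeline would decode via NVDEC.
frame = torch.randint(0, 1024, (1080, 1920, 3), dtype=torch.int16)
out = process_live_frame(frame)
```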
Photo: Dell's Tom Burns
“Everybody’s asking how we can do video search and summarization cost-effectively against our media archives, so that we extend the metadata and make it more searchable and more monetizable. That’s a big one. There are so many other niches: AI translation, AI face replacement, all kinds of things. You could go so far as to ask whether what we used to think of as a pipeline for getting episodic and feature projects made will someday be replaced by a swarm of autonomous agents that go around and say, ‘Oh, that’s not my task, but go over here, because I think that’s their task and they’ll do it for you.’ And this kind of haze of bots will be in charge of what we used to do as a very tightly defined, API-driven workflow orchestration layer, a studio operating system kind of layer. I don’t know how far it’s going to go, but certainly we’ve had some good success on the enterprise AI side.”
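The video search and summarization Burns mentions usually starts with sampling frames, describing them with a vision-language model, and indexing the resulting text against timestamps. A minimal sketch of that pipeline, assuming OpenCV for frame sampling; `caption_frame` is a hypothetical stand-in for whatever captioning model an archive owner licenses.

```python
import cv2  # pip install opencv-python

def caption_frame(image) -> str:
    """Hypothetical stand-in for a vision-language model call. A real
    implementation would return a text description of the frame."""
    raise NotImplementedError("wire this to your captioning model")

def index_video(path: str, every_n_seconds: float = 10.0) -> list[tuple[float, str]]:
    """Sample frames and caption them, producing (timestamp, text) metadata."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    index, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            index.append((frame_no / fps, caption_frame(frame)))
        frame_no += 1
    cap.release()
    return index

def search(index: list[tuple[float, str]], query: str) -> list[tuple[float, str]]:
    """Naive keyword match; a production system would embed the captions
    and query a vector database instead."""
    q = query.lower()
    return [(t, text) for t, text in index if q in text.lower()]
```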
“Every visual effects company has one dev-ops person sitting there at a workstation, just figuring out what’s good, what works and what’s repeatable. All we’re saying to people about AI is: if we organize your data and your workflow now, the AI follows along with that.”
Burns notes that a VFX facility, at its busiest time, could be generating up to 10 petabytes of data per day.
“Gigantic data! How do you process that data as smoothly and efficiently as possible? It starts with placing it properly, and that’s where storage comes in. But the place/process/protect thing works for all data — realtime and batch. What we’re telling M&E companies is that if we look at the data platform now, you’ll be ready for AI. When that one dev-ops guy gets something that’s so cool, and it hits, and it’s so good that they need to scale it up right away to the entire production, boom, we’ll be ready for you. That’s the message from Dell.”
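For a sense of scale, that 10-petabyte figure converts to a sustained data rate; a quick back-of-the-envelope calculation (decimal petabytes assumed) shows why placement is the first problem:

```python
# Back-of-the-envelope: sustained throughput implied by 10 PB/day.
PB = 10**15                      # decimal petabytes assumed
bytes_per_day = 10 * PB
seconds_per_day = 24 * 60 * 60   # 86,400

bytes_per_sec = bytes_per_day / seconds_per_day
print(f"{bytes_per_sec / 1e9:,.1f} GB/s sustained")        # ≈ 115.7 GB/s
print(f"{bytes_per_sec * 8 / 1e9:,.1f} Gbit/s sustained")  # ≈ 925.9 Gbit/s
```

That is nearly a terabit per second of sustained ingest before any processing or protection overhead, which is why the conversation starts with storage.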
Broadcasters will be among those who can benefit the most by using AI to analyze their media archives.
“That’s the one where people can see real value,” he says of broadcast customers. Oftentimes their metadata is limited, and the only way to add information is to run LTO tapes past playback heads to find out what’s on them.
“Often they only have very poor metadata in these broadcast archives. They might have air date, run length, and show title, and that’s it, so you have to watch the show to figure out what is on it. But there are a couple of clever companies that are going out to these large language models that have been trained on the entire corpus of the internet. There’s TV Guide or the equivalent, television newspaper listings from X years ago, whenever they stopped printing them in daily papers. And that kind of associated metadata is out there and searchable without actually looking at your tape, so you save the wear and tear on the heads. It’s imperfect, but so is AI, right? There aren’t enough interns in the known universe to log all of the stuff that’s created at the pro level, let alone social media, so anything that you can do to watch it and listen to it in faster than realtime is going to be huge!”
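One way to picture the approach Burns describes: hand the sparse catalog fields to a language model and ask what public listings say about the program, never touching the tape. A minimal sketch, assuming the OpenAI Python client as one possible backend; the model name and the sample title are placeholders.

```python
from openai import OpenAI  # pip install openai; any LLM client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def enrich_catalog_entry(title: str, air_date: str, run_length: str) -> str:
    """Ask an LLM what public listings say about a program, given only the
    sparse fields a broadcast archive typically holds. No tape is read."""
    prompt = (
        f"A broadcast archive holds a program titled '{title}', "
        f"aired {air_date}, running {run_length}. Based on published "
        "TV listings and other public sources, suggest likely genre, "
        "episode synopsis, and notable cast. Flag anything uncertain."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you license
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Hypothetical entry; the output is a draft for human review, since, as
# Burns says, it's imperfect, not ground truth.
print(enrich_catalog_entry("The Eleven O'Clock News", "1987-03-14", "30 min"))
```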
Another challenge Burns sees coming with the adoption of AI is the demand for electricity.
“What it teaches us is to consider electricity as valuable as data, because of the amount of power that we’re forecasting we’ll need,” he notes. “Before I even talk about electricity, we’ve seen a fundamental change in compute. I’m of a certain age where I remember you would count render nodes in ‘pizza boxes.’ One pizza box was one node. Then we went to dual CPUs. Now we’re at 196 cores per pizza box. We’ve gone from computers, to cores, [to racks] and then data centers. That kind of growth in compute will have huge ramifications down the line. And I’m not saying (it’s all) gloom and doom about electricity and cooling. You remember when the fiber optic boom happened? That was bad business for the companies that invested late, but all of a sudden, there’s all this fiber in the ground that birthed the modern internet, for good or for bad. Is it possible that having a surplus of compute will lead to follow-on effects that we don’t yet know about? So even if it is a bubble, I think that we’re building and we’re learning how to look at things on a data-center scale. And I think that’s going to be good, because some of the problems that the world has are really, really hard problems.”