STORAGE UTOPIA FOR POST PRODUCTION
Mike Hughes
Issue: April 1, 2007

Ever since the invention of nonlinear editing systems, disk drives have played a significant role in the way the post community gets the job done. From the initial deployments of tiny capacities representing mere seconds of uncompressed SD material through to today's multi-terabyte monsters, the use of disk-based systems for storing and retrieving material has accelerated year after year.

The technology behind these disks has evolved in two principal ways. The first is the underlying drive technology, which trades capacity against performance. Certain classes of drives have been engineered for higher capacities at the cost of slower access times (e.g. IDE and SATA), whereas others have been designed to deliver smaller quantities of data at much higher rates of throughput (e.g. SCSI, Fibre Channel and SAS).

The other is a higher-level difference and centers on sharing the storage among multiple users. The two prevailing technologies in this regard are network attached storage (NAS) devices and storage area networks (SANs). Both of these topologies have been deployed in many post facilities, usually to solve different problems.

SOLUTIONS?

With all the choices available to facility IT directors, it's useful to step back and see if there might be some solution out there today that truly addresses all the issues in the facility: the ubiquitous storage solution for post production.

These days, every facility has a rich mixture of computer systems - some higher end, some not - which all play a part in the business's production pipeline, guiding each job from the front door to the back. In a perfect world, there would be a single pool of storage into which all material was digitized once and only once. Every artist, editor, colorist or anyone else who needed the material would access it from this single pool. Some users will also require guaranteed realtime access to the media for operations including editing, color timing and playback. Other users, in particular those who work on media one frame at a time, don't necessarily need realtime access.

Next, the solution would need to be infinitely expandable in both capacity and throughput, and the facility would have to be able to increase either one, or both, seamlessly. Adding users increases the throughput requirement, but not necessarily the need for more terabytes, whereas taking on an additional project without any new staffers or computing resources would probably only require additional space in which to place the new job.

Cost invariably plays a role in determining viability, so the customer should be able to choose the access profile for each network client and pay an amount appropriate to that choice. Put differently, it will cost more to get guaranteed realtime access (whatever the resolution), not least because a guarantee ties up bandwidth that could otherwise be shared.
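
To make the capacity-versus-throughput distinction concrete, here is a minimal back-of-the-envelope sizing sketch in Python. The client names, stream rates and project sizes are hypothetical, chosen only to illustrate the point: adding users grows the bandwidth requirement, while adding projects grows the capacity requirement.

# Back-of-the-envelope sizing sketch (hypothetical clients and figures).
# Realtime clients reserve guaranteed bandwidth; frame-at-a-time clients do not.

REALTIME_RATES_MBPS = {              # approximate uncompressed stream rates, MB/s
    "SD 10-bit 4:2:2": 26,
    "HD 1080 10-bit 4:2:2": 155,
    "2K 10-bit RGB": 306,
}

clients = [                          # (name, access profile) - purely illustrative
    ("edit bay 1", "HD 1080 10-bit 4:2:2"),
    ("edit bay 2", "HD 1080 10-bit 4:2:2"),
    ("color suite", "2K 10-bit RGB"),
    ("compositing", None),           # frame-at-a-time work, no realtime guarantee
]

projects_tb = [4.0, 7.5, 2.0]        # per-project storage, in terabytes

required_bandwidth = sum(
    REALTIME_RATES_MBPS[profile] for _, profile in clients if profile
)
required_capacity = sum(projects_tb)

print(f"Guaranteed bandwidth needed: {required_bandwidth} MB/s")
print(f"Capacity needed: {required_capacity} TB")
# Adding a client raises the bandwidth line; adding a project raises the capacity line.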

Finally - and this is a big one from the industry perspective - all applications from all vendors would have to be able to access the media natively. This small sentence encapsulates the infrastructure-related frustration that is pervasive in the post production industry today, and it's a reasonable frustration because disk drives have been completely and utterly commoditized. No secret sauce is required to deliver realtime uncompressed SD, HD, or even 2K for that matter.
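
For context, those realtime rates follow from simple arithmetic on frame size, bit depth and frame rate. The sketch below derives rough figures; the resolutions and frame rates are the usual ones, but treat the results as ballpark numbers rather than vendor specifications.

# Rough uncompressed data-rate arithmetic: width x height x bits/pixel x fps.
# Figures are ballpark; real rates vary with blanking, padding and bit packing.

def stream_rate_mbps(width, height, bits_per_pixel, fps):
    """Return the approximate sustained rate in MB/s for one uncompressed stream."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

formats = [
    ("SD 10-bit 4:2:2",      720,  486, 20, 29.97),  # 4:2:2 averages 20 bits/pixel
    ("HD 1080 10-bit 4:2:2", 1920, 1080, 20, 29.97),
    ("2K 10-bit RGB (DPX)",  2048, 1556, 32, 24),    # 10-bit RGB padded to 32 bits
]

for name, w, h, bpp, fps in formats:
    print(f"{name:24s} ~{stream_rate_mbps(w, h, bpp, fps):6.0f} MB/s")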

There was a time when the performance of a storage solution was so close to the edge of what it needed to deliver that the only way to get a working system was to buy it from a single vendor, who integrated it and then tested it to make sure it was up to specification. Those times are long gone.

Where this commodity characterization does not apply is in the domain of shared storage. When all the requirements described above are applied to a single solution, the complexity (and, with it, perhaps the opportunity for some vendor) soars. Having multiple network clients simultaneously access the same storage pool while guaranteeing particular access rates (i.e. realtime) is a thoroughly non-trivial problem. As one vendor-specific case illustrates, SGI has been working on the problem of delivering guaranteed rate I/O (GRIO) with its CXFS SAN file system since the nineties.
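
To give a flavor of why guaranteed-rate access is hard, here is a minimal admission-control sketch in Python. It is not SGI's GRIO implementation, just an illustration of the basic idea: a reservation is admitted only if the sum of all guaranteed rates stays within the rate the storage pool can actually sustain, and everything else gets best-effort access to whatever is left.

# Minimal admission-control sketch for guaranteed-rate I/O (illustrative only;
# real systems must also account for seek patterns, RAID geometry, caching
# and worst-case rather than average throughput).

class BandwidthBroker:
    def __init__(self, qualified_bandwidth_mbps):
        # Qualified bandwidth: the rate the storage pool can sustain under load.
        self.capacity = qualified_bandwidth_mbps
        self.reservations = {}       # client name -> guaranteed MB/s

    def request(self, client, rate_mbps):
        """Admit the reservation only if the total guaranteed load still fits."""
        committed = sum(self.reservations.values())
        if committed + rate_mbps > self.capacity:
            return False             # deny: the guarantee could not be honored
        self.reservations[client] = rate_mbps
        return True

    def release(self, client):
        self.reservations.pop(client, None)

broker = BandwidthBroker(qualified_bandwidth_mbps=600)   # hypothetical pool
print(broker.request("edit bay 1", 155))    # True  - HD stream admitted
print(broker.request("color suite", 306))   # True  - 2K stream admitted
print(broker.request("edit bay 2", 155))    # False - would exceed 600 MB/s

Non-realtime clients would simply consume whatever bandwidth remains unreserved, which is roughly how the two tiers of access described earlier could coexist on one pool.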

In summary, the facility requirements for the ideal shared storage solution are such that no shipping technology today can deliver against the full list. And once somebody does build it, it will likely be some type of NAS/SAN hybrid, given that the requirements span the competencies of both. Even once it is built, it's going to take some serious user-side pressure to get those two companies whose names begin with "A" off the storage bandwagon and to open up their applications so they can effectively use third-party shared storage. With that said, the storage solution had better not impose any application-level changes.

As a footnote, and without digressing too much, there's an equally large Pandora's box that such a solution would open. With everyone finally able to share all that media, what happens to the metadata - the data about the data? I think that's something better left to the MXF and AAF folks to sort out.

Mike Hughes is the VP of product marketing for Maximum Throughput (www.max-t.com).