The NAS vs. SAN argument
Steve Modica
Issue: December 1, 2010

As a former SGI employee (prior to founding Small Tree), I had a great deal of exposure to the SAN/NAS schism that occurred a decade and a half ago.

The issue was simple: We could not build NAS machines capable of the monstrous data rates required to serve 100-, 200- or even 600-client systems. We could not move to faster CPUs, we could not build wider backplanes, and we could not get faster storage. We were “maxed out” along several dimensions at once.

The solution was simple: Let’s take the data server’s backplane out of the equation. Storage can already sit on a network (Fibre Channel, InfiniBand and HIPPI all offered networked storage), so all the server needs to do is handle the “meta-data.” Meta-data is the inode data that defines where files live on disk, along with permissions and access times. Using a model like this, the “data” server stops handling all the data and instead handles the relatively tiny meta-data (inode) traffic, and with a modern file system like XFS, many CPUs could be brought to bear on that meta-data processing. Problem solved!
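
To make the division of labor concrete, here is a minimal C sketch of the kind of inode-style record such a meta-data server trades in. The struct and field names are illustrative assumptions (not XFS’s actual format or any real file system’s API), but they show how small the meta-data is relative to the files it describes:

```c
/* Hypothetical sketch of a meta-data (inode) record on a SAN
 * meta-data server. Field names are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Where one contiguous run of a file's blocks lives on the shared volume. */
struct extent {
    uint64_t start_block;   /* first block on the shared (SAN) storage */
    uint64_t block_count;   /* length of the run in blocks */
};

/* The meta-data the server actually handles: ownership, permissions,
 * timestamps and extent pointers. Clients then read the extents
 * directly over Fibre Channel or InfiniBand, so the bulk data never
 * crosses the server's backplane. */
struct inode_record {
    uint64_t inode_number;
    uint32_t owner_uid;
    uint32_t permissions;      /* e.g. 0644 */
    time_t   access_time;
    time_t   modify_time;
    uint64_t file_size;        /* may describe gigabytes of data... */
    struct extent extents[16]; /* ...in a few hundred bytes of meta-data */
};

int main(void) {
    /* The record stays this size no matter how large the file is. */
    printf("meta-data record: %zu bytes\n", sizeof(struct inode_record));
    return 0;
}
```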

What we’re seeing today is that network cards, CPUs and storage devices are all advancing to the point that the original premise no longer holds. The new Intel Nehalem and Westmere CPUs (and more importantly, their integrated memory controllers) are allowing these systems to push a lot more data through a system’s backplane. Full-duplex I/O buses like PCIe (now running at 5.0 GT/s) are making NAS servers a lot more attractive, considering how inexpensive and easy to manage they are relative to SANs.
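
Some rough arithmetic illustrates the point. A PCIe 2.0 lane signals at 5.0 GT/s with 8b/10b encoding, so an ordinary x8 slot can move on the order of 4GB/sec in each direction, far more than a single 10Gb Ethernet port can consume. The sketch below uses theoretical link rates, not measured throughput:

```c
/* Back-of-the-envelope PCIe 2.0 bandwidth. Figures are per direction;
 * PCIe is full duplex. Theoretical link rates, not measurements. */
#include <stdio.h>

int main(void) {
    double gt_per_sec = 5.0;        /* PCIe 2.0 signalling rate per lane */
    double encoding   = 8.0 / 10.0; /* 8b/10b line-coding overhead       */
    int    lanes      = 8;          /* a common x8 slot                  */

    double gbit_per_lane = gt_per_sec * encoding;       /* 4 Gbit/s      */
    double slot_gbyte    = gbit_per_lane * lanes / 8.0;  /* ~4 GB/s       */
    double tengig_gbyte  = 10.0 / 8.0;                   /* ~1.25 GB/s    */

    printf("x8 PCIe 2.0 slot: ~%.1f GB/s each way\n", slot_gbyte);
    printf("one 10GbE port:   ~%.2f GB/s, leaving plenty of headroom\n",
           tengig_gbyte);
    return 0;
}
```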

Another issue that will continue to push the ascendance of the NAS is the “plateau” effect on the client side. The Intel i5 and i7 processors offer a good level of performance for clients, and we don’t see users clamoring for Nehalem and Westmere in their desktops. Gigabit Ethernet and ProRes codecs are providing a similar plateau effect. Good enough is good enough! As more and more clients become iMacs and laptops, the server will be able to reasonably support more and more of them.

For post production facilities, this means we should continue to see inexpensive NAS solutions become more capable. As 10Gb Ethernet and SSD devices arrive, customers will begin to see some amazing capabilities emerge, since the system buses and memory controllers are already up to the task. I can see the day when customers are doing HD multi-clip editing to several clients over their existing Cat 6 cabling.
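
As a rough sketch of that claim (the ProRes bit rate and the usable wire rates below are approximations I’m assuming, not measurements), the stream-count arithmetic looks like this:

```c
/* Rough stream-count arithmetic for HD editing over Ethernet.
 * Bit rates are approximate assumptions, not measured figures. */
#include <stdio.h>

int main(void) {
    double prores_hq_mbit   = 220.0;  /* ~ProRes 422 HQ, 1080 at ~30 fps  */
    double gige_usable_mbit = 900.0;  /* assumed usable gigabit payload   */
    double tengig_usable    = 9000.0; /* assumed usable 10GbE payload     */

    printf("HQ streams per gigabit client link: ~%.0f\n",
           gige_usable_mbit / prores_hq_mbit);
    printf("HQ streams per 10GbE server link:   ~%.0f\n",
           tengig_usable / prores_hq_mbit);
    return 0;
}
```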

To summarize, SAN storage (shared block access) is expensive and difficult to administer, develop and support. NAS storage (shared file access) is simple, inexpensive and scales elegantly. Users should expect NAS solutions to become more and more attractive as server chipsets continue to evolve and become faster and more scalable.

Steve Modica is the CTO of Mac network specialists Small Tree (www.small-tree.com), based in Oakdale, MN.