
Recent Blog Posts in 2012

December 17, 2012
  Don't Waste Your Money On Faster Storage
Posted By Larry Jordan
Most of the time when we work on our computers, the files we create are relatively small. We load the files into the RAM memory of our computer and work away.

Working with media, however, is different. Media files tend to be massive. Most often, they need to be played in real-time and they are too big to be fully stored in RAM memory.

Why am I saying "RAM memory?" Because many people use the term "memory" to mean "hard disk storage." For me, computer "memory" means RAM - that which disappears when power to the computer is turned off.

The "Data Transfer Rate" is the speed at which data travels between your hard disk storage and your computer, and it is measured in "bits per second" (bps).

To keep the numbers from getting too big when discussing hard disks, bps is often converted to MBps, or "megabytes per second." The conversion equation is simple: bps/8,000,000 = MBps. (Engineers sometimes use 8,388,608 instead of 8 million.) We also write MBps as MB/s.
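That conversion is easy to sketch in code. A minimal Python example (the function name and the decimal/binary flag are mine, for illustration):

```python
# Convert a raw data rate in bits per second to megabytes per second.
# Uses the simple decimal convention from the text (1 MB = 1,000,000 bytes);
# engineers sometimes use the binary convention (1 MB = 1,048,576 bytes).

def bps_to_mbps(bits_per_second, binary=False):
    """Return megabytes per second for a rate given in bits per second."""
    bytes_per_mb = 1_048_576 if binary else 1_000_000
    return bits_per_second / 8 / bytes_per_mb

# FireWire 800's nominal signaling rate is 800 million bits per second:
print(bps_to_mbps(800_000_000))                         # 100.0
print(round(bps_to_mbps(800_000_000, binary=True), 1))  # 95.4
```

Note how the two conventions diverge by a few percent, which is one reason quoted speeds never quite match measured ones.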

There are three things that determine the speed of your data between the hard disk and your computer:

- How you connect your storage (USB 2, USB 3, FireWire, Thunderbolt...)
- The number of hard disks in your device (a hard drive has one disk, RAIDs have several)
- The internal hardware architecture of your storage device


Changing how you connect your hard disk changes the speed of your data.

For example, USB 2 transfers data around 15 MB/s, FireWire 400 transfers data about 25 MB/s, while FireWire 800 is around 85 MB/s. If you spent money and bought an infinitely fast drive, then connected it to your computer via USB 2, your data would only travel at 15 MB/s, which is the speed limit of USB 2.

There's a lot of conversation today about the speed of SSD drives. SSD, which stands for Solid State Drive, can be very fast. But, if you attach an SSD drive using a very slow protocol - say, FireWire 400 - you'll never see the speed that SSD can deliver. The FireWire 400 protocol is too slow.


Connected internally, directly to the data bus of your computer, a single standard 3.5" hard drive delivers about 120 MB/s of data.

If we connect that drive via USB 2, FireWire 400, or FireWire 800, the connection protocol is slower than that of the drive, so we can't get the maximum speed from the drive because it is limited by the protocol.

However, other connection protocols, such as USB 3 and Thunderbolt, can transfer data at much faster speeds than a single hard drive. USB 3 maxes out around 480 MB/s, while Thunderbolt can deliver up to 1.1 GB/s.

If you are attaching a single hard drive via USB 3, the fastest speed you can expect is about 120 MB/s, which is the maximum speed a single hard drive can deliver.

Here, the limitation is not the protocol, but the speed of the hard disk.

RAIDs allow us to combine multiple hard disks into a single unit. Now we are able to combine the speeds of multiple hard drives, so that the speed of the RAID is greater than the speed of a single hard drive.

At this point, the maximum speed of a RAID is the LOWER of two numbers: the sum of the speeds of all the hard drives it contains, or the maximum speed of the protocol that connects the RAID.

For example, a 2-drive RAID attached via USB 3 would transfer data at 240 MB/s (the sum of the two hard drives), while a 20-drive RAID attached via USB 3 would transfer data at 480 MB/s (the maximum transfer speed of USB 3).
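The rule above can be sketched in a few lines of Python (the protocol ceilings reuse the article's ballpark figures, and the helper name is my own):

```python
# Estimate a RAID's effective transfer rate: the LOWER of the summed
# drive speeds and the connection protocol's maximum. All speeds are
# in MB/s and follow the article's real-world estimates, not specs.

PROTOCOL_CAP_MBPS = {
    "USB 2": 15, "FireWire 400": 25, "FireWire 800": 85,
    "USB 3": 480, "Thunderbolt": 1100,
}

def raid_speed(num_drives, protocol, per_drive_mbps=120):
    """Effective MB/s for num_drives identical disks behind a protocol."""
    return min(num_drives * per_drive_mbps, PROTOCOL_CAP_MBPS[protocol])

print(raid_speed(2, "USB 3"))   # 240 -- limited by the drives
print(raid_speed(20, "USB 3"))  # 480 -- limited by USB 3 itself
```

The `min()` is the whole story: adding drives only helps until the sum of their speeds crosses the protocol's ceiling.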


Even when you've picked the fastest protocol, and connected a gajillion hard disks, there's one more factor that determines overall data transfer rate: system overhead.

This gets really complicated really quickly. However, the short answer is that every hard disk, RAID, and protocol needs to process the data both before it is sent and after it is received. And this processing takes time.

Some manufacturers opt for the fastest data transfer speeds, but sacrifice flexibility. Others opt for greater flexibility, but sacrifice speed. Some do their data processing in hardware, others use software. Some opt for maximum speed, others for data security. It is almost impossible to figure out in advance how much performance will be lost due to system overhead.

For this reason, always be skeptical when a storage vendor describes their speeds using phrases like: "Up to 480 MB/s," or "Up to 1.1 GB/s."  They are quoting the speed of the protocol, not the speed of their device.


The easiest way to estimate the speed of a hard drive or RAID is to multiply the number of hard drives it contains by 100 MB/s.  This provides a ballpark range of the data speed to expect.


If all you are doing is editing a single stream of AVCHD video, any hard disk and protocol will be fine.

As you start to edit multiple streams of video, migrate to larger image sizes, or upgrade to more professional video formats, the speed of your storage system becomes critical.

Here are some simple rules:

- A single hard drive can deliver about 120 MB/s of data
- USB 2, FireWire 400, and FireWire 800 are slower than a single drive
- USB 3 and Thunderbolt are faster than a single drive
- If your computer supports plug-in cards, eSATA and mini-SAS are also very fast protocols
- RAIDs are always faster than single drives
- RAID speeds are, generally, the sum of the speeds of the drives they contain

Here's an article that can help you learn the differences between RAIDs, SSDs, and The Cloud.

Larry Jordan is a producer, director, author, editor, and Apple Certified Trainer with more than 35 years of professional experience. Based in Los Angeles, he is also a member of the Directors Guild of America and the Producers Guild of America, and author of eight books on Final Cut Pro and Adobe Premiere Pro. Visit his website at

December 06, 2012
  Configuring an iMac for video editing
Posted By Larry Jordan

This blog was first published on my Website, and is a good follow-up to my last Post blog.

I bought a new 27-inch iMac when they went on sale Friday, specifically for video editing. And, because I've had a lot of requests recently, I wanted to tell you what I bought and why.

I bought: 27-inch iMac

All versions of Final Cut Pro and Adobe Premiere like large screen sizes. A bigger screen allows us to see more of the image with more detail. In my case, because this system is exclusively for video and audio editing, the bigger screen was an easy decision.

I also have a second 27-inch Apple monitor sitting unused on a shelf that I want to experiment with. I've generally found dual monitor displays at client sites to be more trouble than they are worth. But, I've never worked with one for a long period of time, so I'm looking forward to seeing how this new set-up works.

However, for my Webinars, I use a smaller 21-inch Mac, because I find software easier to learn when the screen sizes are kept smaller.

I bought: 3.4 GHz, Quad-Core Intel Core i7
CPU speed is important, but it isn't everything. The speed and connection of your storage play a much bigger role in overall system performance than the CPU. So does the speed of the graphics card.

In the old days, the CPU did all the work. Today, that load is shared among a variety of components. For this reason, I decided to get a fast CPU, but put the money I saved by not buying the fastest CPU toward faster storage. Especially for multicam work, faster storage provides more benefits than a faster CPU.

Given the speed of today's processors, just about any CPU is more than fast enough to edit any flavor of HD video.

I bought: 1TB Fusion drive

This new technology from Apple combines the speed of SSD (Solid State Drive) with the storage capacity of standard spinning hard disks.

However, the Fusion drive delivers the fastest speeds when it is accessing the same material over and over. This means that it is optimized for the operating system and applications. Since we are constantly changing media, a Fusion drive won't deliver the same level of performance with our media.

I have long been a fan of storing media to a separate drive, rather than on the boot drive. In the past, this was primarily for performance reasons. Now, the internal drive is faster, but an external drive allows far more storage and flexibility.

I strongly recommend using an external RAID system, connected via USB 3, or Thunderbolt (more on that in a bit), because it will store more than any single internal drive, provide more than enough speed, protect your data using the data redundancy in the RAID, and allow easy upgrading by simply swapping out devices.

For me, the ideal situation is the Fusion drive for the OS, and an external RAID-5 for all media.

I bought: 16GB RAM

Both Premiere and FCP will use all the RAM you have available. So will video compression software. 16GB is a nice balance between performance and price. And, unless you are creating some truly massive edits, you won't notice enough difference between 16 and 32GB of RAM to justify the additional cost.

I bought: Nvidia GeForce GTX 675MX

This was a harder decision. Both Premiere Pro CS6 and Final Cut Pro X take advantage of the graphics card. However, in the CS6 release, Adobe initially supported only the graphics cards in the MacBook Pro. (Traditionally, Adobe supports only Nvidia cards, and Apple gear uses ATI, which is now AMD.)

Now that the new iMacs include Nvidia, I'm hoping (but do NOT know for sure) that Adobe will quickly support the graphics cards in these new Macs. I've sent a note off to my friends at Adobe to see what I can learn and will let you know what I find out.

[NOTE: Even if Adobe doesn't support the graphics cards, Premiere Pro CS6 will run perfectly well using just the CPU. It won't do as much, or work as fast, as when the graphics card is involved, but you can still use Premiere on these new systems.]

This isn't the fastest GPU available, but it is the second fastest. Again, for me, this was a balance between performance and price. Video editing requires a fast overall system, balanced among all the major components.

I bought: 1GB GDDR5

The RAM in a graphics card determines how many elements, for example frames of video, it can store for processing.

3D software and Motion make extensive use of GPU RAM. However, video editors use it principally for pixel painting. Since I am an editor more than a motion graphics designer, I don't need the extra GPU RAM. So, I stayed with the base level of 1GB.

I bought: Apple keyboard with Numeric Keypad (wired) and mouse (wired)

Wireless gear is great, until your system starts acting up. At that point, you need a wired keyboard for maintenance. Also, there are a number of very useful keyboard shortcuts in all my applications that take advantage of the keypad.

If I were shooting a television commercial, I'd use a wireless keyboard and mouse because it looks cool on camera. Because I am editing television commercials, I'm using a wired keyboard and mouse because they work great, decrease my stress, allow me to easily do maintenance on my system, and don't require batteries.

I bought: (um, nothing yet)

Since ProRes 422 is the default video codec of Final Cut Pro X, and a great codec to use for Premiere, I need storage that is big enough and fast enough to handle this format.

ProRes 422 requires about 18 MB/second of data transfer between the computer and storage. Because much of what I shoot is 3-5 camera multicam projects, this means I need to move about 100 MB/second of data.

The problem is that FireWire 800 tops out around 80-85 MB/second. Gigabit Ethernet tops out around 100 MB/second, assuming your switch and server can handle the speeds; most data switches that cost less than $200 can't handle that much data over a long period of time.

[NOTE: A "switch" is a device that allows multiple computers to connect to the Internet or a server by switching data from one device to another. These are made by NetGear, LinkSys, Cisco and others. A "server" is a computer with a large hard disk or RAID that allows multiple computers to share the same files. Servers can be as simple as a Mac Mini, or as complex as an Avid Isis system.]
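The bandwidth arithmetic above can be sanity-checked with a short Python sketch (the per-stream rate and connection ceilings are the figures from the text; the helper function is my own):

```python
# Check which connections can sustain a multicam ProRes 422 edit.
# ProRes 422 needs roughly 18 MB/s per stream (per the article);
# the connection ceilings are the article's real-world estimates.

PRORES_422_MBPS = 18

CONNECTION_MBPS = {
    "FireWire 800": 85,
    "Gigabit Ethernet": 100,
    "USB 3": 480,
    "Thunderbolt": 1100,
}

def can_sustain(streams, connection):
    """True if the connection can feed `streams` ProRes 422 streams."""
    return streams * PRORES_422_MBPS <= CONNECTION_MBPS[connection]

# A 5-camera multicam needs about 90 MB/s:
print(can_sustain(5, "FireWire 800"))  # False
print(can_sustain(5, "USB 3"))         # True
```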

This means that I need storage that connects via either USB 3, or Thunderbolt. (This is an iMac, which means that plug-in cards are not an option.) Yes, I could buy converter boxes - for example, from Thunderbolt to eSATA, or mini-SAS, but these boxes cost several hundred dollars apiece. If I were integrating existing hardware, this would be an inexpensive way to go. However, I'm buying all new gear.

It is at this point that I'm puzzled about why storage vendors are having such a hard time shipping RAID-5 Thunderbolt-based storage devices. Yes, Promise Technology is out there, and they recently dropped their prices, but where are the traditional storage vendors? G-Technology and LaCie both offer RAID-0 (which is fast, but provides no data safety in the event one of the hard drives in the unit dies), but no RAID-5. Drobo was way late in shipping their Thunderbolt storage, and I haven't had a chance to look at the shipping product. And, as far as I know, traditional RAID vendors haven't even announced RAID-5 storage with Thunderbolt connectivity. [Editor's note: G-Tech plans to ship their Thunderbolt RAID-5 in 2013. No specific date was offered.]

It is troubling to me that this new format is taking so long to take shape and appear in quantity in the market. Is this a licensing issue? Technical or integration issues? Are there hidden problems inherent in the Thunderbolt format that are holding things up? I have been inquiring about this for months and have not gotten a clear answer from any vendor.

So, I decided to hold off buying storage until I could do more research. My iMac is still a month away from shipping, so I have some time to figure this out.


I bought: (also, nothing yet)

Long-term data storage, today, means LTO tape. The problem is that all the tape vendors - Cache-A, The Tolis Group, XenData - provide solutions much closer to $10,000 than to $2,000.

This is the other big issue in our industry: how do we protect the assets that we shot for 5, 10, 20 years into the future? If you are a major studio, money is no object and there are many solutions. However, if you are an independent producer, or small production company, dollars are hard to come by. There are no good archiving solutions that are reasonably priced.

I asked the three founders of Ultrium, the consortium of HP, IBM, and Quantum that invented LTO, when they expect to provide Thunderbolt-based LTO storage. All three said that they had nothing to announce and that the consortium did not have a position on how devices connect to computers.

Again, we could take existing gear - currently costing $7,000 - 9,000 - and use Thunderbolt converter boxes to connect it to an iMac, but this simply takes a unit which is already too expensive and makes it even more unaffordable.

[NOTE: The Tolis Group announced yesterday new gear aimed at creative producers. The ArGest line supports both LTO-5 and LTO-6, and the Thunderbolt version, which still requires a converter box, starts at $6,898. (Information about this new product is not yet on their Website.)]
I've said this before and I'll say it again: the LTO vendor that can figure out how to provide a direct-attached LTO drive that works with a Mac and connects directly via Thunderbolt for less than $4,000 is going to make a lot of money.

For now, I really need some way to archive my media. But none of the units out there support either my budget or my computer.

Buying any computer is always a trade-off between dreams, performance and budget. I'm looking forward to getting my new system. I'm also looking forward to figuring out what I can use for external storage. To me, that is the key to successful video editing - storage that is large, fast, secure and affordable. And some way to back it all up.

Larry Jordan is a producer, director, editor, author, and Apple Certified Trainer with more than 35 years' experience. Based in Los Angeles, he's a member of the Directors Guild of America and the Producers Guild of America. Visit his Website at


November 29, 2012
  Are the new iMacs good enough for video editing?
Posted By Larry Jordan

A question I get asked all the time is whether the current iMacs (or, insert a computer model here) are "good enough" for video editing.

The answer is "Yes!" But, that isn't the right answer, because that isn't the right question.

In the past, the sheer horsepower of our computers was far less; so much so that most computers could only play back video using smaller image sizes or lower frame rates. (Anyone remember watching computer videos that were 320x240? I remember building a business where the only videos we could create were that size.)

Today's computers can easily edit images that are full HD (1920x1080), extending up into 2K, 4K and, according to what I read in Post, 8K images!  (8K images sound amazing, but what actress wants to see her face with that much detail?  Make-up was invented for a reason.)

When you think about it, a 4K image is less than 16 megapixels, so the challenge isn't the image size; it's playing that many frames per second in realtime.

This gets me to the heart of the issue: any Mac, even a Mac Mini, has the horsepower to edit video. The real constraints are the speed of your storage, the complexity of your video codec, and the depth of your effects.

In the past, when many of us were editing standard-definition DV video, we were working with FireWire 800 drives. DV video requires a data rate (that is, the speed of transfer between your hard disk and computer) of 3.75 MB/second. A FireWire 800 drive delivers around 80 MB/second, so everything was fine. (USB 2, in comparison, only provided about 10-15 MB/second, which made it woefully slow for any serious video work.)

But as image sizes and frame rates escalated, the data rate of our hard disks became an issue. P2 media requires 15 MB/second. ProRes 422 requires 18 MB/second. R3D files require 38 MB/second. Uncompressed HDCAM SR requires about 150 MB/second.

Our poor FireWire 800 drive can't handle the load. And, unless you had a Mac Pro, you didn't have an option for anything faster.
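For perspective, the single-stream rates quoted above can be checked against a FireWire 800 drive's roughly 80 MB/second of real-world throughput: most single streams squeak by, and it's uncompressed formats and multiple simultaneous streams that overwhelm it. A quick Python sketch (all figures are the article's):

```python
# Compare the article's per-codec data rates, in MB/s, against a
# FireWire 800 drive's ~80 MB/s real-world throughput.

CODEC_MBPS = {"DV": 3.75, "P2": 15, "ProRes 422": 18,
              "R3D": 38, "Uncompressed HDCAM SR": 150}

FIREWIRE_800_MBPS = 80

for codec, rate in CODEC_MBPS.items():
    verdict = "fits" if rate <= FIREWIRE_800_MBPS else "too fast"
    print(f"{codec}: {rate} MB/s -> {verdict} on FireWire 800")
```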

That's why Thunderbolt is so exciting for Mac users, and USB 3 for PC folks. Both of these new technologies provide data rates 5-20 times faster than FireWire 800. The only hitch is that they require a new computer that supports them. Oh, and drives that support the format as well. (The delays surrounding the release of Thunderbolt drives are worthy of a story in themselves.)

(Side note on storage: A single hard drive, manufactured today, has a maximum data transfer rate of around 150MB/second. To take full advantage of the transfer speeds offered by USB 3 or Thunderbolt requires a RAID, which is a collection of hard drives all working together. To estimate the potential speed of a RAID connected via USB 3 or Thunderbolt, multiply the number of hard disks it contains by 100MB/second. USB 3 has a maximum speed of 480MB/second. Thunderbolt has a maximum speed of 1.1 GB/second.)

But the speed of our storage is only one issue. The next big gating factor is the complexity of the video codec. Camera manufacturers, in order to squeeze more data onto the relatively small storage that a memory card provides, are using highly efficient codecs ("codec" stands for COmpressor/DECompressor, and defines the math used to reduce the size of video files) to squeeze video into ever smaller spaces.

The problem is that these codecs, with names like AVCHD, H.264 and MPEG-4, create video that is extremely hard to edit. Video editing software has adopted two different ways to process these images. Adobe Premiere Pro CS6 off-loads the heavy number crunching to the GPU (Graphics Processing Unit, sometimes called the "graphics card") so that it can decompress the images fast enough to play and edit them in realtime.

Apple Final Cut Pro X (and, to a lesser degree, Final Cut Pro 7) converts the media, a process called "transcoding," into a more editing-friendly codec called ProRes 422. (While FCP X can edit video in its native format and does use the GPU, its first option is to transcode the media.)

The benefits to harnessing the GPU are that file sizes remain small and no time is lost for transcoding. The disadvantages are that color corrections occur in a more restricted space, final rendering and output can take longer, and not all Macs have graphics chips that are supported by Premiere Pro. So far, only the latest model MacBook Pros are supported by Adobe.

The benefits to transcoding are that any computer, regardless of its graphics chip, can efficiently edit transcoded media, effects and color correction are performed in a broader, more accurate color space, and overall system performance is improved. The disadvantages are that transcoded file sizes are much larger than the camera native files and time needs to be spent in transcoding. (Final Cut Pro X minimizes this extra time by transcoding files in the background, allowing you to edit in the foreground.)

The last big issue, and the only one where CPU speed really makes a difference, is your effects. Most effects today are still CPU-based, which means the more effects you add to a clip, the harder your CPU needs to work to create them.

Third-party developers are slowly enhancing their filters and effects to support GPU processing, which adds a huge boost of speed when calculating an effect.

Video editing remains one of the most challenging tasks we can do on a computer. But, more than just the computer is involved. When making a decision on what gear to get, keep the following thoughts in mind:

*    The speed of data transfer between your computer and your storage is the biggest determiner of system performance. Thunderbolt is better than anything. USB 3 comes in second.
*    Make sure the graphics chip is supported by your editing software
*    More RAM provides better performance than a faster processor
*    The fastest CPU is not, generally, worth the extra money. Go for the system in the middle of the range and invest more money in your RAM and storage

As editors, it is easy to get distracted by the speed of our CPU, or model of our computer. However, if you follow these guidelines, you can spend less money by paying attention to what really generates the best performance.

Larry Jordan is a producer, director, editor, author, and Apple Certified Trainer with more than 35 years' experience. Based in Los Angeles, he's a member of the Directors Guild of America and the Producers Guild of America. Visit his Website at

October 24, 2012
  Magnetic tape archiving for media & entertainment content
Posted By Tom Coughlin
HOLLYWOOD - IBM spoke on the use of LTO-6 magnetic tape for digital archiving at the 2012 SMPTE Technical Conference, here. LTO-6, with a raw capacity of about 3.1 TB, is double that of LTO-5 tape. Data rates for LTO-6 increased by over 40% from the LTO-5 tape format.

LTO-6 also supports the LTFS tape file format, allowing data to be accessed from tape as if it were any other connected storage device (like a hard disk drive or a USB storage device). IBM has released single-drive LTFS implementations as open source, although it charges for multi-drive library systems. The IBM presentation showed how an LTO LTFS file system can be used on a LAN for a file-based production workflow, as shown in Figure 1.

The current LTO consortium roadmap projects raw storage capacities of about 16 TB with generation 8. At a recent Tape Summit, IBM said that it was developing technology to enable LTO tape capacities of at least 125 TB (the company demonstrated a 35 TB tape in 2010). However, these monster tapes will not appear for several years (likely beyond a hypothetical LTO-10). With tape capacities doubling roughly every other year, a 125 TB tape probably won't appear on the market for at least 10 years...but it is good to know.
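That 10-plus-year estimate follows from simple doubling arithmetic. A Python sketch, assuming capacity doubles each generation with a new generation roughly every two years, starting from LTO-6 at about 3.1 TB in 2012 (these are illustrative assumptions, not roadmap data):

```python
# Project LTO capacity forward under a simple doubling model:
# one new generation about every two years, each doubling raw capacity.
# Start: LTO-6 at ~3.1 TB in 2012, per the IBM talk.

capacity_tb, generation, year = 3.1, 6, 2012
while capacity_tb < 125:
    capacity_tb *= 2
    generation += 1
    year += 2
print(generation, year, round(capacity_tb, 1))  # 12 2024 198.4
```

Under these assumptions, 125 TB isn't reached until a hypothetical generation 12 around 2024, consistent with the "at least 10 years" estimate.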

LTO-6 tapes use shingled magnetic recording technology, with overlapping tracks of data, to help achieve the higher storage capacity. LTO-6 does not use barium ferrite magnetic media, as does the IBM TS1140 tape announced in 2011. Barium ferrite will likely be used in future generations of LTO tape to help boost storage capacity.

With higher frame rates, multiple-camera capture, and higher resolutions, the volume of professional media content will be increasing enormously over the same period of time. It is likely that petabytes of content generated in a single video project will become quite common in 10 years. Thus, a larger archive storage volume will be important to protect and preserve this digital content. For more on these topics, you may be interested in attending the 2013 Storage Visions Conference next January in Las Vegas,

Although hard disk drives can offer lower latency to access stored content, the current capacity growth rate of LTO tape appears to be about 40% annually (according to the IBM talk at the SMPTE conference), compared to the current HDD areal density growth rate of about 20% annually. Although higher HDD capacity growth rates are likely with the introduction of heat-assisted magnetic recording in a few years' time, it appears that tape capacity growth will be faster than that of HDDs, at least for the short term.

The current difference in magnetic tape and hard disk drive storage growth rates will help maintain the cost advantage of archiving on tape vs. hard disk drive systems (see Figure 2). With a stated archival life of 30 years (under proper temperature and humidity conditions), magnetic tape looks like it will serve as an important element in archival storage for many years to come.

Thomas Coughlin is the principal of Coughlin Associates and can be reached at


September 26, 2012
  IBC 2012 - A Retrospective (Part 2)
Posted By Sam Johnson
Blackmagic Teranex
I remember visiting the Teranex stand at IBC last year and looking at all they had to offer, as we were looking for a hardware standards converter that didn't cost the earth. Unfortunately, Teranex's 2011 offering was still out of reach for a small post-house budget.

So I was pleased to hear about Blackmagic Design's recent acquisition of Teranex, and then the release of the standards conversion processors at this year's NAB.

Blackmagic offers two flavors, a 2D and a 3D model, at $1,995 and $3,995 respectively. They offer the following features:

- Up Conversion
- Down Conversion
- SD/HD Cross Conversion
- SD/HD Standards Conversion
- Cadence Detect and Remove
- Noise Reduction
- Adjustable Scaling
- Aspect Ratio Conversion
- Smart Aspect
- Converts Timecode
- Includes 16 Channel Audio
- 3D Camera Align (3D Model Only)
- 3D Dual Channel Standards Conversion (3D Model Only)
- 3D Simulation (3D Model Only)

For a hardware solution that used to cost upward of $90,000, and is now no more than $4,000, what's the catch, right? I have no idea how they have kept costs so low, especially since Blackmagic insists that they are still using Teranex components. Which leads me to believe that it must be down to mass production.

I was given a demo of a 29.97fps 1920x1080 to 25fps 1920x1080 conversion. I must say the result was brilliant. None of the 3:2 pulldown issues that usually interfere with these types of interlaced conversions. Other demos were limited, but from what I saw, I can't wait to get my hands on it and give it a real run for its money.

Software solutions are coming along, and are decent enough for speed-up conversions, but for now, nothing beats a hardware standards converter. Especially one at this price.

It is available now worldwide. For more information on the Blackmagic Teranex processors, head here:


September 21, 2012
  IBC 2012 - A Retrospective (Part I)
Posted By Sam Johnson
I've had a week and a bit to digest this year's IBC and thought I'd reflect. T'was a funny conference. It was filled with a lot of the same tech that was announced/demoed at BVE, NAB, and even IBC 2011, so there was a slight lack of new announcements. That said, there were a few interesting things that I will highlight.

It was the year of MAM/DAM solutions. Almost every stand I visited had some sort of cloud solution. Of the ones that interested me and offered a complete solution, Cantemo (with a Vidispine backend) was arguably the best on offer.

Cantemo ( offers two solutions, MediaBox and Portal. The main difference between the two is licensing and site-to-site collaboration, but both offer a complete MAM solution. Built around the Vidispine API, you can collaborate, annotate, and integrate into your NLE. I was shown an integration/round-trip workflow between Cantemo and Adobe Premiere and was very impressed. Using the Adobe Extension Manager, you are able to integrate straight into Premiere as a docked window, making it that much easier for ops to get access to the media. It has also been developed for FCP 7 and X, though the integration isn't as seamless.

It has a proprietary transcoder, which can encode on import and deliver to multiple locations, should you wish. If Cantemo's own transcoder isn't good enough, you have the ability to integrate with, or send to, third-party software such as Telestream's Episode. In fact, the sky is the limit with Cantemo. Scripted using Python, Cantemo have provided many a customer with customized solutions.

One of the larger announcements at this year's IBC was Adobe's Anywhere platform. A tool for editors using Adobe Premiere and Prelude to cross collaborate by using Adobe streaming servers.

We were given a demo of Anywhere by evangelist Jason Levine and a colleague in Berlin. They were working with 1920x1080 media (codecs not predefined) and were round-tripping sequences between each other whilst connecting to the same media. It was a very slick presentation, and they insisted that no proxies were used. Instead, it is Adobe's Mercury Streaming Engine that optimizes for your network and dynamically maintains realtime playback. Full technical details are yet to be released, though.

It's great tech, but I thought: why would I use this? I've got a perfectly good SAN. It seems as if it is being aimed at production/post houses that use freelance talent. They would not be limited to local talent; instead, anyone from around the world (who had Premiere Pro CS6) would be able to log into the shop's server and begin working. Adobe is also looking at aiming it toward tech managers, who could use it in a disaster-recovery scenario where staff do not have access to the building but are still able to work off-site.

It is currently limited to Premiere Pro and Prelude, though should it take off, I imagine After Effects and the other apps would follow suit.

Sam Johnson is currently senior production engineer at AMV BBDO (, a large UK-based ad agency.


September 18, 2012
  Making the switch: Mac to PC
Posted By David Bourne
In my work as a video editor and as an instructor, I try to focus on the importance of storytelling over the importance of the tools. But it's hard. Good storytelling elements never change, whereas video editing tools change like the weather.

For example, I often get asked by my small business clients, "What's the best editing software or hardware these days?" This question bugs the heck out of me.

I want to tell them, "Don't focus on the tools. Focus on storytelling, because that's what matters most." But I understand the question all too well. As small business owners, my clients want the same things that I want: an efficient, organized solution that gives them a better, more creative result.

But don't get me wrong here. It's not the "What tool is best?" question that bothers me. It's that I spend so much time and energy focusing on the tools myself!


In my 21 years as a visual storyteller, I've switched from tape-based AB roll editing to Premiere on the Mac, to the Media 100, then back to Premiere on the PC. I've tried Avid, I've been a faithful Final Cut Pro user, and I recently even tried to like FCP X.

I was super excited about FCP X before it came out, but after trying it, I knew it was time to switch. I simply did not like it, for many reasons that I won't go into now. I knew that Apple was going in a different direction than I was. The Mac Pro needed a major update that has still not come as of September 2012. Apple seemed to put its faith in the expensive and slow-to-grow Thunderbolt option, so that did not excite me either.

I dreaded a move to another editing solution, but I knew it had to happen. Switching takes time, energy and money but I knew I had to do something. My 4-year-old Mac Pro was bound to die and I'd be up the creek.

- Was the Dell/Nvidia Combo Worthy of the Switch?
That's when Dell threw me a curve ball out of nowhere. They were looking for Mac users who would consider trying a Dell Workstation. As you can see in my initial response and project history, I told them yes, but not to expect too much. I love my Macs and had no plans to switch back to a PC.

At first, I was skeptical. It was not because of Dell, though. I had used Dells for many years when I worked at Duke University and at UNC. My skepticism came from the difficulty of integrating two platforms: the Mac and Windows. I plan to keep using my 4-year-old Mac Pro, so the difficulty of throwing a Windows machine into my workflow did not excite me, even though I knew the Dell would be a faster editor.

By the end of my review process, I discovered many advantages to adding a Dell.

Check out my list and see why even a Mac user should consider a Dell solution.

1. Adobe's Premiere Pro Has Become a Top Choice
I switched back to Premiere Pro after being a Final Cut Pro user for four years. Premiere is better at handling DSLR footage than FCP, and it has better integration with Photoshop, After Effects and Audition.

Adobe's Mercury Playback Engine does a great job with the color conversion. When used with Nvidia Quadro cards you get very fast, realtime edits and quick output renders. I'm using the Quadro 5010M and it screams.

Premiere's project files and media will work on both the Mac and the PC with few problems. This is a huge advantage, especially since Adobe has put Premiere on the cloud. I can now use two copies of all their cloud software: one set on the Mac and one on the PC. You had to buy two copies to do that before.

2. Cross-Platform File Sharing has Gotten Better
Like most of my small business clients, I edit everything myself, so file sharing between machines has not been a big issue. When I added the PC into my workflow, it became a problem. I use Gigabit ethernet for networking, but direct disk access is best.

Thankfully, the exFAT file system, which works with Windows 7 and OS X 10.6.5 or above, allows me to swap eSATA or USB 3 hard drives between platforms. (Never mind that my Mac Pro does not have a good USB 3 solution.)

3. The Dell Workstations are Very Capable
Like I said earlier, Dell and Nvidia gave me an awesome machine to try out and to keep. The M6600 laptop has a fast processor, a built-in eSATA port, two USB 3 ports, FireWire, 3 internal drives (two 2.5 in. drive slots, one micro drive) and there is still room for a DVD burner. The 17 in. display is wonderful, and the Quadro drives external displays via VGA, HDMI, or DisplayPort.

One big recent surprise to me: Apple no longer sells a 17 inch laptop! The 15 in. Retina Display is certainly great, but I'm not sure it makes up for the smaller physical size.

Speaking of size, I must add that the Dell's larger size comes with added thickness and weight over the MacBook Pro, but that's the tradeoff for more flexibility and power without having to add Thunderbolt peripherals.

4. Windows is Better than it Used to Be
Please excuse my Mac bias, but Windows annoys the heck out of me. My last experience was with Vista, so it could not have gotten much worse than that. All in all, working with Windows 7 on the Dell has not been too bad.

One thing that helps a lot: most of my daily productivity apps (Evernote, Gmail, and Dropbox) are now in the cloud. I miss the beautiful simplicity of the Mac OS when I'm editing on the Dell, but in a way that is good. It keeps me focused on what I'm supposed to be doing: editing video!

5. Apple has Shifted Focus from Professional Video Editors

As a professional editor, I feel like Apple has given us the proverbial finger. And we were their darlings for so many years! (Students of brand loyalty and emotional connections to products can learn a lot by looking at Apple's shift in this market.)

FCP X now caters to the middle of the market, where more folks are buying. This is not good for many video pros, nor for small businesses that need more flexible solutions.

Dell, on the other hand, is bending over backwards for the pro video market. They make it very clear that they want our business.
- What's the Answer to the Question Now?
Ask me today "What tool is best?" and I will recommend Premiere Pro with a Quadro card in a Dell workstation. Or, if you already love OS X, buy a Mac and know that you will still be highly compatible with the majority of the world's PCs.

David Bourne is the Owner of Bourne Media.

Continue reading "Making the switch: Mac to PC" »

September 14, 2012
  IBC: storage, archiving, asset management
Posted By Tom Coughlin
AMSTERDAM - Video capture, production, distribution and archiving require significant investments in equipment and software.  As a consequence, significant media industry events include a lot of digital storage and stored content management booths on the exhibit floor, and some sessions focusing on storage, archiving and asset management topics.  This piece looks at some of the many products and services on display at the 2012 IBC in Amsterdam.

There were storage offerings aplenty, with a great many interfaces including direct connect as well as network connections.  Among the products shown, Marquis was offering project parking for archiving, distributing and restoring Avid projects.  Dynamic Drive Pool (DDP) showcased their Ethernet SAN products for file-based workflows. Promise Technology was showing their latest Thunderbolt products, and these systems could be seen at many other booths in the IBC exhibit area. Atto was another company whose PCIe, Thunderbolt and HBA products were in wide use at the IBC.  HGST (a division of Western Digital) was also showing its Thunderbolt and other storage products, and many storage arrays at the IBC were using HGST HDDs.

Active Storage was demonstrating their Mmedia systems, offering an integrated media creation storage platform for storing and managing content from ingest to archive. The mRAID offers scalable and easy-to-use mass storage, including an environmental processor to control the system environment and out-of-band communication. Their mVault scalable media archive offers petabytes of near-line content storage.

LaCie (soon to be part of Seagate Technology) was showcasing Thunderbolt external SSD drives (two of these could be used in a RAID-0 configuration to provide up to 1 GBps data rates).   Likewise CalDigit was showing SSD as well as HDD external storage products.  Samsung and Toshiba were showing storage and memory components at the IBC.  There were European storage product distributors such as Stordis showing products by many vendors at the show.  In addition to iSCSI and Fibre Channel SAN products, Studio Network Solutions also offered XTarget software to turn any Mac with attached storage into an iSCSI SAN.
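The LaCie figure above is easy to sanity-check: RAID-0 stripes data across its member drives, so sequential throughput roughly adds. A minimal sketch, assuming a hypothetical ~500 MB/s per SSD (the per-drive rate is my assumption, not LaCie's spec):

```python
# RAID-0 stripes reads/writes across member drives, so the ideal
# aggregate sequential rate is roughly the sum of the members' rates.
PER_DRIVE_MBPS = 500  # assumed sustained rate of one SSD (hypothetical)

def raid0_rate(drives: int, per_drive_mbps: float) -> float:
    """Ideal aggregate sequential throughput of a RAID-0 set, in MB/s."""
    return drives * per_drive_mbps

print(raid0_rate(2, PER_DRIVE_MBPS))  # two striped SSDs: 1000 MB/s, ~1 GBps
```

Real-world rates fall short of this ideal, and RAID-0 doubles the impact of a single drive failure, so it suits scratch media, not archives.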

GB Labs debuted their Space network storage systems, offering various sized storage configurations for SSD, HDD and tape NAS storage.  Accusys was showing their ExaSAN PCIe disk arrays.  Their products include an interesting PCIe SAN switch technology that provides external PCIe SAN connectivity, facilitating up to 60 Gbps data rates.  Object Matrix had their MatrixStore cost-effective and feature-rich nearline storage for file-based workflows.  Apace showcased its portfolio of media management, workflow and clustered storage system products for the media industry.  Facilis showed their shared storage for post with both Fibre Channel and Ethernet connectivity.  SGI was at the show with its Modular open storage NAS offering.  XOR Media is the former hardware part of SeaChange, and they were showing their storage systems geared for advanced content delivery.

EMC Isilon was demonstrating its scale-out storage solutions that have been popular for many media and entertainment library systems.  The latest products offer up to 15 PB with 100 GBps data rates. Avere was showing their FXT Edge Filers, providing high performance and cloud storage for some M&E applications by caching active data on their FXT clusters at the edge, near the user, to eliminate WAN latency issues.  JMR was showing their desktop PCIe SAS/SATA RAID storage products for use with video workstations. Tiger Technology was showcasing clustered storage and SAN appliances.  Rorke (part of Avnet) was showing its Fibre Channel SAN products as well as NAS and archiving products.  HDS had a significant presence, showing their high performance storage systems geared for the M&E market.

Aframe is offering video production in the cloud and collaborative workflows between video editors in multiple remote locations.  TATA Communications from India is offering media communication services enabling collaborative workflows as well. Scality advertised its Sync-N-Share software, which looked like it could also be used for cloud-based collaborative projects. DataDirect Networks also presented its Cloud Storage (Object Storage) products for distributed media workflows.

There were a variety of external recording devices on display for direct camera storage or for rapid transfer from camera flash card formats to external HDDs.  These included products from Nexto DI and Gemini 4:4:4.  Mediaproxy unveiled an updated logserver with 80-100 TB of RAID storage.  Aspera introduced its FASP 3 high-speed content transport technology, enabling cloud object support, file sharing and content distribution with various delivery options, support for Microsoft Windows Azure, and full resolution media sharing between remote Avid editing stations.

Xendata demonstrated its LTO Video Archive products using LTO-5 tape, while For-A was showing their LTO realtime archiving and production solutions.  StorageDNA was offering LTO-5 LTFS file-based workflow products.  Cache-A likewise was showing LTO storage products.  Quantum had a large display showing upgrades to the StorNext DAM platform and its support in their HDD and tape based products.  SGL was showing its content archive and storage management solutions.  Spectra Logic had examples of its large LTO storage libraries at its booth. Front Porch Digital was showing its asset management and cloud archiving solutions.  Quotium Technologies is a European company with software that can do proactive monitoring of large digital archives.

There are new and important initiatives afoot in Europe. The EBU is working on creating standards for the long-term storage and preservation of file-based media assets. NHK was showing their Super Hi-Vision 8K X 4K video from the 2012 Summer Olympics, which will drive a pixel count (and likely storage and bandwidth requirement) that is 16 times that of today's HDTV. Other groups are pushing multiple camera content capture to create free-viewpoint video imaging - perhaps part of the technology needed to make a working holodeck. In my last blog, I discussed very high frame rate demonstrations at the 2012 IBC (1,000 fps or higher and resolution up to 4K), but even many modern 4K professional cameras run as high as 120 fps. The DVB technology group was demonstrating next generation content delivery technology, and MPEG-DASH and HEVC compression should help with adaptive streaming through the internet as well as provide better compression algorithms for high resolution content. IBM was showing its archive solutions at the show, and folks from the LTO consortium indicated that the first LTO-6 format products would be released in 2013.
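NHK's "16 times today's HDTV" pixel count is quick to verify: Super Hi-Vision is 7680 x 4320 versus 1920 x 1080 for HDTV, i.e. four times the pixels in each dimension:

```python
# Super Hi-Vision (8K) versus HDTV: 4x the pixels horizontally and
# 4x vertically gives 16x the pixel count per frame.
shv_pixels = 7680 * 4320   # Super Hi-Vision frame
hdtv_pixels = 1920 * 1080  # today's HDTV frame

print(shv_pixels // hdtv_pixels)  # 16
```

Uncompressed storage and bandwidth scale with the same factor, before any gains from better codecs.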

Whew, that is a lot of companies, and I am sure that I didn't get all of them - too many companies in multiple halls and too little time.  Nevertheless it should be clear that the media and entertainment industry is driving a lot of digital storage and storage management innovation.  As frame rates, resolution and the number of cameras increase, the innovation required in content compression, storage capacity and storage bandwidth will continue to grow - bit overflow indeed!

Tom Coughlin runs Coughlin Associates.


Continue reading "IBC: storage, archiving, asset management" »

September 12, 2012
  IBC 2012: IBC Under the Sun and another successful year
Posted By Robert Keske
The sun and stars shone this year, with stunning days and nights. I always enjoy traveling to the beautiful city of Amsterdam to attend IBC, but this year was something special.

IBC really showed its strength this year, with the addition of IBC Connected World in Hall 14 and the Meeting Suites. I believe that IBC Connected World, an area of IBC which "encapsulates the very latest developments in mobile TV, 3G and 4G services," led this year's theme: "Connected" or "Collaboration"...whatever you want to call it. This year's exhibitors delivered, as technology providers always had an answer when asked how to best connect and talk to the team back at Nice Shoes, my clients or the rest of the industry. The latest collaboration tools and standards were being delivered and communicated across all levels of our industry in a single exhibition hall.
Robert Keske is the CTO at Nice Shoes in NYC. The studio is a full service, artist-driven design, animation, visual effects and color grading facility specializing in high-end commercials, web content, film, TV and music videos.
Continue reading "IBC 2012: IBC Under the Sun and another successful year" »

September 11, 2012
  IBC 2012: Day 5 - Hall 7, The Closing Bell
Posted By Katie Hinsen

Hall 7 on the final day of IBC 2012.

Today was the final day of IBC 2012. I could feel it, the demo artists were adding a little last-day flavor to their presentations, the vendors finally had a chance to rush around and see each others' products, and the visitors were wheeling around their suitcases, grabbing the last of the freebies and making last minute deals before flying home to every corner of the world.

It was much quieter today. Still buzzing, but in more of a muffled drone. Many of the visitors and sales people had already left, and those remaining at the RAI were tired; slower moving, and conversations were muted. There was a general understanding, read on all faces, that both the conference and the city had been enjoyed to the fullest extent this week.

I decided if there was anywhere I wanted to be at the moment the end of the conference was announced, it was Hall 7. Post people are my people, and I knew there would be an air of relief when people finally got to take off their corporate garb and get back to being relaxed, easy-going techs and creatives. As the last half-hour approached, demo screens turned to countdown clocks. People started packing up and announcer mics were unplugged so that the audio systems could be used to play music instead.

As visitors left to grab the first of hundreds of taxis outside, exhibitors cracked open hard-earned drinks and sat around the shells of their booths. 

I strolled out onto the sunny streets of Amsterdam, inspired, exhausted, and excited to do it all again in 2013.

At the announcement of the closing of IBC 2012, post people were smiling, sleeping, or sneaking out the door. Quantel, The Foundry and others passing by.
Continue reading "IBC 2012: Day 5 - Hall 7, The Closing Bell" »

September 10, 2012
  IBC 2012: Day 4 - I Have Seen The Future
Posted By Katie Hinsen
People are getting "IBC Fatigue". I can feel it all around me; the excitement that existed a couple of days ago is diminishing. For the vendors, tomorrow is the light at the end of the tunnel. For the visitors, sleep and a chance to rest the feet is in sight. Today was less crowded; the past two days were when the conference swelled with all the Europeans who only took off work for the weekend. Last year there were over 50,000 people at IBC, and I'm told this year the show is much bigger. There is 900,000 square feet of indoor space, plus an additional large temporary show hall and an outdoor exhibit section. Those numbers make my economy class airplane seat seem even smaller.

To combat the feeling that I maybe no longer care about anything other than being in my own bed at home, I decided to spend the day looking for new and interesting technology and innovations. I dragged along a geeky cohort and we paced the halls looking out for anything we hadn't seen before.

The most interesting thing we noticed was the trend toward autostereoscopy. It's becoming more and more obvious that it's a technology that might not go away, and the industry is trying to push it into the home. This means it has to be of high quality, and low cost. 

Right now, there are two manufacturers of autostereoscopic displays, Alioscopy and Dimenco (an offshoot of Philips). These guys have been around for quite a while, but content has been mostly CG, produced specifically for the display. As stereo images are half-res (HD on an HDTV is effectively only SD per eye), autostereoscopic images can be much lower res still depending on the system used, and there are generally only a few good viewing angles.
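The "half-res" point can be made concrete with a little arithmetic, assuming side-by-side frame-compatible 3D (my assumption for illustration; other packings halve the vertical axis instead):

```python
# Side-by-side frame packing squeezes both eye views into one HD frame,
# so each view retains only half of the horizontal resolution.
full_hd = (1920, 1080)
per_eye = (full_hd[0] // 2, full_hd[1])  # each eye: 960 x 1080

print(per_eye)  # (960, 1080) - closer to SD width than to full HD
```

An autostereoscopic display then divides those pixels again across its viewing angles, which is why per-view resolution drops so quickly.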

Dolby and Sony are now showing 4K monitors that render from stereo to autostereo on-the-fly, so your 3D Blu-ray or 3DHD broadcast content shows at (upscaled in the conversion) HD/2K quality. Dolby's is actually a Dimenco with 28 viewing angles and a custom chip in the back that does the conversion pretty well. Everyone is talking about it at IBC this year. It stutters a little and distorts occasionally, and it hasn't got much depth to it, but the picture quality is fantastic.

Sony has done something really clever. They have come up with their own monitors, turning their VAIO LCDs into autostereoscopic displays. They are doing this across the range, too - adding a conversion chipset to all their VAIO televisions, computer monitors, laptops, and handheld devices. You can turn the lenticular barrier on or off, so if you're not watching 3D you can still see perfect 2D. Even cooler than that, they are showing the holy grail of 3D... autostereoscopy with head tracking. There's a camera on the top of the screen that uses facial recognition to find your eyes and make sure that no matter where you're sitting, you are in the "sweet spot". Now of course, this only works for one viewer, and it is still in need of a lot of development; but it's a great concept.

Even further into the future, and deeper into the realms of geekdom, we have the serious R&D guys. The most impressive is actually the Fraunhofer Institute. These guys won the IBC Award for Best Conference Paper, for their research into autostereoscopy. 

They have a very fast chip for converting stereo to autostereo, so the quality is better. Even cooler, to my mind, was their version of the Lytro camera, which does light-field video. So in the future, we might not be doing autostereoscopic conversion in post houses, but we could be replacing the on-set focus pullers.

The IBC Future Zone has a stand showing holographic autostereoscopy, and a stand showing autostereoscopy so fast that it can convert live video. But I really do have to give massive credit to the research team at NHK in Japan. Instead of focusing on 3D like the rest of us, they are letting everyone else figure that out, and creating Super Hi-Vision. I watched clips of the London Olympics that they shot, on their 8K monitors with 22.2 channel audio, and I could see the individual faces of people in the stadium crowd. It was really, really amazing. The cameras have a 120 Hz image sensor, and 72 HD-SDI cables coming out the back.

Looking at the back of this beast, makes me glad I'm in post production!

Tomorrow is the last day of IBC 2012. The show officially closes at 4pm, and I'm sure every vendor who has been standing at a booth being nice to people for five days, will have a well-deserved drink to make up for having to watch the rest of us wandering around with freebies, brochures, beers, and lots to talk about, both from the show and after-hours in Amsterdam.
Continue reading "IBC 2012: Day 4 - I Have Seen The Future" »

September 10, 2012
  IBC 2012: Day 3 - Less Trekking, More Teching
Posted By Katie Hinsen
Today was the day I planned to give my feet a rest and give my brain a decent workout with conference sessions. The biggest gossip of IBC is that Fuji have announced they will stop producing film at the end of 2012. That's huge news for everyone here I think. It is great for some, awful for others. 

However, my conference sessions today focused on the future, beyond film. The need for standards, and improvements in what we produce and how we produce it.

James Cameron and Vincent Pace gave a talk about what they call "5-D Production". Basically, it's a nice way of selling to clients that when we make something in 3-D, they get a 2-D deliverable as well. The production costs are the same, although the post costs are a little higher, but Cameron and Pace are on a mission to educate content buyers and distributors that having both deliverables is okay. Make something great in 3-D, and it will be great in 2-D as well. Around 70% of all televisions sold at the moment are 3-D capable. Cameron suggests that in about a year, autostereoscopic televisions will be ready for the consumer market, and we as content creators need to be ready. He's a man with high hopes for this, but the other speakers I heard today had a more important technical issue to press on us.

Audiences got really excited when they first saw decent quality images jumping out of the screen at them in recent years. Now, however, they're asking if the dorky glasses and occasional headaches are worth it, when the quality of the image is generally less than they have grown used to. I saw six conference sessions today, and I left with a definite sense that having a good knowledge of image standards and measurements, and a good knowledge of stereoscopy, is no longer good enough. 

I have got to start getting very serious about higher resolutions, higher frame rates, higher bit depth, and higher expectations from my clients, and their audiences. 

More importantly, the ACES workflow, which is going to be a hard pill to swallow for most post houses and their colorists, is about to be the new standard for all deliverables.

With the sad end of Fuji, and the very serious charge into ACES, times are about to change for us in post production. Quickly, and radically. So we had all better embrace the new, fire up our bandwidth, and have some fun with it. 
Continue reading "IBC 2012: Day 3 - Less Trekking, More Teching" »

September 09, 2012
  IBC 2012: Day Two - Sore Head, Sore Feet, Big Smile
Posted By Katie Hinsen
Getting out of bed this morning hungover and jetlagged wasn't easy, but I was on a mission. I would see all of Hall 7.

The sheer scale of IBC is impressive. There are 17 halls and an outdoor exhibit. Hall 7 is the main Post Production hall, and it was going to take all day to see every exhibit that sparked my interest. But by golly, I'm a woman on a mission and no amount of fun and free drinks last night was going to slow me down. That's what coffee is for, and the Dutch do a great espresso.

Full of caffeine and Advil, I took my camera and notebook and entered the Post Production hall of IBC 2012. 

Many long hours later, I hit the bar with a few of my nerdy friends and rested my feet. I had collected two bags full of pamphlets, purchased a ShuttleXpress controller, and got my hands on the coolest freebie of Hall 7, Hitachi's sumo guy.

He's going on my desk when I get home.

Talking amongst ourselves, we decided pretty quickly that this show's post production buzz was about two things: Thunderbolt, and accessibility. The latter refers to companies making their high-end products available and accessible to the small company or at-home user. 

The most exciting example of that, is Baselight Editions. We already have a few software-only versions of high-end grading and effects toolsets, and they are fun. But this one is a plug-in. You can now have Baselight inside your Avid, with most of the features of the full Baselight including the Truelight LUT system. Right now it's only out for Mac Avid, and you can only grade with the Avid panels. But they've announced it for FCP 7 and Nuke, with Tangent Wave panel. That should be ready within about 6 months. A single license is for one computer, so if you have Avid, FCP and Nuke all on one machine your plugin will work across all three. 

Baselight Editions in Avid.

On the subject of Avid, they are showing off a new version of everything, but I can't write about it because I haven't explored it. I went to the shiny Avid stand, and asked a shiny lady if I could chat to someone, and was told that they aren't talking to anyone one-on-one. I waved my Press badge and was told someone from PR would call me. I haven't heard from them, so I suppose their shiny new line is a bit too polished to show to an editor like me. I'll scrub up and try again another day.

Boris FX has launched BCC8, and it's improved nicely. They have now put all their plug-ins into 16 well-organized categories. They say they've tried to make their effects look more "film," but I think for the most part, they still look like BCC plug-ins. They have a proper 3D lens flare, and some nice new lights with automatic occlusion. They have added some really cool restoration tools, and some new scopes that also generate a log file of color errors in .txt format.

Blackmagic has the biggest stand at the show this year, but no racecar. They don't need one; their products are new, exciting, and clever. Resolve 9 is out now, the beta test is over. It has about 100 new features and a great new GUI. It's definitely improved and much less clunky. They have also embraced Thunderbolt across their line. Both Blackmagic and AJA have added Thunderbolt to their converters/extenders, but Blackmagic is the only one certified to run Thunderbolt for Windows.

Blackmagic Mini Monitor and AJA T-Tap, Thunderbolt to SDI/HDMI.

The EMC booth had a magician, the most interesting way to demo a product that shifts data. If you want to experience the Autodesk booth from afar, they are streaming and uploading their demos and presentations online. They don't show card tricks, though. The SGO, Foundry and Autodesk booths also have magicians of sorts; they have regular presentations by users and artists from post houses such as Park Road Post, PrimeFocus and Sinneszellen.

One thing that I noticed was that almost nobody is developing stuff for FCP X. When asked why, they all say it's because nobody is using it. Companies are still developing new products for FCP 7 as if the whole "X" thing never happened in the professional world.

I drooled over all the little toys and peripherals. I have a very extensive list for Santa Claus this year. I left Hall 7 with a sore head, sore feet, and a big smile. 
Continue reading "IBC 2012: Day Two - Sore Head, Sore Feet, Big Smile" »

September 08, 2012
  IBC 2012: Slowing Time - Massing Data
Posted By Tom Coughlin
Physicists in the last ten years have accomplished an incredible feat: they have slowed and even stopped light using ultra-cold plasmas.  The light is absorbed in the plasma and does not radiate again until a laser beam, perpendicular to the original path of the first light beam, irradiates the ultra-cold plasma and excites it to reemit the light along its original path. Since the motion of light in a sense defines our local perception of time, we might say that these experiments have slowed and even stopped time within the ultra-cold plasma used to absorb the energy of the light beam.

In a similar sense, a video played at very slow speeds can enhance our perception of time and make us aware of features of a scene that might otherwise pass unnoticed.  Many folks have seen this effect, but it is only when the video frame rate becomes very high that these temporal details become clear.  When you combine this with high-resolution video images, the effect is striking.  At the 2012 IBC conference I had an opportunity to experience close to 1,000 frames per second, true 4K video at the For-A exhibit.

For-A announced their Super Slo Mo Camera earlier in 2012 (the FT-ONE).  While other professional video cameras are available that can shoot greater than 1,000 fps HD video (2K, such as the Phantom Miro M320S), this is the first 4K slow-motion video I have seen up close.  In a small display room For-A showed footage of orcas jumping out of the water, competitive skiers, as well as cannons and explosives.  With the high resolution images, the slow motion revealed amazing details.  I saw many effects in great detail that I had previously only read about in physics books. Amazing...  Just seeing such video answers the question about what you can do with higher video frame rates.

While high-resolution video at even higher rates has been demonstrated in university labs (just check out YouTube videos to see some of this work), the For-A camera is a commercially available product (selling for about $135,000 US). This camera captures uncompressed raw data in internal RAM with a recording capacity of 8.5 seconds. Two hot-swappable SSD cartridges (each capable of storing 75 seconds of 4K images) are used for content capture.  Clearly the data rate for capturing this content is large.  Higher capacity, very high data rate storage devices will be needed to make it possible to capture more than about 150 seconds of content.
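A back-of-envelope sketch shows why that data rate is so large. All the capture parameters below (a 4096 x 2160 frame, 10-bit raw samples, 900 fps) are illustrative assumptions on my part, not published For-A specs:

```python
# Rough raw-capture data rate for ~900 fps 4K. The frame size, bit
# depth and frame rate here are assumptions for illustration only.
width, height, bits_per_sample, fps = 4096, 2160, 10, 900

bytes_per_frame = width * height * bits_per_sample / 8
rate_gb_s = bytes_per_frame * fps / 1e9      # ~10 GB/s sustained
ram_gb = rate_gb_s * 8.5                     # the 8.5 s internal buffer
ssd_gb = rate_gb_s * 75                      # one 75 s SSD cartridge

print(f"{rate_gb_s:.1f} GB/s, {ram_gb:.0f} GB RAM, {ssd_gb:.0f} GB per cartridge")
```

Real raw formats pack sensor data differently, so treat these numbers as order-of-magnitude only; even so, they make clear why the camera buffers to RAM first and offloads to SSD cartridges.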

Most modern professional video cameras can capture content at up to 120 fps. At higher resolutions (and with stereoscopic content), these frame rates will be needed to prevent motion artifacts that distract from a video experience.  Devices such as the For-A Slo Mo Camera point to where these frame rate trends could go in the future, to capture details and create special effects that might not be possible otherwise. The storage capacity and bandwidth required are daunting today, but probably not so much in another ten years. Higher frame rates, higher resolutions (NHK was showing its 8K X 4K video shot at the 2012 Summer Olympics) and more cameras simultaneously capturing video content will create a new generation of visual experiences within a decade or so, and they will require significant increases in bandwidth and storage capacity to make this happen.

Thomas M. Coughlin, the founder of Coughlin Associates, has over 30 years of magnetic recording engineering and engineering management experience developing flexible tape and floppy disc storage as well as rigid disks at such companies as Polaroid, Seagate Technology, Maxtor, Micropolis, Nashua Computer Products, Ampex and SyQuest.

Continue reading "IBC 2012: Slowing Time - Massing Data" »

September 07, 2012
  IBC 2012 - Day 1: Racecars in a sea of traffic
Posted By Katie Hinsen
This is my first IBC. I'm an editor, colorist, 3D artist and all-round nerd. Coming to IBC has been a fantasy of mine for years, and here I am. 

My first day at the show has been pretty overwhelming. It's huge, and bustling. The halls are hot and stuffy, so it's no wonder so many people spend time in the large outdoor bar area. Mind you, when in Holland, drinking beer is the thing to do. Among the offerings for lunch at IBC was traditional Dutch fare: a "combo meal" of 8 fried meatballs and two bottles of beer.

The RAI is a maze of exhibition halls, divided into zones. I spent my first day wandering around, exploring, trying to get my bearings. Companies with a lot of money have large, flashy booths, usually offering refreshments. Taking Quantel's lead from NAB a couple of years back, it is now cool to have a racecar in your booth. I understand GoPro suggesting that its range of cameras performs best when mounted on a racecar... but I'm not sure how one is expected to color correct, or watch a large HD television, while driving one. 

GoPro suggests mounting an HD camera on your F1 vehicle, and a Stereoscopic camera on your rally car.

I wandered around the Post Production and New Media section for quite a while, figuring out my first priorities for demos while I'm here. Avid is showing off MC 6.5, and Quantel has re-launched the software-only version of Pablo that it announced at NAB, now calling it "Pablo Rio" and pairing it with a new mini version of the Neo panel. And, unlike at NAB, where they seemed to have rushed the announcement to keep up with the 64-bit buzz, they are actually selling and shipping it now. For those who look closely enough, Quantel is also sort-of showing a new product in development for 3D: a separate piece of software designed to fix stereoscopy errors, which also exports depth maps. I'm keen to see where that goes. It looks like they are trying to catch up with SGO's Mistika, which already does everything Quantel is developing; but for companies that don't want to move to a new DI solution, Quantel might just be getting back into the competition.

The Future Zone is a section of IBC that I was very excited to explore. It's much smaller than I expected, but I suppose most of the companies that focus on creative and innovative R&D aren't making enough money for a specially designed, expansive booth with a cafe and a racecar. This section is like a toy store for me. NHK, the Japanese broadcaster with its own in-house Sci-Tech research lab, was screening sections of the London Olympics in Super Hi-Vision (8K television with 22.2-channel sound). Less serious but more fun were the throwable panoramic ball-camera and the impressive holographic method of autostereoscopy that provides 70 seamless viewing angles.

Of course I couldn't go past the Focal Press bookstand, so I bought myself an appropriately nerdy souvenir. I had to. 

Some light reading for the flight home.

Katie Hinsen is an editor, colorist and graphics artist working at Goldcrest Post in NYC. She can be reached at

Continue reading "IBC 2012 - Day 1: Racecars in a sea of traffic" »

September 05, 2012
  IBC 2012: Leading the Electronic Media and Entertainment Industry
Posted By Robert Keske
Attending IBC is always exciting from a cultural perspective. From the attendees who have travelled from all over the world, to Amsterdam's own unique culture and atmosphere, it's an opportunity to meet people and see things that I wouldn't encounter every day, even working in New York City. 

On one level, IBC exhibitors display our industry's solutions, software, and hardware to showcase their wares. But they also give attendees a glimpse into the future, offering a look at where both professional and consumer technologies are headed. I'm looking forward to yet another glimpse into the future at this year's IBC, while also seeing how what I got a peek at last year has panned out.

IBC's catchphrase, "Leading the Electronic Media and Entertainment Industry," is something that I feel rings true, and I look forward to seeing what the technological leaders in our field will be showcasing this year.

Robert Keske is the CTO at Nice Shoes in NYC. The studio is a full service, artist-driven design, animation, visual effects and color grading facility specializing in high-end commercials, web content, film, TV and music videos.
Continue reading "IBC 2012: Leading the Electronic Media and Entertainment Industry" »

August 16, 2012
  Click-To-License Vs. The Sales Rep
Posted By Donna Kaufman
Whether you are new to the world of stock footage, or a seasoned and successful stock footage researcher, now is a great time to take a closer look at the evolution of point-of-sale purchasing in the stock footage industry. The power of the click and pay online experience has substantially changed licensing and how those of us in the media production industry tend to communicate and relate to one another. However, just because the tools exist to speedily click, license, and download stock footage, it remains essential to develop relationships with your licensing representatives to ensure you're getting the best service and pricing available.

As a seemingly unlimited pool of new producers enter the film and video arena, their experience of looking for footage has become more dependent on Google and online searches than ever before. Click-to-license and download stock footage sales have become standard for those new to the industry as well as long term professionals. Distribution of stock footage today requires terabytes of storage, instant online preview and licensing, and snap-fast digital delivery. While this technology has improved the ease of locating and purchasing stock materials for all kinds of productions, this streamlined experience has significantly devolved the dialogue between licensee and licensor. In an industry that for decades has depended on the strength of personal relationships and tailored pricing to meet the needs of a wide range of productions, the advance of easy online stock footage acquisition has changed the way in which many producers approach stock footage licensing and pricing. 

Many stock footage agencies have invested significant capital to develop instant online licensing for both rights-managed and royalty-free footage. So why would I recommend contacting a sales representative directly before placing your next order? The answer is simple: a good sales representative can assist with subject ideas, provide comprehensive research, help with format requirements and delivery options, and extend discount opportunities, saving you time and money in acquiring the best stock footage for your production.

Ways to save include bulk discounts, selecting the appropriate license terms for your production, making the best choice between royalty-free, rights-managed, and premium content, and inquiring about non-profit and preferred-vendor discounts. A sales representative can often save you money by discussing your project prior to licensing online. Such full-service access to skilled professionals is an opportunity to expand your production team free of charge. So, the next time you are about to place an online order, ask yourself: How could I be doing my job better?

Donna Kaufman is Chief Strategy Officer of Footage Search, Inc. Footage Search represents OceanFootage, NatureFootage, 3DFootage, AdventureFootage, and other premium stock footage collections.
Continue reading "Click-To-License Vs. The Sales Rep" »

August 14, 2012
  4TB Drives
Posted By Steve Modica
With those new 4TB drives that are coming, there's more than just one more terabyte to be excited about.

We've all watched over the years as storage and memory capacities have continued to grow in conjunction with Moore's law. Intel and other manufacturers have figured out how to overcome one physical barrier after another to double the number of transistors on a chip (or double storage density on a platter) and continue to overwhelm the industry with more space, cores and storage than we know what to do with! (I myself date back to the age of full-length, full-height, 25-pin SCSI devices that didn't quite get to 1GB. I also got to repair more than a few of those washing-machine-sized "tubs" that you would lower spindles into with a handle, and a few reel-to-reel tape devices as well.)

So what's all the excitement about these new 4TB drives?  It's not really very much of an increase now is it? 

What's exciting to Small Tree is that the vendors are now implementing "Advanced Format." This is a new drive standard that changes how data is stored on the platter. On previous devices - going back even to my storage "tub" days - data was stored on the platter in 512-byte increments. You could not write data to the device in chunks smaller than 512 bytes; that was the size of a sector.  Each sector required a Disk Address Mark (DAM), some ECC or error-correction bits, and a gap to separate it from the next sector.  Much like TCP and the idea of jumbo frames, the more things you have to look up and decode, the longer and more CPU-intensive it is.  

The new Advanced Format implemented in Hitachi's 4TB drives moves the minimum sector size to 4K. This allows for fewer DAMs to look up, less overall gap space and fewer ECC decodes to run. With this change, the smallest size a file can theoretically occupy is 4K. Given the larger size of files today and the relative size of the drives by comparison, this will hardly matter. 

So, will this new format be supported by RAID card vendors? Most definitely yes.  However, they have not all jumped on the bandwagon just yet. For those that have not, these new drives implement a spec called "512e." This allows the drives to continue to accept write and read requests in smaller increments to maintain compatibility. These requests are handled by doing a "read-modify-write" cycle on the drive. To write 512 bytes, the 4K sector containing that data is read in, the 512 bytes are modified within the sector, and then the entire sector is written back out - RAIDs have used this technique for a long time to write less than a full stripe.  
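That read-modify-write emulation is straightforward to sketch. The toy Python model below is my own illustration of the principle, not real drive firmware: the drive exposes 512-byte logical sectors but stores data in 4K physical sectors, so a small write costs a full 4K read and a full 4K write.

```python
# Toy model of 512e emulation: 512-byte logical sectors backed by
# 4K physical sectors, serviced via read-modify-write.

PHYSICAL = 4096   # physical sector size (Advanced Format)
LOGICAL = 512     # logical sector size exposed to the host

# toy "platter": four physical sectors of zeroes
disk = bytearray(PHYSICAL * 4)

def write_logical(lba: int, data: bytes) -> None:
    """Write one 512-byte logical sector via read-modify-write."""
    assert len(data) == LOGICAL
    phys = (lba * LOGICAL) // PHYSICAL           # containing 4K sector
    offset = (lba * LOGICAL) % PHYSICAL          # position within it
    sector = bytearray(disk[phys * PHYSICAL:(phys + 1) * PHYSICAL])   # read
    sector[offset:offset + LOGICAL] = data                            # modify
    disk[phys * PHYSICAL:(phys + 1) * PHYSICAL] = sector              # write back

write_logical(3, b"\xab" * LOGICAL)
print(disk[3 * LOGICAL] == 0xAB)   # True
```

Aligning writes to 4K boundaries avoids this penalty entirely, which is why OS and RAID-card tuning matters so much for these drives.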

Overall, we've seen this technology offer a 33% increase in the number of streams Small Tree can support from a single storage array. We expect to see continued improvements as vendors begin to adopt the technology and hone their OS tuning to take advantage.

Steve Modica is CTO of Small Tree
Continue reading "4TB Drives" »

August 13, 2012
  SIGGRAPH: Keynote Speaker Jane McGonigal
Posted By Jeff Kember
Jane McGonigal, author of "Reality is Broken", gave an insightful keynote to a packed house. She extolled the virtues of gaming, cited interesting facts from a number of case studies (including her own research) and claimed she would increase our lifespan by seven and a half minutes (more on that below). 

After suffering a severe concussion, she created the game "SuperBetter" to help her through the depression of a long recovery. In another study, young cancer patients who played the game Re-Mission for a minimum of two hours showed significantly improved outcomes. We learned that thirty minutes of online game play a day could outperform pharmaceuticals in treating clinical anxiety and depression.

She led the crowd through a set of activities designed to improve physical, mental, emotional and social resilience. An example of social resilience was shaking hands with someone for six seconds to increase levels of oxytocin, the trust hormone. She suggested the effects of the increased oxytocin would last through our lunch break and that we should take advantage of networking opportunities. The combination of these activities, performed daily, contributes to increasing one's lifespan (up to ten years in several studies, seven and a half minutes in our case today). 

Another positive aspect of playing games is that it is one of the few areas in our lives where a high rate of failure is OK.
Continue reading "SIGGRAPH: Keynote Speaker Jane McGonigal" »

August 10, 2012
  SIGGRAPH: Inspiring Workflow To Handle Large Data
Posted By Scott Singer
A common theme at this year's SIGGRAPH was how different studios are
handling the unique demands of shows with ever increasing scope and
complexity.  Two examples of this were Method Studios' work on Wrath
of the Titans, and CineSite's work on John Carter. These two
productions had enormous environments and sets that quickly defied
their standard workflow techniques of wrangling data. In both cases
their teams adopted scalable data driven descriptions of the
environments as separately addressable, hierarchical elements managed
outside of a traditional Maya or Houdini workflow. While these
techniques have been in heavy use in CG Feature animation for quite a
while, especially in pioneering work by PDI for wrangling the jungle
environments in Madagascar, the increasing complexity of live action
environments is making these issues imperative to VFX workflow as
well.  And both CineSite and Method rose to the challenge with some
inspired answers to some hugely vexing problems.

In Wrath of the Titans, the Kronos sequence involved massive
environmental destruction with dizzying camera flybys. The Kronos
creature literally breaks from the mountainous cliffs that it's made
from.  These shots were as thrilling to audiences as I'm sure they
were terrifying to the VFX artists. But Method Studios rose to the
challenge by creating an ingenious system of data-driven,
resolution-independent scene elements that could be accessed
differently to achieve maximum rendering and animation efficiency,
but within a unified texturing and rendering paradigm that ensured a
consistent look at variable levels of detail.

They broke down the overall model of the huge mountainous environment
into a collection of usable rocky crag shapes called greebles.  These
were located both on and within the volume of the mountain and could
be called up when needed, in the form most appropriate to that
particular use case. For instance, greebles close to camera would be
called up at their highest resolution and those farthest from camera
at their lowest.  Taking this hierarchical methodology one step
further, even the rock face texturing was handled as volumes which
could blend regions of differing resolution together. Because the
data locations for each greeble could be distributed within the volume
of the mountain, the actual geometric assets did not have to exist in
the scene until the overlying rock faces crumbled away to reveal them.
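The talk didn't give implementation details, but the distance-based selection idea can be sketched roughly as follows. Every name and threshold here is my own illustration, not Method's actual system:

```python
# Sketch of distance-based LOD selection for "greeble" rock assets.
# Each greeble stores several resolutions; a shot calls up only the
# one appropriate to its distance from camera.
import math

LODS = ["high", "medium", "low"]    # nearest to farthest
THRESHOLDS = [50.0, 200.0]          # illustrative scene-unit cutoffs

def pick_lod(camera_pos, greeble_pos):
    """Return the resolution tier for a greeble given camera distance."""
    dist = math.dist(camera_pos, greeble_pos)
    for cutoff, lod in zip(THRESHOLDS, LODS):
        if dist < cutoff:
            return lod
    return LODS[-1]   # beyond all cutoffs: lowest resolution

print(pick_lod((0, 0, 0), (10, 0, 0)))    # high
print(pick_lod((0, 0, 0), (500, 0, 0)))   # low
```

Because the choice is made per greeble at load time, heavy geometry never enters the scene until a shot actually needs it up close.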

Likewise on John Carter of Mars huge data savings were had based on
instanced reuse, but in this case, the instancing itself was leveraged
to provide the actual animation. The large machine environment of
Zodanga was a walking city on centipede-like legs.  To illustrate the
ingenuity with which CineSite addressed problems of scale we can look
at the legs of the city.  Each leg is essentially a piston and the
city walks by articulating these in sequence like a centipede. One
library animation of a single piston cycle was stored as a cache and
then new instances were created from this cached animation database
offset to their locations under the city as well as offset in time.
This meant that new firing sequences could be choreographed without
reanimating hundreds of individual animations, likewise changes to the
animation of the underlying cached cycle would be automatically
inherited by the instances. It also meant that the actual geometry of
the piston animation only had to be stored once for the single
canonical piston animation. CineSite wrapped all of this
functionality in a very nice, user accessible interface both in
standalone and Maya hosted forms; this allowed artists to access sub
components of the structures for specific effects on a granular level
during shot production.
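The cached-cycle idea can be sketched in a few lines. The names and the toy animation curve below are my own illustration of the technique, not CineSite's actual pipeline:

```python
# Sketch of cached-cycle instancing: one canonical piston animation is
# stored once, and each leg instance evaluates it with a spatial
# offset and a time offset to choreograph the firing sequence.
import math

CYCLE_LENGTH = 24  # frames in the cached library animation

def canonical_piston(frame: int) -> float:
    """The single cached cycle: piston height per frame (toy curve)."""
    phase = (frame % CYCLE_LENGTH) / CYCLE_LENGTH
    return math.sin(2 * math.pi * phase)

class PistonInstance:
    def __init__(self, position, time_offset):
        self.position = position          # offset under the city
        self.time_offset = time_offset    # stagger in the firing order

    def evaluate(self, frame):
        # geometry is never duplicated; only the cache is re-evaluated
        return self.position, canonical_piston(frame + self.time_offset)

# a hundred legs, each offset in space and time like a centipede
legs = [PistonInstance((i * 10.0, 0.0), time_offset=i * 3) for i in range(100)]
heights_at_frame_10 = [leg.evaluate(10)[1] for leg in legs]
```

Editing `canonical_piston` changes every leg at once, which is exactly the inheritance behavior the presentation described.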

Because the CineSite team stored all of this data in a queryable
object database, it could be programmatically filtered to
check for visibility, set render quality levels and perform other
manipulations that helped to achieve maximum efficiency in terms of
rendering, disk storage and artist interaction speed. Not content to
rest on their laurels, CineSite is already looking into
making the system more robust by evaluating new technologies like
Alembic, and Renderman's new instancing capabilities to make their
approach even better.
Continue reading "SIGGRAPH: Inspiring Workflow To Handle Large Data" »

August 10, 2012
  SIGGRAPH: The 'Real Steel' Presentation
Posted By Scott Singer
The Real Steel production presentation at SIGGRAPH 2012 illustrated
the benefits of including the VFX units from the very beginning of
production.  A panel of industry veterans including Erik Nash and Ron
Ames, moderated by Mike Fink, told the story of how Real Steel came
together as a production and what went into making it such a smooth
show.

From the beginning the decision was made to include the VFX
representatives as collaborators in the filmmaking process.  The
close integration of CG robots with their practical counterparts, as
well as the elaborate choreography of the CG fight scenes within
contained practical locations, required technical considerations to be
central to production.

Rapid prototyping techniques were used to design and construct the
practical robot puppet components, which provided 3D assets to Digital
Domain.  This meant that DD had early visual targets to hit as well as
exact digital representations of the practical models. These early and
ongoing exchanges provided very clear visual criteria to drive
approvals in the look development process. By coupling these two often
disparate aspects of the visual development approvals process they
avoided many costly last minute technical changes.

The elaborate fight choreography meant that motion capture techniques
would have to be integrated to drive key narrative elements in the
film.  Using the advanced virtual camera technology Simulcam,
pioneered at DD, to attain the necessary level of CGI/live action
cinematography meant that another aspect of the VFX crew was brought
in early on. The obvious benefits of having instant feedback of CG
element placement within the camera operator's and director's monitor
feeds, not only sped production along at the shoot, but also cut down
on extraneous takes usually made as "insurance" during post
production. It also seemed to play an important social role in
reinforcing the presence of the VFX crew as an integral component of
the daily shooting process.

A story from the shoot that illustrates this synergy was how VFX
stepped in to help iron out a major production wrinkle during the
Detroit location shooting of the fight sequences. Production was
unable to secure the two fight venues needed to shoot in on the time
frame in the schedule.  The only location that was available was in
very bad physical shape.Since a majority of the location would have
been covered by CG crowds and set extensions anyway, DD suggested
creating entirely digital arenas, providing that production could
design the two environments within the constraint of keeping the arena
floors to a matching footprint. This saved the production time and
money and opened greater possibilities for the design of the fight

The panel's overwhelming opinion was that the inclusion of VFX at the
very earliest stages of pre-production on FX-heavy shows not only
smooths out technical hurdles but actually allows for greater creative
opportunity while keeping costs down.  There is a definite feeling in
the VFX community that it is the redheaded stepchild of production,
often called onto the job after everyone else is done, and after the
point where its knowledge and unique expertise could help make the
production process much smoother. Once again we have evidence that a
well-conceived plan carried out by a collaborative and inclusive team
of dedicated professionals can result in a project that is
successfully completed with a minimal amount of difficulty. And just
for fun, they also brought a full-size version of the robot Noisy Boy
with them, which was a huge hit as a photo op with the audience.
Continue reading "SIGGRAPH: The 'Real Steel' Presentation" »

August 10, 2012
  SIGGRAPH: Conference Recap
Posted By Sung Kim
I've been fortunate enough to spend the past few days at SIGGRAPH at the L.A. Convention Center. Some highlights came during the production sessions on the making of Hugo, The Avengers, and Brave. It was interesting to get a glimpse of the thought process behind these mega-blockbuster hits. The Hugo panel talked about their very efficient production pipeline and their 3D stereoscopic workflow. The Avengers talk was presented by the ILM and Weta teams, and they dove into set replacement, digital stunt doubles, as well as the methods they used to recreate the New York landscape with survey data.

As for the Brave talk, Pixar provided a glimpse into their process, from concept art and layout to animation, set dressing, lighting and rendering. They talked in more detail about the fluid simulation pipeline, built with Houdini and Pixar's proprietary software, and described the techniques they used to create the movie's gorgeous river scenes.

Another highlight for me came from learning about new software updates and releases in our industry. iPi Soft makes motion-capture software that uses the Microsoft Kinect. The Kinect was originally designed for playing games on the Xbox 360, but iPi Soft extracts its z-depth information for performance capture. Interestingly enough, they'll be releasing a new version that captures multiple performers by the end of the year.

Another new release comes from EyeTap, which makes software and hardware solutions for real-time high-dynamic-range video. It captures high-exposure, low-exposure, and mid-tone images in real time and tone-maps them into a single video image. I could see a lot of potential applications for this software.

Elsewhere, Camera Culture Group and Holografika were demoing their glasses-free 3D displays.  While there have certainly been some major advances, I did note one big disadvantage to this method - namely, that content must be shot with an array of cameras, a technique they refer to as "multiview." Regular stereoscopic content won't work with this new technology, although there is a company called Fraunhofer that can convert your existing stereoscopic 3D video to multiview by generating virtual camera images.

Finally, while on the exhibit floor, I found out about some exciting updates that will surely make a splash in our industry in the coming months. RealFlow 2013 will be coming out at the end of the year. They rewrote their Hybrido solver as a particle/volume solver; the advantage is that you get a more detailed simulation with less memory and a lower particle count. Krakatoa for Maya is also coming out at the end of the year. And Chaos Group, which brought out the V-Ray renderer, has come out with its own fluid simulation software, called Phoenix, which looks very promising.

Click 3X, its interactive division ClickFire Media, and the recently launched C3X Live create engaging film, TV, web, and branded content. Click operates a full service, 11,000 square foot state of the art studio in Manhattan outfitted with 60 full-time staffers.
Continue reading "SIGGRAPH: Conference Recap" »

August 10, 2012
  SIGGRAPH 2012 : Dreamworks Animation Open Sources Volume Data Format
Posted By Scott Singer
I sat down with Ken Museth of DreamWorks Animation to discuss their
latest in a series of open source software development efforts:
OpenVDB.  OpenVDB is a set of core programming libraries that seeks to
simplify the writing of programs that deal with volumetric CG
elements.  These include clouds and smoke, but also fire, water and
other spatial phenomena that are often difficult, as well as
expensive, to implement.

The main advantage of the OpenVDB approach to storing the data is
that it is not only compact, with a small memory footprint on disk,
but also fast to access at run time from within the software
applications that use it.

It is a multi-resolution sparse grid description which means that only
the data necessary to describe an event needs to be stored.  For
instance in a cloud, only the data representing the outside surface of
the cloud needs to be stored at a high resolution.  The inside, where
most of the cloud volume is uniform, only needs to be described once.
This offers huge potential savings over more traditional octree data
storage methods.
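As a toy illustration of the sparse-storage idea: only voxels carrying detail are stored explicitly, and everything else falls back to a background value. (OpenVDB's real structure is a B+tree-like hierarchy of nodes and tiles, not a Python dict; this is just the intuition.)

```python
# Toy sparse volume: store only "interesting" voxels explicitly;
# unstored voxels fall back to a background value, so a mostly
# uniform cloud interior costs almost nothing.

class SparseVolume:
    def __init__(self, background=0.0):
        self.background = background   # value for all unstored voxels
        self.voxels = {}               # (i, j, k) -> density, detail only

    def set(self, ijk, value):
        self.voxels[ijk] = value

    def get(self, ijk):
        return self.voxels.get(ijk, self.background)

cloud = SparseVolume(background=1.0)   # uniform interior density
cloud.set((0, 0, 0), 0.25)             # one detailed surface voxel
print(cloud.get((0, 0, 0)), cloud.get((50, 50, 50)))   # 0.25 1.0
```

A dense 1024^3 grid would need a billion values; here, storage scales with the surface detail alone, which is the saving the sparse-grid design is after.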

Another unique aspect of DWA's release of OpenVDB is their close
working relationship with Side Effects Software, which means that
OpenVDB will be supported as a built-in, first-class type in Houdini
and will soon be ready to use right out of the box.  This is
potentially a huge win for small and medium-sized VFX facilities that
don't have the R&D resources to build the middleware necessary to
leverage many open source contributions.

DreamWorks is very committed to being an ongoing leader in the open
source software community.  This latest offering sees them raising the
bar even higher.

Continue reading "SIGGRAPH 2012 : Dreamworks Animation Open Sources Volume Data Format" »

August 09, 2012
  SIGGRAPH: Blue Sky's Open Ocean System For 'Ice Age: Continental'
Posted By Scott Singer
In Blue Sky Studios' contribution to the Wild Rides presentation at
SIGGRAPH 2012, they demonstrated several novel approaches to creating
the open ocean environment in Ice Age: Continental Drift for a
sequence in which the characters sail a small iceberg across the sea.
Some challenges were technological and others aesthetic, but all
served the goal of turning what can be the arduous and expensive
process of water animation into a manageable package.

At the base of their solution is a library of wave animation based on
mathematical descriptions of open-ocean wave forms.  These were
collected into look libraries that could be classified, for instance
as calm or stormy, and easily called into the pre-vis artists' Maya
scenes.

Once in Maya, these libraries could be easily "scouted" by the PreVis
department for ideal locations, and appropriate animation layout paths
could be derived and pushed out to the Animation department.

Even before handing the scene data off to animation, Blue Sky made
sure that a lot of visual detail could be added that helped to set the
tone of the shots early on. They included environmental elements, sky,
sea foam, as well as hero wave splashes. These details allowed for
creative decisions and approvals to happen early on in the process
when changes are easier to accommodate.

Hero shot specific elements, like the giant tsunami wave, could be
added into the ocean look library and handled as though they were any
other shot, thereby limiting the costs that usually accompany large
hero moments.

On a technical note, because the FX Department used Houdini they were
able to leverage the point-based water look libraries, and then blend
new library elements from existing ones, for instance to smoothly
transition from open ocean to calmer shoreline water. In addition the
encoding of all of that data allowed the FX Department to derive
visual details like foam and ripples on the water.

Continue reading "SIGGRAPH: Blue Sky's Open Ocean System For 'Ice Age: Continental'" »

August 09, 2012
  SIGGRAPH: High Frame Rate Cinema
Posted By Scott Singer
At the SIGGRAPH 2012 presentation on High Frame Rate Cinema, an
assembly of industry leaders including Douglas Trumbull and Dennis
Muren, along with a very informative pre-recorded presentation by
James Cameron, gave very convincing arguments for why using a higher
frame rate is a better direction for film production than adopting
higher image resolutions.

The main argument revolves around visual artifacts created by the
industry-standard 24fps frame rate.  This standard itself is a
holdover from decisions made as cinema moved into the sound era,
because it was the slowest frame rate that could still support
sound sync. Perceptual modeling of human vision by IBM suggests that
a frame rate of 72fps is necessary for fully seamless playback.  The
24fps limitation introduces flicker and strobing on edges during fast
motions and camera movements.  These artifacts are particularly
jarring in stereo. Going to a higher spatial resolution or larger
frame size does nothing to alleviate the strobing; however, the
strobing is noticeably reduced as the frame rate and shutter time
increase.  A frame rate of 48fps with a shutter of 270-360 degrees
(digital cameras have no inherent shutter limitations) provided a
reasonable reduction of the strobing artifacts.
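Those shutter figures are easy to sanity-check, since shutter angle relates exposure time to frame rate by exposure = (angle / 360) / fps:

```python
# Shutter angle to exposure time: exposure = (angle / 360) / fps.

def exposure_seconds(fps: float, shutter_angle: float) -> float:
    """Per-frame exposure time for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / fps

# 24 fps with a conventional 180-degree shutter
print(round(exposure_seconds(24, 180), 4))    # 0.0208 (~1/48 s)
# 48 fps with the 270-degree shutter discussed in the talk
print(round(exposure_seconds(48, 270), 4))    # 0.0156 (~1/64 s)
```

Doubling the frame rate while opening the shutter keeps each frame's motion blur substantial, which is what smooths the strobing that a short-shutter 24fps image exhibits.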

Douglas Trumbull also showed that footage captured at higher frame
rates can be effectively downsampled to provide content for standard
24fps 35mm projection, thereby allowing studios to provide the same
content to theaters of different capabilities, albeit with the same
quality compromises inherent in those slower frame rates.  And current
second-generation digital cinema projectors can already support these
higher frame rates with only a software upgrade.
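Conceptually, the simplest such downsampling just keeps every other frame of a 48fps capture. This sketch is my own simplification; real pipelines might instead blend neighboring frames to approximate a longer 24fps shutter:

```python
# Naive 48fps -> 24fps downsample: keep every other frame.

def downsample(frames, factor=2):
    """Drop frames to reduce the rate by an integer factor."""
    return frames[::factor]

footage_48fps = list(range(48))          # one second of 48fps frames
footage_24fps = downsample(footage_48fps)
print(len(footage_24fps))                # 24
```

The appeal for studios is that one high-frame-rate master can serve both HFR-capable theaters and conventional 24fps houses from the same capture.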
Continue reading "SIGGRAPH: High Frame Rate Cinema" »
