
Recent Blog Posts in 2013

September 17, 2013
  IBC 2013: New Levels of Real in Content Capture
Posted By Tom Coughlin
At the 2013 IBC conference in Amsterdam, the introduction of advanced cameras and digital storage for cameras gave some insight into the shape of things to come for the digital capture market. These included a new flash-based digital storage format for ARRI cameras, cameras supporting up to 200 fps, a prototype motion scene camera, as well as a prototype mobile 8K camera from NHK.

According to a Coughlin Associates survey of media and entertainment professionals, many of them members of SMPTE or the HPA, flash memory is the storage medium most used in modern cameras (59% of the survey participants used flash memory for camera recording media). Various flash memory formats are used in professional cameras from companies such as Sony, Panasonic, Ikegami, and ARRI. At the 2013 IBC, ARRI and SanDisk introduced a new flash memory format for digital cameras: CFast 2.0 recording media.

CFast 2.0 has flash storage capacities of 60 GB and 120 GB with write speeds up to 350 MB/s. This write speed supports recording with high-quality codecs at high frame rates. In fact, the new Amira documentary-style shoulder-mount digital camera that ARRI announced at the IBC uses CFast 2.0 media to record at up to 200 fps. Most professional video cameras and flash storage media today support only up to 120 fps in full-resolution formats.
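To see why a 350 MB/s card still depends on compression at these frame rates, here is a rough back-of-envelope calculation. The frame dimensions and bit depth below are illustrative assumptions, not ARRI specifications:

```python
# Hypothetical figures for illustration only (not ARRI specs):
# a 12-bit 2880x1620 sensor readout recorded at 200 fps.
width, height = 2880, 1620      # assumed recording resolution
bits_per_sample = 12            # assumed sensor bit depth
fps = 200

bytes_per_frame = width * height * bits_per_sample / 8
required_mb_s = bytes_per_frame * fps / 1e6
print(f"uncompressed: {required_mb_s:.0f} MB/s")  # ~1400 MB/s, roughly 4x the card's 350 MB/s
```

Uncompressed data at these assumed settings would need roughly four times the card's write speed, which is why efficient high-quality codecs are what make 200 fps recording to CFast 2.0 practical.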

ARRI also showed a prototype Motion SCENE Camera at the 2013 IBC. SCENE is a European research project to create advanced imaging technology. SCENE has a goal of developing novel representations and tools for digital media beyond sample-based (conventional video) and model-based (CGI) systems.

In the IBC Future Zone the Alexa SCENE prototype showed an RGB+Z camera that couples an Alexa Studio camera with a time-of-flight camera. By including the time-of-flight camera with the conventional camera, imaging through the same visual portal, this camera prototype can capture RGB images combined with depth information. This allows the resulting images to be manipulated like CGI images and provides inherently 3D, spatiotemporally consistent captured content. Combining CGI animation with video will be more seamless with this approach, creating new possibilities for visual effects and more efficient video workflows.

Also on display were several other interesting camera arrangements using several cameras at once. Hitachi was showing an array of 16 small cameras, while the Fraunhofer Institute had a camera system that can capture an entire 360-degree view at one time.

For several years NHK has been championing 8K x 4K video. In the Future Zone at the 2013 IBC, NHK was showing 8K video from the 2012 Summer Olympics and other sources on a large-format Sharp 8K video display. In the same exhibit they had their mobile camera prototype for 8K content capture on display. NHK has been vigorously pushing 8K video and plans to have initial 8K broadcast trials within a few years. At the 2013 IBC there was a clear push to implement HEVC (H.265) compression for video content. On display were many examples of HEVC encoding and decoding for 4K video.

The use of an advanced compression codec for content delivery will also enable future 8K content. With these advances in content capture combined with advanced content delivery and display technology, consumers will be able to experience whole new levels of realism in future video displays.


July 30, 2013
  SIGGRAPH 2013: The Spark CG Society
Posted By Scott Singer
I sat down with Larry Bafia, Sly Provencher, and Dennis Hoffman of Spark Animation and Spark FX, to find out more about their organization and their connections with SIGGRAPH. The Spark CG Society is a group from Vancouver that is dedicated to being a nucleus of community building in the Vancouver animation and VFX arena.

Among the many events they sponsor there are two important conferences - Spark Animation and Spark FX, which bring people together from schools and production companies in the Vancouver area for several days to attend talks by industry professionals, artists, and educators. The talks and presentations are first and foremost educational - sometimes discussing specific techniques and concepts, but also bringing in industry luminaries to share their thoughts on the creative process.

They have recently entered into an official three-year partnership with the SIGGRAPH organization to cooperate on conference organization. In fact, the Spark CG organization grew out of the Vancouver SIGGRAPH chapter, recognized as the most active SIGGRAPH chapter in the world. They were instrumental in bringing SIGGRAPH to Vancouver in 2011 - the first time the conference was held outside of the US. This was so successful that SIGGRAPH is going back to Vancouver in 2014.

This year's conference, Sept. 11-15, will feature lectures, presentations, and screenings along with an "Anijam," where the audience will be asked to actively contribute to the making of an animation during a session. The theme is "Story and Storytelling," and there will be a special screening of the Disney animation "Get a Horse," which was started by Walt Disney over 80 years ago but only recently taken back up and finished by current Disney animators. It made its debut at Annecy earlier this year, and this will be only its second public screening.

Spark is committed to making their conferences affordable, and rather than selling a single non-transferable pass, they sell a set of tickets to each event. Attendees unable to see a particular session are actively encouraged to give those tickets to colleagues who can.

During the year they sponsor numerous other events, including guest lectures, educational events, and special screenings that try to keep the history of animation and cinema alive. They have a commitment to keeping the Vancouver animation, game, and VFX industry an informed and coherent community.

July 30, 2013
  SIGGRAPH 2013: The effects omelet
Posted By Scott Singer
The "Effects Omelet" presentation at SIGGRAPH is always a great source for inspired creativity on the ground by VFX artists and TDs. David Lipton, Head of Effects at DreamWorks Animation, gave a particularly interesting talk about how he achieved the Jack Frost frost effect in DWA's "Rise of the Guardians".

The frost effect made interesting use of old-school approaches to get more controllable, artistic results. The frost needed to be a highly stylized, very art-directable, and expressive effect, where Jack's staff would freeze objects by propagating elegant, icy arabesques that skated across surfaces, covering them in stylized frost patterns.

Lipton said that they were helped immensely by the copious notes, reference images and concept art prepared by the Art Department.  This gave him and his team a very distinct target to aim for, and helped to narrow the problem at hand.

The first approaches were simulation-based, but proved hard to control, especially because the effect itself needed to be an expressive actor in the film, with its performance often directing the eye through key story moments. The winning approach was to look far back into the history of computer graphics to an old standby: cellular automata. These are systems in which cells of a grid, like pieces on a checkerboard, follow simple rules that determine how each cell becomes filled by its neighbors. In this case the rules would determine how ice would grow from square to square as time progresses. The speed at which the squares were filled defined paths, like roadways, along which the delicate and stylized crystal patterns would be constructed. Because the automata exist in a grid, the rules could be "painted" in like pixels in a digital photo, providing a high degree of control. The end result was a controllable yet very organic-looking crystal propagation that added a sense of magic and expressiveness to the scenes.
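DWA's production code is of course not public, but the cellular-automaton idea can be sketched in a few lines. Everything below (the four-neighbor rule, the painted speed map, the probabilistic growth gate) is an illustrative guess at the kind of system described, not the actual implementation:

```python
import numpy as np

# Sketch of grid-based frost growth: a cell freezes once a frozen neighbor
# reaches it, and a painted "speed map" gates how fast each cell is claimed.
def grow_frost(speed_map, seed, steps):
    """speed_map: 2D array in [0,1], painted by an artist; seed: (row, col)."""
    frozen = np.zeros_like(speed_map, dtype=bool)
    fill_time = np.full(speed_map.shape, -1)   # step at which each cell froze
    frozen[seed] = True
    fill_time[seed] = 0
    for t in range(1, steps + 1):
        # a cell is reachable if any of its 4 neighbors is already frozen
        neighbor = np.zeros_like(frozen)
        neighbor[1:, :] |= frozen[:-1, :]
        neighbor[:-1, :] |= frozen[1:, :]
        neighbor[:, 1:] |= frozen[:, :-1]
        neighbor[:, :-1] |= frozen[:, 1:]
        # painted speed acts as a per-cell probability of freezing this step
        new = neighbor & ~frozen & (np.random.rand(*frozen.shape) < speed_map)
        frozen |= new
        fill_time[new] = t
    return frozen, fill_time   # fill_time defines the "roadways" for crystals

np.random.seed(0)
speed = np.full((64, 64), 0.5)                 # uniform map for the demo
frozen, when = grow_frost(speed, seed=(32, 32), steps=40)
```

The `fill_time` array is the key output: its iso-contours trace the propagation paths along which stylized crystal geometry could then be instanced, which matches the "roadways" idea in the talk.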

July 30, 2013
  SIGGRAPH 2013: A slice of Monster Pi
Posted By David Blumenfeld
My third and final day of SIGGRAPH came and went. It was a pretty nice show this year, although as I mentioned before, it was smaller and definitely less energetic and enthusiastic overall. I'm sure this is a combination of the economic downturn in this business coupled with the location. I think it would be fair to say there also wasn't a ton in the way of new innovation being showcased. Almost every year, there's at least one major theme (evidenced by the booths on the expo floor as well as the talks) in which some new development or tech topic sticks out, but this year really seemed to be more of the same, with not much new. I attended three talks during the day: the production session for Life Of Pi, the State Of The VFX Industry discussion, and a final session with some making-of information on Monsters U and The Blue Umbrella. All were interesting, and I'll get into that shortly, but there was one topic I thought I would mention first.

I've been in the visual effects business since the mid-1990s, and while I've done my share of various types of work, it would be fair to say I spent most of the first decade working on feature films and shorts (with the occasional ride installation, music video, and special venue project thrown in). Aside from around six commercials during that time, most of my work focused on large-scale, longer-term productions. However, for the last six years, I have spent the bulk of my time engaged almost solely in television commercial visual effects, racking up easily over one hundred spots during that time. As most everyone knows, commercial post production today is a different beast than it was a decade ago. Every spot is made at a minimum of full HD resolution, with some larger formats on occasion, and the type of effects necessary must be at the same caliber as those in feature films. Of course, as schedules and budgets shrink, this work must usually be accomplished from start to finish in a few weeks at the most, using crews from small to minuscule. This is what draws me to this work, and makes it fun and exciting. While there is often a sacrifice on the R&D portion of the process due to time constraints, the ability to quickly figure out a solution, implement it, and create a number of shots in that short period is challenging and extremely rewarding. The sheer volume of advertising that makes use of this type of work is increasing all the time, and due to this, there are a number of studios out there, both large and small, whose work focuses on commercial post production entirely.

Additionally, because the hardware, software, and skillset used to produce this work is now identical to that of feature vfx, many artists cross over between the two types.  The reason I bring this up is that, as in the past, commercial visual effects is nearly entirely absent from SIGGRAPH in all forms.  There are no production talks on the making of these spots, there are no screenings (that I am aware of) of the top spots of the past year, and there is not even any mention of this sector of the workforce in the state of the visual effects industry talks.  I find it a great disservice to this large portion of practitioners and their work that this has been left out from the show, and I hope to see this added in the near future. 

As I watch my peers discuss their techniques on the films they are presenting, I see things that I fully understand and agree with, but have to forgo and "fake" in order to make the deadlines.  I am often able to match the quality in the result, but it is these types of techniques that I feel should be shared with the vfx community during these talks, as I think there is merit and benefit for those in attendance.  One reason commercial vfx houses continue to succeed (those that do, of course) is because they have found a way to keep their overhead low and the effects budgets in check by performing in a highly efficient manner.  This is an area that could be utilized by some of the larger scale film vfx studios so they can remain in business and operate at a more profitable margin, something that is of great concern recently and one of the many factors forcing these companies out of business or to look for financial relief elsewhere. 

Having worked at many of these larger studios in the past, I have witnessed firsthand a large amount of waste in regard to inefficient workflows, including too many layers in their vertical stratification, as well as development that, dare I say it, makes for a better SIGGRAPH paper than it does a necessary step in the VFX production process. Anyway, in summary, let's try and push for some commercial visual effects representation at the show; it is a large part of the community and would provide a greater amount of knowledge for the attending crowd.

For my first talk of the day, How To Bake A Pi, a panel of five supervisors from Rhythm & Hues spoke about some of the technical challenges and processes on the production, including the creation of photoreal animals, digital oceans and skies, artistic imagery (not just real but pretty), and working in stereo 3D. Highlights included the problems they solved during production by constructing a practical in-ground 70x30x3-meter water tank, complete with wave generators and special concrete "tetrapods" used to counter the effect of waves reflecting off the walls (called the bathtub effect), for a more realistic practical simulation of an open body of water. Houdini, Naiad, and other custom tools were used to realize the CG ocean extensions. While not directly mentioned in the talk, I assume the underlying deformations of the water are based on the Tessendorf algorithms. I have done some pipeline development at Brickyard using these algorithms and tools such as the Houdini Ocean Toolkit (ported to Maya) for this purpose, with great results. For anyone looking to do open-ocean simulation, this is a great place to start, and a simple web search will get you to some source code and precompiled tools you can use on your own for development purposes. Back to the presentation: they further discussed the FX animation for whitecaps, mist, foam, churn, spray, and other water interaction, as well as splashes, which were added as separate elements.
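For the curious, the core of a Tessendorf-style ocean is small enough to sketch. This is a toy version (the grid size, wind vector, and spectrum normalization are arbitrary choices for illustration; real implementations like the Houdini Ocean Toolkit add dispersion animation, choppiness, and careful spectrum handling):

```python
import numpy as np

# Toy Tessendorf-style heightfield: build a Phillips spectrum over a grid
# of wave vectors, give each mode a random complex amplitude, and inverse-
# FFT to get one frame of a tileable ocean height grid.
N, L = 64, 100.0                    # grid resolution, patch size in meters
wind = np.array([1.0, 0.0]) * 8.0   # assumed wind vector (m/s)
g = 9.81

k1 = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
kx, ky = np.meshgrid(k1, k1)
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1e-6                      # avoid divide-by-zero at the DC term

Lw = np.dot(wind, wind) / g         # largest wave arising from this wind
kdotw = (kx * wind[0] + ky * wind[1]) / (k * np.linalg.norm(wind))
phillips = np.exp(-1.0 / (k * Lw) ** 2) / k**4 * kdotw**2
phillips[0, 0] = 0.0                # no DC (mean sea level) contribution

rng = np.random.default_rng(0)
h0 = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * np.sqrt(phillips / 2)
height = np.real(np.fft.ifft2(h0))  # one frame of the heightfield
```

Animating the phases with the deep-water dispersion relation and re-running the inverse FFT per frame is what turns this single heightfield into moving water.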

Moving on to the discussion of their characters, they talked about muscle and skin development and bone simulations as well as the hair and fur systems for the animals. Of note, the tiger had approximately ten million hairs, while the zebra was in the twenty-million range. I was a bit surprised by this amount, thinking it would be higher, as when I have had to do animal mug replacement, I tend to use approximately two million hairs for the front of the face alone. They talked about some of the advances they made in rendering this fur, such as intelligent raytracing and importance sampling, coupled with partial hair transparency and ray occlusion, along with the use of subsurface scattering on the hairs themselves. They spoke briefly about some of the crowd work as well, for shots of flying fish, meerkats, etc. At the end, they presented a few metrics, the most interesting of which were a total of (if using a single processor) 1,633 years of render time, with a peak disk usage of 260 terabytes. In all, the talk was well done and informative, and the work they accomplished was beautiful. I hope the talented team there is able to emerge from their financial woes and put all the artisans who created this back to work soon.
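To put that 1,633-year figure in perspective, a quick conversion helps (the farm size below is a made-up number, and real farms never parallelize perfectly):

```python
# Hypothetical farm size; assumes perfect parallelism for simplicity.
single_cpu_years = 1633
cores = 10_000

wall_days = single_cpu_years * 365 / cores
print(f"~{wall_days:.0f} days of wall-clock time")  # ~60 days
```

Even spread across a ten-thousand-core farm, that quoted render budget works out to roughly two months of continuous rendering.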

From here, I took my last walk around the expo floor, checking out a few technologies that I missed over the last few days, grabbed a bite to eat, saw a few more old friends, and then headed to the State Of The VFX Industry talk. The talk itself started off with some good history of the industry, and then spoke to some of the major problems facing the business at this time. I won't get into this too much, as it can be a very polarizing subject and clearly open to interpretation, but I still encourage anyone working in this field to visit the websites they listed for more info.

I would encourage everyone to become educated on the subject matter so that any future discussion or participation can begin from an informed standpoint.

The last talk of the day was in the same location, and was about various Pixar technology, including some areas of Monsters U and the Blue Umbrella short. They spoke about the character rigging, crowd simulation and pipeline, vegetation such as trees, hedges, and grass population and simulation, and how this differed from that in Brave, as well as the rain and lighting/compositing pipeline for The Blue Umbrella. The work was well done, though I honestly didn't find any of it groundbreaking or particularly different than before. One thing I did take away, which I would like to look into and encourage the reader to do the same, is the NGPlant open-source library, presumably a set of base code they took advantage of for their tree development. As I may have mentioned before, I did some looking into SpeedTree software (a commercially available solution), which may be on my short list of upcoming software when I need it for a job. I don't know if this uses a similar code base or not, but I will definitely check out this other library as well to see if I can glean any techniques or useful processes out of it.
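The talk didn't go into NGPlant's internals, but parametric tree generators of this kind typically boil down to recursive branching with artist-controlled parameters. A generic sketch, with all parameters invented for illustration:

```python
import math
import random

# Generic recursive-branching tree skeleton: each branch emits a 2D line
# segment, then spawns two shorter, angled children until depth runs out.
def grow(x, y, angle, length, depth, segments):
    if depth == 0 or length < 0.05:        # stop at max depth or tiny twigs
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))    # record this branch segment
    for spread in (-0.5, 0.5):             # two child branches per node
        jitter = random.uniform(-0.2, 0.2) # slight randomness for realism
        grow(x2, y2, angle + spread + jitter,
             length * 0.7, depth - 1, segments)

random.seed(1)
segments = []
grow(0.0, 0.0, math.pi / 2, 1.0, depth=6, segments=segments)
print(len(segments))   # 63 segments: a full binary tree of depth 6
```

Real packages like NGPlant or SpeedTree add per-level branching counts, curvature profiles, and leaf distribution on top of this same recursive skeleton.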

In summary, I enjoyed the show and learned a couple of new things, as well as found a few areas I will be researching more in-depth in the coming months.

I found it interesting that the show seemed to be attended by a significant number of people who didn't appear (at least at first glance) to actually be working in this field, at least in my opinion (one can never really know). As mentioned above, I would really like to see more real-world examples of VFX production in areas besides large-scale film production and university or tech-industry research discussed, as there is definitely room for a greatly expanded set of classes, talks, and other show events that would be of interest to many in attendance. I likely won't be going to next year's show in Vancouver, but hopefully I'll return to the show in two years if it's local to Los Angeles again. It would be great to see it bigger and better than before, and I hope to see you there as well!

July 30, 2013
  SIGGRAPH 2013: Printers, Trees, and Raytracing, Oh My!
Posted By David Blumenfeld
Hello again, loyal reader! Today was a fun day at the show, albeit more freeform. I wasn't able to arrive in time for the Iron Man 3 production session, so I decided to spend the day doing some more in-depth research about the items and ideas on the expo floor. This also gave me a good chance to check out some of the parts I tend to explore less, such as the Emerging Technologies, Studio, and Art Gallery sections. As always, I ran into a number of old friends and coworkers, and had a chance to catch up with them for a while. In many ways, this is one of the most enjoyable parts of the show. It's funny how time flies; there were a large number of people I hadn't seen in nearly a decade! Along with some old colleagues, a few former students of mine came up to me as well. Nearly all of them are actively employed, mostly at the same company for quite a number of years, so that's a great thing. We talked about what we were up to and the like, but a few of them sincerely thanked me for the help I gave them, and the fact that my piddly few classes actually made a difference in their lives and careers. I only taught for a few years, back in 2002-2004 or so, but to hear something like that really made me feel good, knowing I made a positive impact on some people and that the time I spent doing that actually meant something to someone. If the world had more people who actually cared about seeing others succeed, it would definitely be a better place. Anyway, enough about that.

I had a chance to check out a number of different things on the floor, and while there were some cool booths for NVIDIA, Intel, and Epson, as well as some of the major software vendors (notably absent was Autodesk), it was some of the smaller setups I ended up gravitating towards. As I mentioned in yesterday's column, there was a booth for a piece of software called Flux, made by FXGear. They have a few other products, including a hair and cloth simulator, but the fluid sim is what interested me the most. While very Naiad-like, the performance looks to be faster than that as well as RealFlow, and most importantly, the multiprocessing it provides reportedly scales dramatically with the number of cores in your machine. The person I spoke with also indicated I could run the simulation simultaneously on my farm as well, so we have received a demo of it and are in the process of setting it up. I am excited to try this out, as, if it performs as indicated, the price-to-performance ratio seems like it might be a winner... time will tell once I get a chance to play with it. I also spent a fair amount of time at the SpeedTree booth.

The last time I looked into this, it was Windows-only, with the version that had the feature set I desired at the five-thousand-dollar price point. This time around, there is a Linux version available, and in the new version 7, which is supposed to come out of beta in a few weeks, quad export will now be standard in the midline version at one thousand dollars. While this is node-locked as opposed to floating, I don't see it being a problem to have it on a dedicated machine, especially since there is really no benefit to using the farm to process anything anyway. And if it was needed for more artists at one time, you could still pick up four more licenses before reaching the price of a single floating license. I'm hoping that this package might fit into my pipeline soon, and I intend to write about that as well once I have some feedback. I also looked at more of the 3D printing technology and services, as well as a cloud rendering solution and a non-GPU-based, real-time raytraced viewport for Maya and Max (sadly it was Windows-only, so I won't be trying this out until they port it to Linux).

Another interesting thing happened while I was in the Emerging Technologies area. There was a studio class going on, which I opted to watch for a little bit. While sitting there, I looked around a bit and out of the corner of my eye saw a roving object. I doubt this is the first time this thing was there, and the technology may not be cutting edge, as I know there are similar assistive devices out there as well as similar equipment for defusing bombs and such, but what I saw was basically a thin vacuum-cleaner base with motorized wheels, two vertical five-foot poles, and a monitor, video camera, speaker, and microphone mounted on top. In the monitor, you could see the image of the driver, who was operating this device from a remote location. It was clear they were moving this around, talking to people, watching things going on, etc. It occurred to me that this device, albeit primitive, is the beginning of a Surrogates-esque avatar, and in the future, if our species opts to embrace this type of virtual interaction, I can look back and remember this primitive device of the early 21st century and laugh. Oh, the humanity!

In conclusion, today was about catching up with people, seeing some new things on the expo floor, and taking in the atmosphere of the show.  Tomorrow I intend to do a few production presentations and talks, so I should have more to report on.


July 25, 2013
  #Siggraph2013 Emerging Technology
Posted By Scott Singer
Girish Balakrishnan, a master's candidate from Drexel University, was demonstrating his performance-capture camera rig, made entirely of commodity consumer components. It's centered around an iPad and attached PlayStation 3 controllers that provide the rig's spatial tracking as well as the user-interface components.

The virtual world which the camera operator navigates is provided as a Unity game-engine scene running on the iPad. As the operator moves through space, the iPad displays that motion through a virtual camera in the game scene - like Avatar on a beer budget. The iPad integrates data from the PlayStation controllers with its own, storing it as a file that can be imported into Maya or MotionBuilder.
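The article doesn't say what file format the rig writes; one plausible sketch is a .chan-style ASCII dump of per-frame camera transforms, which Maya or Nuke pipelines can ingest with a trivial importer. The filename and sample values below are invented for illustration:

```python
# Hypothetical per-frame camera samples: (frame, tx, ty, tz, rx, ry, rz).
samples = [
    (1, 0.0, 1.6, 0.0, 0.0, 0.0, 0.0),
    (2, 0.1, 1.6, -0.2, 0.0, 2.5, 0.0),
]

# Write one whitespace-separated line per frame, .chan-style.
with open("virtual_camera.chan", "w") as f:
    for frame, tx, ty, tz, rx, ry, rz in samples:
        f.write(f"{frame} {tx:.4f} {ty:.4f} {tz:.4f} "
                f"{rx:.4f} {ry:.4f} {rz:.4f}\n")
```

A flat text format like this is attractive for a student project precisely because every DCC package can parse it without a plug-in.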

Balakrishnan has been interested in performance capture for years and feels that the current crop of tools leaves users too tethered to the mouse and keyboard. He wants to change that using tablets, commodity cameras, and game technology. His enthusiasm for the project might just make it a reality.

In its current configuration it can serve as a low-budget indie game production tool or a very inexpensive previs tool for independent film and video production. Girish is looking into how to incorporate new HD cameras like the Blackmagic to build a more robust camera performance-capture system that could expand the creative palette of independent filmmakers. Performance in the venue was hampered by the huge amount of wireless interference in the Emerging Technologies hall, but it would be interesting to see how it performs in its intended environment - the mocap green-screen stage in your garage.

July 24, 2013
  SIGGRAPH 2013: Day 1 - Oz, Man of Steel, 3D printing & more
Posted By David Blumenfeld
Hello again! Another year, and another SIGGRAPH is here. This is my first time in the Anaheim Convention Center, which is somewhat exciting, as I've always wondered about the inside of the Arena building since I was a child coming to Disneyland long ago.

The venue is quite nice, a bit smaller than LA, but clean and easier to navigate. I was finally able to make it here, as work obligations took priority at the beginning. After a two-hour, heavy-traffic drive from home, I arrived to begin my day.

My first visit was to the "Production Session for Oz, The Great And Powerful," presented by some of my ex-colleagues from Sony. The main topics they covered were the digital environments, VFX, animation, and character work. The presentation was well done, and they had some nice shots from on-set in Michigan, which definitely helped the talk. Some of the more interesting topics included shots which had the entire set, including the ground contact, replaced entirely (which seems more common these days), but these plates then had to be scaled down in frame to emulate a much wider lens and still hook up properly with the extensions - a more cumbersome way to work for sure, but well done to say the least.

One thing that always sticks out in my head is the extra steps they are usually able to take on the set for these large films, such as LIDAR scans of the set. Not only having access to the equipment (or the budget to outsource it), but the time to run the scans and the cooperation of the production crew in acquiring them is a huge benefit. As someone who has spent the past six years almost exclusively on commercial production, these luxuries are usually impractical, both time- and money-wise. I'm always forced to solve these problems a separate, often less-accurate way.

As with any other technology in this business, I'm anxiously awaiting the day when I can purchase a LIDAR scanner for 10 percent the current price (there's a $50K model on the expo floor being demoed), and that runs faster than the five to ten minutes it takes currently. On a commercial production, you're lucky if you can get the crew to pause for :30 for you to manually shoot an HDR as fast as you can, and heaven forbid anyone clears the set: just standing still in one place is about as much as you can hope for.

After the environment portion, the discussion shifted to the FX department, responsible for stormy skies, snow, wind, water, fireworks, explosions, crowds, destruction, rainbows, bubbles, etc. The talk finished up on the character animation, and the part I enjoyed the most was their use of what they termed a "puppet cam" - basically a hand-held boom with a monitor and video camera on the end. The other end of the set-up was a second monitor and video camera in a trailer, where the voice talent sat. This enabled the actor on-set to verbally interact with the non-existent CG character in a method far better than a ball on a stick, allowing them to see each other and react to their facial cues and such. A great idea if you ask me.

After this, I visited the exhibit floor for a bit. As seems to be the trend, it was smaller than the previous year. I'm not sure if this is due to the location, the rising costs associated with having a booth, or both, but in either case, it's unfortunate. Nevertheless, a few things stuck out. For one, there were fewer motion-capture booths than before, and of the few I saw, one opted to get rid of the girl in a skintight suit and went for a skater on a mini-pipe enclosed by a net to stop an errant skateboard from whacking some unsuspecting visitor.

Having skated myself when I was a teenager, I couldn't help but feel a bit bad for the guy in his ever-so-revealing suit. Oddly enough, I happened to walk by at the exact moment he munched it hard, causing a large group of people to react in shock at the loud noise. I half expected the cloud of bees to show up and form the shape of a needle and yell "Skate Or Die" at him, but alas, that didn't happen.  I would be curious if they could track the motion of each bee in the swarm though... that would be impressive! But I digress.  

There were a few 3D scanner companies and services showing, but not as many as I would've liked to have seen. Of the two big ones, Stratasys and 3D Systems, only the former was present. It doesn't seem like the prices have come down drastically on the machines, which is a bit unfortunate. There were definitely fewer scanning systems, and as far as I could tell, only a couple of glasses-free 3D displays (which was nice for a change). One company was there showing their new fluid simulator, Flux, which I intend to have a deeper look at. Overall, I would say there wasn't anything in particular on the floor that really stood out as new and groundbreaking this year.

My next stop was the "Man Of Steel Production Session," held in the very cool Arena (which was smaller inside than I had expected). There seems to be a greater push this year in the sessions to really pound home how they don't want the attendees recording the sessions in any form. There are now volunteers pacing up and down the aisles trying to enforce this as well. To be honest, anything more than a simple mention of it is distracting, and I found the people walking around during the talks to be slightly annoying as well. In many ways, it reminded me very much of my visit seven years ago to the Sistine Chapel. The entire time in the main room, while everyone stands shoulder to shoulder in an effort to view the indescribable beauty, you are bombarded with people half-yelling "no photos." They used to use the claim that the flash would ruin the paint, but it sure seems the reality is that they just want to sell you the pictures THEY have taken instead. That seemed oddly similar to what was going on here, and considering some clown will likely record the thing anyway despite the rules, it just seemed like more of an annoyance to have nearly 10 minutes of the "due diligence" speech. Anyway, on to the presentation.

Five VFX studios were represented: Weta, Scanline, LookFX, MPC, and Double Negative. Now personally, I haven't seen the film. In fact, I rarely see movies anymore, simply because my current schedule and other factors make it difficult at best. However, looking at the footage each company showed, to be blunt, the work looked absolutely fantastic. The extensive use of digital doubles was close to flawless in what I saw, and the sheer volume of set work, building destruction, volumetric FX, and such really looked great. It was interesting to note that multiple shops are using Esri CityEngine for building creation, adding their own custom functionality to it where needed. I remember doing my own city-builder development nearly 10 years ago in anticipation of some work for the last Superman movie, but this tool seems to be a great solution for what used to be a difficult problem.

The fact that there's a commercial solution out there now that is extensible and pretty full-featured really solves this problem in a big way. Weta's work on the liquid geo display sequence was very inspiring and well done. MPC's Enviro-Cam solution was also impressive: they used a 5D to capture HDR-style panoramas, only with a single exposure, stitching 74 pictures together into a 55K image instead of the 7-8K image you would get with just three shots. Again using LIDAR to build a virtual set, these environment spheres were then projected onto that geometry, giving you a relatively photorealistic set (angle limited, of course).

I actually wrote about a similar technique last SIGGRAPH, used in a similar way but with actual HDRs, so you could light the scene off this geometry as well. It's a bit of a time-consuming process to build, and can be done either using off-the-shelf software designed for this, or using projections in a package like Nuke and then re-exporting back out into Maya or whatnot.

At some point as time permits, I'd like to play around with this same technique, but using the actual images as an input to photogrammetry to bypass the need to scan the set. If I could set up a pipeline to do this task on a commercial schedule and budget, that would be absolutely fantastic.

In all, it was a good day at the show, and I had a chance to catch up with a few old friends for a bit and actually have a meaningful (non-whiny) discussion about the state of the industry and some well-thought-out potential solutions. I'm hoping to be able to attend the talk on Thursday about this subject, and possibly shed a little bit of light on one area that I think is worth talking about. In the meantime, please feel free to drop me a line with questions, comments, or random thoughts about any of this. And now, time to get some sleep and get ready for another day at the show!

David Blumenfeld is CG Supervisor at Brickyard VFX in Los Angeles. The studio also has a location in Boston.
Continue reading "SIGGRAPH 2013: Day 1 - Oz, Man of Steel, 3D printing & more" »

June 20, 2013
  Will my stream exceed my grasp?
Posted By Tom Coughlin

SMPTE Entertainment Technology in the Internet Age, June 18, 2013: IMF addresses a significant problem in creating alternative versions of a piece of content to meet language, subtitle and other requirements for various markets. According to Howard Lukk from Disney Studios, up to 35,100 versions of a single piece of content are possible. IMF is a master file format that allows mezzanine-level data compression and stores differences between versions rather than flattened linear versions. This saves storage space and makes management and repurposing of content much easier. Pierre Lemieux of Sandflow Consulting said that IMF sits between the source master (digital intermediate) and deliverable content for distribution channels.
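To get a feel for how those version counts multiply, here is a toy calculation. The per-dimension counts below are hypothetical (one possible factorization of the 35,100 figure, not Lukk's actual breakdown):

```python
# Hypothetical per-dimension counts -- NOT Disney's actual breakdown.
# The point is only that independent delivery requirements multiply.
from math import prod

dimensions = {
    "audio languages": 30,
    "subtitle languages": 45,
    "framings / aspect ratios": 2,
    "regional / airline edits": 13,
}

total_versions = prod(dimensions.values())
print(total_versions)  # 30 * 45 * 2 * 13 = 35100
```

Storing each of those renditions as a flattened linear file would be ruinous; IMF instead stores the shared essence once, plus small per-version composition data.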

IMF reuses proven technologies developed for digital cinema. It synchronizes content essence and metadata and provides a composition timeline broken into segments composed of sequences and captions. An Output Profile List (OPL) governs specified transformations of the essence. According to John Hurst of CineCert, most IMF files are in XML format.

Content delivery over the Internet is increasingly popular, but that popularity has dangers. Mark Watson of Netflix says 84% of their customers stream video at least once per week. YouTube has 13 billion videos, with an average user viewing 401 minutes per month. Internet video traffic now makes up the majority of bits transferred over the Internet, and without new compression and delivery technologies video streaming could "break" the Internet, especially with even larger 4K video files on the horizon to feed new high-resolution consumer TVs.

Dynamic Adaptive Streaming over HTTP (DASH) is one important element in conserving bandwidth. DASH features seamless adaptive streaming of content. Will Law from Akamai and Jesse Rosenzweig from Elemental spoke about DASH. Nine companies are providing DASH players today, and the speakers believed that Adobe and Microsoft would switch to DASH in the future, leaving Apple as the only vendor with a proprietary streaming format.
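The core idea behind DASH-style adaptation can be sketched in a few lines. This is my own illustration, not code from the talks, and the bitrate ladder is invented:

```python
# Toy DASH-style rate selection: choose the highest rendition whose
# bitrate fits the measured network throughput, with a safety margin.
LADDER_KBPS = [235, 750, 1750, 3000, 5800]  # hypothetical bitrate ladder

def pick_rendition(measured_kbps, ladder=LADDER_KBPS, margin=0.8):
    budget = measured_kbps * margin         # leave headroom for variance
    fits = [r for r in ladder if r <= budget]
    return fits[-1] if fits else ladder[0]  # fall back to lowest rendition

print(pick_rendition(4000))  # 3000: the best rendition under 3200 kbps
```

A real player re-runs this decision for every few-second segment, which is what makes the streaming "seamless" as network conditions change.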

New compression technology will also help control Internet traffic, particularly for 4K content. One conference participant told me that MPEG H.265 encoding (which promises up to 50% additional compression beyond H.264) requires 2-3X additional overhead for decoding (at the user). However, the processing load at the source is much greater: to get the best quality delivery content, about 100X more overhead is required.

IMF and DASH provide a framework for digital content delivery over the Internet. Combined with HEVC compression (MPEG H.265), these technologies pave the way for increases in video streaming in a world where many people don't want to physically possess, or even have a local copy of, content. These technologies can satisfy that customer demand for more content without exceeding available bandwidth.

Tom Coughlin, the founder of Coughlin Associates, has over 30 years of magnetic recording engineering and engineering management experience developing flexible tape, floppy disc and rigid disk storage at companies such as Polaroid, Seagate Technology, Maxtor, Micropolis, Nashua Computer Products, Ampex and SyQuest.

Continue reading "Will my stream exceed my grasp?" »

June 20, 2013
  SMPTE and the Internet
Posted By Tom Coughlin

The SMPTE and Stanford Entertainment Technology in the Information Age conference, June 18-19, 2013, included about 300 attendees interested in all aspects of the critical role that the Internet is playing in the media and entertainment industry. The sessions covered many topics related to connected media. On the first day these included: Content Creation for the Internet: New Tools and Concepts; Flash Forward - How HTML5 and Canvas Will Become the Next Interactive Screen for Web Media; Future File Formats for Entertainment Media: What are the Tech Trends and Implications for Internet Distribution?; Gaming, Entertainment and the Internet; Internet Media Delivery Formats - A DASH to the Races?; Next Generation Content in the Cloud: UltraViolet; and Mobile Internet Media: Content on the Go!

Content creation and distribution using the Internet was an important theme at the conference. Consumer interactivity and choice are increasingly important in the more open online world, and new cloud-based technologies have removed the barriers to greater participatory entertainment in the future. Traditional laid-back entertainment has its place, but there are new models that include social interaction with content that are changing the nature of entertainment. Games (whether for entertainment or business) are also an important element in the growth of online entertainment and in many ways have prepared the ground for the development of second-screen and other interactive media.

Ann Greenberg from Sceneplay pointed out that fans and artists are more connected than ever before. Sceneplay allows users to be "co-producers" using micro-metadata that adds intelligence to scripts. Metadata capture and management is an important element in combining content from disparate sources. Carl Rosendahl from the CMU Entertainment Technology Center extension in Mountain View, CA has about 20 students per semester developing interactive technology for games and other entertainment.

Peter Hirshberg from Enterprise Marketing showed some interesting video. He was involved with Bill Gates in producing a video to accompany Gates' 1995 book, The Road Ahead. Peter said that the video (and book) missed important trends such as the Internet, long tail content, the ascendant audience, open systems and social media. He pointed out that TV will not return to high viewership without the help of other media; note the recent Netflix deals with studios. Personal communications, such as Twitter, give real-time metrics and are displacing traditional measurement methods such as those from Nielsen. He said that in the future you won't just watch television; television will also watch you. He pointed out patents that use cameras and other sensors built into TVs to observe a viewer in order to tailor advertising to the viewer.

Social media also leads to broader game activity, such as a Grand Central game with global financial simulation. New technologies also allow unique ways to communicate, such as writing on water streams using water jets controlled by inkjet-printer-like technology, and airborne helicopter drones from MIT with lights on them that can be flown in tandem and controlled by individuals to create collective art and communication. An activity called "Conspiracy for Good" in Europe, supported by Nokia, involved 130 people in five countries in a three-month-long social benefit storytelling alternate reality game.

Making video content interactive and game-like opens up entirely new possibilities for entertainment and for people to work together in new ways. Clearly this is an area that will develop much further in the future, and it is a great example of how the Internet is changing our interaction with the world around us and, as a consequence, changing the nature of media itself.

Tom Coughlin, the founder of Coughlin Associates, has over 30 years of magnetic recording engineering and engineering management experience developing flexible tape, floppy disc and rigid disk storage at companies such as Polaroid, Seagate Technology, Maxtor, Micropolis, Nashua Computer Products, Ampex and SyQuest.
Continue reading "SMPTE and the Internet" »

June 17, 2013
  Thoughts on the new Mac Pro
Posted By Larry Jordan
Last week, Apple gladdened the hearts of power users everywhere by providing a "sneak peek" at the new Mac Pro. Stylish, diminutive, and blindingly fast - at least according to the specs provided by Apple. Since that time, I've been thinking a lot about a system that is directly targeted to meet the performance needs of video editors, and other power users.

First, keep in mind that this was a "Sneak Peek" - a tantalizing glimpse of what is coming in the future, not a formal product launch. (This is similar to what Apple did a couple years ago when they provided an "advanced look" at Final Cut Pro X at the 2011 NAB SuperMeet.) Consequently, while this "peek" provided an overview, it was intentionally sparse in providing details. Partly, I suspect, because Apple wants to gather feedback from potential users before nailing down the final specs.


One of the key things I realized was that this system is envisioned to be highly configurable. Just as the current Mac Pro has a wide variety of options for RAM, GPU, storage, and connectivity, the new unit will offer a range of choices as well.

If you think about it, the current Mac Pro is the most customizable system that Apple makes. Configuration is at the heart of the new Mac Pro as well. While I expect that there will be one physical unit, we will have a lot of choices about what goes into that unit.

This also means that we will see a variety of price points as well, depending upon how each system is configured. In this regard, the new Mac Pro is identical to the current Mac Pro.


Also keep in mind that Apple views Thunderbolt as more than a fast way to move data to and from a hard disk. Apple considers Thunderbolt as a direct connection to the PCI bus of the computer, able to deliver up to 20 Gb/second of data. Think of Thunderbolt as a direct line connecting the PCI bus to the expansion chassis of your choice.

NOTE: According to a reader, Intel is claiming throughput of about 1.6 GB/second for Thunderbolt 2, which is still very fast.

For most people, a fast computer coupled with lots of RAM and a really fast storage system will be all they need. In fact, Philip Hodgetts has written that more than 80 percent of Mac Pro users don't have any PCI cards in their system, aside from the graphics card. For those users, the new Mac Pro fits their needs for raw power, without adding tons of unneeded expansion slots.

NOTE: We used to think of PCIe card performance in terms of the number of "lanes" they used to connect to the motherboard. There were four, eight, and sixteen lane cards. The more lanes, the faster the potential communication speed between card and bus. With Thunderbolt, Apple is moving away from the concept of lanes, to straight data transfer speeds.

Thunderbolt 2 is fully-backward compatible with the original Thunderbolt. Thunderbolt devices can be connected by either copper or optical cables. Copper cables can be up to 3 meters in length (about 10 feet). Optical cables can extend up to 100 meters, for users that want to store their computers or RAIDs in a machine room for security, noise, or air conditioning reasons. Currently, optical cable lengths of 10, 20, and 30 meters are available on the market.

For users who need to expand the capabilities of their computer - for example, with DSP audio cards, video ingest and capture cards, mini-SAS or eSATA cards, or more graphics cards - a very real question becomes "how many card slots should the computer hold?" Apple felt that picking any number of internal card slots would be limiting to some number of users. By moving all expansion cards outside the box, then connecting with the very high-speed Thunderbolt 2 data bus, Apple essentially provided a virtually unlimited number of card slots for users who need the maximum in expandability.

NOTE: As a sidelight, one Thunderbolt 2 connection provides sufficient data bandwidth to ingest uncompressed 4K images, or output video to a 4K video monitor, or support VGA, DVI, and DisplayPort computer monitors. Plus Apple put an HDMI port on the back of the Mac Pro just for good measure.
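A quick back-of-envelope check makes that bandwidth claim plausible. The frame size, frame rate and bit depth assumed below are mine, not Apple's published figures:

```python
# Rough uncompressed 4K data rate vs. Thunderbolt 2's 20 Gb/s link.
width, height = 3840, 2160   # UHD "4K" frame (assumption)
fps = 30                     # frame rate (assumption)
bits_per_pixel = 20          # 10-bit 4:2:2 sampling (assumption)

gbps = width * height * fps * bits_per_pixel / 1e9
print(round(gbps, 2))  # 4.98 Gb/s -- comfortably under 20 Gb/s
```

Higher frame rates or 4:4:4 sampling push the number up, but a single 20 Gb/s link still has room for this class of signal.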

Already, ATTO and Sonnet, along with others are offering Thunderbolt to "X" converter boxes: mini-SAS, FibreChannel, eSATA, Ultra-SCSI. And vendors such as AJA, Blackmagic Design, and Matrox offer ingest and monitoring options connected via Thunderbolt.

The one missing piece is the lack of high-speed Thunderbolt-native RAID 5 storage systems, with the notable exception of Promise. There are plenty of two-drive RAID 0 and RAID 1 systems, but very, very few 5 to 10 drive RAID 5 systems, which we editors need the most. I've heard lots of rumors of what's causing the problem. Without pointing fingers, I hope this bottleneck gets resolved quickly.


We also need to consider that this is a system and not focus on one single element. The new CPU is twice as fast as the current Mac Pro in floating point operations. Memory bandwidth has doubled and now supports four channels of communication between RAM and the CPU.

The big news, though, was the addition of multiple GPUs. Although the ATI FirePros were featured, I suspect other options will also be available as part of the customization options Apple offers at launch.

NOTE: In terms of Final Cut Pro X, the GPUs determine performance for rendering effects, real-time playback of multiple layers, optical flow retiming, exporting, and video output to external monitors.

Here, things get interesting.

On Monday, Apple made a point of saying that a new version of Final Cut Pro X would be released to support the Mac Pro. That instantly made me think that all applications would need to be rewritten in order to run on the Mac Pro, which would make this new system a non-starter.

This is not the case.

Instead, think of the dual-GPUs in the Mac Pro as similar to when Apple released multi-processor CPUs. All applications would run on a multi-processor system, but until they were re-written to support multi-threading (which is the technical ability software uses to take advantage of more than one processor) the application would be limited to using only one processor. This was one of the big limitations of Final Cut Pro 7.
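The multi-processor analogy can be sketched in a few lines of Python (purely illustrative; this is not Apple's API). The same workload produces the same results either way, but it only uses extra hardware because the code explicitly opts in to the worker pool:

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):  # stand-in for a compute-heavy per-frame task
    return sum(i * i for i in range(tile * 1000))

tiles = list(range(1, 9))

# "Single-processor" style: one tile at a time.
serial = [render_tile(t) for t in tiles]

# Rewritten to use a pool of workers -- the opt-in step.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(render_tile, tiles))

print(serial == parallel)  # True: identical results, different plumbing
```

Moving from single-GPU to dual-GPU code is the same kind of restructuring, just at the GPU-dispatch level rather than the thread level.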

So, the Mac Pro will run all current Mac software. However, if the software wants to take advantage of the dual GPUs, it may need to be reconfigured to do so. This is not a small task for developers, but it isn't impossible. This is what Apple was referring to when they said a new version of Final Cut Pro X would be released to support the Mac Pro.

NOTE: Once developers know they can count on dual GPUs, they can design new software from scratch to take advantage of them, the way that everyone writes software today to take advantage of multiple processors and multiple cores.

UPDATE: A reader points out: "When using OpenCL, no code modification is required (a problem only for devs who don't use OpenCL). Some use the CUDA API (Nvidia), and this requires re-coding."

UPDATE: Another reader points out that the next versions of Adobe Premiere and After Effects already support OpenCL.

And the performance results of optimizing for dual GPUs can be astounding. Grant Petty, CEO of Blackmagic Design, tweeted earlier this week that they have been testing Resolve 10 on the new Mac Pro and it "screams."


Apple designed the Mac Pro as its most powerful and flexible desktop computer. They architected it to reflect where they see computers going for the next ten years. They provided a wealth of Thunderbolt ports - and converters - so that all legacy monitors, storage, and cards can be supported.

This has the potential to be an amazing piece of gear and I can't wait to learn more at the launch.

As always, I'm interested in your thoughts.

Larry Jordan is a producer, director, editor, author, and Apple Certified Trainer with more than 35 years' experience. Based in Los Angeles, he's a member of the Directors Guild of America and the Producers Guild of America.
Continue reading "Thoughts on the new Mac Pro" »

May 16, 2013
  Open House: A tour of Shure's IL headquarters
Posted By Luke Harper
Right before departing for IL for a Shure press event, I had a lot of conversations like this:

"Wait, you're going where?"
"Niles, IL. HQ of Shure."
"You Shure? HAHAHAHAHA seriously bring me back a '57, k?"
"Ha. Yes. I don't think they have a gift shop."
"They should have a gift shop."

Turns out they don't have a gift shop. But they have a lot of marvelous things... Now, you might not be an audio nerd, so why should you care? Well, because they're an impressive company. They would be in any field. The biggest and brightest have relied on their gear since the '20s, and still do to this day. From Roger Daltrey to President Obama to NASA, their client list is a thing of beauty. How they've managed to do it all these years is fascinating and worth knowing about.

So, with that, here's a recounting of the tour. 

Shure is located in Niles, IL. Actually on the border of Niles and Skokie, if you want to be specific. Their HQ is a massive, beautiful work of modern euro-sensibilities, all exposed concrete and glass. It's totally open in the center, and the roof is also transparent so light flows throughout. This is useful in a climate such as theirs, as a little sun goes a long way.

Upon entering we were introduced to our two tour guides, Christopher Lyons and Mike Lohman. Christopher Lyons is the Manager of Technical & Educational Communications at Shure, and a consummate host. The man is the perfect cross between corporate pro, diplomat and genuinely nice guy. Mike Lohman is the Senior Manager of Media Relations at Shure. A massive weightlifting booster, and as loyal an employee as you'd ever want to meet.

Unfortunately cameras were verboten, to the extreme woe of the videographer types representing, but they did have a corporate photog following who was very accommodating. The photog was a technical writer for the company for years, so was also a great source of info.

Armed with all of this, our troop was led upstairs where we had the privilege of meeting the President himself, Mr. Santo LaMantia. (He's known as "Sandy" around the company, but I wouldn't try it.) A note about the leadership at Shure, because it's interesting: the company was founded in 1925 by Sidney N. Shure, a salesman of radio parts. Since then, there have been two other presidents, both engineers. Mr. Shure's wife is still the Chairman of the Board, and a very consistent presence within the company.

Mr. LaMantia spoke of Shure's main challenges right now, which are common to the industry: wireless spectrum allocation woes. The FCC wouldn't be honestly described as either fast-acting or wholly clear about their intentions, much to the chagrin of companies like Shure, who have a large customer and product base dependent on the reliability of wireless signals. 

After this interaction, we were led to the second part of the HQ. The Shure layout is sort of mullet-y: business in the front, party in the back. The other building, while connected and of similar design, has a wholly different set of functions. Among other things, it houses the studio, product design and testing facilities, and the archive and service departments.

First, the studio. Oh, the studio. The Russ Berger Design Group was contracted to do an interesting thing with this facility - they had to design a hybrid. Aesthetically, the main room looks more or less like your average big main tracking room. The appropriate angles and materials are all present and where they should be, and the acoustics are gorgeous... but there's something fundamentally different about the very basic footprint. This room was designed to mimic real-world spaces, which can often be less than ideal. Along the back wall, there's a full backline. There's even a monitor world in the left wing. They need to test real-world conditions, and can do so quite well. A lot of thought went into this. The control room is a symphony of amazing - the Pro Tools system is fed by 192s and Prisms, and the monitoring is all massive ATCs in a modular 5.1 setup. They have more than one theater configuration, and the surround rears can be wheeled between setups.

From the studio we went into the heart of the Shure beast - the development and testing area. We all know that Shure gear is hella tough. This is because, since 1945, everything they make is "milspec," or built to military specifications, so it has to function in adverse conditions - like really gross humidity, or being repeatedly slammed into concrete. And by function, here's specifically what I mean: out of the box, the stock piece is given a frequency spectrum analysis. After torture, the equipment is tested again and has to perform within a fractional degree of that response.

The machinery they've both contracted and built themselves to torture equipment with is pretty comprehensive, ranging from brute simplicity to incredibly expensive and high-tech. For instance, they have a mic stand positioned over various surfaces which is designed to be triggered to fall from certain heights. Six feet is the average for microphones, but rack gear is also subjected to gravity, albeit from lower positions. Rack gear isn't typically six feet up and precarious, though, so I think they've done a reasonable job covering their bases. For the sheer hell of it, the Manager of Corporate Quality Engineering, Boris Libo, dropped a 58 a few times, denting it rather nicely. 

Apparently the spherical filter on the top of the mic is multi-purpose - besides being a popper stopper, it's also very specifically designed to be a crumple zone, and crumples in a pretty specific way to avoid hurting the capsule. Clever. Every single piece of gear gets hurt, though. Every lav, wireless receiver, microphone... everything gets drastically heated and cooled, beaten and maimed, and dunked in the grossest humidity outside of Minneapolis in the dead of summer (90%! Yay!). 

For sonic testing, they have a pair of matching anechoic chambers. I don't know if you've had the pleasure recently, but anechoic chambers are kind of creepy. To be more specific, what they do to your hearing and equilibrium is a touch creepy. 

The entire space is designed to eat any and all acoustic reflections. We are so used to sound coming at us from a specific source or set of sources and then reflecting off whatever we are surrounded by, that suddenly having that removed is disorienting. So you're standing in there, looking down through the mesh floor at the same massive wedges underneath you as are on the walls and ceiling, marveling at the precision of it. And then the guide starts talking, and turning while he talks. The difference in volume between facing you and facing the other direction is remarkable. The voice becomes quiet to the point of being slightly hard to understand.

The point of all of this, of course, is to create as ideal a sonic situation as possible to test the frequency response of the equipment. There is a track upon which a stand and mic are placed, aimed at a high-end and incredibly flat loudspeaker through which test tones are run (it's a concentric-coned Tannoy, just in case you're keeping nerd score). These tones are very specifically tailored to provide a wholly accurate response test from the microphone. You see the results represented as the frequency response charts within the documentation that comes with any microphone - the chart for the KSM313, for example.
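The measurement principle can be sketched numerically (an idealized toy, not Shure's actual test rig): drive a known tone, capture what the mic "hears," and take the per-frequency magnitude ratio.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                    # one second of samples
stimulus = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone

# Pretend microphone: perfectly flat, but 6 dB down (gain of 0.5).
mic_output = 0.5 * stimulus

S = np.fft.rfft(stimulus)
M = np.fft.rfft(mic_output)
k = 1000  # with a 1-second window, bin index equals frequency in Hz
gain_db = 20 * np.log10(np.abs(M[k]) / np.abs(S[k]))
print(round(gain_db, 1))  # -6.0
```

Repeating this over swept or stepped tones yields exactly the kind of frequency response chart shipped with a microphone.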

After the anechoic chamber, we traipsed to the radiation room, which blocks pretty much every electromagnetic signal that tries to barge in. It's funny to watch your cell phone before and after: five bars outside and zero on the inside. You are effectively shielded. In this chamber they can bombard gear with various frequencies and make sure that it isn't too permissive in reception and easily jammed up, and, if so, where its faults are.

They can also use this facility to make sure the equipment is transmitting as it should and not interfering with anything else. When you make some of the best wireless systems in the world, there's not a huge amount of room for error.  The Axient® line is about the most trustworthy, solid and comprehensive wireless system in the arena. I'm not just saying that because they are nice people. They took great pains to show us the very guts of the operations, and how even when jammed, the Axient can dance from one transmitter to the other, and to another clean band within significantly less than a second. This means that even a worst-case signal stomping results in the barest flicker in the performance. 

After all this testing and developing, we moved on to the heritage section, where their in-house archivist and corporate librarian Julie Snyder took us on a brief tour of the history of Shure products. And what a history it is. 

Every United States President since Lyndon B. Johnson has had a pair of SM-57s on the lectern. The White House has hundreds of them, and they fly into every location the President is to visit beforehand for set-up and testing. These are stock, entirely off-the-shelf 57s. Shure also created the world's first wireless mic for artists, The Vagabond. This was in 1953, if you can believe that. It was pretty rudimentary - you had to stand within a circle of copper wire connected to the receiver for it to work. But work it did if the conditions were right, which is pretty outstanding for that era. We were also shown a Shure microphone that has been to space some 22 times. On the OUTSIDE of the shuttle. Tough little bugger. Anyway, no one could argue that Shure doesn't have an amazing history in the industry, and I think it's superb that they've dedicated an archivist solely to collecting and maintaining a library of memorabilia. It's pretty fascinating for the audio nerds among us.

So that's the past; what's the future? Well, battling the good wireless battle, for one thing. Maintaining the standards of yesterday and innovating the products of tomorrow, for another. For example, the VP83F LensHopper is one of the best ideas I've seen in a while - it's a combination mini-boom mic/SD card recorder that fits right onto your standard DSLR hot shoe. It's a brilliant and extremely timely concept that was executed very well, and I hope to have a dedicated review of it and the VP83 DSLR-mountable mini boom microphone coming up in the next couple of issues.

So that's it. A great American company making great American products. I hope you didn't think my tone too commercial, but let's be frank: We all use their gear. Reliability in the face of mission critical objectives is crucial, and knowing about the process can help ease minds even further. If you're ever given the chance to tour the Shure HQ, I can't recommend it enough.

Continue reading "Open House: A tour of Shure's IL headquarters" »

May 08, 2013
  Can clouds help the surge in post?
Posted By Tom Coughlin
Coughlin Associates, San Jose

At the 2013 Digital Hollywood Conference, the moderator in one session indicated that post activity in 2013 is up 40 percent from 2012. However, what appears to be driving this new post demand is production of Internet-based content (e.g., Netflix-sourced video) rather than traditional studios and channels.

If content is made for the Web, then it might make sense to look closer at Web-based tools for video workflows. Since video files are often quite large, digital storage in the cloud will play an important role in Web-based products. We will look at a few of these services that were on display at the 2013 NAB show, including cloud editing and post, cloud-based play-out and cloud-based disaster recovery services for broadcasters.

Web-based tools and storage are enabling new capabilities for long-range collaborative work. This works well for proxy viewing and other lower-resolution, download-based services; however, latency can be an issue for more real-time, high-resolution, high-frame-rate work, since coast-to-coast communication latency can be about 100ms.

Prime Focus announced its CLEAR Hybrid Cloud technology platform and digital content services. These services are meant to support multi-platform content production. Prime Focus's CLEAR platform will provide multi-platform content operations, enterprise digitization, mobility, contextual advertising, cloud editing and content analytics. It is interesting to see "cloud editing" on this list, since this requires much faster access to content and potentially higher data rates. It is likely that this approach works best for compressed, lower-resolution proxy content, where the editing generates an edit decision list (EDL).
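To see why proxy-plus-EDL workflows are so cloud-friendly, note that an EDL is just text: a list of events with timecode in/out points, a few bytes per cut regardless of how heavy the source media is. The helper below is my own sketch, loosely following the CMX3600 event layout rather than implementing it exactly:

```python
def frames_to_tc(frames, fps=24):
    """Convert a frame count to HH:MM:SS:FF timecode."""
    h, rem = divmod(frames, 3600 * fps)
    m, rem = divmod(rem, 60 * fps)
    s, f = divmod(rem, fps)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def edl_event(num, reel, src_in, src_out, rec_in, fps=24):
    """One cut: source in/out mapped onto the record timeline."""
    dur = src_out - src_in
    return (f"{num:03d}  {reel:<8s} V     C        "
            f"{frames_to_tc(src_in, fps)} {frames_to_tc(src_out, fps)} "
            f"{frames_to_tc(rec_in, fps)} {frames_to_tc(rec_in + dur, fps)}")

print(edl_event(1, "TAPE01", 120, 240, 0))
```

The cloud editor streams low-bitrate proxies, and only this tiny event list needs to travel back for conform against the full-resolution media.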

Deluxe showcased its cloud-based play-out platform, which it calls MediaCloud. The platform includes select tools for HD video content creation, management and online delivery, as well as additional Deluxe media services that can include archiving. By working directly in the cloud, Deluxe says it can deliver an exceptionally short time-to-air and significantly lower costs and overhead for its customers.

Front Porch Digital has a cloud-based storage service called LYNX that is being used by broadcasters via Ericsson as a cost-effective disaster-recovery play-out solution, providing a redundant capability that can be used if the original content is not available. LYNX can ingest content from tape or as file-based media.  By sharing a standardized recovery platform among multiple customers, individual costs can be significantly reduced.  Note also that Front Porch Digital is integrating Sony's new Optical Disc Archive product into its DIVArchive CSM system (pictured, right).

Remote data center services, including digital storage, are playing an ever more important role in producing content.  We expect the rise of online-delivered content to make the role of online storage even more important, and to drive demand for online workflow tools. Used together, these may enable faster time to play-out and provide new tools for disaster recovery and service continuity.

Thomas Coughlin runs the data storage consulting company Coughlin Associates, which produces the Storage Visions Conference. He is a frequent blogger for Post Magazine.

Continue reading "Can clouds help the surge in post?" »

May 07, 2013
  An interview with Adobe's Bill Roberts at Adobe MAX
Posted By Larry Jordan

Last evening, at the Adobe MAX conference, I had an extended on-the-record conversation with Bill Roberts, director of product management for Video and Audio Solutions at Adobe. Roberts started with Adobe three years ago, specifically to take charge of the future development of all their video applications: Premiere Pro, Audition, After Effects, Prelude, Adobe Media Encoder, Encore and SpeedGrade.

After listening to the Adobe keynotes, and the Executive Briefing afternoon, I wanted to get a lot more detail and hard facts on what Adobe was planning.

NOTE: Unless I've put quotes around it, I've paraphrased many of Roberts' answers in the interest of condensing our 90-minute conversation.

I started off by asking: "Why was so little said at the keynotes this morning about Adobe's audio and video applications?"

First, Roberts (right) said, historically, Adobe MAX was a Web programmers' event, not a video event; this year's event focuses more on creativity than programming. Our video event was the 2013 NAB Show, last month, which is where we first rolled out these products. We tailor our product showcases to match the event.

Second, Adobe is best known for print and Web products. The keynotes launched new versions of Photoshop, InDesign, Illustrator and a variety of Web applications. This was their day to take center stage.

Third, last year, the Creative Cloud was the place to download an application or store a file. This year, we wanted to explain that the Creative Cloud was actually much more. That's why we spent time today talking about Behance, an online digital portfolio.

"Behance is like LinkedIn for creative professionals. It's where design and motion graphics professionals can talk with their peers, find work, collaborate and share ideas."

"If I were to describe our video product family," Roberts said, "I would call it a 'train on the track.' We know where it is going, it has a clearly defined path, and its speed is increasing."

I shifted gears to the Cloud. "There is a lot of discussion online about whether the Cloud is relevant for video professionals because the files are so big, bandwidth so constrained, and privacy/security issues are paramount. Is The Cloud even relevant?"

That depends, Roberts replied, on what you are storing to the Cloud. If you are storing source media files, today, probably not. There are lots of issues with storage, bandwidth and infrastructure. And today's explosive growth in shooting ratios requires a rapid and never-ending need for increased storage. The future for Adobe may lie in creating infrastructure, but not now.

What we are seeing now is that editors are not sharing source media via the Cloud, but sharing project files, and linking them to media which is stored locally for every editor.

New with the CC release of Premiere Pro is easy relinking of files. "Relinking is part of the media world for a while to come. But, ultimately, storing multiple versions of source files - one for each editor - needs to go away."

NOTE: Another big concern for the Creative Cloud is encryption and security. Adobe has a page on its Website devoted to this issue.

Roberts continues: What we see Adobe Anywhere providing is the next step up from sharing project files. Computers and storage have both become cheap enough that we can move basic computing functions from the local computer to the server.

When we store the source files on a server located on the customer's premises, an editor can request that file from the server. Instead of copying the file to the local hard disk, the server streams it directly from the server into the editing application so that the editor edits the stream directly in Premiere. The files are created in realtime as they are needed by the editor. No proxies, no local media, accessible from anywhere.

What Adobe Anywhere does is provide a server/editor architecture, which is hosted by the customer, using their servers, storage and editing platforms. What we provide is an ability to move the main compute function to the server, which allows editors anywhere in the world to access the media files, without needing to store them locally.

NOTE: Most of the pilot implementations of Adobe Anywhere use VPNs to handle transport and security. This allows the customer, not Adobe, to make sure their files are safe.

June 17 is the release date for all our Creative Cloud programs, including the video software. "We are actually ahead of the curve at the moment, so I'm not too worried about meeting that date." However, Adobe Anywhere will probably follow a few weeks after that June 17 release, "because we want to make sure we get it right."

I asked Roberts about the concerns I'm reading online about Adobe going "all in" with subscriptions. "Couldn't Adobe," I asked, "continue with both package and online versions?"

Roberts said that at the keynotes, Adobe's CEO said that subscriptions allow for more consistent revenue, but there's also a very big reason from the development point of view. The cost of maintaining two separate product lines, one boxed and updated annually, and the other available online and updated much more frequently, causes major reconciliation problems between the two development teams. It also requires twice as many developers to accomplish the same amount of work.

NOTE: The Sarbanes-Oxley law has very stringent requirements on how software is updated and how sales revenues from both initial sales and upgrades are accounted for. Under the law, it is not possible to do incremental updates without major accounting hassles.

Roberts continued saying that subscriptions allow for easy incremental updates, bug fixes, and new features. Then, every few months, we will create an "anchored state" of the software that you can always revert to, if you need to go back a version. This is one reason that all Creative Cloud subscribers will get every CS6 application as well as the CC version. "You can always revert back to CS6 if you need it, or are working with someone else who uses that version."


"It seems to me," I asked Roberts, "that Premiere Pro CC is, essentially, Final Cut Pro 7 designed for more modern hardware."

"Three years ago," Roberts replied, "when I joined Adobe from Avid, I set the objective to make Premiere Pro the Photoshop of video. I wanted it to be an essential creative product."

"My first goal was to put the right team together. My second goal was to look at the competition and see what we can do better. Our user interface was not intuitive. I wanted to find out what our competitive weaknesses were and make them better."

"Premiere Pro CC is the fastest NLE on the market for file-based workflows. It stands on the shoulders of our competition and improves on them. Adobe anchored its work in the professional editing environment and focused on editing faster and telling stories better."

"We didn't want to create new paradigms. We wanted to take the existing paradigm and improve it. Personally, I think we are better than Final Cut Pro 7."

Audition is an audio editing program that I like and use daily. I asked Roberts whether Audition was part of the Creative Cloud.

"Audition is part of the Creative Cloud. Adobe doesn't want to displace Pro Tools, however, we can be Avis to Avid's Hertz. Audition is anchored in broadcast, news and documentaries. You can edit, clean-up, and mix great stories with it."

"We are happy with where Audition is at the moment. The key question we are wrestling with is where do we take it in the future?"

NOTE: It is worth mentioning that Bill started his career in radio, and uses Audition for his music podcasts.

Turning to a new subject, I said that two of the video products that have not seemed to get a lot of love in this go-round are Adobe Media Encoder and Adobe Encore. How come?

"That is a very interesting question. We did not do any work on Encore in this release. The CS6 version of Encore fully supports Premiere Pro CC, and, in fact, we will have a video showing how the two work together at the release."

"However, while optical disc creation is still important to many people, it is not a growing market. Adobe thinks that the current state of Encore CS6 meets the demands of the market today. It is not worth investing engineering resources into improving Encore at this time. And we spent a LOT of time talking with customers and within the company to arrive at that decision."

Adobe Media Encoder (AME) is a different case. Not only is it a stand-alone product, we also provide an OEM version for other developers to use, plus five different versions used in different Adobe products. "This was crazy."
Internally, this year, we restructured the development team and standardized on a single version of AME. When AME CC comes out, it will support ProRes. It will support DNxHD. It will be a great transcoding platform for Prelude.

"Run a test with AME CC and you'll discover how much faster the latest version is. It will be on par, or better, than any major competitor." And we are not stopping there. Wait till you see what it looks like next year.

I asked Roberts to sum up his feelings about this product release.
"Honestly, this is my third year at Adobe. I was involved in every single aspect of this feature set. It's the first [development] cycle where I had a full team of experts."

"I am as proud as I could be of what the team has delivered. The teams outdid themselves - they did an amazing job. The NAB Show was amazing, and I can't wait for the launch."

- - -
Larry Jordan is a producer, director, editor, author, and Apple Certified Trainer with more than 35 years' experience. Based in Los Angeles, he's a member of the Directors Guild of America and the Producers Guild of America.

April 25, 2013
  NAB 2013: Wrap Up & Top Picks
Posted By Heath Firestone
I decided to do things a little differently this year. At the last two NABs I've done a video blog, focusing on just a few noteworthy releases. But this year, I decided to go back to a written blog and do one longer NAB wrap-up, covering a longer list of products.

This year, the theme of NAB seemed to be 4K, but a lot of the show was focused on releasing similar, but improved, versions of existing products. We also got to see working versions of products that were announced at last year's NAB. There was also a focus on eliminating rolling-shutter issues in camera. With Sony's release of the F55 with global shutter a couple of months ago, others are following suit, with Blackmagic Design announcing its global-shutter 4K Production Camera, and Red adopting an LCD shutter add-on called the Red Motion Mount.

Here are the highlights from a few of the booths I visited.


Atomos released a new version of the Samurai camera-mounted recorder, called the Samurai Blade, which improves on the Samurai by offering S-Log and C-Log recording, full-size BNC connectors and a higher-resolution 1280x720 touchscreen monitor with waveform monitor, vectorscope, zoom and adjustments.  One of the cool things about the Atomos recorders is that they aren't limited to SSD media, but also support recording to 2.5-inch HDDs, which are fast enough for ProRes and DNxHD recording.  This is significant since you can use inexpensive 2.5-inch hard drives and treat them more like tape, putting them on a shelf when you are done for short-term archive, instead of having to transfer the files off as with expensive SSD media.  They were also showing the new AC version of the Connect HDSDI-to-HDMI and HDMI-to-HDSDI converters.


Convergent Design was showing off their very cool Odyssey 7 monitor/recorder.  I was very impressed by this little monitor. One great feature is a different kind of focus assist in addition to the usual.  This focus assist uses edge detection, and looks a lot like a difference matte. It shows much more detail at the center of the focus range, really standing out as you roll focus through the depth of a person's hair, for example. It's hard to describe, but really cool to see.   

The Odyssey 7 has a great-looking 7.7-inch touchscreen with lots of menu controls. It also has two HDSDI inputs and one HDMI, and the same outputs, but what is really cool is that it has conversion capabilities, so if you feed into the SDI, you can spit out HDMI, or if you are feeding HDMI from something like a DSLR, you can spit out HDSDI.  It also has timecode I/O, which is really useful with the recording functionality of the monitor; if you have an external timecode generator, this adds a professional feature to cameras you might be shooting with that don't have it.  It also has dual SSD slots, which can be configured to run in RAID 0 (striped) or RAID 1 (mirrored). It also offers Android and iPhone remote control, and will support Avid DNxHD, ARRIRAW, and Canon 4K Cinema RAW with the additional purchase or rental of those codecs.

Without codecs, it is priced at $1,295, and I believe they said it will be available in July. The Odyssey 7Q is the quad version, with four HDSDI connections configurable as input or output, plus a quad view.  It will also do simultaneous proxy recording and up to 120fps recording of RAW or DNxHD-compressed footage.


Decimator Design was showcasing its new MD-CROSS, which differs from the MD-DUCC in that it supports HDSDI to HDMI and vice versa, has an LCD menu, and includes an extensive test pattern generator, all in one. What makes this stand out from other HDSDI-to-HDMI converters is that it handles frame-rate conversion as well, meaning it can convert whatever frame rate you are dealing with to something your monitor can handle, which is very useful. It also does up- and down-conversion, and a bunch of other stuff, all for under $700. It is definitely one of those things that can save you in a pinch.


Blackmagic Design was showing its new phone-sized camera, called the Blackmagic Pocket Cinema Camera, which is very similar to the Blackmagic Cinema Camera, but has HDMI output instead of HDSDI, and only shoots up to 1920x1080, to either ProRes or compressed CinemaDNG RAW, all for $1,000, and available in July.  They also showed their 4K camera, which is $4,000 and due around the same time, sporting a Super 35mm sensor with a global shutter, making it an exciting future option.


Matrox was showing a couple of cool new products. The first is the Monarch, a small stand-alone box that takes HDMI input and streams H.264 while simultaneously capturing H.264 to an SD card, to either of its two USB ports, or even to network-attached storage. It supports up to 30Mb/s for capture and up to 20Mb/s for streaming. Control is through a Web interface over the LAN. It can also be used for video monitoring over the local network with under a second of delay, which can be very useful since HDMI is difficult to run long distances. It comes in at under $1,000, and should be available in July. They were also showing their 4K playback card, called the Mojito 4K. It allows full 4K playback, but requires a beefy machine to handle it.
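Those capture bitrates translate directly into storage needs. A back-of-envelope sketch; the bitrates come from the capture spec above, while the one-hour duration is an arbitrary example:

```python
# Back-of-envelope: how much SD-card space does an H.264 capture use?
# Converts a bitrate in megabits per second into gigabytes per hour.

def gigabytes_per_hour(megabits_per_second: float) -> float:
    bits = megabits_per_second * 1e6 * 3600   # bits recorded in one hour
    return bits / 8 / 1e9                     # bits -> bytes -> gigabytes

print(f"{gigabytes_per_hour(30):.1f} GB/hour at 30 Mb/s capture")
print(f"{gigabytes_per_hour(20):.1f} GB/hour at 20 Mb/s streaming")
```

At 30 Mb/s, an hour of capture is about 13.5 GB, so even a modest SD card holds a long event.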


Autodesk has continued to refine Smoke's interface, listening to editors, whittling away at the learning curve, and making the interface more and more intuitive. They had a huge response to their open beta, and refined the experience to mesh with what editors from other platforms expect, so there is an easy transition.  For people like me, who come from an effects-heavy background and always hated having to switch back and forth between their compositing application and their editor, having more power than Combustion and an awesome editor combined into one is a dream.

Make it easy to learn or transition to, and affordable, and I can't ask for much more, except to make it work seamlessly with my 3D graphics app (which happens to be Maya). It seems like Autodesk is listening to guys like me. No revolutionary announcements, but sometimes making things easier and more practical is the best improvement you can make. I'm sold. They also announced support for Blackmagic products, so you now have options on the hardware side as well.  


Adobe is currently offering its Cloud subscription service for $30/month for the first year, something that was reserved for owners of older suites. The cloud service seems to be highly confusing, but once you understand it, it will be hard for you to want to go back. I'll explain. When you subscribe to the service, you automatically get access to the entire Master Suite as downloads. You are also kept up to date with the latest releases. When you do the math, it doesn't make sense to purchase the software anymore. It is one situation where leasing makes way more sense than buying.

They also made some drastic changes to the Windows timeline, which, while it takes a little getting used to, saves lots of clicks, making editing quite a bit faster - lots of productivity enhancements. SpeedGrade has some new functions for matching color from one shot to another, which is really slick and impressive, and promises to be one of my most-used functions, especially for multicam shoots using cameras of different makes and models. I wish I'd had this capability on a lot of my past projects.


AJA announced that it will no longer do pre-release announcements, meaning when you hear about something new, it will be shipping. It's a cool idea, and after disasters like Blackmagic's shipping delays with the Cinema Camera, and Red's with most of its announcements, this might be a welcome policy.

They are shipping the Ki Pro Quad, which really is a cool device with some really nice features, like built-in down-conversion, and it is just really solidly built. It looks like something that will withstand the rigors of years of production abuse. They also have a cool new Region of Interest, or ROI, device, which takes DVI or HDMI input and spits out HDSDI. It is only intended to work with computers, not cameras, and is designed to let you spit out a high-quality, up-converted, genlockable region-of-interest output, so broadcasters can easily incorporate windows, or stuff like windowed YouTube streams, into their pipeline.  They also have a new Hi5 Quad converter, which takes in four HDSDI inputs (4K) and spits out 4K HDMI, or it can just act as an HDSDI-to-HDMI converter (though an expensive one).


The Foundry made a couple of interesting announcements. NukeX users with up-to-date maintenance now get two Nuke Assist licenses, allowing two stripped-down versions of Nuke to be used per NukeX license, similar to an online/offline edit setup. You can have two people doing the grunt work, who don't need the full capabilities of NukeX, and one finisher, who polishes it and brings it up to the level it needs to be, utilizing the more advanced features of the program. Cool idea, but pricing for NukeX and annual maintenance isn't for the faint of heart, and seems to be one of the frustrations of many of its users. No doubt, it is a great and extremely powerful program, and for a lot of applications it has become the industry standard.


Wacom was showing its new 13-inch Cintiq, which has several improvements over its predecessor. It has a full 1920x1080 HD display, is thinner and lighter than the previous model, and doesn't get hot or have a massive power supply either.  It is a sleek tablet, whose kickstand acts as a cover, and seems to be a very useful size. It also comes in at under $1,000.


The only real news from GoPro is that they are working on a 3D kit for the GoPro Hero3, which should be coming out soon.


One of the products I was most excited about was the new IS-Mini, an on-set grading tool... or so I thought. I was a little confused because I was being shown how it works in conjunction with the IS-100 CCBOXX. Paired with the CCBOXX, this little $1,300 converter brings in HDSDI and spits out HDSDI and HDMI.

It performs two functions: it applies a color calibration so that all monitors match (grading done on that monitor is then done on an accurate display), and it can apply a LUT or Look to the corrected image. This made me really interested, especially since some cameras, like the Blackmagic Cinema Camera, only output the crushed image over HDSDI when recording in RAW mode, so this would let you see a rough on-set grade and get a feel for what the footage will look like when graded. I also saw Fujifilm's iPad app, which allows you to adjust your grade. This was all very exciting for a great price... but unfortunately I got it wrong.  It is true that it applies a LUT or Look, and calibration color correction; however, it doesn't come with the ability to create that LUT or Look.

The iPad app I saw was for the CCBOXX. This is a very powerful combo, but the CCBOXX, which does some other cool stuff, costs closer to $30,000 configured, making it more of a rental option. Also, I wasn't sure how it did the color correction for the monitor. I was hoping it would have a series of test patterns and visual printouts to match to, as a poor man's calibration tool, but instead it works with a colorimeter. This is great for accuracy, but also pretty expensive. So, this is part of a high-end device, and not as useful on its own. It is too bad Fujifilm doesn't develop a pared-down version of the color correction app and offer a poor man's calibration option, or this thing would probably fly off the shelf. It may still have some use for grading, though, with the Blackmagic Cinema Camera.  I believe you can dial in a Look in camera, which is carried in the video stream sent to the HDSDI.  If this is correct, it is possible that this little box could convert the crushed image into a viewable output with a rough on-set grade.

There is no doubt this box has potential, the real question is whether Fujifilm will recognize the potential market, or if they will be concerned about competing with the CCBOXX if they add this functionality.   
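For readers curious what "applying a LUT" means mechanically: a lookup table maps each input code value to an output value, which is how a crushed log image can be lifted into something viewable. A toy 1D example, not Fujifilm's actual implementation; the gamma value and sample pixels are illustrative assumptions:

```python
# Toy 1D LUT: build a gamma-style table for 8-bit code values and apply
# it per pixel. Real on-set boxes typically use 3D LUTs on 10-bit video;
# this only shows the shape of the idea.

GAMMA = 0.6  # assumed: a value below 1 lifts shadows in a dark log image

lut = [round(255 * (v / 255) ** GAMMA) for v in range(256)]

def apply_lut(pixels):
    """Map each 8-bit pixel value through the lookup table."""
    return [lut[p] for p in pixels]

dark_row = [0, 16, 64, 128, 255]
print(apply_lut(dark_row))  # midtones and shadows come out brighter
```

Because the whole transform is a per-value table, hardware like the IS-Mini can apply it to a live video stream with essentially no latency.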


Sennheiser was showing off its relatively new MKE 600 microphone (it's been shipping since September, but it was new to me), which is designed for DSLR use. It is shorter than the highly regarded ME 66 and similar in design, but really made to work better with DSLRs.  In particular, I like the shock mount, which is much more compact and easier to work with than most, and can fit in a hot shoe mount or has threading on the bottom if you want to mount it to a magic arm. It also has a short, coiled mini-to-XLR adapter option, an internal battery to power it if you don't have phantom power, and a low-cut filter.  So, it is basically an ME 66 redesigned to work with DSLRs, for about $400. Very cool!


I wasn't sure what to expect from Think Logical, especially when Bob Ventresca told me they do KVM over fiber, up to 40 kilometers in length. I wasn't sure why I would need a KVM over fiber, but as we got talking, I came to understand that this isn't just a KVM, but rather long-distance transmission of modular components carrying keyboard, video, mouse and just about everything else you can think of, from RS-232 to quad HDSDI for 4K, all over fiber through some of the fastest routers in the world. So if you have a building with massively powerful machines, they can be controlled as though they were under your desk, even though they may sit in a temperature-controlled environment in a building across campus, which could definitely be useful.


Microsoft announced that Azure, its cloud service, will be used for streaming and on-demand delivery for NBC Sports, NBC Olympics and Golf Channel. Azure is an impressive cloud storage service, and has been specially adapted for rendering massive projects, which it has done for Pixar in the past.  Also on display was the Surface Pro tablet, which I get a little excited about because it is one of the few Windows-based tablets that utilizes Wacom technology, allowing for 256 levels of pressure sensitivity. Also, being Windows-based means it can be used as a tablet within professional drawing and compositing applications.  Plus, it costs about the same as the 13-inch Wacom tablet, and while it isn't as large and doesn't have some of the professional features, it is an awesome tablet that you can sketch on.


It was definitely a great NAB, with too much to see, and too little time. I saw a lot of stuff I am really excited about, and can't wait to get my hands on. 

Here is a list of my Top 7:
1 - Matrox Monarch stand-alone H.264 streaming and capture device
2 - Convergent Design Odyssey 7 monitor/recorder 
3 - Blackmagic Production Camera 4K, which now has a proper Super 35-sized sensor and a reasonable $4K price tag.
4 - Decimator Design MD-CROSS up/down/cross/frame-rate converter.  Now has an LCD screen, HDMI input and their extensive test pattern generator built in, and still under $700.
5 - Adobe Cloud Version "Next."  I love the cloud-leased software idea, and a lot of the new features of the upcoming Creative Suite.
6 - Autodesk Smoke, all the power of Smoke as an editor and compositor package, now easy to use and learn.  That's pretty awesome.
7 - Atomos Samurai Blade monitor and recorder.  S-Log, waveform monitor and vectorscope; records ProRes or DNxHD onto SSD or, more significantly, standard 2.5-inch HDDs, which are much cheaper to capture to and can be used more like tapes, rather than having to be cleared off like an SSD.

Honorable Mention - Fujifilm IS-Mini, not for what it does, which is awesome when paired with the IS-100 CCBOXX, but for the potential it has as a standalone device if they decide to develop it in that direction.

Heath Firestone is a writer/producer/director/editor with Firestone Studios LLC. He has a strong background in 3D compositing and digital effects and owns one of the most advanced virtual production studios in the industry. Heath is constantly creating new ways of making really dynamic and engaging shots, utilizing visual effects to enhance the story, not distract from it.
April 17, 2013
  Editing Storage in the Production Pits
Posted By Tom Coughlin
Coughlin Associates

Digital storage is a big part of the NAB show.  All that content has to be kept somewhere. Post production, including nonlinear editing, requires a lot of storage, with performance characteristics that are very different from most computer storage applications.

The NAB Show has several activities with a post production focus.  The HPA had a post production pit in the back of the lower South Hall, and Digital Production Buzz (which has a great radio program covering all aspects of post production) was doing radio interviews during the show.

Atto Technology was demonstrating a digital workflow with HP and Red at the NAB show, editing Red footage with Adobe Premiere Pro over an HP Fibre Channel network and utilizing the Quantum StorNext file system. Atto Fibre Channel host bus adapters (HBAs) and ExpressSAS HBAs were being used in several other exhibits throughout the South Hall.  In addition to Fibre Channel and SAS connectivity, Atto plays an important role in the Thunderbolt infrastructure supporting many other storage companies.

EditShare introduced a second generation of its Field mobile storage system (the Field 2, pictured below, left). This storage device, small enough to fit in the overhead bin of an airplane, provides a complete, portable, end-to-end digital workflow with tools for remote collaboration. With this device users can record up to two channels of HD in familiar codecs (XDCAM-EX35, DNxHD, DVCProHD and others). The device allows for edit-while-capturing with EditShare's Flow software. The device can scale up to 24 terabytes using 3.5-inch and 2.5-inch HDDs and SSDs, including 10K SAS HDDs. And the EditShare Sync Tool allows users to send data from a remote location back home using a simple Internet VPN connection. In addition to the Field, EditShare produces a 60-bay high-density storage system providing up to 240 TB of storage, four of which can be connected to a Geevs server to provide up to 960 TB of storage, including LTO tape backup.

Facilis (pictured, below, right) premiered its TerraBlock v5.7 and introduced SyncBlock.  SyncBlock manages archive, backup and synchronization for direct attachment to the TerraBlock shared storage system.  SyncBlock includes single-drive and library LTO-5 products, along with HDDs providing from 8 to 64 TB of storage.  The SyncBlock packages allow automatic archive, backup, synchronization, mirroring and transport of file media ingested into a TerraBlock. The TerraBlock supports 4 Gb/s to 16 Gb/s Fibre Channel and 1 Gb and 10 Gb Ethernet connectivity. The TerraBlock v5.7 now supports 4 TB HDDs and 16 Gb/s Fibre Channel, and also provides ATTO ThunderLink support.  Drive recovery time is improved by 50 percent, and it can capture DPX files to multi-user write volumes.  The latest Apple and Windows post production software are supported.

Hitachi GST debuted its Evolution series of storage devices. These feature interchangeable storage modules of up to 1 TB that can be used as stand-alone external USB 3.0 HDDs (the G-Drive ev PLUS) or in 6 Gb/s SATA docking stations that provide more storage options, such as RAID 0, RAID 1 or JBOD, as well as Thunderbolt connectivity.  The G-Drive ev PLUS storage modules provide data rates up to 250 MB/s, and this can be even higher in the docking stations.  We should expect more developments with the Evolution products as time goes on.

LaCie (now part of Seagate Technology) has a full line of Thunderbolt external storage products for Mac and PC, including very fast SSD products as well as mobile and desktop storage. They have also expanded their NAS Pro line-up. JMR displayed additions to their SilverStor desktop storage systems. These products provide SATA JBOD and RAID storage with Thunderbolt support, offering 700 MB/s data rates, up to 100 TB on the standard product, and hundreds of TB with expansion chassis.  These products also support PCIe expansion, including PCIe SSDs.

A large number of other storage companies offered flash memory, HDD, optical and tape-based storage systems for post-production, archiving and play-out applications. These were mostly clustered in the lower South Hall, with some located elsewhere, such as the North Hall. More than 40 companies were showing storage (and related) solutions for post production and special effects, including Aberdeen, Amplidata, Avere, Avid, Chelsio, Exadata, Panasas, Small Tree, Tiger and many others, including those mentioned in our earlier blogs.

With LG's announcement that the NCAA Final Four was captured in 4K as an Ultra HD TV demonstration, it is obvious that the storage capacity and bandwidth required for today's and future post-production workflows are increasing rapidly. With the support of the digital storage community serving the M&E industry these needs will be met, enabling a new generation of immersive consumer experiences.
Continue reading "Editing Storage in the Production Pits" »

Permalink | Comments(0)
April 15, 2013
  Wrapping Up the 2013 NAB Show
Posted By Larry Jordan

Any NAB Show is too massive to be summarized in a single blog post. This show represents the current state of a multi-billion dollar industry composed of thousands of wildly different companies. I enjoy walking the halls just to learn about gear that I never use - like helicopters, transmission towers, and radio playout servers. NAB is a very cool place.

Still, in our part of the industry, there is lots of stuff going on. Here are my thoughts, in no particular order.

We now have the technological ability to do just about anything we can imagine; in fact, we can even do things that most of us can't even imagine. Technology is no longer the gating factor of creativity. Yes, we can make tech go faster. Yes, we can make it easier to use. Yes, we can create still more eye-popping effects. But, we have NEVER had the range of story-telling tools and technology for every possible budget as we do today.

The industry is still feeling the crunch of hard economic times. Major manufacturers are lowering prices on key software. The time between upgrades is stretching out. Technology is no longer a gating factor, but budgets and deadlines are crunched like never before. Teams are disappearing in favor of the one-man-band; this has good and bad ramifications throughout the industry.

Partnerships are increasing. I was struck by the number of partnerships announced between companies. Pooling resources seems more attractive than competing in today's market.

AJA made the obvious point that sometimes the emperor has no clothes. AJA announced that they would only talk about products that were shipping. In contrast, Panasonic was talking about a 4K monitor that won't ship until October. Why not announce this at IBC in September, I asked? Because, they said, not everyone travels to Europe. Sigh...

The word "3D" has disappeared. Last year it was everywhere. This year it is gone.

The NEW word is "4K," as in 4K images. (Though this term is somewhat vague and encompasses two different resolutions: 4,096 x 2,160 pixels or 3,840 x 2,160 pixels.) Personally, I think 4K is similar to 96k sample rates in audio. Useful for creating massive marketing excitement, but practically useful to less than 10% of the total market.
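The two "4K" resolutions are close but not identical, and a quick bit of arithmetic shows how both relate to plain HD. This throwaway sketch is mine, not part of the original post; the labels "DCI 4K" and "UHD 4K" are the common names for the two pixel grids mentioned above.

```python
# Pixel counts for the two resolutions commonly sold as "4K",
# compared against ordinary 1920x1080 HD.
resolutions = {
    "DCI 4K": (4096, 2160),
    "UHD 4K": (3840, 2160),
    "HD 1080p": (1920, 1080),
}

hd_pixels = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / hd_pixels:.2f}x HD)")
```

Either flavor works out to roughly four times the pixels of 1080p, which is why 4K quadruples storage and bandwidth demands even before higher frame rates enter the picture.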

Red stepped up their resolution to 6K, with a camera surgical theater in their booth to replace old sensors with new ones.

Hitachi was showing an 8K camera, while Japan's NHK announced broadcast support for 8K images coming later this decade. 8K? Sheesh!! Uncompressed 8K video requires somewhere around 2.1 GB per second - which is nearly TWICE as fast as current Thunderbolt, and faster than Thunderbolt 2.0, which Intel announced at NAB.
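That 2.1 GB/s figure is easy to sanity-check with back-of-the-envelope math. The sampling and frame-rate assumptions below (10-bit 4:2:2 at 24 fps) are mine, not stated in the post, but they land in the same ballpark:

```python
# Back-of-the-envelope data rate for uncompressed 8K video.
# Assumptions (mine): 7680x4320 frame, 10-bit 4:2:2 sampling
# (10-bit luma + shared chroma = 20 bits/pixel), 24 frames/sec.
width, height = 7680, 4320
bits_per_pixel = 20
fps = 24

bytes_per_frame = width * height * bits_per_pixel / 8
bytes_per_sec = bytes_per_frame * fps
print(f"{bytes_per_sec / 1e9:.2f} GB/s")
```

This comes out just under 2 GB/s; bump the frame rate or go to 4:4:4 sampling and the quoted 2.1 GB/s is easily exceeded.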

NOTE: Though Intel needs to simplify its certification process if it EVER expects Thunderbolt to be successful. Far too many devices are lingering in certification limbo. At some point, if Intel doesn't speed up, key vendors will stop playing Intel's game. And that would be bad for all of us.

Blackmagic Design announced two more Cinema cameras: Production 4K and Pocket Cinema. As usual, their low prices leave you gasping for breath. The key is whether they can ship them within our lifetimes. BMD is saying July.

Let's put all this advanced resolution in perspective. According to studies done by Panavision, in order to see the increased resolution afforded by 4K images projected in a theater, you would need to sit in the first six rows of that theater, in other words, closer to the screen than one-half the screen height. I suspect that means we need to pull the couch EXTRA close to that 4K monitor in the living room... (Like the ones announced by Sony, Sharp, and Panasonic at prices that rival high-end BMWs.)

Higher resolution images allow creating "ROI," or "Regions of Interest." For example, an 8K camera with the appropriate lens, sitting on the 50-yard line, can see the entire field from goal line to goal line. The resolution of this camera is so great that we can create windows, or ROIs, into that massive 8K image. Then we can follow the action, not by panning the camera, but by panning the ROI as the runner moves down the field.
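The ROI idea above is just index math: crop a fixed-size output window out of a larger static frame and slide the window instead of the camera. A toy sketch (all names and dimensions are mine; everything is scaled down 10x from 8K/HD so it runs instantly, but the arithmetic is identical for a 7680x4320 frame and a 1920x1080 window):

```python
# Toy "region of interest" panning: slide an output window across a
# static high-resolution frame instead of moving the camera.
SRC_W, SRC_H = 768, 432    # stand-in for a 7680x4320 8K frame
ROI_W, ROI_H = 192, 108    # stand-in for a 1920x1080 HD window

def roi(frame, x, y):
    """Return the window whose top-left corner is (x, y), clamped so
    the window never leaves the source frame."""
    x = max(0, min(x, SRC_W - ROI_W))
    y = max(0, min(y, SRC_H - ROI_H))
    return [row[x:x + ROI_W] for row in frame[y:y + ROI_H]]

# Dummy frame: each "pixel" stores its own (x, y) coordinate.
frame = [[(x, y) for x in range(SRC_W)] for y in range(SRC_H)]

# Follow the action by panning the window, one step per output frame.
for step in range(3):
    window = roi(frame, x=200 * step, y=162)
    print("top-left source pixel:", window[0][0])
```

A real broadcast system would add sub-pixel filtering and scaling, but the "Ken Burns within the frame" move is fundamentally this crop.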

What seems to be coming is a time when cameras don't move. Instead, we create "Ken Burns effects" within an extremely high-resolution image to create the framing and movement that we need.

HD cameras are now microscopic in size with great image quality. Not just the GoPro Hero 3, but cameras built into cell phones and sunglasses. We are starting to live in a world where cameras are both invisible and everywhere.

Granted, this allows us to create very cool images that were impossible only two years ago. But it also raises massive privacy concerns. How do you negotiate a deal, resolve a conflict, or have a private conversation when cameras are ubiquitous? Reality TV notwithstanding, some things are meant to be private.

Turning more specifically to the world of post-production:
The release of Avid Media Composer 7 and its related price drops notwithstanding, Avid seems to be struggling to define what it is in the market. I don't have contacts at Avid as good as those I have at Adobe and Apple, but the feeling I'm getting is that the word "beleaguered" can be applied.

Adobe is everywhere. Adobe has seized on the current confusion in the marketplace with both hands and is aggressively leveraging its Creative Suite products to fill the void. The announcement of Adobe Anywhere, allowing collaboration between editing team members without regard to geographic location, has the potential to transform the entire collaborative process of editing.

Apple's Final Cut Pro X software was visible at the show, with new software, hardware, and alliances announced from a variety of companies. In fact, the week before NAB, Apple announced that it has sold more seats of FCP X than of FCP 7. But, FCP X seems to be playing in a different market than the NAB crowd. Not better, not worse. Just different.

NOTE: Apple is still offering encouraging words that a new Mac Pro is coming later this year. "When," not "if," is the key word. Specs and timing are totally unknown. My feeling is that Apple is constrained by the availability of the right chips; but that doesn't lessen the pain.

Autodesk is revitalized. They may still be at the high end of the price spectrum, but they are doing everything they can to become relevant to the broad market. The release of Smoke 2013 began a trend that continued into NAB with the announcement of the new 2014 Creation Suite, showing that they are not willing to cede their market to others.

The big booths get all the attention, but the cool stuff lurks in the corners. A very cool plug-in for FCP X is SliceX, a collaboration between CoreMelt and Imagineer Systems. This automated rotoscoping tool lets you select a region of any shape within an image, then motion track it for the duration of the clip. Very, very cool.

Another cool discovery was Quiver, a flat-fee aggregator designed to process and deliver your films for sale on iTunes, Google, and other media platforms. I was impressed with what these folks are doing.

Larry Jordan is a producer, director, editor, author, and Apple Certified Trainer with more than 35 years' experience. Based in Los Angeles, he's a member of the Directors Guild of America and the Producers Guild of America. Visit his website at


April 15, 2013
  NAB 2013: Dispatches From A Young Editor
Posted By Troy Mercury
A funny thing happened on the way to the 2013 NAB exhibition floor.  As a first-time NAB attendee, I was expecting to be blown away by the outrageous displays of cameras, sound equipment, editing software, and wall-to-wall 4K displays. Instead, I was blown away by something much less ostentatious yet somehow more profound and pervasive. Was it the rare sight of interaction and cooperation between the production and post communities? Was it the realization of the size and scope of the industry I am part of? Was it the unbelievably poor selection of food and drink? It was none of these.

It started on Sunday, during the Post-Production World training sessions, which were wonderful and informative.  As I went from room to room, session to session, I could feel something was off, and I couldn't quite put my finger on it. This feeling lasted for most of the day until I attended a keynote presentation called "From Concept to Delivery: The Fusion of New Media and Storytelling." As I waited for the speakers to take the stage, I noticed the large monitors both left and right displaying all the speakers and educators for NAB's 2013 Post Production World. While this wasn't the exact picture, it looked something like this:

Does anything seem strange about this picture? It took me a couple of head-turns to figure it out too, so take your time. If you guessed "no women," then you win the prize. Now, the picture displayed at the keynote did have some women in it: from what I could make out, maybe 5 or 6 of the 50-60 people. This struck me in two ways.

First, I work as an editor for a small/medium post house in New York City. We work on everything from commercials to films and handle editorial through finishing. At jumP, I work with lots of women on a daily basis, spanning all occupations: executive producers, editors, assistant editors, Flame artists, After Effects artists, designers, partners, and so on. So given my personal work experience, it was quite shocking to see (a) so few female trainers/speakers and (b) so few female attendees (relative to how many men I saw). It was very unexpected, and it didn't match the reality I am used to. Perhaps my work experience is unique, but I would venture a guess that there are many more women in the industry than are represented at NAB.

Second, and more importantly, consider how this impacts the business we are in. During the keynote, the discussion revolved around workflows, 4K content, Vimeo and YouTube delivery, NLE choice, and all the things you would expect professionals to be discussing at this year's NAB. What wasn't really discussed was the nature and quality of the content and, in particular, how it relates to potential audiences. We all tell stories in this business. How well can we do if the stories we are putting out there aren't actively engaging 50% of our audiences? The lack of women in leadership roles (despite women being very prevalent in the industry) that I saw reflected at NAB does no favors to anyone in our industry.

There is a huge demand for more content and huge downward pressure to reduce costs in our industry. There are numerous ways to view content, from movie theaters to mobile phones, and every opportunity to reach new audiences should be embraced. We will not reach those audiences unless, as an industry, we have a diverse set of people leading the way. In many ways, this reminds me of our politics here in the US, where a younger generation is usually way ahead of its leaders. I believe that younger audiences will not care what they watch their stories on, what resolution they are shown at, whether they run at 24 or 48 frames per second, or what NLE they were cut on.

They will care if the stories relate to them and move them. Until we understand that, I see continued disruption to all of our business models.

Troy Mercury is an editor at NYC's Jump.

April 11, 2013
  How To Do NAB In 2 Days!
Posted By Katie Hinsen
With 92,414 attendees, plus exhibitors, press, staff and special guests, this year's show is huge. The LVCC is over 2 million square feet of indoor exhibit space. That's a lot of exercise for both the feet and the senses.

So with only two days on the floor, I approached this like I do a typical edit. 

Firstly, I planned and scheduled. I got as much information as I could, drank a lot of coffee and loosely considered my goals the way I might look at my deliverables. Time was fixed and the content was great, so I had to figure out how to squeeze a decent, coherent experience out of something with an impossible shooting ratio. The start and the end were important. Key points in between were important, and montages of the rest of the pretty things could be squeezed in around all of that.

I focused on the message, trying not to be tempted by the halls and booths that were not post focused. Cameras and microphones and trucks and antennae are cool, but a distraction. I looked over the conference sessions and had to steer away from things I knew would amuse me but weren't necessary to attend.

I loaded my calendar with more bookings, sessions, talks and plans than I could ever manage, only so that if I found myself with a spare few minutes, no time would be wasted. There was always somewhere to be. I spent most of my time in the South Hall with post production, attended two conference sessions and did a quick lap of the other halls just in case there was something I had missed. I can't go to a show without checking out NHK and the Fraunhofer Institute, as they have the most imaginative and astounding technology, the kind that has me imagining what I'll be dealing with in the future. But my focus on the South Hall was important.

I had to check out Avid, and they do have the only cloud-based workflow I can actually see myself using anytime soon. MC7 comes out in eight weeks' time, and in seven weeks we can download a demo. Here's what it looks like to have your timeline up online in a web browser.

If you have Avid installed, you can use the full software and just link to your ISIS (the old Unity) server media; it streams pretty fast, holding frames in RAM.

I got very excited about Baselight Editions at IBC; now it's fully released, and they have a mini version of the Baselight Blackboard color grading panel, called Slate, in pre-production.

One conference session I attended was a panel discussion on The Art of Editing. It was nice to get away from the tech of editing for a short moment. The panelists had one very interesting prediction: that 4K is to this NAB what 3D was two years ago. It was true that at this NAB you were not relevant unless you had something 4K or higher on your stand, and there was a markedly smaller presence of 3D than before. Now every NLE advertises its ability to handle 4K or 5K material. The prediction was that "unlike 3D, 4K is a reality that's here to stay." It's true that with the availability of cheap high-res cameras, we will be seeing more and more material come through post at higher than HD source resolution.

I started NAB 2013 with a conference session about Sony's Ci cloud and their 4K consumer products. I ended NAB with a conference session about the art of editing, a nice overview of the state of the industry and reflections on the show from editors. In between, I visited important and interesting booths, played with the new post toys and occasionally amused myself with pretty toys I'll never need (like the phantom helicopter camera mount). 

How to do NAB in two days? I don't recommend trying. It was a rush, and I have arrived home in New York wondering about all the stuff I missed seeing. But it's like an edit where you have lots of material, and a tight deadline. You have to plan well, take the content you think is most important and the first idea that works. Fill in any gaps with the fun shots you can't live without and let go. Because tomorrow it's on to the next job. 


April 11, 2013
  NAB 2013: The Theme is 4K
Posted By Heath Firestone
This year, the theme of NAB seems to be 4K, but much of the show was focused on releasing similar but improved versions of existing products.

Here are the highlights from a few of the booths I visited.


Atomos released a new version of the Samurai camera-mounted recorder, called the Samurai Blade, which improves on the Samurai by offering S-Log and C-Log recording, full-size BNC connectors and a higher-resolution 1280 x 720 touchscreen monitor with waveform monitor, vectorscope, zoom, and adjustments. One of the cool things about the Atomos recorders is that they aren't limited to SSD media; they also support recording to 2.5-inch HDDs, which are fast enough for ProRes and DNxHD recording. Atomos was also showing the new AC version of its Connect HDSDI-to-HDMI and HDMI-to-HDSDI converters, which don't use wall-wart power supplies.

Convergent Design

Convergent Design was showing off their very cool Odyssey 7 monitor/recorder, and I was very impressed by this little monitor. It has a great-looking 7.7-inch touchscreen with lots of menu controls. It also has two HDSDI inputs and one HDMI input, with the same outputs, and what is really cool is that it has conversion capabilities: feed in SDI and you can spit out HDMI, or feed in HDMI from something like a DSLR and you can spit out HDSDI. One of the coolest features, however, is the edge-detection focus assist, which shows up like a black-and-white difference matte. Rather than just a red outline, you see much more detail of what is at the center of your focus, so it is much more accurate, and it promises to be a favorite focus assist tool. There is also timecode I/O, which is really useful with the recording functionality of the monitor: with an external timecode generator, this adds a professional feature to cameras you might be shooting with that don't have it. The dual SSD slots can be configured in RAID 0 (spanned/striped) or RAID 1 (mirrored) configurations. It also has Android and iPhone remote control, and will support Avid DNxHD, ARRIRAW, and Canon 4K Cinema RAW with the additional purchase or rental of those codecs. Without codecs it is priced at $1,295, and I believe they said it will be available in July. The Odyssey 7Q is the quad version, with four HDSDI connections configurable as input or output, and it has quad view. It will also do simultaneous proxy recording and up to 120 fps recording of raw or DNxHD-compressed material.


Fujifilm was demonstrating its new IS-Mini on-set preview and monitor calibration tool for $1,300. It's a small box with HDSDI in and out and HDMI output. It has some really cool functionality when paired with the IS100 CCBOX color corrector, but I'm going to focus on what it does as a stand-alone product. First, it works as a monitor calibration tool, which allows you to set up your monitor to show accurate color. This is great because you know that what you are coloring to is what it will look like when displayed on professional equipment, regardless of what you are using. It also means that if you use one of these on each of your monitors, they will all have very similar colorimetry, whether they are professional OLED displays or consumer LCDs. It also allows you to view Log or Look previews on a calibrated monitor, and using an iPad interface you can dial in on-set looks. It's a very cool tool and idea, and extremely useful for both on-set monitoring and post applications.
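Under the hood, a Log or Look preview like this amounts to mapping each incoming code value through a lookup table (LUT). Here's a minimal, hypothetical sketch of that idea; the function, the five-point table, and the curve values are all mine for illustration, not Fujifilm's actual processing:

```python
# Minimal sketch of LUT-based preview: map each normalized code value
# through a 1D lookup table, interpolating between table entries.
def apply_1d_lut(value, lut):
    """Map a 0.0-1.0 value through a 1D LUT with linear interpolation."""
    n = len(lut) - 1
    pos = min(max(value, 0.0), 1.0) * n   # position within the table
    i = min(int(pos), n - 1)              # lower table index
    frac = pos - i                        # blend factor to next entry
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# A made-up 5-point "look" that lifts shadows, the way a log-to-display
# preview curve might. Real devices use much larger 1D or 3D tables.
look = [0.0, 0.35, 0.6, 0.82, 1.0]

for v in (0.0, 0.25, 0.5, 1.0):
    print(f"{v:.2f} -> {apply_1d_lut(v, look):.3f}")
```

Hardware boxes do the same mapping per channel (often with a 3D LUT so channels can interact), fast enough to run on live video.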

There's a lot more to talk about, but you'll have to wait for my next blog.

Heath Firestone is a Producer/Director at Firestone Studios LLC. He can be reached at:

April 11, 2013
  NAB 2013: Is Your Archive Active?
Posted By Tom Coughlin
Since professional video content has long-term economic value, archiving has a recognized economic return. As resolution, frame rates and the total amount of captured video increase, the amount of digital storage needed increases as well. This trend is expected to continue, leading to steadily accumulating archived storage capacity. Since video archives are likely to be accessed to monetize the content, these archives are more often than not active rather than passive, meaning that the time to access the content is much shorter than in a traditional passive archive.

A number of storage devices are used for active video archiving, including hard disk drives (HDDs), magnetic tape and optical discs. For various reasons some archivists prefer one type of storage technology over another. Also, multiple storage devices may be used within the same active archive to provide the right blend of write and read performance and cost; there are even archive systems that use flash memory or DRAM as a layer of very fast caching storage. It is common to use HDDs as a cache for content that will eventually be stored on digital magnetic tape.

Many companies at the 2013 NAB show were showing LTO tape archiving technology. The latest revision of the LTO digital magnetic tape specification (rev 6) provides for a 2.5 TB tape cartridge, and the LTO Consortium had a popular exhibit at the 2013 NAB promoting the new LTO-6 standard. LTO revisions 5 and 6 incorporate the Linear Tape File System (LTFS), which allows LTO tapes to appear as separate storage volumes on a computer (like an external HDD or USB storage device) and opens up new options for storage systems using LTO tape.
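The practical upshot of LTFS is that archiving software can use ordinary file APIs against a mounted tape volume. As a hedged illustration (the function, filenames, and manifest format are mine; a temp directory stands in for wherever the LTFS volume would actually be mounted), an archive step might copy a clip and record a checksum for later verification:

```python
# Sketch: archive a clip to a mounted volume using plain file APIs,
# recording a SHA-256 checksum in a simple manifest. With LTFS, the
# "volume" could be an LTO tape mount point; here it's a temp dir.
import hashlib
import shutil
import tempfile
from pathlib import Path

def archive_with_manifest(src: Path, volume: Path) -> str:
    """Copy src onto the volume and append its SHA-256 to manifest.txt."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    shutil.copy2(src, volume / src.name)        # ordinary file copy
    with open(volume / "manifest.txt", "a") as m:
        m.write(f"{digest}  {src.name}\n")
    return digest

# Demo with temp directories standing in for source storage and the
# (hypothetical) LTFS mount point.
with tempfile.TemporaryDirectory() as src_dir, \
     tempfile.TemporaryDirectory() as mount:
    clip = Path(src_dir) / "clip.mov"
    clip.write_bytes(b"fake video payload")
    archive_with_manifest(clip, Path(mount))
    print((Path(mount) / "manifest.txt").read_text().strip())
```

The point is not the checksum logic but that no tape-specific API appears anywhere; LTFS makes sequential tape look like a filesystem, though reads back off tape remain far slower than from disk.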

One of the interesting developments in LTO storage in the past year is its use for on-line or cloud archive storage. Fujifilm, the company that manufactures most of the magnetic tape raw media, introduced the Permivault LTO tape-based cloud storage service in 2012. At the 2013 NAB show the company announced that it is using the Crossroads Systems StrongBox tape archive in its Permivault archives. The Permivault cloud archive stores each customer's data on its own dedicated LTO tape cartridges, giving the customer the ability to control access to the individual storage devices.

Many other companies were exhibiting LTO storage products at the 2013 NAB show, including HP, IBM and Quantum (some of the leading members of the LTO Consortium), StorageDNA, Oracle, and XenData.

The Active Archive Alliance embraces all storage devices and systems used in active archive systems and had its own booth at the 2013 NAB. The companies represented make HDD array storage systems, digital tape systems and optical disc libraries, as well as the components that go into these products. The group has seen significant membership growth over the last year as the role of active archives in Media and Entertainment and other industries has increased in importance.

Quantum had a particularly interesting combination of storage to support active archiving, including LTO tape, a distributed HDD object storage system called Lattus (using Amplidata's distributed storage technology), a performance HDD storage layer and a metadata server product, which together provide general backup and access support to many parts of a modern media workflow.

There were a surprising number of optical archive products on display at the 2013 NAB show. Both Sony and Panasonic have announced 12-disc Blu-ray cartridge products with storage capacities of about 1.5 TB, expandable to at least 3 TB. Companies such as Q-Stor were showing optical disc library storage systems. Some media and entertainment clients prefer optical storage to digital magnetic tape for archiving.

Other interesting archive offerings included those of Front Porch Digital (FPD). FPD has a line of SAMMA products for digitization of older analog storage formats as well as its DIVA on-line archive service. The latest DIVA 7.1 version has modularized the application and allows more rapid introduction of new features, making the product more useful at all layers of the video workflow.

As video content increases in resolution, frame rate and total hours in storage, the demand for content archiving will grow in both required storage capacity and archive performance. The need for access to archived content has led to a strong movement toward active archives, and faster access to large content files has resulted in the multiple layers of storage used in today's archive systems, including flash memory, HDDs, digital magnetic tape and optical storage.

Tom Coughlin, President, Coughlin Associates is a widely respected storage analyst and consultant.  You can find out more about him at  He is the organizer of the 2013 Creative Storage Conference, June 25, 2013 in Culver City, CA, 

April 11, 2013
  A Flashier NAB in 2013
Posted By Tom Coughlin
While the vast bulk of digital content is stored on hard disk drives, digital tape and even optical discs, flash memory is finding its way into more and more video workflow applications, and as a result it is changing the face of storage systems and architectures. The 2013 NAB show gave a glimpse of a faster future for content storage using flash memory.

Read and even write speeds for NAND flash memory can be many times faster than HDDs, and decreasing flash memory cell sizes are providing solid-state storage at lower prices. Currently flash memory minimum line widths are at about 19 nm, and they will probably go to 15 nm within the next couple of years. While narrower line widths create challenges for flash controllers in reducing cell wear, the major flash controller companies seem up to the challenge.

As prices go down, flash memory will be used more and more frequently in storage systems. The prices of other storage devices such as HDDs will also decrease with time, but the access time to data on rotating storage devices imposes limits on storage system performance that a bit of flash memory can help overcome.

This has resulted in faster storage interfaces, many built upon the intrinsic speed of the PCIe bus in computer systems. At the 2013 NAB show Intel announced that the copper-based, PCIe-based Thunderbolt interface is going from a 10 Gb/s raw data rate to 20 Gb/s for the next generation of products (probably due out in 2014). Earlier in the year the USB Implementers Forum announced the USB 3.1 spec, to be released in 2013, which would increase USB interface speeds from 5 Gb/s to 10 Gb/s.

Likewise, the next generation of SATA and likely SAS storage interfaces, running at 12-24 Gb/s or higher, will also use the PCIe bus. These faster interfaces have been designed to take advantage of the performance of solid-state memory devices, and faster storage devices and interfaces are encouraging storage system providers for the media and entertainment market to include some (or a lot of) flash memory in their storage architectures.

Promise Technology was showing their Pegasus J2 mobile solid-state storage device with up to 512 GB of storage capacity and sporting a Thunderbolt connection, along with many HDD-based storage products with Thunderbolt and Fibre Channel connectivity.

Solid-state memory has become the storage technology of choice for all new professional video cameras, increasingly displacing other storage media (at least for field recording).  Panasonic said that their 64 GB and 32 GB microP2 cards (an SD card form factor) for their line of professional video cameras will be available this month.  

Fusion-io was showing up in a number of booths at the NAB show. At the NVIDIA booth the 1.6 TB ioFX card was driving four 4K displays in real time. Systems integrators are looking to include these cards in their systems, including HP, which integrates ioFX cards into its Z-series workstation products.

Toshiba debuted a new flash-based content distribution system at the NAB show. The On-Air Max flash memory playout server was on display with at least two partner companies that will use the product for content delivery applications. The flash memory in this system consists of chips on a blade rather than an SSD-based design. Toshiba has a whole family of flash-based content delivery servers for content delivery networks (CDNs).

Many other companies exhibiting at the 2013 NAB used flash memory for caching and other acceleration applications, including DDN, EditShare, EMC, IBM, HP and NetApp among many others. On the other hand, some exhibitors have decided to go flash memory all the way and replace all their other storage with solid-state memory.
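The flash-as-cache pattern mentioned above is worth sketching. In this toy illustration (class, capacities, and asset names are all mine, not any vendor's design), a small fast tier stands in for flash in front of a large slow tier standing in for HDDs or tape, with least-recently-used eviction:

```python
# Toy flash-as-cache tiering: a small fast tier (the "flash") fronts a
# large slow backing store (the "HDDs/tape"). Reads are served from the
# cache when possible; the least-recently-used entry is evicted when
# the cache is full.
from collections import OrderedDict

class CachedStore:
    def __init__(self, backing, cache_slots):
        self.backing = backing         # slow tier: asset name -> data
        self.cache = OrderedDict()     # fast tier, ordered by recency
        self.cache_slots = cache_slots
        self.hits = self.misses = 0

    def read(self, asset):
        if asset in self.cache:
            self.hits += 1
            self.cache.move_to_end(asset)      # mark as recently used
            return self.cache[asset]
        self.misses += 1
        data = self.backing[asset]             # slow fetch from backing tier
        self.cache[asset] = data               # promote into the fast tier
        if len(self.cache) > self.cache_slots:
            self.cache.popitem(last=False)     # evict least recently used
        return data

store = CachedStore({f"clip{i}": f"frames{i}" for i in range(5)}, cache_slots=2)
for name in ["clip0", "clip1", "clip0", "clip2", "clip0"]:
    store.read(name)
print(f"hits={store.hits} misses={store.misses}")
```

Real systems layer this with write-back buffering and smarter promotion policies, but the economics are the same: a little expensive flash absorbs the hot working set so the cheap, slow tier rarely sees repeated reads.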

CloudSigma, a cloud services provider, announced that it is replacing its HDD infrastructure with all-SSD storage systems from SolidFire. The SolidFire storage boxes provide 26 TB of raw SSD capacity per box, and the company uses a combination of performance virtualization, deduplication and compression to bring net storage costs for CloudSigma users to $0.14/GB/month for replicated content, while providing a 100% storage performance improvement and a 40% overall performance improvement. Since storage is no longer the bottleneck to overall solution performance with this setup, the company reports 25% lower CPU and RAM requirements.

Clearly solid-state memory has found a solid niche in the media and entertainment industry.  It has enabled faster storage systems and interfaces that can support higher resolution video streams and thus should increase the productivity of modern video workflows.  It is also widely used as the storage media in modern high performance professional video cameras.  From content capture through content distribution flash is providing better overall storage performance for media and entertainment applications.

Tom Coughlin, President, Coughlin Associates is a widely respected storage analyst and consultant.  You can find out more about him at  He is the organizer of the 2013 Creative Storage Conference, June 25, 2013 in Culver City, CA, 

April 09, 2013
  NAB: In the clouds
Posted By Katie Hinsen
At a conference like this, it was pretty funny to see that at the Starbucks in the convention center, the televisions broadcasting the morning news were all different aspect ratios and none of them were correct. This one, both pillarbox and letterbox with visible VITC was my favorite.

I've been in Vegas two days, and today was my first day at the NAB Show 2013. The buzz word this year is "cloud".  

The big thing I keep hearing is "what's the most exciting thing you've seen so far, other than the new Blackmagic camera?"

I'm in post. I shouldn't really care about the Blackmagic Pocket Cinema Camera. But I suppose all of us in post care a little bit, for two reasons. Firstly, it's a really cool toy. It's $995, it's 2.5K resolution, it's Micro Four Thirds, and it records DNG (raw) and ProRes. It's a slightly less sharp version of last year's Blackmagic Cinema Camera, but it's the size of an iPhone. Secondly, they are marketing it to students and documentary filmmakers, so in post we will have to deal with it at some stage. So here it is.

Cute, huh? Their standard cinema camera is about to do 4K as well. 

The more interesting thing on the Blackmagic stand was of course Resolve 10.  It has over 50 new features, mostly requests from users of previous versions. 

You can now organize projects into folders. It has a new editor, and now unlimited audio layers as well as unlimited video layers. The biggest improvements I saw are that you can now do a bunch of things we all wished we could do in Resolve before: slip and slide, titling (although pretty basic), generators, and OpenFX plugins. You have unlimited shapes in CC mode. There's a new gradient tool, which isn't in every colorist's box. There's now a splitter mode, where you can grade the R, G or B channels separately, or even offset them if you're doing restoration. You can cut and paste keyframes in tracking/stabilization. Also really cool is that they have added optical flow to their retiming tools. Basically, Resolve is something many of us have in our Swiss Army knife of tools. Now it's got a few more of the bits and pieces that made us all think, "this is an awesome program, if only it had certain features..."

Thunderbolt keeps getting better. More and more of us are seeing it in our professional world now. Exciting news, for us nerds, is that they are working on Thunderbolt 2.0, which runs 20 Gb/s up and down. Next year we will see that, and also longer Thunderbolt cables. Right now we can get up to 30 meters; we're about to get 100-meter optical cables. And thinner ones, at 3 mm. I'm not sure that size matters to us sitting behind the desk, but the length means that we might see more Thunderbolt in our edit suites. Which would be nice.
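
For a sense of what the jump from 10 Gb/s to 20 Gb/s means in practice, here is a back-of-the-envelope transfer-time calculation. It assumes the full line rate with no protocol overhead, so real-world times will be longer.

```python
# Sketch: ideal transfer times over Thunderbolt 1 (10 Gb/s) vs Thunderbolt 2 (20 Gb/s).
# Assumes the full line rate is achieved -- real throughput is always lower.

def transfer_minutes(size_gb, link_gbps):
    """Minutes to move size_gb gigabytes over a link of link_gbps gigabits/s."""
    seconds = (size_gb * 8) / link_gbps  # 8 bits per byte
    return seconds / 60

project_gb = 1000  # a 1 TB media drive
print(round(transfer_minutes(project_gb, 10), 1))  # Thunderbolt 1: ~13.3 min
print(round(transfer_minutes(project_gb, 20), 1))  # Thunderbolt 2: ~6.7 min
```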

Tomorrow I'm going to see if I can break a few of the new post tools, like Avid MC7. But today, I was wandering around hearing "cloud is the latest thing for post". 

So I looked in to what that really means. 

And don't worry, the reality is "not much". All the companies asserting that their cloud solution for post is the latest game-changer had little to offer once I actually started asking questions.

Lots of companies have been offering cloud solutions for a while now. I've seen it used a bit for news, but most of us just use Dropbox and are happy with that. Sony has announced its cloud service, called "Ci". Deluxe and Prime Focus are two long-established post houses now selling some sort of cloud solution. It's an asset, project and data management wiki with an interface for dummies. The only advantage to using one of these new services is that it keeps your production more organized in a central (virtual) place. But speakers on the "post production in the cloud" panel all admitted that no editor cares about metadata tagging enough to get it as accurate as it needs to be, and that no service will ever keep up with the size of the files we keep creating. So none of these services will really be something we have to deal with in the near future. It's a good service for our producers, but it's not going to be a place where our source files live. Even our proxies are too big to be worked with online. I've yet to see Adobe's new cloud product; I will see if it's any more exciting than this.

So post people, you'll hear all about clouds this year. Don't fret, they're just something we rarely see because we're stuck in a dark room all day. Our productions will (or at least should) start working in some sort of managed cloud platform, so the biggest change we'll see is that our production workflow management will get a bit better.  

More importantly, the free swag report. If you're at the show, there are a few company-branded oversized t-shirts floating around. If you ask nicely, Deluxe has portable device chargers. Flanders is giving out a good screen-cleaning kit, possibly the most valuable freebie, but if you're really special, Canon has the best swag in the form of a USB stick that's a mini replica of their EOS 5D camera. The "prime lens", complete with lens cap, comes off, and it's the 8 GB USB stick part. It even has a little strap.

For those of you not here in Vegas, your freebie is this little link to content from the Intel Studio Experience booth, where they have a lot of really nerdy and interesting sessions every day.
Continue reading "NAB: In the clouds" »

April 08, 2013
  NAB 2013 is under way. It's all about 4k.
Posted By TJ Ryan
Another April, another NAB. Things got off to a wet, cold start as wind and rain ripped through Monday morning setup. Nothing like seeing crazed exhibitors trying to get last-minute gear into the convention center and out of the rain. Blow dryers were in short supply as exhibitors attempted to dry out wet logic boards.

Last year it was 4K, 3D, cloud and Thunderbolt. In 2013, all the talk is about 4K workflow, and the range of products available is phenomenal. It looks like 4K is taking off in the way 3D didn't. All the arguments from last year still hold true: 4K is too expensive now, and it's not really going to be viable till next year. Why shoot 4K when it all ends up on my iPad in the end? A 4K image looks great, but I can get a better-looking picture shooting 2K at a higher frame rate. All true statements, but it doesn't change the fact that a big transition is in store for business models across the board.

If you're looking to build, expand or upgrade to a 4K workflow, where do you start? Let's start with a good workstation. There are several options available. The Mac Pro is at the bottom of the food chain, with Supermicro workstations and the HP Z-series blowing the doors off Apple in terms of processor speeds and PCI slots. Once you pick a workstation with a fast processor and a bunch of RAM, you need to fill it. You will need quite a few graphics cards depending on your workflow; NVIDIA Quadro is the choice for any 4K workflow, usually starting with one and going up to about four. If you go with four or more, you will need an expansion chassis; Qubix had some new ones on display at a decent price. If you are working with RED footage, it's always nice to add some Red Rocket cards.

Now that you have your workstation and your graphics cards, it's time to look at a capture card. Last year at this time your choices were slim; now everyone has a 4K card. The AJA KONA 3G is always top of my list, but if you are putting in a DaVinci system then you will want to go with a Blackmagic Design 4K card.

Next up is storage. This is where things get pricey. 8 Gb Fibre Channel storage is best, but there are other options like 10 Gb and 40 Gb Ethernet as well as 6 Gb SAS. Choices will depend on your budget, but there are a few reasonably priced options if you are doing a single workstation with direct storage and don't need a SAN. Sans Digital has a 32 TB RAID for under $15K, and Sonnet Technologies can put you in a 32 TB 8 Gb Fibre Channel RAID for around $20K.

Now that we have a system, we need to figure out what software we are using. Apple and Avid are still top choices for editing, with Adobe being the new kid on the block. AJA wasn't too happy about being left out of the DaVinci workflow, but not to worry: AJA has several cards that will work with the Adobe options, and Adobe now has SpeedGrade as part of its solutions to give you a full 4K workflow from edit to color to sound.
If you are doing color correction and need a control surface, there are several options. DaVinci and Pablo surfaces can range into the $30K area. That is a bit high, so for smaller budgets there are the Avid Artist series and the Tangent Devices Element, both starting at under $2K. And finally, no system is complete without a monitor. I looked at the TV Logic 4K monitor and was impressed with the new glass. Sony unveiled 30- and 56-inch OLED professional-grade monitors. Of course this is only a prototype with a release date next year, but it was nice to look at.
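
To see why storage bandwidth dominates the shopping list above, here is the rough data-rate math for a single 4K stream. The numbers assume uncompressed RGB frames, which is the worst case; real working codecs need far less, but the exercise shows why 8 Gb Fibre Channel keeps coming up.

```python
# Sketch: sustained data rate for one uncompressed 4K stream.
# Assumes uncompressed RGB -- compressed codecs (ProRes, REDCODE) need far less.

def stream_mb_per_sec(width, height, bits_per_channel, channels, fps):
    """Sustained MB/s needed to play one uncompressed stream in real time."""
    bytes_per_frame = width * height * channels * bits_per_channel / 8
    return bytes_per_frame * fps / 1e6

# UHD 4K (3840x2160), 10-bit RGB, 24 fps
rate = stream_mb_per_sec(3840, 2160, 10, 3, 24)
print(round(rate))  # → 746 (MB/s)
```

At roughly 746 MB/s for a single uncompressed stream, a lone SATA drive is hopeless, and even a fast RAID needs a wide interface between it and the workstation.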

All in all, you can put together a 4K workstation with storage and a control surface for under $25K. Still a bit pricey, but not so high that you have to mortgage your house. Then add in a projector or monitor, and if you have some deep pockets, get a nice control surface. So buy in now or start saving, because it looks like 4K will be around for a few more years. Hopefully long enough to recoup the money it will cost to build the system.
Continue reading "NAB 2013 is under way. It's all about 4k." »

April 08, 2013
  My First NAB
Posted By Tucker Corson
April 06, 2013

So I'm off to my first ever NAB and I couldn't be more excited. There is so much to look forward to in such a small amount of time! For the next three days I will be covering the National Association of Broadcasters convention floor, filled with hundreds of booths, speakers, and techies alike, who represent our industry at its technical finest. I cannot express my excitement about attending the convention and checking out all the new toys and tech coming to us this year.

A few "must see" features for me this year will be the all new Autodesk 2014 products, along with the Foundry, and the next-generation multi-screen displays that have been so popularly advertised prior to the convention. As a technical 3D artist, but also as an IT representative of Gentleman Scholar, I also look forward to the vast amount of hardware and storage possibilities at the convention. Our studio has gone through some dramatic changes in the last four months, and with these changes comes a need for adaptation. We are currently looking into larger storage options and setups, specifically with Rorke, who seem to offer an inexpensive and fast storage solution that may be a game changer for us at GS.

Above all I'm excited for the good food, fun company, and all the techno goodies I can imagine! This trip will be an amazing experience and a huge help to our technological advancement at Gentleman Scholar.

See you on the floor!

Tucker Corson is with Gentleman Scholar, a group of solution-driven artists situated at the intersection of story, style and technology. They bring together experience in live-action, design and animation for commercials, music videos and film.

Continue reading "My First NAB" »

April 04, 2013
  Adobe reveals next-gen video software
Posted By Larry Jordan

On April 4, Adobe revealed the next generation of its video software, announcing new versions of Adobe Premiere Pro, After Effects, Audition, Prelude, SpeedGrade, Story, and Media Encoder. (If your favorite product was not on this list, don't worry, it didn't die, it just wasn't part of this announcement.)

▪    Adobe is more tightly integrating network accessibility into its products: not just the Creative Cloud, but the new Adobe Anywhere, which provides network-based, team-oriented workflows.
▪    Everything is faster, including improved user interfaces, customization and 64-bit memory support.
▪    Tighter integration between Story, Prelude, and Premiere Pro.
▪    Built upon the current, familiar interface, but adding some killer new features.

In this announcement, Adobe did not announce release dates, pricing, or even final names. With the 2013 NAB Show coming up, Adobe wanted to showcase their new software, while still taking some time to wrap up development and get final versions ready to release.

Let's take a look at some of the new goodies.

Adobe announced Adobe Anywhere last January. What this provides is a collaborative environment where all media resides on a server, while editorial groups can access this same media and projects via the network.

What makes this new technology especially attractive is that editors don't have to be in the same facility. In fact, Adobe stresses that this system doesn't even require proxies; the master files remain on the server. This allows collaboration without regard to geographic boundaries. Media can be shared between different applications, run by different people, in different geographic areas, without relinking media, without transferring files, and with complete version control.

For one-man-bands, this is no big deal. For workgroups, this provides a whole new definition of collaboration. Best of all, it is integrated into the next versions of Premiere Pro, Prelude, and After Effects. In other words, once you upgrade to the new version of each of these, you can edit stand-alone or with Adobe Anywhere using the same software.

The initial release of Adobe Anywhere is targeted at the enterprise (read: large workgroups), and Adobe has published a PDF listing of recommended hardware.

NOTE: The only downside to Adobe Anywhere that I've seen so far is that it requires a Windows server in order to work. I am sure there will be minimum network bandwidth considerations as well.
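
As a back-of-the-envelope sketch of those bandwidth considerations: aggregate demand scales with the number of concurrent editors and streams. The per-stream bitrate and headroom factor below are assumptions for illustration, not Adobe's published requirements.

```python
# Sketch: rough WAN bandwidth planning for a shared-media setup like Adobe Anywhere.
# The per-stream bitrate and headroom multiplier are illustrative assumptions.

def required_mbps(editors, streams_per_editor, stream_mbps, headroom=1.5):
    """Aggregate bandwidth (Mb/s) for concurrent remote editors.

    headroom -- multiplier covering scrubbing bursts and protocol overhead
    """
    return editors * streams_per_editor * stream_mbps * headroom

# Hypothetical: 5 remote editors, 2 concurrent streams each, ~20 Mb/s per stream
print(required_mbps(5, 2, 20))  # → 300.0 (Mb/s)
```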

I spend about a third of my time editing in Premiere Pro; the rest is split between Final Cut Pro X and Final Cut Pro 7.

What impresses me most about the new version of Premiere Pro is speed. Adobe Anywhere is fully integrated. The Mercury Playback Engine is faster and supports more GPUs. There is also improved support for importing Avid and Final Cut projects using AAF and XML. Adobe has continued streamlining the user interface, with more customization available.

Relinking missing media is faster and more intelligent. Audio control is improved, and Adobe has added the Lumetri Deep Color Engine, which lets you apply LUTs and other color looks quickly and easily to your projects while still inside Premiere.
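
For anyone curious what "applying a LUT" means at its simplest, here is a toy one-dimensional lookup table remapping 8-bit code values. Real grading LUTs (including Lumetri's) are 3D and interpolated; this is only a minimal illustration, and the "lift shadows" curve is invented.

```python
# Sketch: a 1D LUT is just a per-code-value remapping table.
# Real color-grading LUTs are 3D (RGB in, RGB out) with interpolation.

def apply_1d_lut(pixels, lut):
    """Map each 8-bit code value through a 256-entry lookup table."""
    return [lut[p] for p in pixels]

# A hypothetical "lift shadows" LUT: compress the range and raise the floor
lut = [min(255, int(v * 0.8 + 51)) for v in range(256)]
print(apply_1d_lut([0, 128, 255], lut))  # → [51, 153, 255]
```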

Audition, which has become my daily go-to audio application, has seen a lot of improvement.

The entire app is now 64-bit, allowing for much larger and much faster audio projects. They've improved the interface, making it faster and easier to customize (and it was already really customizable), with more keyboard shortcuts. Favorites were beefed up with improved automation and an enhanced Favorites panel.

Two new features are amazing. Sound Remover (which reminds me of Sony Creative Software's SpectraLayers) can remove unwanted sounds from a clip, such as a siren, without removing other sounds, such as dialog. This is very, very close to magic.

The other really hot new feature is Preview Editor, which allows you to see a before and after on the same waveform, before you actually make a change. This prevents that "duh!" feeling that occurs right after you've done something stupid.

NOTE: Audition will not, initially, support Adobe Anywhere.

The big news in After Effects is a Live 3D Pipeline with Cinema 4D: create a shape in Cinema 4D, then bring it into After Effects for lighting, positioning, and compositing into the final effect.

On the list of improved effects are the Refine Edge tool for rotoscoping, Warp Stabilizer VFX, which lets you choose which object in a shot you want to stabilize (which is just WAY cool!), improved camera tracking, and a new Pixel Motion Blur.
Support for Adobe Anywhere will be added to After Effects later this year.

The big news with SpeedGrade is an improved user interface, the ability to load SpeedGrade looks into Premiere, and the ability to automatically match shots and check color continuity between scenes.

The big news with Story is integration with Prelude. You can now write your script, shoot your script, then import your script into Prelude to make finding shots using text searches a whole lot faster.

You can now assign permissions to shared scripts so that your collaborators only see, or correct, what you want them to see or correct.
Shooting scripts can be automatically generated, script syncing is faster, reports are dynamic (meaning they are adjusted on the fly as new information becomes available).

Like I said, the Adobe theme of speed is everywhere - and Story is a huge beneficiary of this.

I've already mentioned the improved integration between Story and Prelude. By combining the script text from Story with the speech recognition that Adobe migrated from Premiere, you can quickly match text to transcript and find exactly the clips you want using text searches.

Files can now be renamed on ingest. Gone are the days when you need to figure out what "Clip 054" actually is. Improved tagging and customized templates mean less typing, with more essential clip information captured easily.

The application is now 64-bit, ingests faster and with less setup, and allows you to specify where you want ingested files stored.
Prelude is also Adobe Anywhere enabled, and allows you to create a rough cut that can be instantly sent to Premiere for editing. This lets a production assistant create a quick selects reel, so the editor can concentrate on shaping those selects into a story.

Media Encoder has been updated to support the new features in these applications, but hasn't changed much from the current version.

Adobe did not mention anything about Adobe Encore in their announcements this evening. However, do not assume from this that the application is dead.

All in all, these new announcements from Adobe show that they are working VERY hard to improve the applications that most of us use every day.

I am already in the process of revising all our Adobe training and will have new training available on, or shortly after, these new applications ship.

If you are going to the 2013 NAB Show, or the Supermeet, you can see these new applications in action.

I'm looking forward to the release of the final versions. What was already good, just got a whole lot better.

Larry Jordan is a producer, director, editor, author, and Apple Certified Trainer with more than 35 years' experience. Based in Los Angeles, he's a member of the Directors Guild of America and the Producers Guild of America.

Continue reading "Adobe reveals next-gen video software" »
