
Recent Blog Posts in August 2012

August 16, 2012
  Click-To-License Vs. The Sales Rep
Posted By Donna Kaufman
Whether you are new to the world of stock footage, or a seasoned and successful stock footage researcher, now is a great time to take a closer look at the evolution of point-of-sale purchasing in the stock footage industry. The power of the click and pay online experience has substantially changed licensing and how those of us in the media production industry tend to communicate and relate to one another. However, just because the tools exist to speedily click, license, and download stock footage, it remains essential to develop relationships with your licensing representatives to ensure you're getting the best service and pricing available.

As a seemingly unlimited pool of new producers enters the film and video arena, their experience of looking for footage has become more dependent on Google and online searches than ever before. Click-to-license-and-download stock footage sales have become standard for newcomers to the industry as well as long-term professionals. Distribution of stock footage today requires terabytes of storage, instant online preview and licensing, and snap-fast digital delivery. While this technology has made locating and purchasing stock materials easier for all kinds of productions, the streamlined experience has significantly diminished the dialogue between licensee and licensor. In an industry that for decades has depended on the strength of personal relationships and tailored pricing to meet the needs of a wide range of productions, the advance of easy online stock footage acquisition has changed the way many producers approach stock footage licensing and pricing.

Many stock footage agencies have invested significant capital to develop instant online licensing for both rights-managed and royalty-free footage. So why would I recommend contacting a sales representative directly before placing your next order? The answer is simple: a good sales representative can assist with subject ideas, provide comprehensive research, help with format requirements and delivery options, and extend discount opportunities, saving you time and money in acquiring the best stock footage for your production.

Discount opportunities include bulk discounts, selecting the appropriate license terms for your production, making the best choice between royalty free, rights managed, and premium content, and inquiring about non-profit and preferred vendor discounts. A sales representative can often save you money by discussing your project prior to licensing online. Such full-service access to skilled professionals is an opportunity to expand your production team free-of-charge. So, the next time you are about to place an online order, ask yourself: How could I be doing my job better?

Donna Kaufman is Chief Strategy Officer of Footage Search, Inc. Footage Search represents OceanFootage, NatureFootage, 3DFootage, AdventureFootage, and other premium stock footage collections.

August 14, 2012
  4TB Drives
Posted By Steve Modica
With the new 4TB drives that are coming, there's more than just one extra terabyte to be excited about.

We've all watched over the years as storage and memory capacities have continued to grow in step with Moore's Law. Intel and other manufacturers have figured out how to overcome physical barrier after physical barrier to double the number of transistors on a chip (or the storage density on a platter) and continue to overwhelm the industry with more space, cores, and storage than we know what to do with! (I myself date back to the age of full-length, full-height, 25-pin SCSI devices that didn't quite reach 1GB. I also repaired more than a few of those washing-machine-sized "tubs" that you would lower spindles into with a handle, and a few reel-to-reel tape devices as well.)

So what's all the excitement about these new 4TB drives? One more terabyte isn't really much of an increase, is it?

What's exciting to Small Tree is that vendors are now implementing "Advanced Format," a new drive standard that changes how data is stored on the platter. On previous devices - going back even to my storage "tub" days - data was stored on the platter in 512-byte increments. You could not write data to the device in chunks smaller than 512 bytes; that was the size of a sector. Each sector required a Disk Address Mark (DAM), some ECC (error correction) bits, and a gap to separate it from the next sector. Much like TCP and the idea of jumbo frames, the more things you have to look up and decode, the longer and more CPU-intensive the work becomes.

The new Advanced Format implemented in Hitachi's 4TB drives moves the minimum sector size to 4K. This means fewer DAMs to look up, less overall gap space, and fewer ECC decodes to run. With this change, the smallest size a file can theoretically occupy is 4K. Given the larger size of files today and the relative size of the drives by comparison, this will hardly matter.
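To put the reduction in concrete terms, here's a quick back-of-envelope sketch (the 4TB figure is decimal, and the per-sector overhead structures are simplified for illustration):

```python
# Back-of-envelope: how many sectors (each carrying its own address mark,
# ECC block, and inter-sector gap) a 4TB drive needs at each sector size.
DRIVE_BYTES = 4 * 10**12              # 4 TB, decimal
sectors_512 = DRIVE_BYTES // 512      # legacy 512-byte sectors
sectors_4k = DRIVE_BYTES // 4096      # Advanced Format 4K sectors

print(f"512-byte sectors: {sectors_512:,}")   # 7,812,500,000
print(f"4K sectors:       {sectors_4k:,}")    # 976,562,500 - 8x fewer lookups
```

Eight times fewer sectors means eight times fewer DAMs and ECC blocks to wade through for the same capacity.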

So, will this new format be supported by RAID card vendors? Most definitely yes. However, they have not all jumped on the bandwagon just yet. For those that have not, these new drives implement a spec called "512e," which allows the drives to continue to accept write and read requests in smaller increments, maintaining compatibility. These requests are handled with a "read-modify-write" cycle on the drive: to write 512 bytes, the 4K sector containing that data is read in, the 512 bytes are modified within the sector, and then the entire sector is written back out. RAIDs have used this technique for a long time to write less than a full stripe.
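The read-modify-write idea is easy to sketch. This toy Python model is not drive firmware, just an illustration of what a 512e emulation layer does:

```python
SECTOR = 4096          # physical sector size on an Advanced Format drive
LOGICAL = 512          # logical block size the host still sees (512e)

def write_512e(disk: bytearray, lba: int, data: bytes) -> None:
    """Emulate a 512e write: read-modify-write of the containing 4K sector."""
    assert len(data) == LOGICAL
    phys_start = (lba * LOGICAL // SECTOR) * SECTOR           # sector boundary
    sector = bytearray(disk[phys_start:phys_start + SECTOR])  # read 4K in
    offset = lba * LOGICAL - phys_start
    sector[offset:offset + LOGICAL] = data                    # modify 512 bytes
    disk[phys_start:phys_start + SECTOR] = sector             # write 4K back

disk = bytearray(2 * SECTOR)
write_512e(disk, 3, b"\xab" * LOGICAL)   # 4th logical block, 1st physical sector
```

Note the cost: a single 512-byte logical write touches a full 4K of media, which is one reason aligning partitions and I/O to 4K boundaries matters on these drives.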

Overall, we've seen this technology offer a 33% increase in the number of streams Small Tree can support from a single storage array. We expect to see continued improvements as vendors begin to adopt the technology and hone their OS tuning to take advantage.

Steve Modica is CTO of Small Tree.

August 13, 2012
  SIGGRAPH: Keynote Speaker Jane McGonigal
Posted By Jeff Kember
Jane McGonigal, author of "Reality is Broken", gave an insightful keynote to a packed house. She extolled the virtues of gaming, cited interesting facts from a number of case studies (including her own research) and claimed she would increase our lifespan by seven and a half minutes (more on that below). 

After suffering a severe concussion, she created the game "SuperBetter" to help her through the depression of a long recovery. In another study, young cancer patients who played the game Re-Mission for a minimum of two hours showed significantly improved outcomes. We learned that thirty minutes of online game play a day could outperform pharmaceuticals in treating clinical anxiety and depression.

She led the crowd through a set of activities designed to improve physical, mental, emotional, and social resilience. An example of social resilience was shaking hands with someone for six seconds to increase levels of oxytocin - the trust hormone. She suggested the effects of increased oxytocin would last through our lunch break and that we should take advantage of networking opportunities. The combination of these activities, performed daily, contributes to increasing one's lifespan (up to ten years in several studies; seven and a half minutes in our case that day).

Another positive aspect of playing games is that it is one of the few areas of our lives in which a high rate of failure is OK.

August 10, 2012
  SIGGRAPH: Inspiring Workflow To Handle Large Data
Posted By Scott Singer
A common theme at this year's SIGGRAPH was how different studios are handling the unique demands of shows with ever-increasing scope and complexity. Two examples were Method Studios' work on Wrath of the Titans and CineSite's work on John Carter. These two productions had enormous environments and sets that quickly outgrew standard workflow techniques for wrangling data. In both cases the teams adopted scalable, data-driven descriptions of the environments as separately addressable, hierarchical elements managed outside a traditional Maya or Houdini workflow. While these techniques have long been in heavy use in CG feature animation - especially in PDI's pioneering work wrangling the jungle environments of Madagascar - the increasing complexity of live-action environments is making them imperative to VFX workflows as well. Both CineSite and Method rose to the challenge with inspired answers to some hugely vexing problems.

In Wrath of the Titans, the Kronos sequence involved massive environmental destruction with dizzying camera flybys. The Kronos creature literally breaks out of the mountainous cliffs it's made from. These shots were as thrilling to audiences as I'm sure they were terrifying to the VFX artists. But Method Studios rose to the challenge by creating an ingenious system of data-driven, resolution-independent scene elements that could be accessed differently to achieve maximum rendering and animation efficiency, all within a unified texturing and rendering paradigm that ensured a consistent look at variable levels of detail.

They broke down the overall model of the huge mountainous environment into a collection of reusable rocky crag shapes called greebles. These were located both on and within the volume of the mountain and could be called up when needed, in the form most appropriate to each particular use case. For instance, greebles close to camera would be called up at their highest resolution and those farthest from camera at their lowest. Taking this hierarchical methodology one step further, even the rock face texturing was handled as volumes that could blend regions of differing resolution together. Because the data locations for each greeble could be distributed within the volume of the mountain, the actual geometric assets did not have to exist in the scene until the overlying rock faces crumbled away to reveal them.
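The talk didn't share Method's actual code, but the distance-based selection idea can be sketched in a few lines of Python (the record layout, names, and thresholds here are hypothetical):

```python
import math

# Hypothetical greeble records: a position plus available LOD meshes, high to low.
GREEBLES = [
    {"pos": (0.0, 0.0, 5.0),   "lods": ["crag_hi", "crag_mid", "crag_lo"]},
    {"pos": (0.0, 0.0, 500.0), "lods": ["crag_hi", "crag_mid", "crag_lo"]},
]

def pick_lod(greeble, camera, thresholds=(50.0, 250.0)):
    """Choose a mesh resolution by distance from camera (near -> high res)."""
    d = math.dist(greeble["pos"], camera)
    for level, limit in enumerate(thresholds):
        if d < limit:
            return greeble["lods"][level]
    return greeble["lods"][-1]        # beyond all thresholds: lowest resolution

camera = (0.0, 0.0, 0.0)
print([pick_lod(g, camera) for g in GREEBLES])
```

The near greeble resolves to its highest-resolution mesh and the distant one to its lowest, which is the heart of any resolution-independent element system.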

Likewise, on John Carter, huge data savings were had through instanced reuse, but in this case the instancing itself was leveraged to provide the actual animation. The large machine environment of Zodanga was a walking city on centipede-like legs. To illustrate the ingenuity with which CineSite addressed problems of scale, we can look at the legs of the city. Each leg is essentially a piston, and the city walks by articulating these in sequence like a centipede. One library animation of a single piston cycle was stored as a cache, and new instances were created from this cached animation database, offset to their locations under the city as well as offset in time. This meant that new firing sequences could be choreographed without reanimating hundreds of individual animations; likewise, changes to the animation of the underlying cached cycle would be automatically inherited by the instances. It also meant that the actual geometry of the piston animation only had to be stored once, for the single canonical piston animation. CineSite wrapped all of this functionality in a very nice, user-accessible interface, in both standalone and Maya-hosted forms; this allowed artists to access subcomponents of the structures for specific effects on a granular level during shot production.
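As a rough illustration of the approach (not CineSite's actual tool), a cached cycle reused with per-instance spatial and time offsets might look like this - the cycle is stored once and merely re-evaluated per leg:

```python
import math

def piston_cycle(t: float) -> float:
    """Stand-in for the cached library animation: one piston stroke, period 1s."""
    return math.sin(2 * math.pi * t)

# Each instance is just a spatial offset plus a time offset into the cache.
instances = [{"x": i * 10.0, "t_offset": i * 0.1} for i in range(5)]

def evaluate(t: float):
    """Evaluate every leg from the single cached cycle; geometry stored once."""
    return [(inst["x"], piston_cycle(t + inst["t_offset"])) for inst in instances]

poses = evaluate(0.25)   # each leg fires in sequence, centipede-style
```

Change `piston_cycle` and every instance inherits the new motion automatically; re-choreographing the walk is just a matter of editing the offsets.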

Because the CineSite team stored all of this data in a queryable object database, it could be programmatically filtered to check visibility, set render quality levels, and perform other manipulations that helped achieve maximum efficiency in terms of rendering, disk storage, and artist interaction speed. Not content to rest on their laurels, CineSite is already looking into making the system more robust by evaluating new technologies like Alembic and RenderMan's new instancing capabilities.

August 10, 2012
  SIGGRAPH: The 'Real Steel' Presentation
Posted By Scott Singer
The Real Steel production presentation at SIGGRAPH 2012 illustrated the benefits of including the VFX units from the very beginning of production. A panel of industry veterans, including Eric Nash and Ron Ames and moderated by Mike Fink, told the story of how Real Steel came together as a production and what went into making it such a smooth production.

From the beginning, the decision was made to include the VFX representatives as collaborators in the filmmaking process. The close integration of CG robots with their practical counterparts, as well as the elaborate choreography of the CG fight scenes within contained practical locations, required technical considerations to be central to production.

Rapid prototyping techniques were used to design and construct the practical robot puppet components, which provided 3D assets to Digital Domain. This meant that DD had early visual targets to hit as well as exact digital representations of the practical models. These early and ongoing exchanges provided very clear visual criteria to drive approvals in the look development process. By coupling these two often disparate aspects of the visual development approvals process, the team avoided many costly last-minute technical changes.

The elaborate fight choreography meant that motion capture techniques would have to be integrated to drive key narrative elements in the film. Using Simulcam, the advanced virtual camera technology pioneered at DD, to attain the necessary level of CGI/live-action cinematography meant that another part of the VFX crew was brought in early on. The obvious benefit of instant feedback on CG element placement within the camera operator's and director's monitor feeds not only sped production along at the shoot, but also cut down on the extraneous takes usually made as "insurance" for post production. It also seemed to play an important social role in reinforcing the presence of the VFX crew as an integral component of the daily shooting process.

A story from the shoot illustrates this synergy: VFX stepped in to help iron out a major production wrinkle during the Detroit location shooting of the fight sequences. Production was unable to secure the two fight venues needed within the time frame of the schedule, and the only location that was available was in very bad physical shape. Since a majority of the location would have been covered by CG crowds and set extensions anyway, DD suggested creating entirely digital arenas, provided that production could design the two environments within the constraint of keeping the arena floors to a matching footprint. This saved the production time and money and opened greater possibilities for the design of the fight sequences.

The panel's overwhelming opinion was that the inclusion of VFX at the very earliest stages of pre-production on FX-heavy shows not only smooths out technical hurdles but actually allows for greater creative opportunity while keeping costs down. There is a definite feeling in the VFX community that it is the redheaded stepchild of production, often called onto the job after everyone else is done - after the point at which its knowledge and unique expertise could make the production process much smoother. Once again we have evidence that a well-conceived plan, carried out by a collaborative and inclusive team of dedicated professionals, can result in a project completed with a minimal amount of difficulty. And just for fun, the panel also brought along a full-size version of the robot Noisy Boy, which was a huge hit as a photo op with the audience.

August 10, 2012
  SIGGRAPH: Conference Recap
Posted By Sung Kim
I've been fortunate enough to spend the past few days at SIGGRAPH at the L.A. Convention Center. Some highlights came during the production sessions on the making of Hugo, The Avengers, and Brave. It was interesting to get a glimpse of the thought process behind these mega-blockbuster hits. The Hugo panel talked about their very efficient production pipeline and their stereoscopic 3D workflow. The Avengers talk was presented by the ILM and Weta teams, who dove into set replacement, digital stunt doubles, and the methods they used to recreate the New York landscape from survey data.

As for the Brave talk, Pixar provided a glimpse into their process, from concept art and layout to animation, set dressing, lighting, and rendering. They talked in more detail about their fluid simulation pipeline, built with Houdini and Pixar proprietary software, and described the techniques they used to create the movie's gorgeous river scenes.

Another highlight for me came from learning about new software updates and releases in our industry. iPi Soft makes motion capture software that uses the Microsoft Kinect. The Kinect was originally designed for playing games on the Xbox 360, but iPi Soft extracts its z-depth information for performance capture. Interestingly enough, they'll be releasing a new version with multiple performance capture by the end of the year.

Another new release comes from EyeTap, which makes software and hardware solutions for real-time high-dynamic-range video. It captures high-exposure, low-exposure, and mid-tone images in real time and tone-maps them into a single video image. I could see a lot of potential applications for this software.

Elsewhere, the Camera Culture Group and Holografika were demoing their glasses-free 3D displays. While there have certainly been some major advances, I did note one big disadvantage to this method - namely, that content must be shot with an array of cameras, a technique they refer to as "multiview." Regular stereoscopic content won't work with this new technology, although a company called Fraunhofer can convert existing stereoscopic 3D video to multiview by generating virtual camera images.

Finally, while on the exhibit floor, I found out about some exciting updates that will surely make a splash in our industry in the coming months. RealFlow 2013 will be coming out at the end of the year; its Hybrido solver has been rewritten as a particle/volume solver, the advantage being more detailed simulations with less memory and a lower particle count. Krakatoa for Maya is also coming out at the end of the year. And Chaos Group, maker of the V-Ray renderer, has come out with its own fluid simulation software, called Phoenix, which looks very promising.

Click 3X, its interactive division ClickFire Media, and the recently launched C3X Live create engaging film, TV, web, and branded content. Click operates a full-service, 11,000-square-foot, state-of-the-art studio in Manhattan with 60 full-time staffers.

August 10, 2012
  SIGGRAPH 2012: DreamWorks Animation Open Sources Volume Data Format
Posted By Scott Singer
I sat down with Ken Museth of DreamWorks Animation to discuss their latest in a series of open source software development efforts: OpenVDB. OpenVDB is a set of core programming libraries that seeks to simplify the writing of programs that deal with volumetric CG elements. These include clouds and smoke, but also fire, water, and other spatial phenomena that are often difficult, as well as expensive, to implement.

The main advantage of the OpenVDB approach is efficiency: it minimizes not only the data's memory and disk footprint, but also the time it takes applications to access that data at run time.

It is a multi-resolution sparse grid description, which means that only the data necessary to describe an event needs to be stored. In a cloud, for instance, only the data representing the outside surface of the cloud needs to be stored at high resolution; the inside, where most of the cloud volume is uniform, only needs to be described once. This offers huge potential savings over more traditional octree data storage methods.
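A toy sketch of the sparse idea (far simpler than OpenVDB's actual hierarchical tree structure, which this does not attempt to reproduce): store explicitly only the voxels that differ from a uniform background value.

```python
class SparseGrid:
    """Toy sparse voxel grid: only 'interesting' voxels are stored explicitly;
    everything else returns a uniform background value (cf. a cloud interior)."""

    def __init__(self, background=0.0):
        self.background = background
        self.voxels = {}                      # (i, j, k) -> density

    def set(self, i, j, k, value):
        if value != self.background:
            self.voxels[(i, j, k)] = value
        else:
            self.voxels.pop((i, j, k), None)  # don't spend memory on background

    def get(self, i, j, k):
        return self.voxels.get((i, j, k), self.background)

cloud = SparseGrid(background=1.0)            # uniform interior density
cloud.set(0, 0, 0, 0.35)                      # one detailed surface voxel
print(len(cloud.voxels))                      # only 1 voxel actually stored
```

Reads anywhere in the uniform interior cost nothing in storage, which is the essence of the savings described above.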

Another unique aspect of DWA's release of OpenVDB is their close working relationship with Side Effects Software, which means that OpenVDB will be supported as a built-in, first-class type in Houdini and will soon be ready to use right out of the box. This is potentially a huge win for small and medium-sized VFX facilities that don't have the R&D resources to build the middleware necessary to leverage many open source contributions.

DreamWorks is very committed to being an ongoing leader in the open source software community. This latest offering sees them raising the bar even higher.

August 09, 2012
  SIGGRAPH: Blue Sky's Open Ocean System For 'Ice Age: Continental Drift'
Posted By Scott Singer
In Blue Sky Studios' presentation, part of the Wild Rides session at SIGGRAPH 2012, the team demonstrated several novel approaches to creating the open ocean environment in Ice Age: Continental Drift, for a sequence in which the characters sail a small iceberg across the sea. Some challenges were technological and others aesthetic, but all served the goal of turning what can be an arduous and expensive process - water animation - into a manageable package.

At the base of their solution is a library of wave animation based on mathematical descriptions of open-ocean wave forms. These were collected into look libraries that could be classified - for instance, calm or stormy - and easily called into the pre-vis artists' Maya scenes.

Once in Maya, these libraries could easily be "scouted" by the PreVis department for ideal locations, and appropriate animation layout paths could be derived and pushed out to the Animation department.
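To give a flavor of what a "mathematical description of wave forms" can mean, here is a generic sum-of-sines height function - a common textbook building block, not Blue Sky's system; the "calm" and "stormy" parameter sets are invented:

```python
import math

def ocean_height(x: float, t: float, waves) -> float:
    """Water surface height at position x and time t, as a sum of sinusoids.
    Each wave component is a tuple of (amplitude, wavelength, speed)."""
    h = 0.0
    for amp, wavelength, speed in waves:
        k = 2 * math.pi / wavelength          # wavenumber
        h += amp * math.sin(k * (x - speed * t))
    return h

CALM   = [(0.2, 12.0, 1.0)]                    # one small, slow swell
STORMY = [(1.5, 30.0, 4.0), (0.6, 7.0, 2.5)]   # big swell plus short chop

print(ocean_height(3.0, 0.0, CALM))
```

Classifying parameter sets like `CALM` and `STORMY` into named presets is essentially what a look library does: artists pick a mood, not coefficients.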

Even before handing the scene data off to animation, Blue Sky made sure that a lot of visual detail could be added to help set the tone of the shots early on. They included environmental elements - sky, sea foam, and hero wave splashes. These details allowed creative decisions and approvals to happen early in the process, when changes are easier to accommodate.

Hero shot specific elements, like the giant tsunami wave, could be
added into the ocean look library and handled as though they were any
other shot, thereby limiting the costs that usually accompany large
hero moments.

On a technical note, because the FX department used Houdini, they were able to leverage the point-based water look libraries and blend new library elements from existing ones - for instance, to smoothly transition from open ocean to calmer shoreline water. In addition, the encoding of all of that data allowed the FX department to derive visual details like foam and ripples on the water.

August 09, 2012
  SIGGRAPH: High Frame Rate Cinema
Posted By Scott Singer
At the SIGGRAPH 2012 presentation on High Frame Rate Cinema, an assembly of industry leaders including Douglas Trumbull and Dennis Muren, along with a very informative pre-recorded presentation by James Cameron, gave very convincing arguments for why using a higher frame rate is a better direction for film production than adopting higher image resolutions.

The main argument revolves around visual artifacts created by the industry-standard 24fps frame rate. This standard is itself a holdover from decisions made as cinema moved into the sound era: it was the slowest frame rate that could still support sound sync. Perceptual modeling of human vision by IBM suggests that a frame rate of 72fps is necessary for fully seamless playback. The 24fps limitation introduces flicker and strobing on edges during fast motions and camera movements, and these artifacts are particularly jarring in stereo. Going to a higher spatial resolution or larger frame size does nothing to alleviate the strobing; however, the strobing is noticeably reduced as the frame rate and shutter time increase. A frame rate of 48fps with a shutter of 270-360 degrees (digital cameras have no inherent shutter limitations) provided a reasonable reduction of the strobing artifacts.
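The frame rate/shutter relationship is simple arithmetic. A 360-degree shutter at 48fps yields the same per-frame exposure time as the film-standard 180 degrees at 24fps, so motion blur per frame is preserved while temporal sampling doubles:

```python
def shutter_time(fps: float, shutter_degrees: float) -> float:
    """Exposure time per frame for a given frame rate and shutter angle."""
    return (shutter_degrees / 360.0) / fps

# Film-style 180 degrees at 24fps vs a wide-open 360 degrees at 48fps:
print(shutter_time(24, 180))   # 1/48 s
print(shutter_time(48, 360))   # also 1/48 s - same blur, twice the samples
```

That equivalence is why the wide shutter angles mentioned above pair naturally with the higher frame rates.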

Douglas Trumbull also showed that footage captured at higher frame rates can be effectively downsampled to provide content for a standard 24fps 35mm projection, allowing studios to provide the same content to theaters of differing capabilities, albeit with the quality compromises inherent to those slower frame rates. And current second-generation digital cinema projectors can already support these higher frame rates with only a software upgrade.

August 08, 2012
  SIGGRAPH: Custom Is The New...Custom
Posted By John Parenteau

Booth 1

"So if you just hit the button on the right, it displays the graphic interface that allows you to manipulate the nodes on an individual basis in real time..."


Booth 2

"Our system is unique because we use less tracking markers and a larger volume that allows for a wider detectable range, and thus a smaller margin of error after processing..."

double ugh.

I set out on a journey at SIGGRAPH 2012, not to find out what's new in the world of 3D animation, but to find out what was undiscovered. I'm not knocking the technical innovations from the very talented companies on the floor. But I'm of the mind that everything out there is really just further development of the same technology we've had for a number of years. Sure, it's faster, more accurate, more flexible, easier to use, and more realistic. But is it new?

Decidedly not.

So I told myself, "Self, go walk the floor, look at every booth, and find three things out there that are truly new."

I only found one.

When I ran across the Shapeways booth, at first I thought it was just another 3D printing service. I'll be honest. The idea of 3D printing is pretty amazing. I still love it. But it's definitely a technology that is not accessible to little ol' me. I can buy a 2D printer for $50 at Staples. I can't buy a 3D printer for less than $8,000 or so. And then my thoughts run to those darn ink cartridges! I mean if they suck for 2D, how much would they suck for 3D? Do I have to shake the entire machine when the colors aren't right? Will I get 3D ink on my hands? Is that a whole other dimension harder to get off? 

But I digress.

Their front table was filled with a lot of little objects. Some in plastic, some in metal, all with some unique design that felt very bespoke. On the wall they had three signs: Create, Discover, Community. I was intrigued. The idea is that you can design your own object, pretty much anything from jewelry to a shoe to a child's toy, and send the model to Shapeways. They will print it in anything from plastic to silver, and send it back. All within a couple weeks. And the price is based on material alone. I asked for more explanation and was told that at a jewelry shop, if you asked for engraving inside the ring, they would charge you somewhere around $50 to add the engraving. But at Shapeways, the addition of an engraving decreases the cost (by about $5) because it slightly reduces the amount of material they have to use. Prices range from low to not too bad, such as a plastic ring running about $6, and a silver ring about $50. All within reason. 

And not all the designs were simple. Some of the works were latticework chains dangling freely and intricate children's toys made of tiny parts... very complex stuff that I was told was printed from a single model. Very impressive. But that much, any good 3D printer can do.

But what's more exciting is the world that Shapeways is creating. The Create pages give you not only a way to send in your own models; if you can't model, they provide tools that let even a novice create a design. In the Discover section, you can find amazing works created by the myriad artists who take part in the service. Under Community, you have the opportunity to chat with fellow designers, share concepts, and discuss and attend events. They have created a unique community of custom designers with a strangely familiar flair.

In the pre-industrial age, the only products you could get were custom. Everything was handmade, and made just for you. Those days went away in the industrial age, when mass production and cheap prices replaced the value of an item made to your specifications. Shapeways has brought back this sense of custom artistry by giving a potential world of designers an inexpensive and fast way to create whatever they can imagine. Better yet, the prices are aggressive enough that you could start the business you've always dreamt of, designing and selling those custom ceramic kitten paw holders, just like the pros!

Check them out. Pretty amazing.

John Parenteau is with Pixomondo. Check out the Website at:

August 08, 2012
  SIGGRAPH: Noisy Boy, You're My Hero
Posted By David Blumenfeld
Tuesday at SIGGRAPH started out as a normal day for me: a cup of coffee and a slow drive on the freeway. This is always my time to think about the plan for the day and figure out how I'm going to accomplish the effects du jour that the current project requires at work. Unfortunately, my autopilot mode took me down the wrong freeway, heading into work. It wasn't until I was halfway there that I realized I was supposed to be driving to the convention center, so after some mild cursing, I made an "immediate U-turn when possible" and finally arrived at the proper destination.

After chatting with a few friends, I decided to see the production session on the making of Real Steel with some of my old partners in crime from my days at Digital Domain. Upon entering the room, my attention was immediately drawn to the full-scale model, next to the presentation panel, of Noisy Boy, the purple and yellow Asian robot from the movie, built by the wizards at Legacy FX. I recalled seeing the full-size Atom puppet at Legacy a while back, when I was there for a project we were working on at Brickyard, and as always, the various practical rigs never cease to amaze. Unfortunately, a few too many people were more interested in taking a picture with the prop than sitting down for the presentation, but this was quickly overcome with some tempered humor by the moderator. The panel was chaired by veteran Michael Fink and featured Eric Nash, Ron Ames, John Rosengrant, Dan Taylor, and Swen Gillberg. I had the pleasure of working with Swen for a few years on Stealth, and it was nice to see him up there (bunny suit not included... if you're reading this, Swen, we will never forget!).

The presentation was well paced and pretty standard fare
for a making-of.  DD had the benefit of starting very
early on in the project, and they were able to get
everyone in production on board from the get-go, which
provided a higher level of collaboration than is the
norm on large FX films like this.  They spoke about their
use of the Simulcam system for the robot-on-robot fight
sequences, which allows real-time in-camera overlay of
pre-rendered animation (generally of motion capture or
previs quality) so the camera operator can frame
non-existent CG actors live, negating the need for
pretending or the old tennis-ball-on-a-stick trick.  This
virtual production technique received a lot of media
coverage during Avatar, though I recall using a similar
system on both Beowulf and Open Season back at Imageworks.
They also went into the robot design process, various
motion capture tracking techniques (volumes when possible,
optical when necessary), and their image-based lighting
methodology.
Unlike some of the other studios I discussed yesterday,
DD shoots their HDRs identically to how I do mine (3
angles, 7 exposures each).  They too took their stitched
environment balls and, using Nuke, projected them onto a
reconstructed 3D set model, and then, using V-Ray as
their renderer, mapped those textures back onto that
geometry to use for their lighting/reflection.  There are
a number of ways to accomplish this: some use third-party
software, others use techniques in comp packages like
Nuke, and yet others use spherical projection directly
in Maya.  As opposed to using a straight environment ball,
this method provides much more accurate placement of
lighting in relation to the surrounding environment
(such as in front of windows and other light sources) and
gives significantly better results.  One small tidbit of
interest that Eric brought up (and that I thought I would
share here) is the magic formula used for determining the
gravity scale factor when shooting miniatures (or
oversized items, as in this case).  The simple formula is:
interest that Eric brought up (that I thought I would
share here) is the magic formula used for determining the
gravity scale factor when shooting miniatures (or
oversized items as in this case).  The simple formula is:

Gravity Scale Factor = sqrt(Size of Performer / Size of Character)

To better explain this: when you shoot something that is
smaller than reality (for instance, a 1/10 scale airplane
or spaceship), its natural movement due to gravity will
seem far too fast because its mass is far less than
reality, yet gravity itself in the real world where it is
filmed does not change.  To compensate, you apply this
formula, which tells you how much to speed up or slow down
the film to achieve a more realistic rate of motion
(though other factors need to be taken into account
because, as I mentioned, the real gravity it was filmed in
remains the same regardless).  As applied to CG, this
means that at the slower rate, some motions will still
need to be sped up during animation, such as (in the case
of Real Steel) punches.
If you're wondering why this was an issue on this film,
given that the models were built to their proper nine-foot
scale, the reason is that motion capture was used as a
starting point for the animators, and that motion was
still performed by live-action humans, who are generally
not nine feet tall, regardless of whether they stand on
stilts for proper positioning and eyelines.  When the
presentation ended, everyone once again flocked to Noisy
Boy, so that was my cue to get out of Dodge.

After lunch, my next stop was the course on Character
Rigging and Creature Wrangling in Game, Feature Animation,
and Visual Effects Production.  As I've done my fair share
of the latter two, I was particularly interested in the
game portion, since I've never really worked on that
type of project.  I took a few tidbits from this,
including some which seem obvious but take on a different
meaning when actually presented to you.  The first was in
relation to the notion of "real time game rendering",
which again does exactly what it says.  What isn't obvious
about this is that the game system is actually not only
performing the rendering, but running animation on a joint
skeleton and using various logic operations to drive
these, all of which are being solved first before being
augmented with additional effects such as dynamics, and at
THAT point finally being rendered with a realtime shader.
On a standard 30 frame per second refresh game, this
means that each frame must be fully computed and rendered
in 33 milliseconds (16 milliseconds for a 60 frame per
second refresh) in order to update for fluid playback.
This of course is impressive in its own right, especially
when you consider the system is constantly taking user
input and running all of this through an AI (artificial
intelligence) engine, and optionally a physics solver with
cloth as well.  The presenter covered various schemes for
culling this data for speed optimization, such as skeletal
LODs (the same idea as a geometry level of detail, but
instead stopping certain joints from solving/animating
based on their distance to camera) for finger and facial
joint reduction, animation sampling reductions (running
keyframes on 2s, 3s, etc.), and lowered update rates for
the animation depending not only on distance, but on the
importance and placement of the character.  And of course,
there was the reminder that unlike in film and commercial
production, where most offscreen elements can largely be
ignored (aside from shadow casters and ray reflectors), in
a game the entire environment and all characters must be
completed, since in many cases the user playing the game
decides what will be visible.  It was definitely an
interesting talk, and to be honest, some of these
optimization tricks can be applied to non-game-specific
work, especially in the land of crowd work.
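To put some numbers on the frame-budget point, and to sketch the skeletal LOD idea in code (the distance thresholds and joint names below are my own invention, not anything from the talk):

```python
def frame_budget_ms(fps):
    """Milliseconds available to animate, simulate, and render one frame."""
    return 1000.0 / fps

print(round(frame_budget_ms(30), 1))  # 33.3 ms at a 30 fps refresh
print(round(frame_budget_ms(60), 1))  # 16.7 ms at 60 fps

def joints_to_solve(joints, distance, near=10.0, far=50.0):
    """Toy skeletal LOD: stop solving finger joints at mid distance,
    and facial joints as well once the character is far from camera."""
    if distance < near:
        return list(joints)
    if distance < far:
        return [j for j in joints if not j.startswith("finger")]
    return [j for j in joints
            if not (j.startswith("finger") or j.startswith("face"))]

rig = ["spine", "arm_L", "finger_L1", "face_jaw"]
print(joints_to_solve(rig, 5.0))    # full rig up close
print(joints_to_solve(rig, 100.0))  # only the large joints at distance
```

The same distance-based gating applies to sampling rate, which is why a background character can comfortably run on 2s or 3s.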

At this point, I took a break to peruse the expo floor
which opened today.  Honestly, I was a bit uninspired by
what I saw, as there didn't appear to be any new or
groundbreaking technology present.  Instead, there was
just more of the same thing from many shows past,
including the requisite motion capture booths, rapid
prototyping machines, new graphics cards, and seemingly
more show floor bookstores.  I also took this opportunity
to check out the emerging technologies area before heading
to the last talk for my day.

I arrived a few minutes after the 25th Anniversary Rhythm
and Hues presentation began, and I must say that what
followed was a well paced, fun talk by at least ten people
about the history of the company and various achievements
and techniques employed there.  Moderated by the legendary
Bill Kroyer, the panel discussed the 1999 merger with VIFX
as well as the company's multinational expansion from LA
into Vancouver, Mumbai, Hyderabad, and Kuala Lumpur,
bringing their total workforce to over 1,400 employees
worldwide.
I particularly enjoyed some of the older footage they
showed from the early days, and of course seeing some of
their recent work from Snow White and the Huntsman was a
nice contrast.  It was great to see Markus Kurtz
presenting, now Vice President of Production Technology;
I had the pleasure of working with him back on Stealth.
Well, that about covers today's fun.  As always, if you
have any thoughts on some of the topics I brought up,
please don't hesitate to drop me a comment or an email.
And now for some sleep...goodnight all!

David Blumenfeld is with Brickyard VFX. Check out their Website at:
Continue reading "SIGGRAPH: Noisy Boy, You're My Hero" »

August 08, 2012
  SIGGRAPH: Pixar's 'Brave' Work
Posted By Scott Singer
Pixar faced some unique challenges in bringing their Scottish river to
life in the film "Brave". Not only did they have to construct,
simulate, and render a convincing and beautiful mountain river with a
waterfall, they also had to have a girl and a huge bear interact with
it.  At their presentation during the "Wild Rides" talk at SIGGRAPH
2012, they gave some insight into how they approached this situation.

One main problem with simulating any body of freely flowing water is
that the simulation resolution (the fineness of detail) needed for a
convincing interaction with character animation is very expensive.  A
resolution coarse enough to capture the feel of the overall river set
wouldn't be nearly fine enough to capture the nuances of how the water
needs to flow around the characters, while the resolution needed to
capture the character interactions would be prohibitively expensive to
use at the larger scale. Pixar solved this with the tried-and-true
technique of divide and conquer.

How they approached this was ingenious.

They defined regions where they needed higher resolution, for instance
around the characters, and then set up what they called a "Windowed
Simulation" in each spot.  Essentially, it involves plopping a
finer-resolution simulation into the middle of the larger-scale
simulation and using the conditions of the larger to control the
smaller.  The boundary around the finer simulation gets data
from the surrounding simulation and interprets it as a series of
faucets and drains that maintain the flow of water into and out of the
region.  They did this using the PhysBAM physics engine.
They found, though, that they had to add some additional artistic
controls, like virtual hot-tub jets, to make all of the mathematical
accuracy look good. The result was a river that looked as though it
had been simulated at an impossibly high resolution, and with even
greater flexibility.
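As a thought experiment, here's a deliberately tiny 1D sketch of the windowing idea (entirely my own illustration; the real PhysBAM setup is vastly more sophisticated): the boundary cells of a fine window sample their values from the coarse simulation, playing the faucet/drain role, while the interior is left to its own solve.

```python
def sample_coarse(coarse, x):
    """Linearly interpolate a coarse 1D field at position x in [0, 1]."""
    fx = x * (len(coarse) - 1)
    i = min(int(fx), len(coarse) - 2)
    t = fx - i
    return coarse[i] * (1.0 - t) + coarse[i + 1] * t

def windowed_step(coarse, lo, hi, cells):
    """Fill a fine window spanning [lo, hi] of the coarse domain.
    Boundary cells take their values straight from the coarse sim
    (the faucet/drain role); interior cells here just relax toward
    their neighbours, standing in for the real fine-scale solve."""
    fine = [sample_coarse(coarse, lo + (hi - lo) * i / (cells - 1))
            for i in range(cells)]
    for i in range(1, cells - 1):
        fine[i] = 0.5 * fine[i] + 0.25 * (fine[i - 1] + fine[i + 1])
    return fine

river = [1.0, 1.2, 0.8, 1.0]           # coarse flow speeds for the whole river
window = windowed_step(river, 0.25, 0.5, 8)
print(len(window))                      # 8 fine cells inside the window
```

The key property, as described in the talk, is that the fine solve never needs to know about the whole river; the coarse data at its boundary keeps water flowing in and out consistently.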

Scott Singer can be reached at: 
August 07, 2012
  SIGGRAPH: A Game Of Inches
Posted By John Parenteau

I made it to SIGGRAPH this morning, eager to get to our recruiting booth and talk about how awesome we are! But as I made my way to registration, I hadn't taken into account one major problem: it would take me an hour to move 50 feet. I'm not talking about registration; that was actually efficient and well managed. What I'm referring to is the problem of having been in this business for 25 years. I had barely walked out of the parking lot before I ran into my Head of Production, and we began discussing the day ahead. A former artist appeared, stopping me to say hello. Another few steps, and a former instructor and friend from USC stopped to tell me that I should visit the new school. A few more, and I ran into a marketing director for a friendly competitor. I arrived at 9:30. By 10:30 I had walked 50 feet and given away five cards.

When I was at USC Film School (now called the School of Cinematic Arts), it was all about being at the school: how cool it was, all the facilities, the prestige. But by the end of film school you realize it's not the school or the classes that make it special. There are a lot of schools that offer the same education. It's not like USC invented film (well, maybe...). But when you leave, you realize it's all about the people. Connections are everything, and helping your friends, supporting and even nurturing each other, makes the difference in an industry that is very cutthroat and honestly impossible to navigate. But the "USC Mafia," as it's called, helps you manage it all, and if used right (meaning YOU support others as much as they support you), that community is the difference between success and failure.

SIGGRAPH has a bit of that mafia feel to it. I don't really attend as much these days to learn anything. My job has me focused on other areas, and I'm usually being Mr. Manager more than Mr. Artist. But what it does offer is an opportunity to meet old friends, make new ones, further develop relationships, and confirm that I'm still around, and that my friends are as well.

So the 50 feet is the reason I attend. Meeting the people I've spent a career working with, getting to know, and struggling alongside helps support our own type of mafia. It's these people, who I've worked with for nearly 25 years now, that I can turn to for a laugh, a job, or a shoulder. That's why I come to SIGGRAPH, and why I always will until I retire... and maybe a few years after.

John Parenteau is with Pixomondo. Check out the Website at:

August 07, 2012
  SIGGRAPH: Through The Eyes Of A Middle-Aged CTO
Posted By Robert Keske
Though SIGGRAPH has been held annually since 1974, this will be my first year attending the world-renowned computer graphics convention. No, I'm not some fresh-faced college student participating in this year's events as part of a first internship. In fact, I've been attending NAB and IBC conferences for well over 15 years.

Of those 15 years, I spent nearly a decade working with the brightest minds in visualization at a premier Finishing, Compositing, and 3D Software Development company, helping to build many of the technologies that have set the stage for today. After that, it was on to the largest independent creative studio on the East Coast (that's Nice Shoes), where I've led a team revamping and changing technologies, workflows, and toolsets, integrating just about everything (departments, machinery, people - anything that can be integrated, we've done it) along the way.

All of which is to say, I can't believe I'm about to attend my first SIGGRAPH! The conference's allure has always been the allure of the future. It presents emerging technologies at their best and most promising. This is why I'm attending the SIGGRAPH Emerging Technologies Conference, and why you should too.  

I've picked out a couple of exhibits that are "must see" for me. These exhibits are just a few of the many examples of how our everyday world and creative visualization are poised to merge even more closely.
- A Colloidal Display: Membrane Screen That Combines Transparency, BRDF, and 3D Volume
- Augmented Reflection of Reality
- ClaytricSurface: An Interactive Surface With Dynamic Softness Control Capability
- HDRchitecture: Real-Time 3D HDR Imaging for Extreme Dynamic Range
- Magic Pot: Interactive Metamorphosis of the Perceived Shape
- Tavola: Holographic User Experience

Having never attended SIGGRAPH until now, I can't wait.

Robert Keske is the CTO at Nice Shoes in NYC. The studio is a full service, artist-driven design, animation, visual effects and color grading facility specializing in high-end commercials, web content, film, TV and music videos.
August 07, 2012
  SIGGRAPH: Green Steve And Wondermoss
Posted By David Blumenfeld
It seems like SIGGRAPH 2010 was just here, and now I'm back two years later in my hometown for another round. While I was unable to attend yesterday, I showed up early to get the party started Monday morning. Attendance seemed to be down quite a bit from the last time I was here, though the three sessions I went to had long lines. I'm sure tomorrow will draw a larger turnout once the expo floor opens.

After getting the lay of the land once more and doing some back-and-forth meandering to get registered, I was off to the keynote speech in the West Hall. After a nice, warm, down-to-earth introduction by conference chair Rebecca Strzelec, awards were presented to various researchers by the ACM President and CEO. This was followed by a comical audience-participation exercise in self-help by author, game developer, and futurist Jane McGonigal.

From here, it was time to head over to the production session for "Assembling Marvel's The Avengers" in the South Hall.  Jeff White, VFX Supervisor for ILM, started off the talk, presenting a fantastic array of work on this effects-heavy film.  Topics included the creation of digital doubles, the Leviathan creature, building destruction, new suit damage and transformation techniques for Iron Man, their digital recreation of New York, and an in-depth look at the character and look development for Hulk.  One of the more interesting things I found during this talk dealt with their HDR acquisition for New York.

While they shoot their spheres much the same way as I do (they use a Canon 1D Mark III, while I still use a Mark II), what was impressive was the degree they went to in order to capture the entire area of the city their characters would be moving through.  They were able to send a team out to shoot environments this way every few hundred feet down city blocks, as well as up on cranes and on rooftops where possible.  If I recall, they shot something on the order of over two thousand environment balls.  Additionally, they acquired LIDAR scan data of the buildings and then, using GPS coordinates of the HDR images, were able to piece back the exact location of these environment spheres and project the photos onto the building geometry.  Combined with some scripted vehicle and prop placement tools (and hundreds of models for this purpose), along with clever building window reflection generation and office interior replacement (using ILM offices as a substitute), not only were they able to create a highly believable 3D city they could traverse, but also enough image-based lighting environment spheres to provide a relatively complete HDR lighting scheme for everywhere their characters needed to move through the city.  While of course this was augmented with traditional lighting, the sheer scope of this acquisition was very impressive.  I would've liked to find out how they rectified the fact that these HDRs were necessarily shot at different times and under different environmental lighting conditions, but I imagine it was simply a job for an artist to color correct the stitched maps to match more closely, and then be done with it.  Anyway, impressive nonetheless.

One humorous aspect of the Hulk portion focused on the on-set reference bodybuilders who were used.  The primary man, nicknamed Green Steve as he was decked out shirtless in green body paint, really got into the role, acting out shots to the best of his ability.  While these were definitely humorous outtakes, the reference he and the other stand-in provided was definitely useful, as it was clearly visible in some of the animation roughs as well as during lookdev and lighting tests.  They talked about various facial, hand, and dental casts they took of the actor Mark Ruffalo, as well as a full lightstage capture session.

Next up was Guy Williams, VFX Supervisor for Weta.  He spoke briefly about their HDR and LIDAR acquisition, also mentioning that they took multiple-exposure photographs of individual stage lights to use as texture-mapped area lights in their image-based lighting/spherical harmonics setup, providing the same high-range detail for reflectors and such that environmental HDRs give.  Of added interest to me was his mention of capturing their HDRs with six positions; I typically shoot three (using a Sigma 8mm).

I used to capture these on a panoramic head kit to ensure that the camera rotated around the lens's nodal point, but I finally stopped using it, as the extra inch or two of offset didn't give me any trouble stitching, and there was significantly less to paint out with the rig no longer present.  I did for a while try shooting four positions (every 90 degrees instead of every 120), but I didn't find that I had any better stitch results than with three (in other words, I get quality stitches almost all the time anyway).  I assume they're shooting every 60 degrees simply to provide more data for stitching, but with 7 stops, that's 42 pictures to my 21, and more opportunity for crew members who prefer to look directly into the camera and smile, rather than walk off set for a few seconds, to spoil the pictures.  I'm not sure what the specific advantage is, and I would've liked to find out, but time unfortunately became a concern and I was unable to ask.  Finally, Aaron Gilman spoke about the over 200 shots his 30 animators tackled, complete with personally filmed reference of himself and his team. Having an in-house motion capture stage is nice for this purpose as well.  In all, both studios did a truly remarkable job on the incredibly complex shots they were tasked with, and my kudos to all the artists who surely put in some long hours to achieve such high-quality results.
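The shot-count arithmetic above is simple enough to sketch, for anyone who wants to plug in their own bracket settings (a trivial illustration; the angle-spacing line just assumes positions are distributed evenly around 360 degrees):

```python
def hdr_capture_plan(positions, exposures):
    """Photos per environment ball, plus the rotation between angles,
    assuming the positions are spread evenly around 360 degrees."""
    return positions * exposures, 360.0 / positions

shots, spacing = hdr_capture_plan(3, 7)   # a three-angle, seven-stop workflow
print(shots, spacing)                     # 21 photos, 120 degrees apart

shots, spacing = hdr_capture_plan(6, 7)   # Weta's six-position capture
print(shots, spacing)                     # 42 photos, 60 degrees apart
```

Double the positions and you double the pictures per ball, which is exactly the on-set exposure (in both senses) trade-off discussed above.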

The final presentation I attended today was for Pixar's Brave.  They touched on the visual development the art department did for sets, characters, and props; the character development for facial work, posture, mannerisms, and style for a number of the movie's cast; and the cloth, hair and fur, and simulation dynamics setups and challenges.

Sets and environmental modeling and lookdev were discussed, as were color and lighting, both from a technical and a creative/artistic point of view. However, the one part I found of particular interest was their custom development of the moss, lichen, grass, and undergrowth system.  Rather than taking a guide curve/hair approach, a paint fx-style stroke/tube system, or a particle instancing method, what they came up with was an almost entirely render-time solution affectionately dubbed Wondermoss.  Starting from an underlying surface, whether an uneven ground plane, a rock, or a tree trunk, the system would quickly create an offset upper bound using some simple trig math (sine waves added to each other with some offset functions), and then create a subdivided cubic volume to encompass a minimal area of interest via a raymarching algorithm.  Utilizing these small volumes, shading densities could be interpolated to essentially fake self-shadowing and color darkening, as well as quick solves for pseudo ambient occlusion.  Semi-random patterns could then be applied along with predefined plant shapes, allowing for very fast rendering of this growth with little user input or tweaking required.  While the complexity of this system and the additional artist input necessary didn't get much attention in the talk, the end results were nothing short of phenomenal, and this shading-based system seems easily extensible to other types of detail fill in various CG sets, not just for plantlife.  I'm sure the fine folks at Pixar will get a great deal of mileage out of this development, and I for one was definitely impressed by it.
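Out of curiosity, here's roughly what I imagine the "offset sine waves" upper bound might look like in a few lines of Python (purely my own guess at the flavor of the trick; all the constants are made up, and Pixar's actual Wondermoss is a render-time shading system, not shown here):

```python
import math

def moss_height_bound(x, y, base=0.0, amplitude=0.05):
    """Pseudo-random undergrowth height above a base surface, built from
    summed sine waves with arbitrary frequency/phase offsets (all the
    constants below are invented), remapped into [base, base + amplitude]."""
    h = (math.sin(3.1 * x + 0.7) +
         math.sin(5.3 * y + 2.1) +
         math.sin(2.2 * (x + y)))        # h lies in [-3, 3]
    return base + amplitude * (h + 3.0) / 6.0

# A raymarcher would then sample density between the surface and this bound:
print(0.0 <= moss_height_bound(0.5, 0.5) <= 0.05)  # True
```

The appeal of bounding the volume this way is that the expensive marching only ever happens in a thin shell hugging the surface.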

At the very end of the talk, during the question and answer session, a comment was made that they were taking advantage of RenderMan's deep compositing feature.  For those who recall my blog from two years ago, I gave this technique some extra overview, as I was greatly intrigued by the benefits it provided Weta in their making of Avatar.  At the time, they had written their own solution for this into RenderMan and implemented the back end into Shake.  Anyway, it seems from today's comment that this functionality is now part of RenderMan Studio 3 (I'm assuming).  While we have RMS3 at our studio, I'm still using RMS2 at the moment, though shortly I will be making the switch, as well as using V-Ray to a greater degree.  I am definitely interested in doing some research to find out not only how to access this secondary output of depth metadata, but whether it can be read into both Nuke and Flame on the compositing side.  If anyone reading this knows offhand, please drop me a line or a comment.  When I get a chance to look this up (hopefully in the next few days), I'll add an update to that blog.

For those who are unsure, deep compositing allows the artist to output a pass containing floating-point z-depth data as a per-pixel sample table, meaning not only is the fidelity of the data far greater than that of a traditional z-depth render pass, but any given element can automatically be composited properly without the need for holdout matte creation, and without the edge blending issues inherent in z-depth passes.  Even when custom z-depth compositing nodes are created to handle transparency, motion blur, and anti-aliasing (as we did during my time on Beowulf), the ability to re-render an element and update it in the comp without creating holdouts is a huge timesaver, and one I would love to take advantage of directly out of the box.
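To make the holdout-free idea concrete, here's a toy sketch of a deep merge for a single pixel (entirely my own illustration; real deep data, such as OpenEXR 2 deep images, stores much more per sample): each element contributes a list of (depth, premultiplied RGBA) samples, and any number of elements can be combined by simply sorting on depth.

```python
def over(front, back):
    """Standard premultiplied 'over' for (r, g, b, a) tuples."""
    k = 1.0 - front[3]
    return tuple(f + b * k for f, b in zip(front, back))

def deep_merge(*elements):
    """Merge deep samples from several elements for one pixel, then
    flatten front to back; re-rendering one element just swaps its
    sample list in, with no holdout mattes needed."""
    samples = sorted((s for element in elements for s in element),
                     key=lambda s: s[0])          # nearest depth first
    out = (0.0, 0.0, 0.0, 0.0)
    for _depth, rgba in samples:
        out = over(out, rgba)                     # accumulate over what's behind
    return out

robot = [(5.0, (0.0, 0.0, 0.8, 0.8))]  # mostly opaque blue at depth 5
smoke = [(3.0, (0.2, 0.2, 0.2, 0.4))]  # semi-transparent grey in front
print(tuple(round(c, 2) for c in deep_merge(robot, smoke)))  # (0.2, 0.2, 0.68, 0.88)
```

The point of the sketch: neither element needed to know about the other when it was rendered; depth ordering alone resolves the occlusion, which is exactly why re-renders drop straight into the comp.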

Well that pretty much sums up my first day at the conference.  I ran into a few old friends and garnered a few tidbits worth looking into, so all in all, it was a nice start.  I'm looking forward to tomorrow, though I'll have to pick and choose as a number of talks I want to attend are happening concurrently, so we'll see which one wins out depending on my mood.  Here's hoping you enjoyed these random thoughts, and as always, drop me a line if you have any of your own to add!  Goodnight all.

David Blumenfeld is with Brickyard VFX. Check out their Website at:
August 06, 2012
  SIGGRAPH Is Starting.... And I Don't Have A Thing To Wear!
Posted By John Parenteau

Damn. I already blew it and the convention hasn't even officially opened! I was supposed to attend my own company's presentation, this one on Hugo with the Pixomondo and New Deal guys, but as usual technology failed me, and despite the fact that I'm ALL set to be at the convention center NEXT Monday morning, I doubt the crowd will wait for me.

Alas, I hope the rest of the week goes better. I'm excited about this convention. Last year I attended SIGGRAPH in Vancouver, and despite the fact that everybody had funny accents and a strange affinity for hockey (even in the summer), the show was good. Small, but good. It's usually that way. The alternating philosophy of the organizers, namely an event outside Los Angeles followed the next year by one back in LA, makes a lot of sense. Not everybody wants to come to Hollywood, despite what it appears, and it isn't fair for us to hog all the nerdy stuff. But needless to say, the Los Angeles years are usually huge! I intend to stand creepily in front of A LOT of booths this year, pretending to pay attention to some presentation on subsurface blah, blah, blah... while secretly catching glimpses of the aspiring actress in a motion capture suit.

I'm curious to see what the trend is. I teach a class at Gnomon, and make a pretty big deal of the fact that once we got to the King Kong era (Watts, not Wray), the industry had largely invented... well, everything. At that point we could do anything. Water, fur, explosions, annoying animals. All photo real. Sure, we've gotten even more real-er, as computers get faster, programmers' heads expand and technology continues to grow exponentially like the hair on that peach I left in the pantry. But nothing really NEW was left to actually invent. So each year at SIGGRAPH I hope for that "wow" moment. That revelation that makes me feel like there's something amazing still out there. Heck, if everybody is right, we'll have photo real game engines in 10 years! If you can do this stuff in real time on your XBox at home, what is there left to do?

But that's the wonder of this show. There's always a new frontier. Something great to discover, some toy still unannounced. I attend for that reason, with eager anticipation, no matter what city they hide it in.

Oh, and for the t-shirts. Gotta get some t-shirts!

John Parenteau is with Pixomondo. Check out the Website at:

August 02, 2012
  The New Talkies
Posted By Bee Ottinger
Six years after Time magazine named "You"(Tube) Person of the Year, communicating with compelling video content is simpler than ever. With little more than a smartphone, anyone can become a video content creator - or a critic. Viral video stars can become famous overnight, and almost anyone can become one. As The New York Times noted last year, venerable film houses such as United Artists have given way to maverick video production companies like Maker Studios that distribute online. More video footage is uploaded to YouTube in just one month than the three major US broadcast networks produced in 60 years.


Now what? 

Yes, rabid consumption and the rapid production needed to feed it create online sensations providing instant, diverting video entertainment. These novelties rub elbows with material from established brands and legacy sources. But the big business picture extends far beyond these small screens: video now infuses every aspect of personal communication. We use video to talk about video, and share video to communicate opinions or express emotions. Video works as a communication shortcut because its subtext usually contains a common cultural and/or personal resonance.

Against this backdrop, with endless hours of produced material already easily available, the real power lies in how we package all this content. Its maximum cultural, emotional and financial impact - for marketers and those who rely on monetizing content - will come from the platforms that offer the most personalized filter to help us give this material real context and meaning.   

Social media tools applied to content are already driving this trend. Think about all the memes that get shared and re-shared across social networks. A Facebook commenter linking to Taxi Driver's famous "You talking to me?" scene. A spurned lover tweeting a link to Adele's "Someone Like You." The myriad connections made through "The tribe has spoken," "You've been chopped!!" or "Make it work." We the consumers are repurposing existing content and making it new every day, using today's hit song, hot reality show or yesterday's classic film to act as the Greek chorus for the emotional currency of our lives. My own company, SnapCuts, gives users greater access to such content to more fully express themselves. In making a snapcut, you use clips to tell someone you're sorry or thank them for their help. You can let a friend know that you're mad or that you forgive them.

A creative medium that embraced "remix culture" early on is pop music, which often snaps together pre-existing content snippets (we know them as "samples"). Look at the democratic and mainstream success of an artist like Girl Talk, aka former Pittsburgh-based biomedical engineer (!) Greg Gillis. His popularity is based not so much on what we think of as traditional musicianship. Rather, Gillis' ear is so attuned to the current pop music landscape, as well as its historical landscape (i.e., its legacy content), that through the interconnection of fair-use snippets, he is creating a sound the likes of which we've never heard before.

One place where it's easy to find short bits of legacy content is in the advertisements and commercials of yesteryear. Like the MTV music videos I edited in my former career, this is primarily promotional content, but it's also content focused on packing in the most entertainment minute-for-minute.  The popularity of television in the last century has made this content strangely powerful by spawning many shared cultural references. Those who believe in an art with no commercial component may take issue with this point, but think of how evocative a great commercial can be; it's often the main attraction of that undeniably American event, the Super Bowl. "A Diamond Is Forever." "Where's the Beef?" "Just Do It." "Think Different."

Not only does the best of this content evoke emotional reactions, as it was intended to do, but using it also offers an easy way to enhance communication, and to create new forms of self expression. Video communi-makers win with nearly unlimited access to popular content and the rights holders of these works win by creating a new avenue of promotion. It's a mutually beneficial relationship that not only maximizes creativity, but new profit potential for archival content that would otherwise be forgotten.

Technology has quite literally enabled us to re-form our language through video. The evolution of this new communication medium has its own grammatical rules and syntax, well understood by anyone who has grown up watching TV, going to the movies, or surfing the Internet. It is precisely because of this emerging relationship between video as art and video as 'alphabet' that I believe it's not content itself that will be king in the next decade. Instead, the most creative, attractive, and engaging methods of packaging and mixing content will ultimately be crowned.