
Recent Blog Posts in July 2013

July 30, 2013
  SIGGRAPH 2013: The Spark CG Society
Posted By Scott Singer
I sat down with Larry Bafia, Sly Provencher, and Dennis Hoffman of Spark Animation and Spark FX to find out more about their organization and their connections with SIGGRAPH. The Spark CG Society is a group from Vancouver dedicated to being a nucleus of community building in the Vancouver animation and VFX arena.

Among the many events they sponsor are two important conferences - Spark Animation and Spark FX - which bring people together from schools and production companies in the Vancouver area for several days of talks by industry professionals, artists, and educators. The talks and presentations are first and foremost educational - sometimes discussing specific techniques and concepts, but also bringing in industry luminaries to share their thoughts on the creative process.

They have recently entered into an official three-year partnership with the SIGGRAPH organization to cooperate on conference organization. In fact, the Spark CG Society grew out of the Vancouver SIGGRAPH chapter, recognized as the most active chapter in the world. They were instrumental in bringing SIGGRAPH to Vancouver in 2011 - the first time the conference was held outside of the US. This was so successful that SIGGRAPH is going back to Vancouver in 2014.

This year's conference, Sept. 11-15, will feature lectures, presentations, and screenings, along with an "Anijam" in which the audience will be asked to actively contribute to the making of an animation during a session. The theme is "Story and Storytelling," and there will be a special screening of the Disney animation "Get a Horse," which was started by Walt Disney over 80 years ago but only recently taken back up and finished by current Disney animators. It made its debut at Annecy earlier this year, and this will be only its second public screening.

Spark is committed to making their conferences affordable, and rather than selling a single non-transferable pass, they sell a set of tickets to each event. Attendees unable to see a particular session are actively encouraged to give those tickets to colleagues who can.

During the year they sponsor numerous other events, including guest lectures, educational events, and special screenings that try to keep the history of animation and cinema alive. They have a commitment to keeping the Vancouver animation, game, and VFX industry an informed and coherent community.

July 30, 2013
  SIGGRAPH 2013: The effects omelet
Posted By Scott Singer
The "Effects Omelet" presentation at SIGGRAPH is always a great source for inspired creativity on the ground by VFX artists and TDs.  David Lipton, Head of Effects at Dreamworks Animation, gave a particularly interesting talk about he achieved the Jack Frost frost effect in DWA's "Rise of the Guardians".

It was an interesting use of old-school approaches to get more controllable, artistic results. The frost needed to be a highly stylized, very art-directable, and expressive effect, in which Jack's staff would freeze objects by propagating elegant, icy arabesques that skated across surfaces, covering them in stylized frost patterns.

Lipton said that they were helped immensely by the copious notes, reference images and concept art prepared by the Art Department.  This gave him and his team a very distinct target to aim for, and helped to narrow the problem at hand.

The first approaches were simulation based, but proved hard to control, especially because the effect itself needed to be an expressive actor in the film, with its performance often directing the eye through key story moments. The winning approach was to look far back into the history of computer graphics to an old standby: cellular automata. These are systems in which the cells of a grid, like pieces on a checkerboard, follow simple rules that determine how each cell becomes filled by its neighbors. In this case the rules determined how ice would grow from square to square as time progressed. The speed at which the squares were filled defined paths, like roadways, along which the delicate and stylized crystal patterns would be constructed. Because the automata exist on a grid, the rules could be "painted" in like pixels in a digital photo, providing a high degree of control. The end result was a controllable yet very organic-looking crystal propagation that added a sense of magic and expressiveness to the scenes.
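To make the idea concrete, here is a minimal sketch of that kind of painted cellular-automaton growth. This is not DWA's actual tool - the neighbor rule, the painted speed map, and all of the names here are assumptions for illustration - but it shows how per-cell arrival times on a grid can define the "roadways" along which detailed crystal geometry would later be instanced.

    # Minimal cellular-automaton "frost growth" sketch (illustrative only,
    # not DWA's production tool). An artist-painted speed map controls how
    # fast ice spreads from cell to cell; the recorded arrival times trace
    # the paths along which crystal geometry could later be built.
    import numpy as np

    def grow_frost(speed_map, seeds, steps):
        """speed_map: 2D array in [0,1], painted by an artist.
        seeds: (row, col) cells where the staff touches the surface."""
        h, w = speed_map.shape
        progress = np.zeros((h, w))          # how "filled" each cell is
        arrival = np.full((h, w), -1.0)      # frame at which each cell froze
        for r, c in seeds:
            progress[r, c] = 1.0
            arrival[r, c] = 0.0

        for t in range(1, steps + 1):
            frozen = progress >= 1.0
            # a cell grows if any of its 4 neighbors is already frozen
            neighbor = np.zeros_like(frozen)
            neighbor[1:, :] |= frozen[:-1, :]
            neighbor[:-1, :] |= frozen[1:, :]
            neighbor[:, 1:] |= frozen[:, :-1]
            neighbor[:, :-1] |= frozen[:, 1:]
            growing = neighbor & ~frozen
            progress[growing] += speed_map[growing]   # painted speed = control
            newly = (progress >= 1.0) & (arrival < 0)
            arrival[newly] = t
        return arrival   # -1 = never frozen; otherwise the "roadway" timing

    # Example: faster growth painted along a diagonal band
    speed = 0.1 + 0.9 * np.fromfunction(lambda y, x: abs(x - y) < 8, (64, 64))
    times = grow_frost(speed, seeds=[(32, 0)], steps=200)

Painting higher values into the speed map steers the growth exactly the way the talk described, without re-running a physical simulation.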

July 30, 2013
  SIGGRAPH 2013: A slice of Monster Pi
Posted By David Blumenfeld
My third and final day of SIGGRAPH came and went. It was a pretty nice show this year, although as I mentioned before, it was smaller and definitely less energetic and enthusiastic overall. I'm sure this is a combination of the economic downturn in this business and the location. I think it would be fair to say there also wasn't a ton in the way of new innovation being showcased. Almost every year, there's at least one major theme (evidenced by the booths on the expo floor as well as the talks) in which some new development or tech topic sticks out, but this year really seemed to be more of the same, with not much new. I attended three talks during the day: the first was the production session for Life of Pi, the second the State of the VFX Industry discussion, and the final one covered the making of Monsters U and The Blue Umbrella. All were interesting, and I'll get into that shortly, but there was one topic I thought I would mention first.

I've been in the visual effects business since the mid-1990s, and while I've done my share of various types of work, it would be fair to say I spent most of the first decade working on feature films and shorts (with the occasional ride installation, music video, and special-venue project thrown in). Aside from around six commercials during that time, most of my work focused on large-scale, longer-term productions. However, for the last six years, I have spent the bulk of my time engaged almost solely in television commercial visual effects, racking up easily over one hundred spots during that time. As most everyone knows, commercial post production today is a different beast than it was a decade ago. Every spot is made at a minimum of full HD resolution, with larger formats on occasion, and the effects must be of the same caliber as those in feature films. Of course, as schedules and budgets shrink, this work must usually be accomplished from start to finish in a few weeks at most, using crews from small to minuscule. This is what draws me to this work, and makes it fun and exciting. While there is often a sacrifice on the R&D portion of the process due to time constraints, the ability to quickly figure out a solution, implement it, and create a number of shots in that short period is challenging and extremely rewarding. The sheer volume of advertising that makes use of this type of work is increasing all the time, and because of this, there are a number of studios out there, both large and small, whose work focuses entirely on commercial post production.

Additionally, because the hardware, software, and skill set used to produce this work are now identical to those of feature VFX, many artists cross over between the two types. The reason I bring this up is that, as in the past, commercial visual effects is almost entirely absent from SIGGRAPH in all forms. There are no production talks on the making of these spots, there are no screenings (that I am aware of) of the top spots of the past year, and there is not even any mention of this sector of the workforce in the State of the VFX Industry talks. I find it a great disservice to this large portion of practitioners and their work that it has been left out of the show, and I hope to see it added in the near future.

As I watch my peers discuss their techniques on the films they are presenting, I see things that I fully understand and agree with, but have to forgo and "fake" in order to make the deadlines. I am often able to match the quality of the result, and it is these types of techniques that I feel should be shared with the VFX community during these talks, as I think there is merit and benefit for those in attendance. One reason commercial VFX houses continue to succeed (those that do, of course) is that they have found a way to keep their overhead low and the effects budgets in check by performing in a highly efficient manner. This is an area some of the larger-scale film VFX studios could learn from so they can remain in business and operate at a more profitable margin - something that has been of great concern recently, and one of the many factors forcing these companies out of business or to look for financial relief elsewhere.

Having worked at many of these larger studios in the past, I have witnessed firsthand a large amount of waste in the form of inefficient workflows, including too many layers of vertical stratification, as well as development that, dare I say it, makes for a better SIGGRAPH paper than it does a necessary step in the VFX production process. Anyway, in summary, let's try and push for some commercial visual effects representation at the show; it is a large part of the community and would provide a greater amount of knowledge for the attending crowd.

For my first talk of the day, How To Bake A Pi, a panel of five supervisors from Rhythm & Hues spoke about some of the technical challenges and processes on the production, including the creation of photoreal animals, digital oceans and skies, artistic imagery (not just real but pretty), and working in stereo 3D. Highlights included the problems they solved during production by constructing a practical in-ground 70x30x3-meter water tank, complete with wave generators and special concrete "tetrapods" used to counter the waves reflecting off the walls (the so-called bathtub effect), allowing an open body of water to be simulated practically with greater realism. Houdini, Naiad, and other custom tools were used to realize the CG ocean extensions. While not directly mentioned in the talk, I assume the underlying deformations of the water are based on the Tessendorf algorithms. I have done some pipeline development at Brickyard using these algorithms and tools such as the Houdini Ocean Toolkit (ported to Maya) for this purpose, with great results. For anyone looking to do open-ocean simulation, this is a great place to start, and a simple web search will get you to some source code and precompiled tools you can use on your own for development purposes (a bare-bones version of the approach is sketched below). Back to the presentation: they further discussed the FX animation for whitecaps, mist, foam, churn, spray, and other water interaction, as well as splashes, which were added as separate elements.
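For anyone who does want to experiment, here is a bare-bones sketch of the Tessendorf-style FFT approach those tools are built on: a Phillips spectrum, animated with the deep-water dispersion relation and inverse-FFT'd into a heightfield each frame. The grid size, wind, and amplitude constants are arbitrary illustration values, not anything taken from the film or from the Houdini Ocean Toolkit.

    # Stripped-down Tessendorf-style FFT ocean heightfield (illustrative
    # sketch; resolution, wind, and constants are arbitrary, not production
    # values).
    import numpy as np

    N, L = 128, 100.0            # grid resolution and patch size in meters
    g, A = 9.81, 3e-7            # gravity, overall wave amplitude scale
    wind = np.array([12.0, 0.0]) # wind velocity (m/s)

    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
    kx, ky = np.meshgrid(k, k)
    kmag = np.sqrt(kx**2 + ky**2)
    kmag[0, 0] = 1e-6            # avoid divide-by-zero at the DC term

    def phillips(kx, ky, kmag):
        """Phillips spectrum: energy of each wave vector given the wind."""
        Lw = np.dot(wind, wind) / g                  # largest wave from wind speed
        k_dot_w = (kx*wind[0] + ky*wind[1]) / (kmag*np.linalg.norm(wind))
        return A*np.exp(-1.0/(kmag*Lw)**2) / kmag**4 * k_dot_w**2

    rng = np.random.default_rng(0)
    xi = (rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))) / np.sqrt(2)
    h0 = xi*np.sqrt(phillips(kx, ky, kmag)/2)        # initial spectrum
    omega = np.sqrt(g*kmag)                          # deep-water dispersion

    def height(t):
        """Heightfield at time t: animate the spectrum, inverse FFT to space."""
        # index reversal approximates the conjugate term h0(-k)
        hk = h0*np.exp(1j*omega*t) + np.conj(h0[::-1, ::-1])*np.exp(-1j*omega*t)
        return np.real(np.fft.ifft2(hk))*N*N

    z = height(1.0)   # e.g. displace a 128x128 grid of points by z

Displacing a polygon grid by that heightfield each frame gives the familiar rolling open-ocean surface; choppiness, whitecaps, and foam come from additional displacement and derivative passes I've left out for brevity.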

Moving on to the discussion of their characters, they talked about muscle and skin development and bone simulations, as well as the hair and fur systems for the animals. Of note, the tiger had approximately ten million hairs, while the zebra was in the twenty-million range. I was a bit surprised by this amount, thinking it would be higher, as when I have had to do animal mug replacement, I tend to use approximately two million hairs for the front of the face alone. They talked about some of the advances they made in rendering this fur, such as intelligent raytracing and importance sampling, combined with partial hair transparency, ray occlusion, and the use of subsurface scattering on the hairs themselves. They spoke briefly about some of the crowd work as well, for shots of flying fish, meerkats, and so on. At the end, they presented a few metrics, the most interesting being a total render time of 1,633 years (if run on a single processor) and a peak disk usage of 260 terabytes. In all, the talk was well done and informative, and the work they accomplished was beautiful. I hope the talented team there is able to emerge from their financial woes and put all the artisans who created this back to work soon.
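For a sense of scale, here is a quick back-of-the-envelope conversion of that 1,633-year render figure. The farm size below is purely hypothetical; R&H's actual core count wasn't given in the talk.

    # Back-of-the-envelope conversion of the quoted render time (the farm
    # size is a made-up example; the actual core count was not given).
    single_cpu_years = 1633
    core_hours = single_cpu_years * 365.25 * 24       # ~14.3 million core-hours
    farm_cores = 3000                                 # hypothetical farm
    wall_clock_days = core_hours / farm_cores / 24    # ~200 days of nonstop rendering
    print(round(core_hours), round(wall_clock_days))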

From here, I took my last walk around the expo floor, checking out a few technologies that I missed over the last few days, grabbed a bite to eat, saw a few more old friends, and then headed to the State Of The VFX Industry talk.  The talk itself started off with some good history of the industry and such, and then spoke to some of the major problems facing the business at this time.  I won't get into this too much as it can be a very polarizing subject and clearly open to interpretation, but I still encourage anyone working in this field to visit some of the websites they listed for more info, including:
    www.vfxsoldier.com
    www.vfxunion.info
    www.vfxtownhall.org
    www.fxguide.com
    www.effectscorner.com
    www.vfxsolidarity.org
    www.visualeffectssociety.org

I would encourage everyone to become educated on the subject matter so that any future discussion or participation can begin from an informed standpoint.

The last talk of the day was in the same location and covered various pieces of Pixar technology, including some areas of Monsters U and The Blue Umbrella short. They spoke about the character rigging, crowd simulation and pipeline, vegetation such as trees, hedges, and grass population and simulation, and how this differed from that in Brave, as well as the rain and the lighting/compositing pipeline for The Blue Umbrella. The work was well done, though I honestly didn't find any of it groundbreaking or particularly different from before. One thing I did take away, which I would like to look into and encourage the reader to do the same, is the ngPlant open-source library, presumably a set of base code they took advantage of for their tree development. As I may have mentioned before, I did some looking into SpeedTree (a commercially available solution), which may be on my short list of upcoming software when I need it for a job. I don't know if it uses a similar code base or not, but I will definitely check out this other library as well to see if I can glean any techniques or useful processes out of it.

In summary, I enjoyed the show and learned a couple new things, as well as found a few areas I will be researching more in-depth in the coming months. 

I found it interesting that the show seemed to be attended by a significant number of people who didn't appear (at least at first glance) to actually be working in this field, at least in my opinion (one can never really know). As mentioned above, I would really like to see more real-world examples of VFX production discussed in areas besides large-scale film production and university or tech-industry research, as there is definitely room for a greatly expanded set of classes, talks, and other show events that would be of interest to many in attendance. I likely won't be going to next year's show in Vancouver, but hopefully I'll return in two years if it's local to Los Angeles again. It would be great to see it bigger and better than before, and I hope to see you there as well!

July 30, 2013
  SIGGRAPH 2013: Printers, Trees, and Raytracing, Oh My!
Posted By David Blumenfeld
Hello again, loyal reader! Today was a fun day at the show, albeit more freeform. I wasn't able to arrive in time for the Iron Man 3 production session, so I decided to spend the day doing some more in-depth research on the items and ideas on the expo floor. This also gave me a good chance to check out some of the parts I tend to explore less, such as the Emerging Technologies, Studio, and Art Gallery sections. As always, I ran into a number of old friends and coworkers, and had a chance to catch up with them for a while. In many ways, this is one of the most enjoyable parts of the show. It's funny how time flies; there were a large number of people I hadn't seen in nearly a decade! Along with some old colleagues, a few former students of mine came up to me as well. Nearly all of them are actively employed, most at the same company for quite a number of years, so that right there is a great thing. We talked about what we were up to and the like, but a few of them sincerely thanked me for the help I gave them, and the fact that my piddly few classes actually made a difference in their lives and careers. I only taught for a few years, back in 2002-2004 or so, but to hear something like that really made me feel good, knowing I made a positive impact on some people and that the time I spent doing that actually meant something to someone. If the world had more people who actually cared about seeing others succeed, it would definitely be a better place. Anyway, enough about that.

I had a chance to check out a number of different things on the floor, and while there were some cool booths for Nvidia, Intel, and Epson, as well as some of the major software vendors (notably absent was Autodesk), it was some of the smaller setups I ended up gravitating towards. As I mentioned in yesterday's column, there was a booth for a piece of software called Flux, made by FXGear. They have a few other products, including a hair and cloth simulator, but the fluid sim is what interested me the most. While very Naiad-like, the performance looks to be faster than both that and RealFlow, and most importantly, the multiprocessing it provides actually scales exponentially with the number of cores in your machine. The person I spoke with also indicated I could run the simulation across my farm as well, so we have received a demo of it and are in the process of setting it up. I am excited to try this out; if it performs as indicated, the price-to-performance ratio seems like it might be a winner... time will tell once I get a chance to play with it. I also spent a fair amount of time at the SpeedTree booth.

The last time I looked into this, it was Windows-only, with the version that had the feature set I desired at the five-thousand-dollar price point. This time around, there is a Linux version available, and in the new version 7, which is supposed to come out of beta in a few weeks, quad export will be standard in the midline version at one thousand dollars. While this is node-locked as opposed to floating, I don't see that being a problem on a dedicated machine, especially since there is really no benefit to using the farm to process anything anyway. And if it were needed for more artists at one time, you could still pick up four more seats before reaching the price of a single floating license. I'm hoping this package might fit into my pipeline soon, and I intend to write about that as well once I have some feedback. I also looked at more of the 3D printing technology and services, as well as a cloud rendering solution and a non-GPU-based, real-time raytraced viewport for Maya and Max (sadly it was Windows-only, so I won't be trying it out until they port it to Linux).

Another interesting thing happened while I was in the Emerging Technologies area.  There was a studio class going on which I opted to watch for a little bit.  While sitting there, I looked around a bit and out of the corner of my eye saw a roving object.  I doubt this is the first time this thing was there, and the technology may not be cutting edge as I know there are similar assistive devices out there as well as similar equipment for defusing bombs and such, but what I saw was basically a thin vacuum cleaner base with motorized wheels, two vertical five foot poles, and a monitor, video camera, speaker, and microphone mounted on top.  In the monitor, you could see the image of the driver who was operating this device from a remote location.  It was clear they were moving this around, talking to people, watching things going on, etc.  It occurred to me that this device, albeit primitive, is the beginning of a Surrogates-esque avatar, and in the future if our species opts to embrace this type of virtual interaction, I can look back and remember this primitive device of the early 21st century and laugh.  Oh, the humanity!

In conclusion, today was about catching up with people, seeing some new things on the expo floor, and taking in the atmosphere of the show.  Tomorrow I intend to do a few production presentations and talks, so I should have more to report on.


July 25, 2013
  #Siggraph2013 Emerging Technology
Posted By Scott Singer
Girish Balakrishnan, a master's candidate from Drexel University, was demonstrating his performance-capture camera rig, made entirely of commodity consumer components. It's centered around an iPad and attached PlayStation 3 controllers that provide the rig's spatial tracking as well as the user-interface components.

The virtual world the camera operator navigates is provided as a Unity game-engine scene running on the iPad. As the operator moves through space, the iPad displays that motion through a virtual camera in the game scene - like Avatar on a beer budget. The iPad integrates data from the PlayStation controllers with its own, storing it as a file that can be imported into Maya or MotionBuilder.
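As a rough illustration of that last step, here is what the import might look like on the Maya side. The CSV layout, file path, and camera name are all hypothetical placeholders; Balakrishnan's actual file format wasn't described in detail.

    # Hypothetical sketch of the import step: turning a per-frame camera log
    # (frame, tx, ty, tz, rx, ry, rz -- an assumed CSV layout, not the rig's
    # actual file format) into keyframes on a Maya camera. Run inside Maya.
    import csv
    import maya.cmds as cmds

    def import_virtual_camera(path, camera='captureCam'):
        if not cmds.objExists(camera):
            camera = cmds.camera(name=camera)[0]   # returns [transform, shape]
        with open(path) as f:
            for row in csv.DictReader(f):
                frame = float(row['frame'])
                for attr in ('tx', 'ty', 'tz', 'rx', 'ry', 'rz'):
                    cmds.setKeyframe(camera, attribute=attr,
                                     time=frame, value=float(row[attr]))
        return camera

    # import_virtual_camera('/tmp/ipad_capture_take01.csv')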

Balakrishnan has been interested in performance capture for years and feels that the current crop of tools leaves users too tethered to the mouse and keyboard. He wants to change that using tablets, commodity cameras, and game technology. His enthusiasm for the project might just make it a reality.

In its current configuration it can serve as a low-budget indie game production tool or a very inexpensive previs tool for independent film and video production. Girish is looking into how to incorporate new HD cameras like the Blackmagic to build a more robust camera performance-capture system that could expand the creative palette of independent filmmakers. Performance in the venue was hampered by the huge amount of wireless interference in the Emerging Technologies hall, but it would be interesting to see how it performs in its intended environment - the mocap greenscreen stage in your garage.

July 24, 2013
  SIGGRAPH 2013: Day 1 - Oz, Man of Steel, 3D printing & more
Posted By David Blumenfeld
Hello again! Another year, and another SIGGRAPH is here. This is my first time in the Anaheim Convention Center, which is somewhat exciting, as I've always wondered about the inside of the Arena building since I was a child coming to Disneyland long ago.

The venue is quite nice, a bit smaller than LA, but clean and easier to navigate. I was finally able to make it here, as work obligations took priority at the beginning. After a two-hour, heavy-traffic drive from home, I arrived to begin my day.

My first visit was to the production session for "Oz the Great and Powerful," presented by some of my ex-colleagues from Sony. The main topics they covered were the digital environments, VFX, animation, and character work. The presentation was well done, and they had some nice shots from on-set in Michigan, which definitely helped the talk. Some of the more interesting topics included shots in which the entire set, including the ground contact, was replaced (which seems more common these days), and the fact that these plates had to subsequently be scaled down in frame to emulate a much wider lens and still hook up properly with the extensions - a more cumbersome way to work for sure, but well done to say the least.

One thing that always sticks out in my head is the extra steps they are usually able to take on set for these large films, such as LIDAR scans of the set. Not only having access to the equipment (or the budget to outsource it), but the time to run the scans and the cooperation of the production crew in acquiring them is a huge benefit. As someone who has spent the past six years almost exclusively on commercial production, these luxuries are usually impractical, both time- and money-wise. I'm always forced to solve these problems in a separate, often less-accurate way.

As with any other technology in this business, I'm anxiously awaiting the day when I can purchase a LIDAR scanner for 10 percent of the current price (there's a $50K model being demoed on the expo floor) that runs faster than the five to ten minutes a scan takes currently. On a commercial production, you're lucky if you can get the crew to pause for 30 seconds while you manually shoot an HDR as fast as you can, and heaven forbid anyone clears the set: just standing still in one place is about as much as you can hope for.

After the environment portion, the discussion shifted to the FX department, responsible for stormy skies, snow, wind, water, fireworks, explosions, crowds, destruction, rainbows, bubbles, etc. The talk finished up on the character animation, and the part I enjoyed the most was their use of what they termed a "puppet cam" - basically a hand-held boom with a monitor and video camera on the end. The other end of the set-up was a second monitor and video camera in a trailer, where the voice talent sat. This enabled the actor on-set to verbally interact with the non-existent CG character in a method far better than a ball on a stick, allowing them to see each other and react to their facial cues and such. A great idea if you ask me.

After this, I visited the exhibit floor for a bit. As seems to be the trend, it was smaller than the previous year. I'm not sure if this is due to the location, the rising costs associated with having a booth, or both, but in either case, it's unfortunate. Nevertheless, a few things stuck out. For one, there were fewer motion-capture booths than before, and of the few I saw, one opted to get rid of the girl in a skintight suit and went for a skater on a mini-pipe enclosed by a net to stop an errant skateboard from whacking some unsuspecting visitor.

Having skated myself when I was a teenager, I couldn't help but feel a bit bad for the guy in his ever-so-revealing suit. Oddly enough, I happened to walk by at the exact moment he munched it hard, causing a large group of people to react in shock at the loud noise. I half expected the cloud of bees to show up and form the shape of a needle and yell "Skate Or Die" at him, but alas, that didn't happen.  I would be curious if they could track the motion of each bee in the swarm though... that would be impressive! But I digress.  

There were a few 3D printer companies and services showing, but not as many as I would've liked to have seen. Of the two big ones, Stratasys and 3D Systems, only the former was present. It doesn't seem like the prices have come down drastically on the machines, which is a bit unfortunate. There were definitely fewer scanning systems, and as far as I could tell, only a couple of glasses-free 3D displays (which was nice for a change). One company was there showing their new fluid simulator, Flux, which I intend to have a deeper look at. Overall, I would say there wasn't anything in particular on the floor that really stood out as new and groundbreaking this year.

My next stop was the Man of Steel production session, held in the very cool Arena (which was smaller inside than I had expected). There seems to be a greater push this year in the sessions to really pound home that they don't want attendees recording the sessions in any form. There are now volunteers pacing up and down the aisles trying to enforce this as well. To be honest, anything more than a simple mention of it is distracting, and I found the people walking around during the talks to be slightly annoying as well. In many ways, it reminded me very much of my visit seven years ago to the Sistine Chapel. The entire time in the main room, while everyone stands shoulder to shoulder in an effort to view the indescribable beauty, you are bombarded with people half-yelling "no photos." They used to claim that the flash would ruin the paint, but it sure seems the reality is that they just want to sell you the pictures THEY have taken instead. That seemed oddly similar to what was going on here, and considering some clown will likely record the thing anyway despite the rules, it just seemed like an annoyance to have nearly 10 minutes of the "due diligence" speech. Anyway, on to the presentation.

Five VFX studios were represented: Weta, Scanline, LookFX, MPC, and Double Negative. Now, personally, I haven't seen the film. In fact, I rarely see movies anymore, simply because my current schedule and other factors make it difficult at best. However, looking at the footage each company showed, to be blunt, the work looked absolutely fantastic. The extensive use of digital doubles was close to flawless in what I saw, and the sheer volume of set work, building destruction, volumetric FX, and such really looked great. It was interesting to note that multiple shops are using Esri CityEngine for building creation, adding their own custom functionality where needed. I remember doing my own city-builder development nearly 10 years ago in anticipation of some work for the last Superman movie, but this tool seems to be a great solution for what used to be a difficult problem.

The fact that there's a commercial solution out there now that is extensible and pretty full-featured really solves this problem in a big way. Weta's work on the liquid geo display sequence was very inspiring and well done, as was MPC's Enviro-Cam solution, in which they used a 5D to capture HDR-style panoramas with a single exposure, stitching 74 pictures together into a 55K image instead of the 7-8K image you would get with just three shots. Again using LIDAR to build a virtual set, these environment spheres were then projected onto that geometry, giving you a relatively photorealistic set (angle-limited, of course).

I actually wrote about a similar technique last SIGGRAPH, used in a similar way but with actual HDRs, so you could light the scene off this geometry as well. It's a bit of a time-consuming process to build, and it can be done either using off-the-shelf software designed for this, or using projections in a package like Nuke and then re-exporting back out to Maya or whatnot.
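For readers who haven't set up these projections before, here is a tiny sketch of the basic latlong lookup involved: given a point on the scanned set geometry and the position the panorama was shot from, it returns the (u, v) coordinate to sample in the stitched equirectangular image. The axis convention and names are my own assumptions, not MPC's pipeline.

    # Minimal latlong (equirectangular) projection lookup -- illustrative only.
    # Given a world-space point on the scanned geometry and the position the
    # panorama was captured from, return the (u, v) coordinate to sample.
    import math

    def latlong_uv(point, capture_pos):
        dx = point[0] - capture_pos[0]
        dy = point[1] - capture_pos[1]
        dz = point[2] - capture_pos[2]
        r = math.sqrt(dx*dx + dy*dy + dz*dz) or 1e-9
        theta = math.atan2(dx, -dz)             # azimuth, -pi..pi (assumed axes)
        phi = math.asin(dy / r)                 # elevation, -pi/2..pi/2
        u = (theta + math.pi) / (2.0*math.pi)   # 0..1 across the panorama
        v = 0.5 - phi / math.pi                 # 0 at top, 1 at bottom
        return u, v

    # e.g. sampling a 55K-wide stitched pano: px = int(u*55000), py = int(v*27500)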

At some point as time permits, I'd like to play around with this same technique, but using the actual images as an input to photogrammetry to bypass the need to scan the set. If I could set up a pipeline to do this task on a commercial schedule and budget, that would be absolutely fantastic.
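To show what I mean, here is a rough two-view sketch of that idea using OpenCV - purely my own illustration, not anything shown in the session. The image paths and camera intrinsics are placeholder guesses; a production version would use many more views and a proper structure-from-motion package.

    # Rough two-view photogrammetry sketch (OpenCV), just to illustrate the
    # idea of recovering set geometry from the captured frames instead of a
    # LIDAR scan. Image paths and the camera matrix K are placeholders.
    import cv2
    import numpy as np

    img1 = cv2.imread('setframe_0001.jpg', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('setframe_0002.jpg', cv2.IMREAD_GRAYSCALE)
    K = np.array([[3000.0, 0, 2736], [0, 3000.0, 1824], [0, 0, 1]])  # guessed intrinsics

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # keep only distinctive matches (Lowe's ratio test)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # relative camera pose from the essential matrix, then triangulate points
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points3d = (pts4d[:3] / pts4d[3]).T        # sparse point cloud of the set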

In all, it was a good day at the show, and I had a chance to catch up with a few old friends for a bit and actually have a meaningful (non-whiny) discussion about the state of the industry and some well thought out potential solutions. I'm hoping to be able to attend the talk on Thursday about this subject, and possibly shed a little bit of light on one area that I think is worth talking about. In the meantime, please feel free to drop me a line at blumenfeldvfx@gmail.com for questions, comments, or random thoughts about any of this you may want to talk about. And now, time to get some sleep and get ready for another day at the show!

David Blumenfeld is CG Supervisor at Brickyard VFX (www.brickyardvfx.com) in Los Angeles. The studio also has a location in Boston.
