Recent Blog Posts in July 2010

July 30, 2010
  SIGGRAPH: My Final Thoughts
Posted By David Blumenfeld
All great things must end, and so it follows that SIGGRAPH 2010 has finally come to a close. Overall, it was a nice conference this year, with some interesting new technologies to be seen and a few new techniques and ideas to walk away with. Unfortunately, it was definitely, in my opinion, not one of the best expo floors I've been to, likely due to a few things. For one, despite a few new advances this year, there weren't many revolutionary breakthroughs in the industry. In this realm, it seemed as though the biggest push of new technology, which is finally starting to take hold, is GPU computing on graphics cards and some new real-time or near-real-time software that is finally beginning to take advantage of this hardware. More on this in a bit. Second, there seemed to be noticeably fewer vendors on the floor showing their products. While some companies were simply absent from the show altogether, others were no longer present because they have either merged with or been acquired by another vendor at the show. This industry consolidation (which seems to be a cyclical thing over the years) is to be expected, but it nonetheless has an impact on the perceived presence at the show. Finally, I'm sure the recession and the state of the economy have a great deal to do with it as well. Between budgetary cuts and marketing restructuring, the “freebie” giveaway factor was definitely much lower this year. Aside from a few random t-shirts, pens, and the ridiculously long line for the Pixar teapot toy, gone were the squishy toys, magnets, keychains, mint boxes, and random collectible things that tend to fill up the free bags of many an expo-goer.
On the topic of the teapots, what a fantastic marketing job Pixar has done with that, and kudos to them for once again, as always, being geniuses. As a Renderman user, I obtained my “special” limited edition teapot, which is somewhere in the 400's out of 1000, at the Renderman User Group meeting. I am not, however, a collector of such fine memorabilia, so I gave it to my three-year-old son, who is playing with it around the house now. Watching the smile on his face as he winds it up and runs around the house shouting “Renderman Teapot” is worth far more to me than a tin box on a shelf collecting dust. While there is nothing wrong with wanting a toy or collectible like this for free, it saddened me a bit to see how long the line of young aspiring artists was for this, as perhaps more of their interest should have been in learning about the new technology in this field, which is supposed to be the whole purpose of this show. Maybe some of the video card manufacturers can start giving out limited edition toys of “Jimmy The GPU”, and the rendering companies can offer “Irving The Image Based Light” to drum up a bit more interest from the student crowd.

While I had intended to possibly squeeze in a few more talks, I ended up instead spending more time on the floor. I wanted to make sure I had an opportunity to take in the new offerings and get some one-on-one time with the developers and their products. A friend and colleague of mine, Erik Gamache, was also presenting a half-hour talk on Digital Domain's making of the Hydra character in the recent Percy Jackson film, so I decided to sit in at the Autodesk booth to check it out and support him. The presentation was very well delivered, and the work looked great. A number of old friends of mine worked on that, and to them, congratulations for another job well done. To try and get the most out of the floor time, I decided to do a quick pass through all the aisles, grabbing a brochure from any booth that was a place I wanted to visit more in-depth. Afterwards, I sat with one of my coworkers and formulated a plan of attack. In the software arena, it was interesting that many of the packages were Windows only, with a Linux port either in (slow) development or not available at all in the foreseeable future. The only reason I could think of for this is that most of these tools are targeting markets which are either more involved in games production, or doing commercial/film work using Windows-based tools such as 3DS Max. Being a Linux-based studio, and with many of the larger studios using this platform, I would have thought more of these new tools would be available for it. This basically leaves three options if I would like to use this software: set up a few Windows-based machines (we currently do this as dual-boot systems or on Linux via VMWare), try to get on a beta program for the unreleased Linux versions, or simply lose the ability to take advantage of these new tools. There were definitely some interesting items out there, and while I won't discuss any particular toolset at this time, suffice it to say that many of the products are definitely maturing to a point where they are becoming particularly good at what they do.

The other thing of interest, as I mentioned above, was the GPU technology and some of the software taking advantage of it. I am still a bit under-educated in how this processor works, and my attempts to get some explanation at the chipmaker booths yielded little more information. I simply got the runaround and was handed from rep to rep, only to finally give up when the fifth person on the hand-off chain was “back in an hour”. However, what I do know about these vector processors is that they are extremely efficient at processing many of the same type of calculation simultaneously, and this happens to fit the requirements of raytracing. Therefore, this was the big sell, and there are a number of new software tools available taking advantage of this. Trying to figure out how this could be of immediate significant benefit to our studio was the challenge. First off, being in the commercial production and design business, we have no real requirement for real-time display performance. Of course, it goes without saying that if I could obtain the same desired result from a real-time hardware render that I get with a software render, I would be all for it. However, there are certain types of rendering results that, for the time being, can still only be calculated using a software render, and until that changes, that is what will have to suffice for final images. While many of the GPU images (whether real-time or near real-time) were very impressive and beautiful, they are still not photoreal, and if that's the requirement of the spot you are making, this will not work. Of course, if the result approximates the software render in type (in other words, it is a valid representation of your shading setup), then it could serve as a huge benefit during the process leading up to the final software render. There are also some renderers now which are taking better advantage of the GPU, such as VRayRT, basically being used to aid in certain parts of the render while other parts are rendered on the CPU. I am looking into those renderers as well to possibly augment our current setup. One thought I had and would like to look into is potentially using the GPU to accelerate the creation of point cloud and voxel cloud data for things like occlusion, reflection, etc., and then use that quickly generated cache as an input on the software render portion, greatly speeding up the overall render time. In order for this to work, Renderman will need to be able to process those portions using the GPU (not sure if the new version will allow us to do that or not), and of course I will need to make sure we are using machines which have GPUs on their graphics cards. This also means that, unless we place graphics cards into our racks on the farm, these caches will have to be generated on our local workstations, which isn't the most efficient way to work either. I will need to do a fair amount of research on this to find out how to best leverage this technology and with what combination of hardware and software. There are a few other products out now which are GPU based and interface with Renderman, such as MachStudio Pro 2, so this may provide this type of solution, along with many of the new features in Renderman Studio 3 and Renderman Pro Server 16 due out in two months.
The rest of the things I saw at the show weren't really my cup of tea. There were a number of 3D scanners being presented, but unfortunately nothing that was really a one-stop solution. By that, I mean that I would like to see a scanner which can be operated handheld, with range adjustments to accommodate close, small subjects as well as large, mid-range subjects such as a full-body person or car, and the ability to further scan far larger structures via LIDAR, while at the same time taking high resolution (at least above 10 megapixel) full color images for texture purposes, all in a single wireless unit. Additionally, it would be nice for the price point to be below the five thousand dollar range for a unit like this. The same goes for 3D printing. The technology seems to be improving in some of these units, but the entry price point seems to be around ten thousand, extending up to the half million mark. As I mentioned in a previous post, the only way to get below this is to use a unit like the MakerBot, where you assemble the tool yourself and then have to work hard to get results which approach some of these pre-assembled units. While extremely large facilities and research institutes may be able to afford these units, smaller studios like ours find it difficult to justify the expense when we simply wouldn't get all that much benefit from them, and when we do occasionally need to use them, we can just pay for the one-time service from a provider. I personally would love to bring this technology in-house, as it would save time to be able to perform these operations on set and in the studio at will. Hopefully, we will see a price evolution that mirrors that of other, longer-established hardware in a short amount of time, opening up this market a bit more.

A funny thing happened later in the day. Not paying much attention, I didn't realize when the show ended and thought it would go until at least 6:00pm. I hadn't eaten any lunch, so around 2:30pm we decided to walk over to the LA Live area and try one of the new restaurants. That was quite enjoyable, and we ended up coming back around 4:15pm. On the way, I ran into some more friends, chatted briefly, then headed over to the expo floor again. I was literally shocked when I got there and the place was already half dismantled, the lights full on and multitudes of workers disassembling scaffolding and crating up equipment. I didn't realize the show ended at 4:00pm, and was simply amazed at the efficiency of the convention center staff and how rapidly they were putting the show to bed. We then headed over for one last talk which was still going, and finally called it quits just before 6:00pm.

Now back in the studio, I have my work cut out for me. I have this giant list in my head of things to go over with the other artists, research and development I want to get started on, software and hardware to look into, and a multitude of workflow improvements I think we can pursue. I took a great deal away from this year's show, and hopefully you the reader had a similar opportunity and experience. I hope you enjoyed my recap of the events and my various random thoughts on things, and I would welcome any comments or questions any of you might have at my email address listed in my profile. Until next time, good luck with your endeavors and the pursuit of the magical art that is animation and visual effects. This field is what we all make of it, and frankly it's one of the coolest things to be involved with.
Continue reading "SIGGRAPH: My Final Thoughts" »

Permalink | Comments(0)
 
July 29, 2010
  SIGGRAPH: Only One More Day
Posted By David Blumenfeld
Good evening, loyal readers. Wednesday is rapidly coming to a close, and so is this year’s SIGGRAPH exhibition and conference. Tomorrow is the last day, and there are a few classes left I might try to attend. I’ll also be spending a fair amount of time on the expo floor, talking to different vendors about new products I’m currently interested in while learning about how some of the newest technologies, such as the multi-GPU video cards, actually work and how my studio might be able to take advantage of them.

Today, the front lot appeared full by the time I arrived, so I had to resort to parking down in the bowels of the underground parking structure. I had planned on attending a number of courses and talks, but I ended up running into a number of old friends and colleagues and getting into conversations which overlapped the start times of some of the courses, so alas I skipped a few. I did, however, attend a “birds of a feather” special interest group on 3D printing. This was essentially an open session, complete with full introductions from everyone present, where everyone was free to discuss their knowledge of both 3D printing and subtractive rapid prototyping (the technique I am most interested in). Although the two are definitely related (one builds up a three-dimensional model in successive layers using a polymer deposited into some sort of removable support matrix, while the other takes a solid block of material and eats away at it until the resulting milled object is complete), they have different uses, benefits and drawbacks, and relatively separate user bases. The ensuing conversation was informative and fun, with brief discussions on laser cutting, paper folding, sign manufacturing, and molecular model representation. This year, there is a new 3D printer called the Objet (http://www.objet.com) which allows for printing in other materials, including metal. There was a lot of discussion as well about a do-it-yourself 3D printer called the MakerBot (http://makerbot.com/) which is obtainable for as little as $750. While this needs to be assembled by the user, it is definitely an affordable entry-level 3D printer with a loyal fanbase and user support community.

Around midday, I attended a press event for Jon Peddie Research (a technically oriented multimedia and graphics research and consulting firm). This was held at the Palm restaurant, an upscale eatery located only two blocks from the convention center. I had been to this particular restaurant nearly a year ago for my birthday, when my wife took me for a fantastic dinner consisting of a tender filet mignon, a 4-pound lobster, and one of the best Singapore Slings I’ve had outside of my home bar. This was definitely a terrific location for a press event, and the lunch served was quite delicious. The panel of speakers was also quite impressive, including Eric Demers (GPG Chief Technology Officer at AMD), Brian Harrison (Director at SolidWorks Labs), Rolf Herken (CEO and CTO at Mental Images), Bill Mark (Senior Research Scientist at Intel), and Paul Stallings (Vice President of Development at Kubotek). In addition to market forecasts, the discussion revolved greatly around the notion of an HPU, or Heterogeneous Processor Unit, which is a multiple-core processor chip with integrated CPU and GPU cores, and what impact this will have on graphics computing over the next few years. Some talk also revolved around cloud computing and what benefits and drawbacks this will have for users, as well as its impact on the industry as a whole. At one point during the talk, there was an interruption; the doors to the private section of the restaurant where we were sitting swung open and a commotion ensued. Before anyone knew what was happening, into the room walked William Shatner and Dick Van Dyke. It turns out they were presenting later in the day at one of the expo booths, speaking on their thoughts of how the visual effects industry has changed and impacted their careers. I can only assume that they came to the Palm for lunch before that event, and learning about the type of meeting we were having upstairs, decided to crash the party and have a little fun. It was definitely quite entertaining to see both of them, and it really made the lunch something to remember.

From there, I returned to the expo, where I decided to walk the floor for a bit. I was lucky enough to run into some more old friends and catch up on things for a little while. Once the evening came, it was time for the annual Renderman Users Group meeting, held this year at the brand new Marriott at the LA Live center. While I won’t recap all the new features coming in the next release of Renderman Studio and Renderman For Maya, suffice it to say that there are a great number of fantastic improvements and I am really looking forward to this new release. The new Tractor job queue management system was also demonstrated (this is a replacement for the older Alfred queue system). In a clever trick before the show, attendees could log into a local wifi network and play a Renderman trivia contest; what was actually happening was that Tractor was using their devices to perform a distributed render of a Mandelbrot fractal image, demonstrating not only the flexibility and power of the new system, but finally bringing to fruition the often-joked-about notion of using everyone’s iPhone on the farm to render images. The presentation finished up with the requisite “Stupid Renderman Tricks” and a raffle for various Pixar stuffed animals, hats, and t-shirts. Of course, no Renderman Users Group meeting would be complete without receiving the wind-up teapot toy in a tin. This year’s version was red with a black hat.
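For readers curious what a distributed fractal render like that looks like in principle, the sketch below splits a Mandelbrot image into independent tiles, farms them out to worker processes, and stitches the results. It is only a minimal Python illustration of the tile-per-client idea, not Pixar's Tractor implementation, and the image bounds, tile size, and iteration count are arbitrary choices.

```python
# Minimal sketch of tile-based Mandelbrot rendering, the kind of job that
# lends itself to distribution across many small clients (render blades,
# or phones at a user group meeting). Not Pixar's Tractor implementation;
# tile size, bounds, and iteration count are arbitrary choices.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 800, 600, 256
RE_MIN, RE_MAX, IM_MIN, IM_MAX = -2.5, 1.0, -1.25, 1.25
TILE = 100  # tile edge in pixels; each tile is an independent work unit

def escape_iterations(c):
    """Iterations before z = z^2 + c escapes |z| > 2 (0..MAX_ITER)."""
    z = 0j
    for i in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return MAX_ITER

def render_tile(tile):
    """Render one tile; returns (tile origin, rows of iteration counts)."""
    x0, y0 = tile
    rows = []
    for y in range(y0, min(y0 + TILE, HEIGHT)):
        im = IM_MIN + (IM_MAX - IM_MIN) * y / (HEIGHT - 1)
        row = []
        for x in range(x0, min(x0 + TILE, WIDTH)):
            re = RE_MIN + (RE_MAX - RE_MIN) * x / (WIDTH - 1)
            row.append(escape_iterations(complex(re, im)))
        rows.append(row)
    return (x0, y0), rows

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    # Stand-in for a farm: local worker processes each take a tile.
    with Pool() as pool:
        results = pool.map(render_tile, tiles)
    # Stitch the tiles back into a full-frame buffer.
    image = [[0] * WIDTH for _ in range(HEIGHT)]
    for (x0, y0), rows in results:
        for dy, row in enumerate(rows):
            image[y0 + dy][x0:x0 + len(row)] = row
    print("rendered", len(results), "tiles")
```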

Tomorrow, I might try to check out a studio presentation on “Hi Res Rapid Prototyping for Fine Metals and Jewelry”, a course on “Global Illumination Across Industries”, or a talk on “Fur, Feathers and Trees”, time permitting. If any of you are still interested in seeing the show, make sure you go tomorrow, lest you miss this year’s event. Thanks for tuning in, and check back tomorrow for my final wrap up.
Continue reading "SIGGRAPH: Only One More Day" »

Permalink | Comments(0)
 
July 29, 2010
  SIGGRAPH: that is a wrap
Posted By Michael Sanders
Siggraph is pretty much a wrap. I would have written earlier, but I literally have not stopped since stepping off the plane in Burbank Monday morning. I headed straight into the talk on computational photography - I have to say this is one of the more exciting research areas to me. Some development has been evolving in this sector for a little while and is now getting more focus (ha). A few years ago at Siggraph, there were a number of fun emerging technologies related to this field - coded apertures, origami lenses, lens arrays - and plenty of work on up-res-ing, deblurring, infinite depth of field, etc. As with most tech, there are tradeoffs - loss of contrast, ringing, or other artifacts - but what I find really exciting is to think of this research more philosophically: thinking of light in a new way, changing photography and imaging by being creative about how we analyze light, and how that will change the creative process. This is definitely a sector to watch.

On with the show. With a full conference pass, I always feel like I need to be in three places at once; luckily, ILM sends a few qualified folks to cover the subject matter (and contribute, of course). I tend to hit up the replicating-realism sessions - anything to do with faces, lighting acquisition, 3D spatial things, human motion - and as usual there are a few good pieces here and there. Sometimes there are things we know but haven't implemented, and sometimes it is the result of a different approach that can be leveraged in an entirely new way. One of the things I'd like to see more of is research collaboration between industry and academics; I find that some of the research going on is solving something that certain companies already have but can't/don't share. I understand the protection of intellectual property, but at a minimum I think the big shops can help consult and guide some of this academic research by collaborating (not saying it doesn't happen, just that I'd like to see more of it). Quick example: I attended a talk on analyzing and extracting 3D human performance from a single 2D video source. The research included a number of secondary techniques that helped the process and rounded out the work in a nice way, but also distracted from focusing on a more elegant solution to the principal issue. Of course research can head in a multitude of directions for many reasons; I just like to think that with some more collaboration, I could integrate some of the techniques sooner and keep raising the visual bar.

I always enjoy seeing the research coming out of ICT, specifically the facial capture work using normal mapping - and even the head-rig version of this tech. Great datasets for shapes, bump and spec, and performance - if and when you have a controlled setup. The headmount frees some of that, but still leaves plenty of fun in the research arena for capturing facial performance in the context of a live-action environment with minimal tech/footprint.

On to the floor: it was nice to see the major shops all with recruiting efforts. I know all Lucas divisions have a number of opportunities for folks on the upcoming slate of work. As for the vendors - nothing like the magnitude of a show like NAB, but I guess that is a good thing; you can get right down to seeing the tools you really need. There was plenty of opportunity to play with desktop scanners and 3D printers, a good all-around representation of those market options that you could touch and feel. And as usual, all the mocap players were representing - optical systems, accelerometer based, etc. Nothing super new here, except that everyone is jumping on the virtual cinematography bandwagon. It is nice to see the tools becoming packaged and available for expanded use at reasonable price points. Most form factors look like shoulder-mount broadcast cameras, and give that kind of look to your virtual camera move. Guess what, people: if you have a tracking technology, you already have the capability of doing virtual cinematography (you still need the talent, but piecing the tech together isn’t that hard). Heck, put your tracking object on a dolly, a hot head (or just port the rotations directly), or a steadicam (make sure to add mass). The point here is that you can leverage traditional tools with the technology to replicate the cinematic look.

One observation: emerging tech always has the most random graphical interaction technologies. It cracks me up to use augmented reality to get a virtual smell, or to see crazy haptic feedback contraptions and fun little graphic games. One cool piece of tech that is letting us peer into the future was a 3D display with a little 3D video game running within a hologram-like spinning LED display – this tech will be fun in the near term. These projects make me think that it would be fun to be back in school working on something like this – I was working on animatronics back then, which I guess runs in the same vein. Come to think of it, a lot of the research we do to solve a vfx problem is extremely similar. We leverage hardware and software in some undiscovered way, usually nowhere near solid state, yet just capable enough to get the job done, and then we start all over again.

Thanks to Jim Morris for his keynote, covering many of the pivotal moments in computer graphics - amazing to see how far this industry has evolved in such a short amount of time. It is the moments when we get to see our hard work and innovation pay off, through the amazing visuals and the audience's appreciation, that keep me inspired. I rounded out the conference with plenty of socializing and networking - thanks to all the vendors and sponsors for Bordello Bar, J Lounge, Club Nokia and 7 Grand. Good thing the conference is only a few days long; now back to the pile of deadlines waiting for me at the studio.

Continue reading "SIGGRAPH: that is a wrap" »

Permalink | Comments(0)
 
July 28, 2010
  SIGGRAPH: What a Day!
Posted By Scott Sindorf
This morning the SIGGRAPH trade show opened. By all accounts there are fewer exhibitors than in previous years. Although the industry is slowly climbing back from last year's recession, the show is about half the size it was five years ago. By far the most attended and popular booth is Pixar's. The line to visit Pixar literally wound around the entire exhibit hall. I assumed that most in line were aspiring animators hoping to land that coveted job with their dream company. I was in fact wrong; they were giving out small plastic teapots. I truly do not understand what would make people wait for hours in line for a plastic toy, but then again everything Pixar touches seems to turn to gold, even plastic toys. The teapot has long been a standard reference model in 3D graphics and an icon of previous SIGGRAPH shows. I wonder if the endless line of young people waiting were even aware of this.

The floor was full today, and as expected 3D is all the buzz. There were virtual cameras that enabled the viewer to view stereoscopic virtual sets in real time. There were also real-time 3D motion capture rigs, both with those iconic markers the actors have to wear and setups where no markers were needed at all. There were numerous 3D television monitors that require polarized glasses to view 3D. Presently this is by far the most popular consumer choice for home viewing of 3D. The competitor to this technology is autostereoscopic monitors, where the viewer does not require glasses in order to view 3D. Instead, there are "sweet spots" where the viewer must stand in order to experience the 3D. In my opinion this technology still has a long way to go before consumers are ready to spend the money, but the technology is progressing.

Our company utilizes 3D CGI for our work. (Not to be confused with 3D stereoscopics.) There have been a lot of developments on this front. We heavily use Softimage XSI in our pipeline. The behemoth company Autodesk recently acquired this software; in addition to XSI, Autodesk also owns Maya and 3ds Max. I am relieved to hear Autodesk will continue to invest in this product, make it stronger, and continue evolving the software. As an educator, I was also glad to hear that Autodesk will be offering an entertainment creation education suite. This package will offer the following Autodesk products: Maya, Motionbuilder, Mudbox, Sketchbook Pro, Softimage and 3ds Max. Students today will probably have to be multi-versed in the aforementioned software packages. We are finding, as are other design houses, that it is hard to stick with one package for our pipeline. I am hoping Autodesk will continue upgrading each of these products and not push one at the expense of another.

I also attended two panel discussions today. The first was entitled "Blowing $hit Up" - a great name for a panel discussion on the scientific underpinnings of how 3D CGI is used to create photoreal explosions and the destruction of cities for film. These panels are not for the technically deficient. The first team to present was from the always amazing Industrial Light and Magic. They explained some of the techniques utilized in making the film Avatar, primarily the explosion sequence involving the Dragon aircraft. Without getting into too much detail, much of the initial work was done with model proxies and rigid body dynamics, and highly detailed models were later substituted for final shots. The next presentation was again from ILM. This time they explained some of the work they accomplished for Transformers 2. The work spoke for itself. What I found most interesting is that ILM’s goal is to make the tools as easy as possible to allow the artists to do what they do best… to be artists and not scientists. For the layman, there is no magic explosion button to blow things up. There is a tremendous amount of pre-calculation and iteration to get the right “look.” And for ILM it is not enough to have a physically exact explosion; the look, feel, and composition are always at the forefront. It is more important that the shot looks right than that it is scientifically right. Digital Domain gave the last presentation, explaining the earthquake sequence from the movie 2012. The work was outstanding, and the ability to destroy downtown LA photorealistically was unimaginable only a few years ago.

The last panel discussion I attended was "The Making of Tron: Legacy." This movie may hold the record for the longest wait for a sequel. This was a full house, and the crowd was looking forward to hearing from Joseph Kosinski, making his directorial debut. As someone who also received his Master’s of Architecture from Columbia University, I was excited to see what Joseph imagined the world of Tron would look like. I was relieved and truthfully blown away by the seven-minute 3D preview of the film. It stayed true to the original vision of the artists Syd Mead and Moebius. Apparently the original star of the film, Jeff Bridges, will be reprising the role of Kevin Flynn, with a digital double playing him as a thirty-five-year-old. Maybe the uncanny valley has finally been crossed. Also, it was nice to hear that with Pixar’s new relationship with Disney, they were able to make suggestions on how to make the film better. I hope in this case their touch does make this film turn to gold.
Continue reading "SIGGRAPH: What a Day!" »

Permalink
 
July 28, 2010
  SIGGRAPH: Motion Controlled High Res Photography
Posted By Damijan Saccio
The most interesting thing I saw at the Emerging Technologies and Studio area was a project that a consortium of people have created to give the masses easier access to motion-controlled, high-resolution still-camera time lapses. For some time it has been extremely expensive to attempt any sort of professional-level motion-controlled timelapse. However, there has been mounting interest in this subject in recent years.

Addressing this problem, a group of individuals and companies formed an open source collective called OpenMoCo. Two companies that are members of this collective, Dynamic Perception and XRez, are at SIGGRAPH showing the fruits of their labor. This is definitely something you will want to check out visually on their respective websites (above) to get a good idea of the exact rigs you can make and the kind of results you can get. Essentially, the idea is that they have sourced various open-source software and hardware solutions along with off-the-shelf pieces to allow any hobbyist or professional to construct a good quality rig and controller for a motion-control setup. Dynamic Perception has even assembled a number of these components together and can offer a largely ready-made solution from their site at very reasonable prices. The basic MoCo setup involves a standard DSLR camera, an astronomic mount (made by companies like Meade, Orion, or Merlin), a track with a motor, and a small piece of hardware called an Arduino controller that is programmed to handle the communication between the DSLR camera and the motorized track. See a couple of examples below:


(Example rig images property of Xrez)
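For anyone wondering how a rig like this sequences its work, the basic pattern is "shoot-move-shoot": trigger an exposure, advance the camera a small step along the track, let it settle, and repeat. The little Python sketch below only illustrates the planning arithmetic; the real control loop lives in the Arduino firmware, and the track length, interval, and shot count here are assumptions for illustration, not values from any particular OpenMoCo rig.

```python
# Rough sketch of "shoot-move-shoot" planning for a motion-controlled
# time lapse like the OpenMoCo-style rigs described above. The real
# control loop runs on the Arduino firmware; the numbers below
# (track length, clip length, interval) are assumptions for illustration.

TRACK_LENGTH_MM = 1000.0      # assumed usable dolly travel
CLIP_SECONDS = 20.0           # desired length of the finished clip
PLAYBACK_FPS = 24.0           # playback frame rate
INTERVAL_SECONDS = 15.0       # real time between exposures
EXPOSURE_SECONDS = 2.0        # shutter open time per frame

def plan_move():
    frames = int(CLIP_SECONDS * PLAYBACK_FPS)          # total stills needed
    step_mm = TRACK_LENGTH_MM / (frames - 1)           # camera advance per frame
    shoot_duration_s = frames * INTERVAL_SECONDS       # wall-clock shoot time
    # The move must fit inside the interval along with the exposure itself,
    # plus a settle pause so vibration doesn't blur the frame.
    move_budget_s = INTERVAL_SECONDS - EXPOSURE_SECONDS
    return frames, step_mm, shoot_duration_s, move_budget_s

if __name__ == "__main__":
    frames, step_mm, total_s, move_budget_s = plan_move()
    print(f"{frames} exposures, {step_mm:.2f} mm of travel between frames")
    print(f"total shoot time: {total_s / 3600.0:.2f} hours")
    print(f"{move_budget_s:.1f} s per interval left for the move + settle")
```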

The applications for these rigs are manifold, from pure art pieces such as the well-known works of Tom Lowe, to educational applications like sky domes in planetariums, to commercial applications like television commercials, music videos, and the like. Xrez has a great example of a 3D CGI camera move created by combining a stationary still-camera time lapse with a 3D data set from NASA. The company mapped the video imagery they got from a stationary time lapse session onto the 3D geometry (photogrammetry, but with video in this example) and then added the CGI camera move. This creates a piece that would have been nearly impossible to achieve otherwise. See the example below:

xRez Time-Lapse Studies from xRez Studio on Vimeo.

Being an amateur photographer myself and having a design and production studio, all of this really interests me and gives me a lot of creative ideas for possible future projects. I'm very excited by the OpenMoCo project's dedication to spreading the knowledge base for this interesting field. This is obviously a complex topic, and I would invite the reader to learn more by visiting OpenMoCo.org and the Xrez and Dynamic Perception sites.

Continue reading "SIGGRAPH: Motion Controlled High Res Photography" »

Permalink | Comments(0)
 
July 28, 2010
  Tech Highlights from the Exhibition Floor
Posted By Damijan Saccio
The main exhibition floor at SIGGRAPH this year was not overly amazing in general, but two small booths really made an impression on me, so I wanted to make sure to let everyone know about them.



Direct Dimensions is a company that provides 'rapid solutions to 3D problems'. They are exhibiting an extremely clever and simple solution for creating very nice and usable facial scans. Their system is just a simple rig with four cameras (two for each side of the face) that are able to capture depth information due to their stereoscopic layout. Basically, each pair of cameras uses the same technique that your eyes use to detect depth. The photos are then applied back onto the resulting geometry, and within minutes you have a fully textured 3D model of your face. Simple, elegant, and cost effective. I was extremely impressed!
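As a rough illustration of that principle (two offset views of the same subject yielding depth the way our eyes do), here is a minimal stereo-depth sketch using OpenCV's block matcher. The file names, focal length, baseline, and matcher settings are placeholder assumptions; Direct Dimensions' actual system is certainly more refined than this.

```python
# Minimal sketch of depth-from-stereo, the principle behind the
# four-camera facial rig described above (each camera pair works like a
# pair of eyes). File names, focal length, baseline, and matcher
# settings are placeholder assumptions, not Direct Dimensions' system.
import cv2
import numpy as np

# Rectified left/right captures from one camera pair (placeholder paths).
left = cv2.imread("face_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("face_right.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise SystemExit("Provide rectified left/right captures to run this sketch.")

# Block matcher: compares patches along scanlines to find the horizontal
# shift (disparity) of each feature between the two views.
matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Nearby features shift more than distant ones, so depth is inversely
# proportional to disparity: depth = focal_length * baseline / disparity.
focal_length_px = 1200.0   # assumed focal length in pixels
baseline_m = 0.12          # assumed distance between the two cameras
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]

print("usable depth samples:", int(valid.sum()))
```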



Eye Tech Digital Systems is a company that has built an eye tracking solution that allows you to fully control mouse movement and clicking on a computer. At first glance (if you'll forgive the pun), I didn't think much of it, but after a short demo which involved just a quick calibration, I was up and running, controlling a computer without any problems. Just look at what you want, blink to click, and that's it! Very fascinating, and it's easy to see quite a number of applications for this technology. Its use for users with disabilities is obvious, and they outline a number of other uses on their site.
Continue reading "Tech Highlights from the Exhibition Floor" »

Permalink | Comments(0)
 
July 28, 2010
  SIGGRAPH: Tuesday's News Array
Posted By David Blumenfeld
Good evening, loyal readers! Well, SIGGRAPH is officially more than half way through. It’s been an interesting show so far, with some really great talks and courses that have definitely prompted me to make a list of R&D projects for when I return to the studio. I’ll recap some of the presentations from today while interjecting a few thoughts and ideas here and there, as I am clearly so fond of doing, and so without further ado, on to the day in review.

The morning began much like the previous two, with my best-of-the-best elite parking spot right in front eagerly awaiting my arrival. Driving in at the crack of dawn definitely has its advantages, well, at least one. Just as yesterday, the buses were letting the throngs of attendees off in droves, and for a moment, I thought perhaps a hasty jaunt up the escalator in search of a fast pass was in store, but alas, this was quite a different attraction indeed. Ahh, to wax poetic about a technical exposition, the humanity of it all… what a world! I stopped in briefly at the Media Suite for an orange juice and to check the press releases, then made my way to the 9:00am presentation of “Iron Man 2: Bringing In The Big Gun”. The presentation was given by Doug Smythe (digital production supervisor at ILM) and Marc Chu (animation supervisor at ILM). They talked about a number of production challenges, including the CG expo environments, various suit designs, rigging, and animation, and motion capture acquisition and processing. As in some of the other talks I’ve seen on Iron Man, as well as talks on Avatar and courses on the new physically based shading models, one area that particularly interested me was using HDR imagery projected onto texture cards to be used as area lighting. Before getting into that, I wanted to touch back on something that was cleared up a bit for me. In one of my other posts, I mentioned how Ben Snow (VFX Supervisor from ILM) talked about shooting his HDR bracket exposures 3 stops apart, where I am used to shooting closer to 1-1.5 stops apart. In today’s presentation, Doug gave more detail on how they shoot their HDR images, and it turns out their camera is one generation older than the one I shoot mine with. This only allows them to shoot 5 brackets per angle, where I am able to shoot 7. What this really means is that with their method, they are achieving a difference of 15 stops from their darkest to brightest image. For me to approximate that range, I would need to shoot at 2 stops apart, giving me a range of 14. Right now, I’m capturing between 10 and 11. One test I plan to do back at the studio is to shoot a series of four sets of HDR domes, using stops of 1, 1.5, 2, and 3. I’ll merge and stitch them all up, clean out the rig, and then ensure that they are all color balanced to match at their midpoints. Then I’ll light a test scene with the environment lighting only, using each of these four different HDR images, and wedge out the results to compare on both a gray and chrome ball over the backplate as well as over a neutral plane, to see what sort of results I get and whether the increased stops actually provide better lighting or not. If anyone is interested, I would be happy to post up those wedges as a follow-up; simply let me know. Anyway, back to the HDR texture card as area light. Up until now, the way we generally set up our lighting (this is using non-physically based shading models in Renderman, essentially appearance networks built out using the Delux shader which ships with Slim) is using two environment lights with the same HDR image mapped onto them, one emitting the diffuse component only as well as traced occlusion, the other emitting the specular component only (providing the ability to control each one’s intensity and other values independently).
A minimum of one spot light with shadows (either deep shadows or raytraced, depending on the scene and desired look) is placed at the highest intensity position of the environment ball, sometimes adding additional lighting and other times casting a shadow only. Additional spots may be added for more complex shadowing as necessary. From here, we’ll add additional environment lights mapped with HDR images of studio lights, such as big area light boxes in different shapes and patterns, to achieve additional rim lighting or specular hits. Next, we’ll add blocking geometry where we want to darken areas, and mapped cards for reflection purposes if desired. While this is a relatively standard way of performing image based lighting combined with standard lighting, what intrigues me is the talk of using HDR images as textures on cards and being able to use those as additional environment lighting. To do this currently, I would have to create a lat-long image of my texture on a black background and then map this to an environment light (ball) to achieve this. What I would like to be able to do (and what I’m assuming these fine folks are talking about) is simply shoot a non-fisheye bracketed exposure photo of an actual light on my stage, merge it into a single radiance file, and map that onto a card which I can place in my scene, and have Renderman calculate that as environment lighting. This is something I will definitely look up, and if anyone has some info on this, I wouldn’t mind a point in the right direction. While it’s not a huge amount of work to shoot stitchable images, merge and stitch them, clean out all the unwanted portions of the map, and then assign this to an environment light, it is extra work which could be avoided if I could go directly to a card with a single image. Anyway, definitely something I will be looking into.
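To make the bracket arithmetic from earlier in the day concrete, here is a tiny helper that tabulates the wedge combinations using the same rough reckoning as above (captured range roughly equals number of brackets times stop spacing). The combinations listed are simply the ones mentioned in this post, and the formula is an approximation for planning, not anything prescribed by ILM or Pixar.

```python
# Quick helper for the HDR wedge test described above, using the same
# rough reckoning as the text (captured range ~= brackets x stop spacing).
# The bracket counts and spacings are just the combinations I intend to
# wedge; the formula is a planning approximation only.

def captured_range(brackets, stops_apart):
    """Approximate darkest-to-brightest coverage of a bracket set, in stops."""
    return brackets * stops_apart

wedges = [
    (5, 3.0),   # the older-camera setup described in the talk
    (7, 1.0),   # my current spacings...
    (7, 1.5),
    (7, 2.0),   # ...and the spacing needed to approximate their range
    (7, 3.0),
]

for brackets, spacing in wedges:
    print(f"{brackets} brackets @ {spacing:.1f} stops apart "
          f"-> ~{captured_range(brackets, spacing):.1f} stops of range")
```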

The next talk I attended was “Expressive Rendering And Illustrations”. This covered a number of different non-photorealistic rendering styles, mostly created as research projects, all with very interesting and thought-provoking results. They ranged from Joe Schmid’s system for non-traditional motion depiction, where commonly used motion blur is done away with and instead substituted with strobing, weighted lagging, and colored speed line generation (both blurred and tubular), to Wilmot Li, Dong-Ming Yan, and Maneesh Agrawala’s presentation on “Illustrating How Mechanical Assemblies Work”, which provided for automatic solving of spatially configured systems such as gear chains and their accompanying causal chains of motion (complete with arrows, sequences, and automatically solved animation). The use of rendering techniques such as these, as well as the logic in the programs used to identify and solve the motion with minimal user input, was definitely impressive. Stephane Grabli’s presentation on “Programmable Rendering of Line Drawing From 3D Scenes” provided a fantastic style of hand-sketched tracing, with a far superior look to traditional cel shader implementations. The availability of his development from his website (http://freestyleintegration.wordpress.com) is also a fantastic bonus, and I’ll definitely play with this when I return to the studio. Finally, Alec Rivers’ presentation on “2.5D Cartoon Models”, which illustrated his clever solution for creating a fully tumbleable 2D drawing in 3D space without the result looking like a cel-shaded piece of geometry, was very creative and definitely headed in a cool direction. His research and a working application can also be downloaded and experimented with at his website (http://www.alecrivers.com).
 
I had wanted to attend the “Blowing $h!t Up” talk, but a prior commitment caused a timing conflict, so alas I was unable to go. I did end up with a bit of free time before my next scheduled course, so I took the opportunity to get out on the expo floor. For some reason (not sure if this was reality or just my skewed perception), the number of vendors seemed much smaller to me this year than in recent shows. The number of freebie items was almost non-existent as well, and while I really don’t need another box of mints, keychain, squeeze toy, or t-shirt, it would’ve been nice to have something to bring home for my young son. While there was the standard slew of software vendors and the requisite demonstration lectures, the rest of the floor was mostly dominated by 3D printers and the sample models they created, a number of stereo displays and televisions, and a handful of GPU-based rendering engines that frankly I had never heard of before today. There was also a selection of booths showcasing different 3D scanning solutions, and at least three or four motion capture technologies showcased as well. I definitely intend to spend some more in-depth time on the floor, but that will likely have to wait until Thursday due to my course schedule.

My final course of the day ended up being “Pipelines and Asset Management”, moderated by an old friend and colleague of mine, Erick Miller. While the talks were interesting, I mostly went to simply get a chance to catch up with Erick, who I hadn’t seen in over 3 years. I spent the better part of a decade of my career building and supervising the creation of large scale feature film and facility pipelines, and while each studio and set of artists have unique problems to tackle and creative ways of solving their issues, there’s frankly not much new in the methodology for this particular task. File referencing, nesting and swappable proxy representation of collections of publishable assets, level of detail generation, automated scene population and crowd manipulation, metadata storage and retrieval, and push-pull scene propagation are the standard fare. While I do still find the creation of pipeline solutions interesting and fun, I think over the last four years my interests have shifted more towards the creation of a final image and all the parts in between. For a long time, I was always part of a large scale facility, with highly departmentalized structures and specializations. Working for a much smaller studio like Brickyard, where my close-knit team of technical artists and myself are responsible for every aspect of the process from design through final renders, really gives you a different perspective on the process and a greater appreciation for the sum of all the parts. There is no such thing as “over-the-fence”, and at the end of the day, like all of us, I simply want to tell the story well with the most beautiful images possible, on time and under budget. With that said, one part of particular interest to me during the talk was Christian Eisenacher’s discussion of “Example Based Texture Synthesis”, which he has developed at Disney. While the result of his process is fantastic, relying on exemplar analysis to find the most appropriate missing pixels in an image and thereby create non-repeating seamless tiles for large-scale textures, what interested me most was their use of non-UV’d polygon meshes, using something called pTex, now an open source code base available at http://ptex.us/ for everyone. What this essentially provides is a meaningful flow-based texture coordinate system at every polyface or subdivision face, with adjustable density on a per-face basis. While the notion of not having to create UV maps for polygonal meshes is terribly appealing, from the demo videos I have seen online, it feels much like a camera view projection system which works well in either 3D paint scenarios (either using brush strokes or actual pixel maps) or in viewport projection situations. I am not sure how this would work if I wanted to take a flattened representation of the object into Photoshop to actually paint a number of aligned textures (such as signage on a complexly curved building surface). In one video, a snapshot of the current 3D view is brought into Photoshop, where an image is stuck onto the surface projection-style and then viewed back in the 3D system, but this is not the same thing. This workflow is much more similar to painting through geometry from an underlying picture in Z-Brush or Mudbox, which, although also quite useful, is definitely not the same thing. In either case, my lack of knowledge about the software simply means it is an area I would like to look into more and find out how this can potentially be leveraged in my own workflows, either now or in the future.
I may have to hit up some old colleagues from Disney who are intimately involved with this process, such as Chuck Tappan, to see what I am missing and if this has another aspect to it.
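As a way of wrapping my head around the per-face idea, it helps to think of each face owning its own small texel grid at its own resolution, addressed by that face's local (u, v) instead of a shared UV layout. The toy Python below is only that mental model; it is not the actual Ptex API or file format, and it ignores the adjacency data the real library uses to filter across face borders.

```python
# Toy mental model of per-face textures in the pTex spirit: each face owns
# its own small texel grid at its own resolution, addressed by a local
# (u, v) in [0, 1] instead of a shared UV layout. Conceptual sketch only;
# not the actual Ptex API or file format (see http://ptex.us/).

class PerFaceTexture:
    def __init__(self):
        self.faces = {}  # face_id -> (res_u, res_v, texels[row][col])

    def add_face(self, face_id, res_u, res_v, fill=(0.5, 0.5, 0.5)):
        """Give a face its own texel grid; density is chosen per face."""
        texels = [[fill for _ in range(res_u)] for _ in range(res_v)]
        self.faces[face_id] = (res_u, res_v, texels)

    def paint(self, face_id, u, v, color):
        """Set the nearest texel to the face-local (u, v) position."""
        res_u, res_v, texels = self.faces[face_id]
        x = min(int(u * res_u), res_u - 1)
        y = min(int(v * res_v), res_v - 1)
        texels[y][x] = color

    def lookup(self, face_id, u, v):
        """Nearest-texel lookup using the face's own local coordinates."""
        res_u, res_v, texels = self.faces[face_id]
        x = min(int(u * res_u), res_u - 1)
        y = min(int(v * res_v), res_v - 1)
        return texels[y][x]

tex = PerFaceTexture()
tex.add_face(face_id=0, res_u=4, res_v=4)      # a coarse face
tex.add_face(face_id=1, res_u=64, res_v=64)    # a detail-heavy face
tex.paint(1, 0.25, 0.75, (1.0, 0.0, 0.0))
print(tex.lookup(1, 0.25, 0.75))
```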

All in all, it was a pretty interesting day, as the others have been as well. For tomorrow, I’ll try to hit up “The Last Airbender – Harnessing the Elements: Air, Water and Fire” first thing. At 11:30, I’m attending a Press Luncheon where discussions about new processor technology, real-time raytracing, stereoscopic impact on production, and cloud computing for rendering purposes will take place. At 2:00, I thought I might check out the Molecular Graphics talk, simply because it’s always interesting to see cutting edge graphics applied to scientific research and visualization (which of course spills over into entertainment), followed by a talk on 3D Printing for Art and Visualization. Though I forgot to register last week, hopefully one of the kind folks at Pixar will get me and one of my co-workers a spot at the evening Renderman User Group meeting, which is how I intend to end the day. Check in tomorrow night for an overview of how my day pans out, and please feel free to comment on anything I have written if it sparks your interest in any fashion. And now, off to my four hours of sleep!
Continue reading "SIGGRAPH: Tuesday's News Array" »

Permalink | Comments(0)
 
July 27, 2010
  SIGGRAPH: All About 'Avatar'
Posted By Damijan Saccio
ABADAH (Avatar)

I recall watching the 2010 Golden Globe Awards a few months ago and seeing my childhood hero Arnold Schwarzenegger up on stage saying something like:

"Yar dis Abadah is dee greathest moofee. Ya ya."

A great truth then dawned on me -- when you have Arnold's attention, you're onto something big. And that is just what Avatar is. Big. With over 2000 visual effects shots, and a team of 1000 different artists working together, what you end up with is a production of epic proportions. Scenes are brimming with polygons.

I attended a talk on Avatar by the artists at Weta Digital (Dejan Momcilovic, Kevin Smith, Antoine Bouthors and Peter Hillman) at Siggraph 2010, and the amount of painstaking detail and hard work that went into each tree and cloud of Avatar's alien planet Pandora became blatantly obvious. It was hard work just keeping up with them. I tried my best.

Most of the problems they encountered with Avatar dealt with the film's massive scale. Setting up the lighting for a city street is one thing, but how about setting up the lighting for a whole planet? And having it be consistent across each shot in the film? Kevin Smith talked about some of the techniques they used to keep things simpler. One idea was to build a light "rig" for all of the trees and vegetation. This created shadow maps for each of the plants at varying angles. When a scene was eventually lit, the plants could then pull up the specific shadow map which most closely matched the scene light source. This saved the time of having to calculate shadows, since they were all already baked in and ready to be loaded up.
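In code, that lookup amounts to a nearest-neighbor search over the pre-baked light directions. The Python sketch below shows the idea; the baked angles, file naming, and data layout are invented for illustration and are not Weta's actual rig.

```python
# Sketch of picking the closest pre-baked shadow map for a plant asset,
# the idea described above. Baked angles, file naming, and data layout
# are invented for illustration; this is not Weta's actual rig.
import math

def direction_from_angles(azimuth_deg, elevation_deg):
    """Unit vector for a light direction given azimuth/elevation in degrees."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

# Pre-baked shadow maps, keyed by the light direction they were rendered from.
BAKED_MAPS = {
    (az, el): f"fern_shadow_az{az:03d}_el{el:02d}.tex"   # hypothetical naming
    for az in range(0, 360, 45)
    for el in (15, 45, 75)
}

def closest_shadow_map(light_azimuth, light_elevation):
    """Return the baked map whose bake direction best matches the scene light."""
    target = direction_from_angles(light_azimuth, light_elevation)
    def alignment(key):
        baked = direction_from_angles(*key)
        return sum(a * b for a, b in zip(target, baked))  # dot product
    best = max(BAKED_MAPS, key=alignment)
    return BAKED_MAPS[best]

print(closest_shadow_map(200.0, 38.0))  # picks the nearest baked direction
```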

Another topic Smith touched on was Image Based Lighting. This technique eliminated the need for traditional 3D point lights in exchange for something faster to set up and closer to real photography. The team used various environment projections to light scenes, usually in conjunction with each other. In one example, the hero character Jake was flying on one of the winged Banshee creatures. A basic jungle environment was dropped in to create the basic ambient light. An additional blue fill light environment (much like you would see in a photography studio) was laid on top to give more kick to Jake's blue skin. And finally, another environment of bright sunlight was added to create a rim light around the character. With so much real world information in each of the environment maps, the lighting created was almost immediately photorealistic.

Antoine Bouthors and Peter Hillman both talked about an interesting new technique called "Deep Compositing". Instead of rendering regular alpha mattes and beauty passes with geometry "holdouts" or "cutouts", a separate "Deep Alpha" pass was created which contains information about where each pixel is in 3D space for each frame. So even when you have insane camera moves and objects overlaying other objects (and subsequently vice versa), the "Deep Alpha" pass is able to position each pixel of your passes at its correct Z depth. This helped save render time because elements could be rendered independently of each other without having to worry about other elements overlapping them. It was all taken care of with the Deep Compositing tool.

Both presenters also mentioned a few things about volume lighting effects in the film. Since the film was going to be stereoscopic, none of the volumetric elements -- such as clouds, muzzle flashes, and God rays -- could be 2D mattes or footage. They had to work in 3D space in order for the stereo to be real, correct 3D. One tool that Weta developed was a cloud program that could model and customize clouds for each scene extremely quickly with metaballs and noise algorithms. Shadow maps, like the ones mentioned earlier for the plants, were similarly used for the clouds in each scene, so that lighting could quickly be changed depending on what the director wanted. "Deep Compositing" also became a useful tool for the volume renders, as they would not need to be re-rendered if there were changes to the characters overlapping the clouds or God rays. The "Deep Alpha" channel enabled the volume effects to exist independently. This helped the team immensely in simplifying their workflow and avoiding re-rendering elements.

There were a few more tidbits of information, but the main theme running throughout the talk was really about early preparation and attempting to keep things simple. The creation of Pandora was such a huge task in and of itself that the team at Weta really wanted to anticipate and avoid any messes or nightmares that would come up later down the pipeline, especially with such a huge army of artists working on the film. Any extra steps or complications that could be prevented were worked out and resulted in much more efficient use of their time and energy. They worked hard but they also worked smart.

I want to thank Weta Digital for giving such a thought provoking talk on Avatar.
Continue reading "SIGGRAPH: All About 'Avatar'" »

Permalink | Comments(0)
 
July 27, 2010
  SIGGRAPH: Monday, July 26
Posted By Damijan Saccio
The exhibition floor does not open until Tuesday, so a wise SIGGRAPH attendee spends Monday catching up on the Art Gallery, the Emerging Technologies, the Studio, and of course the Animation Festival. We certainly got our fill of the latest and greatest animations on show this year. There definitely seems to be a slightly depressing theme going through many of them. This is of course not a surprise after what the world has gone through this past year.

One part of the conference that I always particularly enjoy is the Emerging Technologies area.  This is a place where researchers, professors, and scientists are able to showcase their new technologies and inventions regardless of whether they have any practical applications yet.  I can't tell you how many times I've seen great ideas here that years later finally find practical applications and become wildly popular. Quite a number of years ago, I saw the first steps towards multi-touch displays, which now every iPhone user feels is second nature.

This year, a big trend seemed to be devices using persistence of vision. Many presenters used devices where persistence of vision would allow for a three-dimensional experience (see photos below).

One of these exhibitors that used persistence of vision to show cool graphics is Monkeylectric LLC. Their product has already 'emerged' and is actually part of the 'Studio' section. They sell two main varieties of LED lights that one can affix to one's bicycle wheel: one produces an interesting set of patterns, and another, more sophisticated version allows one to show any artwork of one's choosing. They even let viewers draw their own designs and were able to show them on one of their bicycle wheels within a minute or two. The simpler model for a bike is available for only $60. It's a great safety feature for riding at night and will also turn a few heads. MonkeyLectric was definitely one of the more interesting exhibits. You can view a movie here: BikeWheel Movie and a sample image here:



Stay tuned for tomorrow when I'll write about my favorite exhibitor from the Emerging Technologies area!
Continue reading "SIGGRAPH: Monday, July 26" »

Permalink | Comments(0)
 
July 27, 2010
  VideoInsight ’10 Seminar by Tektronix July 21, 2010 Hilton LA/Universal City Hotel, Universal City, CA
Posted By Barry Goch
Tektronix began their VideoInsight '10 series of seminars on the West Coast in Burbank, California, on July 20th, then headed down to the Universal Hilton the next day. I attended the Universal session and found myself amongst fellow editors, studio engineers, broadcasters, and more. Tektronix's Steve Holmes was the lead presenter, and he gave an excellent presentation. In fact, I was very impressed by the professionalism and execution of the entire Tektronix team.

The first part of the presentation was entitled Audio Loudness Monitoring and Measurement. Steve discussed Dialnorm and demonstrated how to use the scopes to measure loudness. He went on to point out that new legislative standards are working their way through Congress, namely HR 6209, the CALM Act:

http://www.opencongress.org/bill/110-h6209/show

and that content providers and broadcasters should be aware of this pending change to FCC policy.

He also referred to the ATSC standard:

ATSC RP A/85

http://www.atsc.org/cms/standards/a_85-2009.pdf

as a reference on the topic of loudness.

After a short break, Steve moved on to discussing Ancillary Data Monitoring. He mentioned the importance of the AFD (Active Format Description) flag in the ancillary data stream. He pointed out that if the AFD flag is in two different locations in the ancillary data, the picture could flicker between aspect ratios. Steve also showed where the SD and HD closed captioning data is stored in the ANC area, and how the Tektronix scope can overlay the captions onto the picture.

After a lunch sponsored by Tektronix, Steve led us through Color and Gamut Monitoring. He demonstrated their new Spearhead display, which shows color vector and saturation (color lightness, saturation, and value) in one display. He also suggested setting the error threshold to 1% to avoid single-pixel anomalies setting off error messages. The errors are logged with timecode read from the video stream. To help people like me see exactly where errors are in the frame, Steve suggested enabling Bright Ups on the picture display to show where the errors are in the image. He also mentioned that the scopes are controllable via a web interface, and that there is a USB port on the front of the scope for downloading screen grabs.
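
To make that 1% suggestion concrete, here is a rough sketch of the idea (my own, not Tektronix's algorithm): only raise an alarm when more than a set fraction of a frame's pixels fall outside the legal range, so a few stray pixels don't trip it.

```python
import numpy as np

def frame_gamut_alarm(rgb, low=0.0, high=1.0, area_threshold=0.01):
    """Flag a frame only when more than `area_threshold` of its pixels are out of range.

    A handful of stray pixels should not trip an alarm, but a real gamut problem should.
    """
    out_of_range = np.any((rgb < low) | (rgb > high), axis=-1)
    fraction = float(out_of_range.mean())
    return fraction > area_threshold, fraction

# A legal frame with five super-white pixels should not raise the alarm.
frame = np.random.uniform(0.05, 0.95, size=(1080, 1920, 3))
frame[0, :5] = 1.2
alarm, fraction = frame_gamut_alarm(frame)
print(alarm, f"{fraction:.6f}")
```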

And for those working in 3D, the WFM 8300 can split a 3D image into two streams to monitor each eye for color and luminance. You can also use the line select to check parallax.

The day was wrapped up with a session entitled Waveform Monitors as a Creative Tool in Color Correction with expert colorist Steve Hullfish. Steve is the author of two industry-standard books on color correction. I spoke with Steve on our lunch break and found him to be an excellent resource for the grading questions I had. Unfortunately, I had to leave for a meeting and missed his session. No worries though; the VideoInsight tour continues July 27th in New York.

http://www.tek.com/forms/response/video_insight_10/short/form/

It was a presentation of knowledge using Tektronix gear, not a heavy sales pitch. For those of you not in LA for SIGGRAPH, I highly recommend attending.
 
July 27, 2010
  SIGGRAPH: A Case Of The Mondays
Posted By David Blumenfeld
Today was a full day at the show, with a number of interesting talks and sessions. I arrived nice and early and snagged the first spot right in front again. If I can keep this up, I definitely won’t have to worry about getting lost in the parking lot. This also came in handy around midday when my iPhone needed a bit of recharging.

After arriving, I headed over to the “All About Avatar” talk. Moderated by Jim Hillin, this talk featured Stephen Rosenbaum (on-set VFX Supervisor), Kevin Smith (lighting artist and shader writer from Weta), Antoine Bouthors (effects artist from Weta), Matthew Welford (from MPC), and Peter Hillman (compositing at Weta). This talk was interesting and full of well-presented visual examples and on-set photography, including the use of the virtual camera for creating shot layout “templates” which were handed over to Weta for effects creation. While a number of technical aspects were discussed, including custom development of a stereo compositing pipeline in Shake, discussion of spherical harmonics in relation to pre-baked image-based environment lighting, and various interactive artist toolsets for creation of volumetric effects such as clouds, atmospherics, and fluid, perhaps the most interesting aspect of this talk for me was their development of what they have termed “deep compositing”.

While this was not a technical paper presentation on this technique (something I would be very interested to read about), from what I gather, this methodology stores all depth data for a given sampled pixel along with the color and alpha, so that all recorded depths for that pixel (essentially a mathematical z-depth) can be accessed automatically at compositing time. To elaborate with a simple example (my apologies to the developers if I’m getting this wrong), a 1x1x1 unit opaque blue cube rendered orthographically perpendicular to the camera view at a distance of 5 units away from the camera would, in addition to storing an RGBA value of 0,0,1,1, also contain information (visualized as a Cartesian graph for purposes of illustration, but in reality stored as a deep opacity voxel field, similar to Renderman’s deep shadow format) about its “existence” between a depth from the camera of 5 and 6 (and all points in between, since the cube is solid). During the compositing phase, any other objects read into the composite tree would read their own depth data and either place themselves in front of the cube if their depth was less than 5, behind the cube if their depth was greater than 6, or inside the cube if their depth was somewhere between 5 and 6. This method doesn’t suffer from the drawbacks of z-depth compositing, where edges are poorly sampled and aliased, and also doesn’t require separate z-depth channels to be written out. While z-depth solutions can be developed to provide much better handling of transparency without loss of depth integrity (something I developed along with Matt Pleck back at Imageworks during the production of Beowulf), this solution is considerably more elegant and easier to work with when brought into a compositing package that is written to handle this type of data. Furthermore, the alternate solution of generating holdout mattes is not necessary, which provides not only considerable time savings, but also a greater amount of flexibility in regards to asset workflow and render planning and management. This is definitely a cool development (and one which has also been developed at Animal Logic), and I personally plan on looking into something along these lines at Brickyard, though for us, it would require implementing a reader not only in Nuke, but in Flame as well.
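
To make the idea a bit more concrete, here is a toy sketch of the sample-list view of a deep pixel described above. It is only my illustration of the concept, not Weta's actual voxel-field representation or file format: each element contributes (depth, color, alpha) samples for a pixel, and the merge is simply a sort by depth followed by a front-to-back over, which is why no separate z-channels or holdout mattes are needed.

```python
def composite_deep_pixel(samples_a, samples_b):
    """Merge two lists of deep samples for one pixel and flatten them to a final color.

    Each sample is (depth, (r, g, b), alpha). The samples are interleaved by depth
    and accumulated with a front-to-back "over"; every element carries its own depth
    along with its color and alpha, so no z-channel or holdout matte is required.
    """
    merged = sorted(samples_a + samples_b, key=lambda s: s[0])
    out_rgb, out_a = [0.0, 0.0, 0.0], 0.0
    for _depth, rgb, alpha in merged:
        weight = (1.0 - out_a) * alpha  # contribution of this sample behind what is already there
        out_rgb = [c + weight * s for c, s in zip(out_rgb, rgb)]
        out_a += weight
    return out_rgb, out_a

# A 50%-transparent red element at depth 4 automatically lands in front of the
# opaque blue cube sample at depth 5, no matter which order the images arrive in.
blue_cube = [(5.0, (0.0, 0.0, 1.0), 1.0)]
red_element = [(4.0, (1.0, 0.0, 0.0), 0.5)]
print(composite_deep_pixel(blue_cube, red_element))  # ([0.5, 0.0, 0.5], 1.0)
```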

After this talk, I had planned on attending the “Illustrating Using Photoshop CS5 New Painting Tools” talk. Unfortunately, I became strangely lost inside the Studio section, and by the time I found the workshop, it was already underway and overflowing into other areas of the room, so I opted out of this and instead grabbed a bite to eat. I also took this opportunity to charge my phone within the confines of my strategically parked car. Once back, I decided to stop in on the “Do-It-Yourself Time-Lapse Motion Control Systems” presentation. This turned out to be a presentation by xRes Studio, where longtime friend and colleague Eric Hanson is currently a VFX Supervisor. He was one of the presenters, and it was nice to run into him and say hello. We last worked together back at Digital Domain on Stealth. What they were presenting at this talk was a methodology for helping photographic artists create low-cost, ad-hoc time-lapse motion control systems of varying complexity without the large cost of a turnkey solution. By utilizing an Arduino controller board, as well as some additional circuit boards they are developing, it is possible for hobbyists and professionals alike to construct these camera rigs (using inexpensive products and custom-written or open-source code) for, at the high end, hundreds of dollars, or even less depending on choice of materials. I found this to be an interesting alternative to higher-priced motion control systems, as this may be something we want to look into for various practical shoots we do using the Canon 5D in our motion graphics department.
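
As a back-of-the-envelope illustration of what such a controller has to compute (this is my own sketch, not xRes Studio's Arduino code), a shoot-move-shoot rig basically turns a desired clip length into a frame count, a per-frame slide distance, and a total shooting time. All of the parameter values below are illustrative.

```python
def shoot_move_schedule(total_travel_mm, clip_seconds, playback_fps, interval_seconds):
    """Turn a desired clip into a frame count, a per-frame slide distance, and a shoot time.

    A shoot-move-shoot rig fires the shutter, advances the camera a fixed increment,
    waits out the interval, and repeats.
    """
    frames = int(clip_seconds * playback_fps)
    move_per_frame_mm = total_travel_mm / max(frames - 1, 1)
    shoot_duration_hours = frames * interval_seconds / 3600.0
    return frames, move_per_frame_mm, shoot_duration_hours

# A 10-second clip at 24 fps across a 1 m slider, shooting every 15 seconds:
# 240 frames, roughly 4.2 mm of travel per frame, about an hour of shooting.
print(shoot_move_schedule(1000.0, 10.0, 24, 15.0))
```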

Immediately after this talk and in the same location, Dan Collins from Arizona State University gave an overview talk on “LIDAR Scanning For Visualization and Modeling”. While this was definitely an overview presentation of the evolution of this technology and some real-world examples of how it can be used, I found it interesting to actually see a selection of various capture devices and the point cloud data they produce. While I have worked on features which used LIDAR data for large-scale geometry acquisition, such as The Day After Tomorrow, I was not directly involved with the capture process and have from time to time wondered if using LIDAR might be beneficial to our studio for quick set acquisition during a shoot. I will definitely be looking more into this in the near future.
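
As a small, hypothetical example of what "quick set acquisition" might look like on the data side, the sketch below reads a plain ASCII x/y/z export (formats vary by scanner; this assumes the simplest case), thins it, and reports its extent as a sanity check before bringing it into a 3D package. The file name is made up.

```python
import numpy as np

def load_xyz_extent(path, keep_every=10):
    """Read an ASCII "x y z" point cloud, thin it, and report its bounding box.

    A quick extent check and a decimated cloud are often enough for rough set
    layout before committing to a full import.
    """
    points = np.loadtxt(path, usecols=(0, 1, 2))
    thinned = points[::keep_every]
    return thinned, points.min(axis=0), points.max(axis=0)

# Hypothetical usage, assuming a plain x-y-z-per-line export:
# thinned, low, high = load_xyz_extent("stage_scan.xyz")
# print("set extent:", high - low)
```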

From here, it was time to head over to “The Making Of Avatar” presentation, featuring VFX Supervisor Joe Letteri. This talk was more of a general overview of some of the various challenges faced and strategies used to tackle the overall film. In addition to touching more on the spherical harmonics and image based lighting, deep compositing, and stereoscopic issues and solutions, there was also some discussion of the muscle deformation system, FACS targeting and facial performance capture methodology, and overall scene reconstruction breakdown.

I decided to finish my day with the panel discussion of “CS 292: The Lost Lectures – Computer Graphics People and Pixels in the Past 30 Years”. This was an interview (more of a recollection of memories) of Ed Catmull (current President of Walt Disney Animation Studios and Pixar) conducted by Richard Chuang (co-founder of PDI). The talk focused on a class that Ed taught back in 1980 (in which Richard was a remote student via microwave broadcast). Richard managed to record these classes, and his video footage of Ed is the only surviving record of them. The talks were all at once entertaining, historically significant, and surprisingly relevant despite 30 years of changes and development in the field of computer graphics and animation. This was a rare glimpse into the thoughts and recollections of a handful of people (including Jim Blinn among others) whose contributions to the field and our industry are perhaps too great to measure and fully appreciate. Throughout the panel, I was reminded of my own foray into this realm, which I thought I might briefly share here. As a child, I was fairly artistic and enjoyed drawing, though I was always very technical and enjoyed building things as well. Though I had used a number of computers early on, including the Commodore 64, TI-99, and Apple II and IIe, the first computer my family purchased was an Apple IIc. In school, a close friend of mine and very talented artist would draw comic strips and flipbooks with me, and animation was always something I was interested in creating myself. I began my foray into computer graphics using Logo, creating interesting pictures (and very basic computer programming) as well as animating trucks moving across the screen, but my first real animation program was a tool called Fantavision, published by Broderbund. This tool allowed for a number of drawn shapes with different fill patterns (after upgrading to an RGB composite monitor, I learned these patterns were actually colors) as well as tweening and keyframing. My 5th grade science project had me construct a backyard meteorology station, and for my presentation, I created a simple animated tornado (which I still have on 5 ¼” floppy disk in a box in the garage somewhere) using this program and presented it to the class in the computer lab at my school. A number of changes in hardware, software, and life goals transpired between then and now, but sometimes I find myself thinking about how lucky I am to have spent a large portion of my life working in a field where I can create beautiful images, solve technical challenges, and go to work every day with such a fun and amazing task. In large part, I owe a round of thanks to these pioneers whose vision, hard work, and pursuit of the unknown have made this career choice possible. A former colleague and friend of mine at Disney named Tim Bergeron once told me something which I think about often, especially when I’m having a particularly rough time on a project or things just aren’t going my way. He would say, “Whenever you think things are really bad and they couldn’t get any worse, always remember, we get paid to make cartoons”. I don’t know if I could put it any better than this.

For tomorrow, I have a full day planned. In the morning, I’ll either be checking out more about Iron Man 2, where they discuss some of the shading techniques further as well as their keyframe/mocap integration, or a talk on Simulation In Production, where representatives from a few different studios will talk about fluids, hair simulation, fractured broken geometry with texture fidelity and continuity, and large count object simulation. I really wish I could go to both, but I guess I’ll decide in the morning. Next, I’ll be hearing about a paper on Expressive Rendering and Illustrations, where a slew of different motion graphics techniques will be discussed. At lunch, I’ll be attending the Autodesk 3DS Max 20th anniversary press lunch. In the early afternoon, there’s a nice talk called Blowing $h!t Up, which will cover a number of destruction effects and rigid body dynamics in Avatar, Transformers 2, and 2012. After that, I’ll have to choose between a Pipeline and Asset Management talk (a topic near and dear to my heart as I used to specialize in developing systems for this purpose), or a talk on the making of Tron given by some former colleagues. I’ll likely end up choosing the former even though again I’d like to see both. Finally, MPC is presenting their views on a Global Visual Effects Pipeline, dealing with the challenges faced by multi-site operations like the one I work at. I may try to catch this one before calling it a day. Well, looks like it’s pretty late again, so I’d better call it a night. Stop back in tomorrow to hear some more recaps and random thoughts!
 
July 26, 2010
  SIGGRAPH 2010 - LA
Posted By Damijan Saccio
Well, another year, another SIGGRAPH. This year, four of UVPHACTORY's crew are in Los Angeles for a few days to see what this year's convention has to offer.
 
July 26, 2010
  SIGGRAPH - Day 1: From Virtual Cookies to Image Statistics
Posted By Michele Sciolette
The first thing I did when I arrived at SIGGRAPH yesterday was have a look at the Emerging Technologies area as there’s always some amazing technology on display there. Unfortunately I only had time for a quick look, but I noticed a couple of noteworthy installations that are worth checking out.

The first is a 360-degree auto-stereoscopic display from Sony. It looks like a transparent pillar inside which 3D content is displayed. When you walk around the display you can see the subject from all points of view. It clearly hints at what could be available at home in the near future.

There’s also an amazing installation called MetaCookies which is a mixture of augmented reality, virtual reality and scent. You wear a virtual reality helmet and grab a real, plain cookie on which they burn a tracking marker. You choose what type of cookie you want to virtually recreate (chocolate or whatever), and as you move your real plain cookie around, you’ll see a virtual recreation of the cookie of your choice through the display. As you bite the cookie another device releases a scent to match the smell of your choice of cookie. Crazy!

I hope I get time to have a look at some of the other exhibits in the Emerging Technologies area, but I had to dash off to a presentation on Image Statistics. It was a good summary of research studies into natural images as collections of pixels that satisfy specific statistical patterns. It has some interesting applications in the area of automatic color correction.

Next was a presentation on physically-based shading and lighting, where speakers from ILM and Sony gave a really interesting talk on how moving towards physically-based lighting and shading has helped them produce more accurate results with faster turnaround.

Finally, I went to the Technical Papers Fast Forward which is always an entertaining way to get an overview of the latest research in all areas of computer graphics. The presentations didn’t disappoint and I’m looking forward to seeing some of the papers during the week.
 
July 26, 2010
  SIGGRAPH: sunday, Sunday, SUNDAY!
Posted By David Blumenfeld
I arrived today at the show at 1pm, and was lucky enough to get a parking spot right in front of the door to the West hall. While the show was by no means packed, it was definitely busier than I had expected for a Sunday. Of course being the first day, and with the expo hall still being prepped for Tuesday, there weren’t a ton of things to do. I found the media registration easily enough and grabbed my badge as well as a pocket guide. Wifi at the event was working fine, and having the iPhone app was helpful for checking my schedule.

After browsing the halls for a few minutes to see what was posted up, I made my way to today’s course, “Physically Based Shading Models In Film and Game Production.” I felt this course would have some practical application for my facility as we move forward with further development. I’ll spend the remainder of tonight’s blog discussing some things I personally got out of this course, as well as some ways in which these ideas can apply to those of you also involved in commercial visual effects production. While everything here also applies to both feature film production, and increasingly to high-end game production as well, I have never worked in the game field, so I won’t attempt to speak intelligently about that, and as for films, most studios working on projects of that size have multiple dedicated departments responsible for developing custom in-house solutions for this. With commercial production, development time and resources are usually quite limited (unless the company is part of a feature effects facility), and only slight customizations are usually possible combined with primarily off-the-shelf software. When custom development is practical, it is either small scale, or so long term that it is more broad scoped and facility oriented instead of production specific.

Before diving into it, I’ll give a bit of background as to our current shading pipeline and setup. Being a Maya/Renderman Studio facility, our general workflow is based around a raytraced global illumination shading model using the provided Renderman Delux shader, allowing us to add components as needed to obtain the desired look without relying on custom shaders. On set, we’ll capture HDR images using a Canon 1D Mark IV with an 8mm fisheye lens mounted to a roundabout with click stops set at 120 degrees (for three sets of photos to guarantee nice image overlap for stitching). For each angle, we’ll shoot a bracket of seven exposures in raw format, each usually around 1-1.5 stops apart (more on this in a bit based on the talk today). Of course, whenever possible, we shoot an 18 percent gray ball (that I need to replace after a recent breakage), and ideally a chrome ball (though this doesn’t always happen). Back at the shop, we’ll merge the brackets into single radiance files in Photoshop CS, and then stitch the three spherical images into a single lat-long map using Realviz Stitcher. This yields a roughly 8k image (slightly smaller due to the camera’s resolution) in floating point radiance format.

We recently acquired a Canon 5D at the facility, which will allow us to shoot larger resolution images, but I haven’t played with its exposure bracketing yet, and am not sure if there are any limitations with it. From here, we’ll take our image back into Photoshop and paint out the mount base as well as perform any cleanup on the image that seems necessary. Finally, we’ll save out a flopped version of the image, as the environment ball in Renderman Studio inside of Maya has a flopped coordinate system, so if we don’t want our image to be backwards, this is required. At this point, if we have images of our gray ball, we’ll set that up in one of our scenes with the unwarped plates (working with reverse gamma corrected images at gamma 0.565 since we render with a lookup of 1.77) to obtain a lighting match using the environment and a single spot for shadow casting. From here, we’ll typically create a second environment light so that we can easily separate the specular contribution of one from the diffuse contribution of the other, and then add additional studio lighting environment lights as necessary for rim lighting etc. While this process will usually give us pretty nice results in a short amount of time, there are a number of drawbacks to it. I will discuss some of these below as I recap the presentation.
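
For anyone following along at home, the 0.565 and 1.77 values above are just reciprocal gammas. Here is a minimal sketch of that round trip (my own illustration, assuming normalized 0-1 plate values and the usual convention where a gamma-G viewing lookup raises values to the power 1/G): reverse-gamma-correct the plates, do the lighting math in linear light, and view the result through the 1.77 lookup.

```python
import numpy as np

# The viewing lookup mentioned above; 1 / 1.77 is roughly 0.565.
VIEW_GAMMA = 1.77

def plate_to_linear(plate):
    """Remove the display gamma from a normalized 0-1 plate so lighting math happens in linear light."""
    return np.power(plate, VIEW_GAMMA)

def linear_to_view(render):
    """Apply the 1.77 viewing lookup to a linear render for display."""
    return np.power(render, 1.0 / VIEW_GAMMA)

# Round trip: plate values survive degamma followed by the viewing lookup unchanged.
plate = np.array([0.18, 0.5, 0.9])
print(linear_to_view(plate_to_linear(plate)))  # ~[0.18, 0.5, 0.9]
```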

Now for the course overview. The presenters (in order of presentation) were Naty Hoffman from Activision (games), Yoshiharu Gotanda from tri-Ace (games), Ben Snow from ILM and Weta (film), and Adam Martinez from Imageworks (film). Naty headed off the talk with an overview of how surface shading is calculated, as well as a brief recap of the BRDF (Bidirectional Reflectance Distribution Function) calculation. He spoke about the notion of an optically flat surface (where the perturbances in a given surface are smaller than the wavelength of the visible light interacting with it, such as at the atomic level), as well as microfaceting (where small surface imperfections play a part in the directional scattering of light, as well as in how a given ray is shadowed or masked by adjacent imperfections). He also discussed the notion of importance sampling, whereby different methods can be utilized to help speed the raytrace calculations by focusing on areas of the scene where the lighting makes a large contribution to the calculated pixel result while culling out less important areas. Finally, he touched on the importance of not only rendering in the correct gamma, but painting your textures with the same compensation, something which we currently do at our own facility.
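
As one small example of what a physically based model buys you over an ad hoc one, below is a sketch (my own, not taken from the course slides) of a Blinn-Phong specular term with the commonly cited (n + 8)/(8π) energy normalization: as the exponent rises, the highlight tightens and brightens instead of simply fading, so the lighter doesn't have to compensate by hand.

```python
import math

def normalized_blinn_phong_specular(n_dot_h, n_dot_l, exponent, spec_color=1.0):
    """Blinn-Phong specular term for a point light, with an energy normalization factor.

    The (n + 8) / (8 * pi) factor is a commonly cited approximation that keeps the
    highlight's total energy roughly constant as the exponent (smoothness) rises.
    """
    normalization = (exponent + 8.0) / (8.0 * math.pi)
    return spec_color * normalization * (max(n_dot_h, 0.0) ** exponent) * max(n_dot_l, 0.0)

# Peak intensity grows as the lobe narrows, with no manual compensation by the lighter.
for exponent in (16, 64, 256):
    print(exponent, round(normalized_blinn_phong_specular(1.0, 1.0, exponent), 2))
```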

Yoshiharu spoke about some specific situations at his studio and how switching from the ad hoc shading model they had previously used (whereby their lighters were basically trying to compensate for shading inconsistencies manually) to a custom physically based shading model they wrote in-house improved the overall look of textured elements and lighting response, specifically for use on some of today’s console gaming platforms such as the Playstation 3.

Next up was Ben, with a discussion of how ILM used to light and shade their scenes, and how they have been doing it since Iron Man (including Terminator Salvation and Iron Man 2 in his discussion) using a physically based shading model. There were a number of ideas and tips I took away from his talk, which I will share here. Sadly, he had to leave before the Q&A session at the end of the talk, as I would’ve liked to ask him a question or two. Some interesting aspects of building physically based shading models involve the idea of importance sampling and calculating with normalized values (especially the specular contribution in relation to surface roughness). Of particular interest was his continued use of a chrome ball on set. While we are in the habit of shooting an 18 percent gray ball (a known quantity object for matching in our lighting setups), we don’t shoot a chrome ball since we obtain our HDRs through fisheye lens multiple-exposure bracket photography. Of course, when we create our lighting setup, we build a chrome ball in the digital scene, but have nothing to match it to. Shooting the chrome ball would give us that match object, and though it seems quite obvious, this is something I intend to start doing from here on out for that reason alone. Another interesting tidbit he shared had to do with the brushed metal on the Iron Man suit. In their look development phase, they painted displacement maps of the brushed metal streaks, as we do when we want to recreate that look. Of course, this invariably produces sparkling artifacts, requiring us to turn up the sampling to a high level, as well as having to set this up at different scales based on each shot. In their scenario, they did away with the painted maps and instead created the UVs for each surface running in the direction of the brushing. They then set up their shader to be able to adjust the roughness (if I recall correctly) with separate values in U and V, thereby creating a brushed effect that would not produce sampling artifacts and would work correctly from any distance. The next interesting item was that, while they shoot their HDRs much the same as we shoot ours, they are capturing their images at 3 stops apart, while we tend to do between 1-1.5. Of course, using larger gaps in exposure will capture deeper shadows and brighter light sources, but I tend to find stitching problems in this range, so I definitely need to do some more experimentation along these lines to see if using 3 will give better results. He also demonstrated their use of a standardized environment for lookdev, using the same plate and a well-controlled environment light only for the basic lookdev of an object. It seems to me that this would only be applicable using a physically based shading model, since non-physically based ones would end up with setups which work fine in one environment but produce substandard results in another. I am hopeful that if we switch over to a physically based model, we can implement something along these lines as well, as this seems to be a much more time-efficient way to work.
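
The bracket-spacing question above is really just arithmetic; a quick sketch of the end-to-end range covered by each spacing makes the trade-off clear. The numbers below only count the exposure spread, not each frame's own latitude, and the stitching-overlap concern is a separate issue.

```python
def bracket_coverage_stops(num_exposures, stops_apart):
    """End-to-end dynamic range spanned by an exposure bracket, in stops."""
    return (num_exposures - 1) * stops_apart

# Seven frames 1.5 stops apart span 9 stops; the same seven frames 3 stops apart span 18,
# reaching deeper shadows and brighter sources at the cost of less overlap between frames.
print(bracket_coverage_stops(7, 1.5))  # 9.0
print(bracket_coverage_stops(7, 3.0))  # 18.0
```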

Finally, Adam presented the new advances Sony has made implementing a physically based raytracing model in their Arnold renderer, and how they have used that in conjunction with area lighting, texture-mapped geometry for lighting (Ben also demonstrated a few sets with HDR-mapped geometry behaving as set lighting), and ensuring that all lights have decay set on them.

Overall, the notion of a real-world physically based shading model is a fantastic development. Being able to work in a manner which allows not only for quicker, easier lookdev, but with a material behavior which will work correctly in multiple lighting situations is incredibly appealing. Of course, there is naturally a tradeoff in this sort of approach when it comes to render times. During the Q&A session, Adam was asked at one point about the render time for one of the images he discussed, which turned out to be roughly 14 hours. While render times like this may be acceptable (even though not entirely desirable) at a large facility, smaller facilities like mine simply cannot deliver shots with those kinds of times. In fact, for our size and the required turnaround, render times exceeding one hour are generally unacceptable except in rare cases. Moving forward, we will be looking into either new custom development in our rendering pipeline, or perhaps adding other renderers which take advantage of these features off the shelf, to see if we can obtain a better workflow with the results we are looking for. Taking this course and better understanding what this was all about definitely opened my eyes to some fantastic advancements in this realm, and I intend to seek out more knowledge and information in this regard during the rest of the show. I am also curious to see how the new GPU rendering (using some of the newer graphics cards with CUDA acceleration) might be able to help us along these lines as well.

In conclusion, it was a nice first day of the show, and hopefully you’ll also find this information useful for your own facility. If you have the opportunity to check some of this out during the rest of the conference, it’s definitely worth your while, and if not, I would highly recommend looking this up on the web to learn how this type of shading can improve the final look of your images.

For tomorrow, I’ll start the day off with a presentation about Avatar, followed by an illustration class using the new painting tools in Photoshop CS5 (we’re planning to upgrade from CS3 shortly). After lunch, I’ll attend a demo of a LIDAR scanning session, see a session on the Making Of Avatar, hear a panel discussion with Ed Catmull about the early days of CG, and possibly check out the Electronic Theater if I’m not too worn out by that time. Check back tomorrow for a summary of the day’s events. I’ll try to keep that post a bit less technical! And now to bed for a bit of recharge before 5am rolls around.
 
July 25, 2010
  SIGGRAPH: Before The Show
Posted By David Blumenfeld
Well, here we are again. Another year has passed, and it’s time for SIGGRAPH once more. I tend to only visit every other year when the show is in LA, so it’s been two years since my last foray to the convention. The extra gap between conferences usually means that the focus changes enough to make it substantially different than my previous experience, which keeps it fresh and new.

I’m assuming the focus on the floor this year will be mostly on stereoscopic, including glasses, projectors, encoders, displays, creation tools, and new research into more immersive presentation methods. It also appears that there will be a large amount of talk regarding accelerated graphics processing via the new type of GPUs being offered by the major graphics card vendors. While being able to program custom shaders and methods via their APIs has always been of interest, I’m most excited to hear about utilizing the chips and toolsets for greatly accelerated raytracing and rendering.

As before, there will likely be a fair amount of wireless, markerless motion capture devices being demonstrated, as well as small-scale 3D scanning devices and additive rapid prototyping machines (3D printing). While it seems there will be a number of vendors with different types of layer deposition printers, it would be nice to see a few examples of subtractive rapid prototyping (SRP) as well, utilizing machines that are basically advanced CNC mills to cut away from a block of material such as wood, metal, or plastic, as opposed to putting down layers of specialized materials to build an object up. Both types of machines have their place and advantages, so it would be good to see them equally represented, but who knows what the show will hold.

In the course/presentation arena, it looks like there will be a fair amount of talk regarding natural effects and simulation.While this will surely prove to be informative and interesting, much of this development (as before) will be largely a presentation of custom developed in-house solutions, either written from the ground up, or a large set of custom software written on top of a commercially available product. From a production point of view, it would be nice to see some additional talks of people cleverly using some off-the-shelf software without much in-house development to create these effects, but for now, this is just where the state of the industry is at.

After working for many years on large feature films, I was always used to working on this type of development, but recently, working at a smaller facility almost exclusively on short-term commercial projects, the resources and schedule simply don’t permit this in most cases. However, I’m still looking forward to hearing how some of these effects were made. There always seems to be a desire for volumetrics, cloth, fur, fluids, and the like in all the work that comes through the door, so ideas and techniques are surely good food for thought.

HDR lighting and global illumination techniques are a hot topic as well, and I’m excited to hear about some of the advancements that are being made, especially on the sampling and rendering side. It would be great to find some better techniques for speeding up occlusion rendering, sample reduction without visible quality loss, and optimized lighting setups which create photorealistic results in a number of different conditions. Similarly, there is some discussion about optimized texture creation and tile randomization, which is always helpful for background items and filling out large sets of digital assets, so this sounds interesting as well.

Overall, the show is the venue where I run into old friends and colleagues from the various studios I’ve worked at, and it’s always nice to see what people are doing these days, both professionally and personally. As our industry ages and matures, it’s always fun to see the once eager all-nighters with children and families of their own now. Sharing production horror stories becomes mixed with diaper tales and photos of the kids, making this high-tech, unique business seem a little more normal.

This year, I used the online scheduler to try and plan every day out, though I have a few courses which are currently overlapping. I’ll make the decision when I get there, or have a backup in case something gets cancelled. I would’ve liked to have been able to view this schedule in a chart format or time view, but that didn’t seem to be an option (or I just didn’t dig deep enough to figure this out).

I also downloaded the iPhone app for the convention. It seems to have some nice info in it, as well as maps and phone numbers, etc. One thing I would’ve liked to have been able to do is log into the SIGGRAPH site with my username/password and download my schedule to the app, but it doesn’t seem to have this functionality, so I guess I’ll have to go through the schedule again on there and add the items all separately. Again, maybe I’m missing something and there is a way to do this, but I haven’t figured that out so far. With a show such as this one, where hi-tech is the name of the game, and the race to stay current with the latest tools and resources is always on, it would be nice to have some of these features fully implemented to properly take advantage of them to the highest degree. But like the work we create, we try and learn to make it better each time we go.

I’ll be heading down to the convention center to get my badge in a few hours. Not wanting to lug a briefcase or backpack type thing around, I’m simply bringing a few pens, a notepad, and my iPhone. I’ll find out quickly enough if this proves to be a wise move or not. Check back in tonight to see how the first day went. For everyone else getting prepped to head on down, I’ll see you there!
 
July 23, 2010
  Adobe Photoshop CS5 - We need more Content-Awareness
Posted By Phil Price
Whenever I mention I'm working with Adobe CS5 to other designers and digital artists, they usually say something like "I just upgraded to CS4 and haven't learned all its features, why should I upgrade to CS5?" My answer lately is, "If you do a lot of image retouching and photo manipulation in Photoshop, then Content-Aware Fill technology is for you."

I'd seen the Content-Aware features previewed before Photoshop CS5 was released and thought it looked helpful and perhaps faster than cutting, pasting and cloning unwanted objects out of still images. I figured it was one of those tools that works well in ideal cases, but wasn't such a big improvement. Recently, however, I had a bunch of images with unwanted objects that had to be removed. Normally I would have cut and pasted small pieces of the image together to cover up the unwanted objects. Since I'd recently loaded Photoshop CS5, I thought I'd give the new Content-Aware Fill a whirl to really see it in action. I have to say it worked like a charm and I was finished retouching much more quickly than I'd planned.

The feature wasn't just helpful in removing unwanted objects; it also came in handy when a number of the pictures had to be rotated. Rotating the images left big chunks of empty space in the corners of the frame. I simply used the magic wand tool to select the blank parts of the picture, selected the Content-Aware option in the Fill dialog menu, hit OK, and it filled in what it calculated to be the missing part of the picture. It still took some retouching on many of the images to get right, but it did a pretty amazing job in most cases of getting me started. In some cases it got it right the first time with no additional retouching needed.

Now I should point out that most of my images were ideal for the Content-Aware tools. They work best when there are lots of organic or random shapes around the objects, like clouds, grass, and landscape. When there were more definable objects, like a building with evenly spaced windows, it got a little trickier.

The tools that use the Content-Aware algorithm are:
  • Fill - when you've used one of the selection tools or hand drawn a selection around an object.

  • Spot Healing Brush - to paint out by hand an unwanted object with the Content-Aware option chosen.
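
For a rough feel for what this kind of fill does, here is a small sketch using OpenCV's inpainting as a stand-in. This is not Adobe's algorithm and the results differ, but the workflow matches: build a mask of the pixels to replace (an unwanted object, or the empty corners left by a rotation) and let the filler synthesize them from the surrounding image.

```python
import cv2
import numpy as np

def fill_selection(image_bgr, selection_mask):
    """Fill a selected region from its surroundings, as a rough analogue of Content-Aware Fill.

    OpenCV's Telea inpainting is used here in place of Adobe's algorithm: mask the
    pixels to replace and synthesize them from their neighbors.
    """
    mask = (selection_mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(image_bgr, mask, 5, cv2.INPAINT_TELEA)

# Knock a square hole in a flat synthetic image and fill it back in.
image = np.full((200, 200, 3), 180, dtype=np.uint8)
hole = np.zeros((200, 200), dtype=np.uint8)
hole[80:120, 80:120] = 255
filled = fill_selection(image, hole)
print(filled.shape)
```
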
After using the new Content-Aware features on still frames, the obvious question concerns the possibility of using them in After Effects CS5 to remove unwanted objects from moving footage. Unfortunately there's no such option in the new After Effects (although there are several new and improved image tracking and retouching tools). I'm not sure exactly how Content-Aware could be implemented within After Effects, but it seems there must be a way. Sure, you can bring moving clips directly into Photoshop and paint them frame by frame, but that's time consuming and can create image chatter across multiple frames.

Conclusion: While new software releases always add new and refined features, it's usually hard to point to one feature that stands out as a tool you wouldn't want to live without. In the case of Photoshop CS5, I vote for Content-Aware Fill and hope to see it integrated into other Adobe programs in the future. You can check out a demo by clicking this link.
 
July 02, 2010
  ALT Systems CineSpace FCP + Smoke Awesome Demo
Posted By Barry Goch
Not long ago, on a beautiful evening, ALT Systems invited folks to join them at one of the coolest venues in Hollywood, CineSpace, to experience Smoke for Mac. The event was billed as FCP + Smoke = Awesome. Yes, it was me, your humble blogger, who had the honor of presenting the demo for ALT Systems. Representatives of Apple were also on hand for the presentation.

I was demonstrating the new drag-and-drop conform workflow between Final Cut Pro and Autodesk Smoke 2011. It's so great to have the advanced tools in Smoke at your fingertips to pull off amazing keys, fast roto work, and industry standard tracking. In my opinion, there's no faster way to get your work done.

I mentioned stars…I mean stars in the world of Smoke, of course. Alan Latteri of Instinctual TV came and demo'd his amazing Chopper Spark ("Spark" being Autodesk lingo for plug-in). Chopper does an amazing job at scene detection, which is perfect for chopping up a single clip for color correction, effects, and even restoration work. http://www.chopperspark.com/Chopper/Home.html
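
Scene detection itself is a well-worn idea. As a purely illustrative sketch (this is not how Chopper works internally), a naive shot-boundary detector just watches for frames whose histograms differ sharply from the previous frame; the threshold, bin count, and clip name below are arbitrary.

```python
import cv2
import numpy as np

def detect_cuts(video_path, threshold=0.4):
    """Naive shot-boundary detection: flag frames whose histogram differs sharply from the last.

    A production tool will be far more robust than a single-difference test.
    """
    capture = cv2.VideoCapture(video_path)
    cuts, previous, index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = (hist / hist.sum()).flatten()
        if previous is not None and np.abs(hist - previous).sum() / 2.0 > threshold:
            cuts.append(index)
        previous, index = hist, index + 1
    capture.release()
    return cuts

# cuts = detect_cuts("conformed_master.mov")  # hypothetical clip name
```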

After the demo, there was a crowd around the presenters with follow up questions and good-humored industry banter. ALT + Demos = Good times!