Benjamin Button VFX

by 전우열 posted Jan 10, 2009
Fascinating... it looks like a new motion-tracking product has been developed. Capturing facial expressions with nothing on the face at all... mind-blowing.

http://features.cgsociety.org/story_custom.php?story_id=4848






Warner Brothers and Paramount commissioned DD to do a one-shot test. In five weeks DD created Benjamin, worked out the tracking issues, and put a CG head on a body. Everyone was happy, and the film was greenlit. But now DD had to make a character speak, hold up in close-up, and handle several hundred shots of a character who would make the audience laugh and cry and carry the first 52 minutes of the film. Though the test had worked, it was not sufficient for the entire project: that method was limited to how Pitt looks today, while DD had to model and animate Pitt across a series of decades.





It must be said that several other VFX houses did a lot of other work for the Benjamin Button feature. Extensive matte painting work was tackled by the team at Matte World Digital; Greg Strause and his crew at Hydraulx did the Russian snow and the matte paintings of scenes in Paris, New York and Russia, as well as head replacements for Cate Blanchett's dance performances and the CG elements for the baby Button. Asylum VFX handled the many scenes on the tugboat, and Lola VFX also worked on the 'youthening' effects, with supervision by Edson Williams.
 
PIECING IT TOGETHER
Enter Rick Baker. Starting with life casts of Pitt, Baker sculpted three different lifelike maquettes: Benjamin at 60, 70, and 80 years of age. Baker also did life casts from the shoulders up of the various actors playing Benjamin’s body, then grafted on the various heads, resulting in three different busts of Pitt at different ages. Those sculptures were then scanned into a mesh to be retargeted with Pitt’s acting.

DD’s Character Supervisor Steve Preeg was aware of the work of psychologist Dr. Paul Ekman, who researched human emotion and reaction to stimuli. Ekman believed that the face had a series of basic poses he could catalogue, thus creating FACS (Facial Action Coding System), a library of everything the human face can do. Ekman contended our facial muscles are more or less hard-wired with root expressions that are universal. It’s a theory animators have used for years, usually by referencing the FACS shapes when keyframing emotion.
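For illustration only (this is not DD's pipeline, and the action-unit names and weights are hypothetical), here is a minimal sketch of how a FACS-style pose library is typically used for keyframed facial animation: each action unit is stored as a blendshape delta from a neutral mesh, and an expression is a weighted blend of those deltas.

```python
import numpy as np

# Toy stand-in for a scanned head: 4 vertices with xyz positions.
neutral = np.zeros((4, 3))

# Hypothetical FACS-style library: each action unit (AU) is a per-vertex
# delta from the neutral mesh (target shape minus neutral).
facs_library = {
    "AU12_lip_corner_puller": np.array([[0.0, 0.10, 0.0]] * 4),  # smile-like
    "AU4_brow_lowerer":       np.array([[0.0, -0.05, 0.0]] * 4),
}

# Keyframed AU weights: the animator dials expressions per frame.
keyframes = {
    0:  {"AU12_lip_corner_puller": 0.0, "AU4_brow_lowerer": 0.0},
    24: {"AU12_lip_corner_puller": 0.8, "AU4_brow_lowerer": 0.2},
}

def evaluate(frame_weights):
    """Blend the neutral mesh with weighted FACS deltas for one frame."""
    mesh = neutral.copy()
    for au, weight in frame_weights.items():
        mesh += weight * facs_library[au]
    return mesh

print(evaluate(keyframes[24]))
```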

By utilizing this theory along with Mova Contour, DD volumetrically captured Pitt’s face doing roughly 120 expressions and applied the information to various 3D models, resulting in literally thousands of models of Pitt. At the same time, they scanned the three maquettes of Benjamin’s head. This type of volumetric capture, where a head is scanned in real time in three dimensions at 24 fps, results in millions of polygons of surface capture without the dead zones that occur between markers. The result was that the old Benjamin heads lined up perfectly with the Mova Contour scans of Pitt’s facial poses. By retargeting the expressions onto the scanned Benjamin heads, DD effectively had the three old Benjamin CG characters performing the FACS with Brad Pitt’s full range of emotions and expressions.
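As a rough sketch of the retargeting idea described above (assumed math for illustration, not DD's actual solver): once the source and target scans are in vertex correspondence, an expression delta captured on Pitt's neutral head can be added onto each aged Benjamin head.

```python
import numpy as np

def retarget_expression(source_neutral, source_expression, target_neutral):
    """Transfer an expression from a source head to a target head.

    Assumes both meshes share vertex correspondence (same topology, aligned),
    as with the lined-up Mova Contour scans and Benjamin maquette scans.
    The per-vertex delta (expression minus neutral) is simply added to the
    target's neutral shape; a production solver would be far more involved.
    """
    delta = source_expression - source_neutral
    return target_neutral + delta

# Toy example: 3-vertex "heads" standing in for the million-polygon scans.
pitt_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pitt_smile   = pitt_neutral + np.array([[0.0, 0.1, 0.0]] * 3)
benjamin_80  = pitt_neutral * 1.1  # hypothetical aged head, same topology

print(retarget_expression(pitt_neutral, pitt_smile, benjamin_80))
```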