It's been a long journey, filled with trials, tribulations, and rendering problems, but I can finally see the end of the tunnel. It's the night before thesis is due, and I'm spending this last night finishing up my paper and gathering all the documents I need for delivery.
The last couple of weeks have been a whirlwind of animation, lighting, dynamics, and Realflow. Several of the effects I wanted to do did not make it into this cut of the film: the disintegrating ash effect was a particularly difficult sacrifice to make.
Interestingly, the shot I am happiest with is the fire breathing shot. I love the color of the flame and the subtle smoke and burning effect on the book itself. The effect is a great study in compositing, dynamics, and light. Everything was done in a separate component and composited together. This shot took about as much time as I expected, and looks almost exactly the way I wanted it to look. As I have come to learn, the key to great looking effects is compositing. In a sense, 3D visual effects is one of the most difficult areas to master because you have to be technically savvy in both 2D and 3D environments.
The underwater effect was the most annoying effect in that I spent the most time on it out of all the effects, and I'm still not quite happy with it. I still want to do a little more testing to create more tendril-like fluid emission instead of the gaseous look it has now.
I spent a lot of time these last few weeks with Realflow, and I'm starting to move from novice to an intermediate knowledge of the program. I feel very comfortable generating splashes, pours, and fills -- simple, everyday liquid effects. While the simulations for thesis aren't perfect, I'm quite happy with the results I got out of the program.
Lighting was a consistent problem due mainly to the fact that this is a night setting. Getting enough light on Billy and the dragon to see their facial expressions was a challenge for nearly every shot. Too much light, and they looked glowy and pale. Too little light, and their faces would be in complete shadow.
The compositing and editing for this piece was a complete nightmare. I have Demetri Patsiaris to thank for editing my movie together. I mainly had to keep track of all the different renders and all the different shots, which was crazy enough in itself. 37 doesn't seem like that large a number until you're trying to manage several layers for 37 shots. All in all, I think I rendered out about 100 passes. All the shots had at least a beauty pass and an ambient occlusion pass, and many had one, two, or more effects passes.
Well, I should wrap this up. This is the last journal entry that will make it into my paper. If you have read this journal up to the very end, thank you for taking this journey with me! And send me a comment!
Over and out
jk
Wednesday, December 12, 2007
Sunday, December 2, 2007
Final Countdown
We're due in 10, 9, 8 ... It's the final 10 days before thesis is due, and of course I still have a mountain of work to do. I'm jealous of my fellow thesis mates who are starting to coast knowing that they will finish on time.
For me, finishing is contingent on three things: particles, realflow, and compositing. Besides the drudgery work of lighting, animating, rendering, and editing/sound, I still have no idea how long the particle, realflow, and compositing work is going to take. This is especially bad considering the thesis is due in less than two weeks.
At this point, I am still focusing on getting all the shots rendered out before trying to perfect any one shot. Right now I have most of the beginning shots done. Tomorrow, Monday, all my animators should theoretically be finished, and it will be a big rendering day.
Conrad finished his animation on the manual page turns, and it looks really nice. I had to tweak some frames to make sure Billy's hand traveled with the paper, but all in all I'm happy with those shots. Cidalia's textures look awesome, and the nCloth turn is very fluid and fun to watch.
Still working on the pour, although I've got some good bubbles and animated the meniscus pretty well such that it looks like water in a bottle. That will be my first realflow test, and if that is successful I'll use it for the potion splash, and then the dragon splash. Blobby particles in Maya is my contingency plan, and I've already experimented with using splash blobbies (with some success). For that shot I also tried another way of doing ripples on water, still mapping to the wave height offset of the ocean shader, but this time instead of animating the ramp color positions I animated the 2D placement node's UV repeats and frame translates. The result is a much more controllable ripple. So I've gone from using a rendered moving ramp as an animated texture, to directly using an animatable ramp, to using a ramp that is animated by its 2D placement node. That's some great problem solving if I do say so myself.
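For the record, the placement-node trick boils down to very simple math. Here's a plain-Python sketch of it, outside Maya, using an illustrative one-dimensional ramp; the function names are mine, not Maya's:

```python
# A plain-Python sketch of the ripple technique: a ramp sampled through a
# 2D placement transform. Animating the translate slides the ripple pattern
# across the surface without touching the ramp itself.

def place2d(u, repeat=1.0, translate=0.0):
    """Mimic a place2dTexture node: scale by the repeat, offset by the frame translate."""
    return (u * repeat + translate) % 1.0

def ramp(u):
    """A simple V-shaped ramp: 1.0 at u=0.5, falling to 0.0 at the edges."""
    return 1.0 - abs(u - 0.5) * 2.0

def wave_height(u, frame, speed=0.1):
    # Keying the placement's translate per frame moves the crest across the surface.
    return ramp(place2d(u, repeat=1.0, translate=frame * speed))

# At frame 0 the crest sits at u=0.5; by frame 2 (speed 0.1) the placement
# offset has shifted the sample point, so the crest now lands at u=0.3.
```

The win over the earlier approaches is that nothing about the ramp itself is keyed: one node's translate drives the whole ripple, which is what makes it so controllable.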
The spices shot is done, rendered out. I lined up the ripples as best I could with the spice pieces, and it looks fine. Gavin said he didn't like the trees in the background for that shot, but it was really difficult to get nice-looking ripples without the shot composition being really oblique.
The drop out of the bottle looks good as a blendshape, and interestingly so does the resulting drop splash. I considered Realflow for the drop splash, but in the footage I took with my family over Thanksgiving, I found it only lasts for about 4 frames. I thought, what the hell, I can model shapes for 4 frames. It went really fast, and the results look really good. I had to put a ramped transparency on the blendshape to get it to fade properly into the water and make it look like one solid liquid entity instead of the ocean surface and a modeled splash.
Next on the list is all the dragon, non-effects shots, for tomorrow, when my animators deliver (hopefully). After that, all that's left is the ash and smoke effect, the fire and burn effect, the potion splash, the dragon splash, and the underwater shot. That's a lot of work for 10 days! I'm really liking the way the rendered shots look. Now if only I could get the major effects to look right, I'll be in good shape.
Wednesday, November 28, 2007
PANIC MODE! WOO HOO!
I'm actually in quite a good mood despite the fact that I have about 2 weeks to finish thesis when I really should have another year. What can I say, I attempted a really really ambitious project. Renders are coming along fine, I've rendered out 3 shots, and 2 more are going overnight. I'm aiming for getting 4 shots finalized per day (the 3D work), which will leave me about 4 days for compositing. Not enough, of course, but it should be adequate to get something passable together.
I'm going through a marathon of effects: the splash, the fire, and the underwater shot all have new renders. The mrBatchAnim.mel script is saving my ass all over the place, and I've gotten pretty good at dealing with memory heavy renders. One of the best tricks, besides using the command line whenever possible, is to end the explorer.exe process while it's rendering. I find for some reason that will prevent most of the following:
-fatal error, cannot allocate 23493409583094 bytes of memory
-mental ray memory allocation error, memory exception thrown
-memory usage has exceeded allocated amount, memory exception thrown
-the C++ runtime library has caused Maya to terminate in an unconventional way
-memory allocation error, mental ray may have become unstable, please restart Maya.
I've been working at home a lot, now. Gone are the days where I can come home and relax. Now I finish my day's work and come home to start my night's work. I render overnight at the labs, during my day at the labs, at night at home while I sleep, and during the morning just before I leave the house. That's actually more than enough rendering power.
The really really big problem is that ALL THE FRICKIN MONITORS ARE DIFFERENT. My Macbook at home is the absolute worst--any render that looks perfect there looks extremely dark on any other monitor. My old Fujitsu lifebook has no graphics card as far as I know, so the color depth is about 20 colors. Or that's what it seems like. Take a nice-looking render from my Mac, view it on the lifebook, and you can count the colors on the screen. Three shades of brown! Two shades of green! One shade of blue! And black!
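Part of the Macbook problem is probably gamma: Macs of this era use a system gamma around 1.8, while PC monitors assume 2.2, so midtones tuned on the Mac come out darker everywhere else. A rough sketch of the effect (the power-curve model is a simplification):

```python
# Why the same render reads differently across displays: a pixel value tuned
# on a gamma-1.8 Mac display sits lower on a gamma-2.2 PC curve.
# Values are normalized 0..1; this ignores calibration and color profiles.

def displayed(value, gamma):
    """Approximate perceived brightness of a stored value on a display with the given gamma."""
    return value ** gamma

mid = 0.5
on_mac = displayed(mid, 1.8)   # roughly 0.29
on_pc  = displayed(mid, 2.2)   # roughly 0.22 -- the same pixel looks darker
```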
I am seriously oscillating back and forth between feeling like I have more than enough time to finish and feeling like there's no hope for passing. These mood swings correlate very strongly with whether an effects shot is looking decent or looking like crap.
Some problems I solved this week so far: translucency for the dragon, putting the moon in the sky without a weird alpha fringe, getting a workable dragon splash effect, getting a first composite for the fire, and doing a first attempt at compositing in the ambient occlusion and depth passes. The depth pass is most troublesome because the depth fog tends to make everything flat. My plan right now is to make 4 AFX comps, one for beauty, one for depth, one for ambient occlusion, and one for additional special effects. I'll take all those comps and layer them on top of each other. Just to simplify things, hopefully.
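Per pixel, stacking those comps amounts to a couple of blend operations. A minimal sketch of the layering math (operator choices here are my own assumptions about how the AFX comps combine, not After Effects' exact internals):

```python
# How the passes layer up in the comp, per pixel and per channel (0..1):
# multiply the ambient occlusion pass over the beauty, then use the depth
# pass to mix toward a fog color.

def comp_pixel(beauty, amb_occ, depth, fog=0.1, fog_amount=0.5):
    shaded = beauty * amb_occ                  # AO darkens crevices (multiply)
    mix = depth * fog_amount                   # deeper pixels pick up more fog
    return shaded * (1.0 - mix) + fog * mix    # linear mix toward the fog color

# A near pixel keeps its shading; a far pixel gets pulled toward the flat fog
# color -- which is exactly why the depth pass tends to flatten everything.
near = comp_pixel(0.8, 0.9, depth=0.0)   # 0.72, untouched by fog
far  = comp_pixel(0.8, 0.9, depth=1.0)   # 0.41, halfway to the fog color
```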
I like talking to Adam Burr about my project. He always seems so optimistic about a project without sugar-coating things. He tells me what's working and what looks bad, and yet I don't come away feeling like I just got beat up. Being in that dynamics class is kind of intimidating because many of the students in the class right after me are absolutely phenomenal artistically and technically. I think CADA has peaked with that class.
Keeping track of renders for 34 shots is not fun. My system of swapping files into folders called currentBeauty04, currentAmbOcc06, currentDepth23, etc., is working pretty nicely. It's still a lot of image sequences, however, and sometimes I can't find an image sequence that I'm sure I rendered sometime somewhere.
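The lost-sequence problem could probably be scripted away. Here's a hypothetical little indexer that scans the render root and maps every (pass, shot) pair to the frames it actually finds; the folder and file naming follows my currentBeauty04-style convention and is an assumption:

```python
# Hypothetical helper: index image sequences by render pass and shot number,
# assuming folders named like currentBeauty04 and frames named like shot.0042.iff.
import os
import re
from collections import defaultdict

FOLDER = re.compile(r"^current(?P<pass>[A-Za-z]+)(?P<shot>\d+)$")

def index_sequences(root):
    """Map (pass, shot) -> sorted list of frame numbers found under root."""
    found = defaultdict(list)
    for folder in os.listdir(root):
        m = FOLDER.match(folder)
        if not m:
            continue  # skip anything that doesn't follow the naming convention
        key = (m.group("pass"), int(m.group("shot")))
        for name in os.listdir(os.path.join(root, folder)):
            frame = re.search(r"\.(\d+)\.\w+$", name)  # e.g. shot.0042.iff
            if frame:
                found[key].append(int(frame.group(1)))
    return {k: sorted(v) for k, v in found.items()}
```

Printing the index (and any gaps in the frame ranges) would answer "did I actually render that somewhere?" without digging through folders by hand.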
Today was decent, in terms of getting work done. I really have to be a machine from now to the end of the semester. Tomorrow, I'll get four more shots rendered, work on the ripple effects, start working with Realflow one way or another, and do a second pass of the fire effect. If Conrad has my page-turning animation, that will work out well because those can be the four shots I render.
Okay, it's 1 am: time to do an effect, then choose a shot to render.
Thursday, November 22, 2007
Final stretch
This is the final stretch! 20 days left to go, and I still feel only about 60% of the project is completed. To that end, I've outlined the tasks I need to complete every day in order to get this thing finished up to a passing level.
But first, a recap on some of the difficulties and problems solved over the past few weeks.
The nCloth book was actually quite a pain to deal with. In retrospect, I'm not sure if it was worth all the problems it has caused. The main difficulty with the simulation is that I am referencing the manual into animation files, and attempting to use the same simulation in multiple shots. This requires, first of all, that the cloth be cached using an absolute path, not a local one, so that it can change as the project changes. This was not too bad to deal with, as Maya creates an nCache node to store this information.
Setting up the textures for the book pages was relatively easy using quick select sets and assigning textures to specific vertices. The rotation had to be mirrored (on the 2D placement node, UV scale set to -1) for some textures, but all in all not too much of a hassle. At this point, the files can be swapped out depending on what is needed for the shot. I'm only waiting for Billy's hand animation; Conrad Turner is helping me with this shot, as well as the first shot with the manual.
More annoying was the actual simulation itself. I still have not been able to solve the problem of the book jiggling when it's supposed to be still. My workaround, at this point, is to duplicate the object at a specific frame and then do a visibility switch when I don't want the book to jiggle around. For some reason, keying the nucleus from enabled to disabled and back again screws things up; I think it might have something to do with the nucleus' interpretation of time.
The transform constraints work well to turn individual pages, but they were quite a pain when I realized I had created them incorrectly and wanted to redo the simulation. Unfortunately, I had already smoothed the object, and this smooth was done downstream in relation to the actual cloth mesh.
The workaround for this problem made me feel for the first time that I actually have a good handle on how Maya works. I knew that all the history for the nCloth simulation was still there, so I graphed the Hypergraph connections. nCloth has a very easy way of operating on meshes: it simply accepts an inMesh and passes an outMesh. Thus, all I had to do was bypass the smooth node and pipe the result of the nCloth directly into the shape node of the book geometry. This worked perfectly. I wrote a quick script so I could toggle between smooth-connected and smooth-disconnected versions of the nCloth. I was really glad that I didn't have to go back and set up the whole nCloth system from scratch. Yay for nCloth: it's really robust and quite easy to use and understand. Here are the Hypergraph images and the connection I had to change:
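Modeled outside Maya as a toy connection table, the rewiring is just repointing one edge of the graph. The node names below are from my scene and purely illustrative:

```python
# Toy model of the Hypergraph fix: bypass a node by rerouting whatever it
# outputs so that its downstream consumers read from its upstream source instead.

def bypass(connections, node):
    """Return the connection list with `node` routed around (assumes one upstream feed)."""
    upstream = {dst: src for src, dst in connections if dst.startswith(node + ".")}
    rewired = []
    for src, dst in connections:
        if dst.startswith(node + "."):
            continue                                 # drop the feed into the bypassed node
        if src.startswith(node + "."):
            src = next(iter(upstream.values()))      # pull from upstream instead
        rewired.append((src, dst))
    return rewired

graph = [
    ("nClothShape1.outMesh", "polySmoothFace1.inputPolymesh"),
    ("polySmoothFace1.output", "bookShape.inMesh"),
]
# Bypassing the smooth leaves the cloth output feeding the shape directly:
# [("nClothShape1.outMesh", "bookShape.inMesh")]
```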
I did my title effect using Maya fluids. I used the fluid paint tool to paint density into my container from a file, the "mighty dragon" text that I put in my original 3D animatic. I did two simulations, and reversed the first one to form the title out of vapor clouds. The second was kind of serendipitous, as I was experimenting with pushing some of the fluid attributes beyond their normal ranges (setting negative densities, buoyancies, etc.). The result was a really nice looking explosion where the title becomes rapidly dispersing licks of flame.
The past few weeks I have been trying to organize the animation of my shots. I have Vanessa Weller, Conrad Turner, Seon Crawford, and possibly Danielle Hopzafel to help with my animation, and coordinating has been a difficulty unto itself. I'm wondering if it would have been easier to just do the animation all by myself, since none of them are doing more than 4 shots per person. Some of the animation is not quite what I wanted either, not because it wasn't good (they're all great animators), but since they don't know my piece as well as I do, some of the emotions and acting choices didn't seem quite right to me.
The drop effect I accomplished in half a day, modeled nearly completely after the bandaid drop effect I did for Gavin's technical directing class. The blendshapes worked nearly flawlessly and it looks relatively good for half a day's work. I will probably keep this effect as is, since at this point I don't have time to perfect everything.
The ripple effect for the spices is taking longer than it should. Well, no, it's taking about the amount of time I expected it to, but it's frustrating because the effect is just for one shot that doesn't really help the story that much. It's quite a successful effect, though; I've had a bunch of people comment that it works. It is the shot during the montage in which Billy drops a handful of thyme into the well. The leaves used an nCloth simulation modeled off of Duncan Brinsmead's confetti tutorial. In order to get the cloth to collide with the well surface, I connected the well's surface to the ground plane of the nucleus for the spices. This looked really flat, so I needed a way to slightly tilt the water plane to create the illusion that the spices were falling on water and not a solid ground plane. The ground plane direction is controlled with a normal vector, so I connected the normal vector's X and Z to the X and Z transform of a locator. I created a plane and aim-constrained the plane to the locator in order to quickly visualize the rotation motion of the ground plane, and now it works wonderfully. An animator could even move the locator as needed to quickly and easily jiggle the fallen spices around, depending on the amount of turbulence in the water.
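The tilt rig reduces to a few lines of vector math. A sketch of it in plain Python (the default (0, 1, 0) normal is standard for the nucleus ground plane; the rest is my wiring):

```python
# The ground-plane tilt in plain numbers: the nucleus plane normal starts at
# (0, 1, 0), and wiring the locator's X/Z translates into the normal's X/Z
# components tilts the plane as the locator moves.
import math

def plane_normal(loc_x, loc_z):
    """Unit normal of the ground plane, driven by a locator's X/Z translate."""
    x, y, z = loc_x, 1.0, loc_z
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

flat   = plane_normal(0.0, 0.0)   # (0, 1, 0): spices rest on a level plane
tilted = plane_normal(0.2, 0.0)   # leaning toward +X, so the spices drift
```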
To get the ripples, I actually rendered out the spices from the top view and created a blue transparency to represent the point at which the spices fell into the water. I took that top view render into after effects and created a ripple for each spice that fell in the water. At first, I thought I could get away with placing random ripples, but a render showed that it was obvious that the ripples were in the wrong place. The ripple render was then mapped to the wave height offset of the ocean shader, much like I did for the ripples from the single drop.
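What the AE ripple render approximates is a sum of expanding rings, one per spice. A plain-math sketch of the grayscale map that ends up driving the wave height offset (the constants here are illustrative, not my actual comp values):

```python
# Each spice that hits the water contributes an expanding ring; the summed
# rings form the grayscale map that drives the ocean shader's height offset.
import math

def ripple_height(px, py, drops, frame, speed=0.05, width=0.02):
    """Height at sample (px, py) from drops given as (x, y, hit_frame); units normalized 0..1."""
    total = 0.0
    for (dx, dy, hit_frame) in drops:
        age = frame - hit_frame
        if age < 0:
            continue                      # this spice hasn't landed yet
        radius = age * speed              # the ring expands over time
        dist = math.hypot(px - dx, py - dy)
        # brightest where the sample sits right on the ring
        total += max(0.0, 1.0 - abs(dist - radius) / width)
    return min(total, 1.0)
```

This is also why randomly placed ripples read as wrong: the ring centers have to sit exactly where the spices entered the water.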
My underwater effect is coming along quite nicely. Gavin said he liked it except for the bubbles, which were gray and lifeless. So I have since created lots more bubbles to give the illusion that the bottle recently splashed into the water, and some kind of mystical bubbles that grow in size and swirl around the bottle. The major difficulty with this shot was matching the 2D fluid to the bottle, and preventing the bubbles from colliding with the bottle. The bubbles are particles, and particles collide with objects at their center point. This means that even though a particle collides with the bottle, nearly half the bubble penetrates the bottle before it detects the collision. To solve this, I created a non-renderable offset surface to use as the collision object. I also rendered out some ripples from an old ocean shader to represent caustics. Now the only elements missing are a depth pass for the bubbles and the bottle, and some kind of murkiness pass to further sell the fact that the shot is under the well surface. I might have little floaties in the murky pass, maybe use some live action footage.
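The offset-surface fix is easier to see reduced to one dimension of the math:

```python
# A particle collides at its center, so against the true bottle surface a
# bubble of radius r sinks r deep before the collision registers. Pushing
# the collision surface out by r makes contact happen at the bubble's skin.

def penetration(bubble_radius, surface_offset):
    """How far the bubble visually penetrates the bottle when its center collides."""
    return max(0.0, bubble_radius - surface_offset)

# Against the bottle itself, a 0.1-radius bubble sinks 0.1 units in;
# against a surface offset by the bubble radius, it doesn't penetrate at all.
```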
I found this great ash effect while searching for particle tutorials online. The author didn't really give a tutorial; instead, the effect was used as an example of how the gravity and uniform fields are different (gravity, as in real life, does not take mass into account, while the uniform field does). In the effect, the author uses a moving black and white ramp to make a vase invisible while at the same time using the texture rate attribute of an emitter to emit particles (with randomized masses, and affected by a uniform field). The result is that it looks like the vase is disintegrating into dust particles that blow away quite nicely. I am trying to use it for the ash in my book, but the major problem is creating a nice-looking ramp so that it doesn't just cheesily disintegrate from left to right. I might just stick to a top-to-bottom projected ramp, but this means I have to lay out the UVs of the manual perfectly.
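The core of the trick is a threshold sweeping across a ramp: wherever the ramp value falls below the moving threshold, the surface turns transparent and the emitter fires. A sketch of it, using the simple top-to-bottom projection from my fallback plan (the frame counts and mass range are illustrative):

```python
# The disintegration trick as a sketch: a black-and-white ramp sweeps across
# the surface; points past the threshold go invisible and emit ash particles
# with randomized masses (so the uniform field scatters them unevenly).
import random

def disintegrate(points, frame, total_frames=24):
    """Split (x, y) points into still-solid and currently-emitting, top to bottom."""
    threshold = frame / float(total_frames)      # the ramp edge sweeps down over time
    solid, emitting = [], []
    for (x, y) in points:
        ramp_value = 1.0 - y                     # y=1 is the top of the book
        if ramp_value < threshold:
            # dissolved region: hide the surface, emit an ash particle
            mass = random.uniform(0.5, 2.0)
            emitting.append((x, y, mass))
        else:
            solid.append((x, y))
    return solid, emitting
```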
I converted my paint effects trees to meshes. This is both a blessing and a curse. It's a blessing in that I can get my paint effects, depth, and beauty all in a single pass, all rendered in mental ray. The side effect, though, has been catastrophic in that it very often causes the batch renderer to crash. The mrBatchAnim script doesn't even work that well. Most of the time, I have to say a prayer and hope that the script doesn't create out-of-memory errors. Looking at the task manager, I can see Maya taking nearly 1.7 GB of memory, whereas when rendering without the trees it rarely gets above 1 GB. This is a major problem, and to me it is the one that will make or break the project (if I can't render, I obviously can't produce a finished product). To that end, I'm going to give in to nearly everyone's suggestion and put some of the trees on textured cards. I've always hated the look of textured cards and have done everything in my power to avoid them, but if I can't render I'm going to have to give in. Textured cards. Blech.
I tried to tweak BSP settings. My tweak caused the render to go from 4 minutes per frame to over 30 minutes without anything rendering. I must not understand BSP settings as well as I thought I did. It's sad because that's something I think could save a lot of time, but how much time should I invest trying to get the settings correct?
The dragon is finally complete! For the most part, anyway. After a few problems with skin weights flying apart and files corrupting themselves, Adrian delivered a finalized dragon model. He's really cute! It has made the initial splash shot more difficult because it's hard to make the cute dragon look large, but I'm happy with the model. Adrian helped rig the mouth, head, and neck, since I was having a lot of problems, having never done that kind of rigging before. I rigged the feet, claws, tail, and eyebrows. Besides a couple of skin weight problems, I'm quite happy with the rig. Steve Matthews helped me sort out some of the skin weight issues, and now most of the deformations are at least acceptable. I spent about a day blocking the new dragon into the beauty shots based on the old dragon's animations, and it's looking quite nice. Cidalia did a wonderful texture for him, one that I am planning on deriving a translucency map from. Hopefully that will produce a cool effect for the splash shot where the dragon comes out of the well.
Dragon texture:
I bought two new effects books and have been scouring them for solutions to my effects problems. Eric Keller's book on special effects is kind of interesting because it is project based, and uses standard Maya features in unconventional ways. The ambient occlusion effect used to create a scanning effect for a skull was quite clever.
The second book was the standard Maya special effects handbook from Autodesk, for Maya 2008. The project files are Maya 2008 files, but with 8.5's new ignore version check box (under the Open scene option box), they worked fine. There are some really really cool effects in that book. It turned me on to the use of hardware particles as a viable solution. Previously, I had always thought of hardware particles as a nuisance. They look bad, and require an extra pass because points, streaks, spheres, etc. cannot be rendered in Maya software. Two things are required for awesome looking hardware effects, from what I've gleaned from the special effects handbook. First of all is motion blur, which is easy to set in the hardware render settings. The second is the color accum checkbox on the particleShape attributes. This wonderful attribute causes the color of hardware particles to be additive, which is awesome for fire effects, explosions, and any kind of energy effect. After seeing some of the demonstrations from the book, and creating a couple of my own, I'm totally sold on using them as a separate pass for the splash effect. Many people I've talked to said that the splash can have some kind of a magical quality to it, since story-wise it does involve magic.
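Why color accum reads as fire is easiest to see in miniature: ordinary "over" blending settles overlapping sprites toward the sprite color, while additive blending sums them, so dense overlaps bloom toward hot white. A sketch (single channel, 0..1 floats; a simplified model of the blending, not Maya's exact pipeline):

```python
# Normal vs additive blending of overlapping particle sprites, one channel.

def blend_over(base, sprite, alpha):
    """Ordinary 'over' compositing of one sprite onto the accumulated value."""
    return sprite * alpha + base * (1.0 - alpha)

def blend_additive(base, sprite, alpha):
    """Color accum: the sprite's contribution simply adds, clamped at display white."""
    return min(1.0, base + sprite * alpha)

# Five overlapping dim sprites (value 0.3, alpha 0.5):
over = accum = 0.0
for _ in range(5):
    over = blend_over(over, 0.3, 0.5)        # settles toward 0.3, stays dim
    accum = blend_additive(accum, 0.3, 0.5)  # climbs to 0.75, blooming hot
```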
I should also talk about the disastrous industry critique. This is the second of the three critiques we have to go through, the first being a faculty critique that for me was very successful. I was quite surprised that the first critique went so well, and I expected the second critique to be much harsher. But not THAT much harsher. First of all, our monitors are calibrated incorrectly, so everything appears too bright. Thus, after color correcting, everything turns out too dark. This was especially a problem on the projector, on which my project was so dark I couldn't even show it. One of the industry panelists was a head TD for lighting. He basically said I didn't have enough time to finish, and that I needed to really focus on getting the effects to look right. It was especially discouraging when he said it would probably take the rest of the semester to get one of my effects up to C- level, never mind the rest of the project. 5 weeks to do a splash effect? If anything, that's encouraging in that the time frame seems reasonable, unlike that Dreamworks presentation when the modeler said she churned out a NURBS noodle cart in half a day (something that would have taken me a month).
Oh well, at this point it's just a matter of bringing everything up to passable level. I'm confident I can do that, but what I'm not confident in is the ability of the computers to render.
That's all for now! I've got another journal entry planned in about 10 days or so, so hopefully I'll be on schedule till then. It's gonna be a big push to the end of the semester. Pray for my renders!
But first, a recap on some of the difficulties and problems solved over the past few weeks.
The nCloth book was actually quite a pain to deal with. In retrospect, I'm not sure if it was worth all the problems it has caused. The main difficulty with the simulation is that I am referencing the manual into animation files, and attempting to use the same simulation in multiple shots. This requires, first of all, that the cloth be cached using an absolute path, not a local one, so that it can change as the project changes. This was not too bad to deal with, as maya creates an nCache node to store this information.
Setting up the textures for the book pages was relatively easy using quick select sets and assigning textures to specific vertices. The rotation had to be mirrored (on the 2D placement node, UV scale set to -1) for some textures, but all in all not too much of a hassle. At this point, the files can be swapped out depending on what is needed for the shot. I'm only waiting for Billy's hand animation; Conrad Turner is helping me with this shot, as well as the first shot with the manual.
More annoying was the actual simulation itself. I still have not been able to solve the problem of the book jiggling when it's supposed to be still. My workaround, at this point, is to duplicate the object at a specific frame and then do a visibility switch when I don't want the book to jiggle around. For some reason, keying the nucleus from enabled to disabled and back again breaks the simulation; I think it might have something to do with the nucleus' interpretation of time.
The transform constraints worked well for turning individual pages, but they became quite a pain when I realized I had created them incorrectly and wanted to redo the simulation. Unfortunately, I had already smoothed the object, and this smooth was done downstream of the actual cloth mesh.
The workaround for this problem made me feel for the first time that I actually have a good handle on how Maya works. I knew that all the history for the nCloth simulation was still there, so I graphed the Hypergraph connections. nCloth has a very simple way of operating on meshes: it accepts an inMesh and passes along an outMesh. Thus, all I had to do was bypass the smooth node and pipe the result of the nCloth directly into the shape node of the book geometry. This worked perfectly. I wrote a quick script so I could toggle between smooth-connected and smooth-disconnected versions of the nCloth. I was really glad I didn't have to set up the whole nCloth system from scratch. Yay for nCloth -- it's robust, easy to use, and easy to understand. Here are the Hypergraph images and the connection I had to change:
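The fix boils down to redirecting a single mesh connection. A minimal Python sketch of the idea, using a plain dictionary to stand in for Maya's dependency graph (all node names here are hypothetical, not my actual scene nodes):

```python
def set_smooth_bypass(connections, bypass):
    """Route the book shape's inMesh either through the smooth node
    (bypass=False) or straight from the nCloth output (bypass=True).
    This mirrors the Hypergraph rewiring described above."""
    source = "nClothShape1.outputMesh" if bypass else "polySmoothFace1.outMesh"
    connections["bookShape.inMesh"] = source
    return connections

# Toggle to the smooth-disconnected version for fast simulation checks:
graph = {"bookShape.inMesh": "polySmoothFace1.outMesh"}
set_smooth_bypass(graph, bypass=True)
print(graph["bookShape.inMesh"])  # nClothShape1.outputMesh
```

The actual toggle script presumably just connects and disconnects the corresponding attributes on the live nodes.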
I did my title effect using Maya fluids. I used the fluid paint tool to paint density into my container from a file: the "mighty dragon" text I put in my original 3D animatic. I did two simulations, reversing the first one to form the title out of vapor clouds. The second was kind of serendipitous, as I was experimenting with pushing some of the fluid attributes beyond their normal ranges (setting negative densities, buoyancies, etc.). The result was a really nice-looking explosion where the title becomes rapidly dispersing licks of flame.
The past few weeks I have been trying to organize the animation of my shots. I have Vanessa Weller, Conrad Turner, Seon Crawford, and possibly Danielle Hopzafel helping with my animation, and coordinating them has been a difficulty unto itself. I'm wondering if it would have been easier to just do all the animation myself, since no one is doing more than four shots. Some of the animation is not quite what I wanted, either -- not because it wasn't good (they're all great animators), but because they don't know my piece as well as I do, so some of the emotions and acting choices didn't seem quite right to me.
The drop effect I accomplished in half a day, modeled nearly completely after the bandaid drop effect I did for Gavin's technical directing class. The blendshapes worked nearly flawlessly and it looks relatively good for half a day's work. I will probably keep this effect as is, since at this point I don't have time to perfect everything.
The ripple effect for the spices is taking longer than it should. Well, no, it's taking about the amount of time I expected, but it's frustrating because the effect is just for one shot that doesn't really help the story that much. It's quite a successful effect, though; several people have commented that it works. It is the shot during the montage in which Billy drops a handful of thyme into the well. The leaves use an nCloth simulation modeled off of Duncan Brinsmead's confetti tutorial. In order to get the cloth to collide with the well surface, I connected the well's surface to the ground plane of the nucleus for the spices. This looked really flat, so I needed a way to slightly tilt the water plane to create the illusion that the spices were falling on water and not on a solid ground plane. The ground plane direction is controlled with a normal vector, so I connected the normal vector's X and Z to the X and Z translation of a locator. I created a plane and aim-constrained it to the locator to quickly visualize the rotation of the ground plane, and now it works wonderfully. An animator could even move the locator as needed to quickly and easily jiggle the fallen spices around, depending on the amount of turbulence in the water.
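The math behind the tilt is just a normalized vector whose X and Z come from the locator. A rough standalone sketch (the function name is mine, not Maya's):

```python
import math

def ground_plane_normal(loc_x, loc_z):
    """Build the nucleus ground-plane normal from a locator's X/Z
    translation, keeping Y as the up component and normalizing."""
    n = (loc_x, 1.0, loc_z)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Locator at the origin leaves the plane flat (pointing straight up):
print(ground_plane_normal(0.0, 0.0))  # (0.0, 1.0, 0.0)
```

Nudging the locator in X or Z tips the normal, which is what lets an animator jiggle the fallen spices around.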
To get the ripples, I rendered out the spices from the top view and created a blue transparency to represent the points at which the spices fell into the water. I took that top-view render into After Effects and created a ripple for each spice that fell in the water. At first, I thought I could get away with placing random ripples, but a render showed it was obvious the ripples were in the wrong places. The ripple render was then mapped to the wave height offset of the ocean shader, much like I did for the ripples from the single drop.
My underwater effect is coming along quite nicely. Gavin said he liked it except for the bubbles, which were gray and lifeless. So I have since created lots more bubbles to give the illusion that the bottle recently splashed into the water, plus some kind of mystical bubbles that grow in size and swirl around the bottle. The major difficulty with this shot was matching the 2D fluid to the bottle and preventing the bubbles from colliding with it. The bubbles are particles, and particles collide with objects at their center point. This means that by the time a particle registers a collision with the bottle, nearly half the bubble has already penetrated it. To solve this, I created a non-renderable offset surface to use as the collision object. I also rendered out some ripples from an old ocean shader to represent caustics. Now the only elements missing are a depth pass for the bubbles and the bottle, and some kind of murkiness pass to further sell the fact that the shot is under the well surface. I might put little floaties in the murky pass, maybe using some live-action footage.
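The center-point collision problem is easy to quantify, and this little sketch (my own naming, made-up radii) shows why the offset surface needs to sit one bubble radius out:

```python
def visible_penetration(center_dist, surface_radius, bubble_radius):
    """How far the bubble's shell pokes past the visible bottle surface
    when its center is center_dist away from the bottle's axis
    (0.0 means the bubble just grazes the glass)."""
    return max(0.0, surface_radius + bubble_radius - center_dist)

# Center-point collision against the real bottle: half a bubble inside.
print(visible_penetration(2.0, 2.0, 0.25))   # 0.25
# Against a collision surface offset outward by the bubble radius:
print(visible_penetration(2.25, 2.0, 0.25))  # 0.0
```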
I found this great ash effect while searching for particle tutorials online. The author didn't really give a tutorial; instead, the effect was used as an example of how the gravity and uniform fields differ (gravity, as in real life, does not take mass into account, while the uniform field does). In the effect, the author uses a moving black-and-white ramp to make a vase invisible while at the same time using the texture rate attribute of an emitter to emit particles (with randomized masses, affected by a uniform field). The result looks like the vase is disintegrating into dust particles that blow away quite nicely. I am trying to use it for the ash in my book, but the major problem is creating a nice-looking ramp so the book doesn't just cheesily disintegrate from left to right. I might just stick to a top-to-bottom projected ramp, but this means I have to lay out the UVs of the manual perfectly.
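The ramp trick reduces to a moving threshold: anything the ramp has swept past goes invisible and emits, everything else stays solid. A toy version of the logic (normalized heights and the function name are my own):

```python
def dissolved_vertices(heights, ramp_position):
    """With a ramp sweeping from the top of the book downward, vertices
    whose normalized height is at or above the ramp position have
    dissolved -- invisible, and emitting ash particles."""
    return [i for i, h in enumerate(heights) if h >= ramp_position]

heights = [0.1, 0.4, 0.6, 0.9]  # bottom of the book up to the top
print(dissolved_vertices(heights, 0.5))  # [2, 3]
```

Animating `ramp_position` from 1.0 down to 0.0 plays the disintegration from the top edge downward.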
I converted my paint effects trees to meshes. This is both a blessing and a curse. It's a blessing in that I can get my paint effects, depth, and beauty all in a single pass, all rendered in mental ray. The side effect, though, has been catastrophic: it very often causes the batch renderer to crash. The mrBatchAnim script doesn't even work that well. Most of the time, I have to say a prayer and hope the script doesn't hit out-of-memory errors. Looking at the task manager, I can see Maya taking nearly 1.7 GB of memory, whereas when rendering without the trees it rarely gets above 1 GB. This is a major problem, and to me it is the one that will make or break the project (if I can't render, I obviously can't produce a finished product). To that end, I'm going to give in to nearly everyone's suggestion and put some of the trees on textured cards. I've always hated the look of textured cards and have done everything in my power to avoid them, but if I can't render I'm going to have to give in. Textured cards. Blech.
I tried to tweak BSP settings. My tweak caused the render to go from 4 minutes per frame to over 30 minutes without anything rendering. I must not understand BSP settings as well as I thought I did. It's sad because that's something I think could save a lot of time, but how much time should I invest trying to get the settings correct?
The dragon is finally complete! For the most part, anyway. After a few problems with skin weights flying apart and files corrupting themselves, Adrian delivered a finalized dragon model. He's really cute! That has made the initial splash shot more difficult, because it's hard to make the cute dragon look large, but I'm happy with the model. Adrian helped rig the mouth, head, and neck, since I was having a lot of problems, having never done that kind of rigging before. I rigged the feet, claws, tail, and eyebrows. Besides a couple of skin weight problems, I'm quite happy with the rig. Steve Matthews helped me sort out some of the skin weight issues, and now most of the deformations are at least acceptable. I spent about a day blocking the new dragon into the beauty shots based on the old dragon's animation, and it's looking quite nice. Cidalia did a wonderful texture for him, one that I am planning to derive a translucency map from. Hopefully that will produce a cool effect for the splash shot where the dragon comes out of the well.
Dragon texture:
I bought two new effects books and have been scouring them for solutions to my effects problems. Eric Keller's book on special effects is kind of interesting because it is project based, and uses standard Maya features in unconventional ways. The ambient occlusion effect used to create a scanning effect for a skull was quite clever.
The second book was the standard Maya special effects handbook from Autodesk, for Maya 2008. The project files are Maya 2008 files, but with 8.5's new ignore-version checkbox (under the Open Scene option box), they worked fine. There are some really, really cool effects in that book. It turned me on to hardware particles as a viable solution. Previously, I had always thought of hardware particles as a nuisance: they look bad, and they require an extra pass because points, streaks, spheres, etc. cannot be rendered in Maya software. From what I've gleaned from the handbook, two things are required for awesome-looking hardware effects. The first is motion blur, which is easy to set in the hardware render settings. The second is the color accum checkbox in the particleShape attributes. This wonderful attribute makes the color of hardware particles additive, which is awesome for fire effects, explosions, and any kind of energy effect. After seeing some of the demonstrations from the book, and creating a couple of my own, I'm totally sold on using them as a separate pass for the splash effect. Many people I've talked to said the splash can have some kind of magical quality to it, since story-wise it does involve magic.
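What color accum is doing under the hood is simple additive blending, which is why overlapping particles bloom toward white. A minimal sketch of the math (my own function, illustrative color values):

```python
def color_accum(colors):
    """Additively blend overlapping particle colors, clamping each
    channel at 1.0 -- dense clusters of particles blow out hot, which
    is exactly the look fire and energy effects want."""
    out = [0.0, 0.0, 0.0]
    for r, g, b in colors:
        out = [min(1.0, out[0] + r),
               min(1.0, out[1] + g),
               min(1.0, out[2] + b)]
    return tuple(out)

# Three overlapping orange sparks brighten toward yellow-white:
print(color_accum([(0.5, 0.25, 0.125)] * 3))  # (1.0, 0.75, 0.375)
```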
I should also talk about the disastrous industry critique. This is the second of the three critiques we have to go through, the first being a faculty critique that for me was very successful. I was quite surprised that the first critique went so well, and I expected the second critique to be much harsher. But not THAT much harsher. First of all, our monitors are calibrated incorrectly, so everything appears too bright. Thus, after color correcting, everything turns out too dark. This was especially a problem on the projector, on which my project was so dark I couldn't even show it. One of the industry panelists was the head TD for lighting. He basically said I didn't have enough time to finish, and that I needed to really focus on getting the effects to look right. It was especially discouraging when he said it would probably take the rest of the semester to get one of my effects up to C- level, never mind the rest of the project. Five weeks to do a splash effect? If anything, that's encouraging, in that the time frame seems reasonable -- unlike that Dreamworks presentation where the modeler said she churned out a NURBS noodle cart in half a day (something that would have taken me a month).
Oh well, at this point it's just a matter of bringing everything up to a passable level. I'm confident I can do that; what I'm not confident in is the ability of the computers to render.
That's all for now! I've got another journal entry planned in about 10 days or so, so hopefully I'll be on schedule till then. It's gonna be a big push to the end of the semester. Pray for my renders!
Monday, November 5, 2007
Getting to final countdown
The end of thesis feels so far away, and yet, IT ISN'T! With just about a month to go, I feel like I have gotten about 50% of the way through my project. Time to step it up! Big changes over the past few weeks:
-starting the underwater bottle animation
-getting a nice looking ripple effect
-setting up organized rendering folders
-getting a nice looking plant-adding animation
-working on the manual
-working on a lighting rig
-working on paint effect trees
The underwater bottle is a nightmare of an effect -- even Gavin said so. He showed me a similar effect he did by texturing swirling-liquid footage onto a deformed plane that moved along with his object. This seems more like an additional pass than the full-on effect, but no matter what I do there will be several passes.
My first attempt was with a 3D fluid emitted from inside the bottle, but that didn't work because I needed impossibly high voxel resolution to get the fluid to realistically flow out of the bottle.
My second attempt was with a 2D fluid, with collision planes animated to follow the sides of the bottle. This turned out to be awful, and was even worse because I spent a lot of time on it, thinking with each passing hour that it would take only a little while longer to get working. As it turns out, matching collision planes to the 2D fluid is extremely difficult. Maya doesn't look at where a surface intersects the fluid plane; it checks whether the object collides at all and then uses the whole object as a collision surface. Thus my first attempt at this method, which used a cylinder instead of two planes, failed miserably.
The breakthrough came when I decided to use the 2D fluid as its own pass and take care of the fluid in the actual bottle using a moving ramp shader. That part of the effect now looks good, and I'm working on shaping the 2D fluid and getting in some particle bubbles. The bubbles are blobbies, so I don't have to worry about reflections and refractions output from a hardware render.
The ripple effect was kind of fun to do. I started by working with the standard ripple effect from Maya's Visor, which utilizes the pond wake. The only problem with this effect is that it's not very portable at all. I can't really add anything on top of it because it's a fluid, not a mesh.
My second attempt involved a keyed ramp on bump. With the ocean shader, this didn't work at all because, for some reason, the ocean shader has trouble reading in bump. Not sure why. It did work, however, when I mapped the ramp to the wave height offset instead. Now it looks pretty good, if I do say so myself. The next component I need to add is the drop of fluid actually spreading out in the pond, then sort of fizzing away to nothing.
Another little irritation cropped up with both of these effects. Apparently, mental ray does not like heavy particle or keyframed shader animation: it refused to render some of my particles and my shader animation for the bump map. After many headaches, I finally came across a script to batch render multiple frames in mental ray from the image viewer, something I had been trying to write myself without success.
global proc mrBatchAnim (int $start, int $end, int $by) {
    int $frameNum;
    int $last = ($end + 1);
    print ("\n\nRendering Frames " + $start + " to " + $end + "\n\n");
    for ($frameNum = $start; $frameNum < $last; $frameNum += $by) {
        print ("\nRendering Frame: " + $frameNum + "...\n");
        currentTime $frameNum;
        // Backslashes must be escaped inside MEL strings, and the frame
        // number needs a "." before the extension.
        string $filename = ("C:\\Documents and Settings\\patsiarisd\\Desktop\\jking\\wellTest2." + $frameNum + ".tif");
        renderWindowRender redoPreviousRender renderWindowPanel1;
        //renderWindowSaveImageCallback renderWindowPanel1 $filename "image";
    }
    print ("\n\nRendering Complete.\n\n");
}
Of course, this is not how I got the script. The script I downloaded apparently only works for Maya 5 or so. Maya 8.5 has done away with the renderView object, or at least that was the line that failed: renderWindowRender redoPreviousRender renderView. So I changed it to the render panel and it worked, but unfortunately the panel refuses to accept the $filename input (the renderWindowSaveImageCallback line kept saving the image into the same default folder instead of the $filename path). Thus, I can only save to the image directory specified by the current project. A minor annoyance, compared to the more aggravating problem of not rendering particles at all.
I've organized my render folders on a single computer in the lab, since gramercy, our networked storage space, is getting pretty full. Each shot has its own folder, which in turn contains folders for each pass. Older passes are transferred to other folders within the same shot folders. I also changed the After Effects file to locate the image sequences within these subfolders. I think I will stick to After Effects for the basic composite, maybe even the edit, just for simplicity's sake.
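Setting up that layout is the kind of thing a tiny script can do in one shot. A sketch of the structure (the shot and pass names here are placeholders, not my actual ones):

```python
import os
import tempfile

def make_render_tree(root, shots, passes):
    """Create a per-shot folder containing a subfolder for each pass,
    plus an 'old' subfolder per shot for retired passes."""
    for shot in shots:
        for p in list(passes) + ["old"]:
            os.makedirs(os.path.join(root, shot, p), exist_ok=True)

root = tempfile.mkdtemp()
make_render_tree(root, ["shot01", "shot02"], ["beauty", "depth", "fx"])
print(sorted(os.listdir(os.path.join(root, "shot01"))))  # ['beauty', 'depth', 'fx', 'old']
```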
The plant dropping animation was actually pretty fast for an effect. I basically tweaked Duncan Brinsmead's (yet again) tutorial for confetti. My knowledge of nCloth has since grown exponentially, methinks. This effect went quite fast, after a few tweaks with dynamic forces. I'm debating whether or not I should try to get some of the pieces to float on the water. I think I will add ripples for sure, but the floating on the surface dynamically is a little more difficult.
The manual requires a lot of nCloth tweaking, but the good news is I got it to model a pretty nice opened book. The page turning is turning out to be very problematic, however, and I'm considering doing the turning animation with blendshapes as I did for the animatic. Which is sad, since I set up the whole system in nCloth. But hey, whatever works.
Gavin discussed my lighting rig this week, and it's basically four lights: one coming from the street lamp (which has changed position to better light Billy's face as he stands over the well), an opposing moonlight to serve as a colder fill, a bounce light only on Billy to simulate reflected light from the well, and finally a point light point-constrained to the camera to fill in any other dark areas. I will then add more lights as needed on a per-shot basis.
The paint effects trees were also difficult to work with. The hardest part is their conversion into polygons, and I'm debating whether I should do this or not. The benefit is that converted trees can be rendered in mental ray, so they can use the raytraced shadows created by the spherical area light. Otherwise I have to create depth map shadows for the trees, and probably render them out as a separate software pass. Here's one of my latest incarnations; the branches are a little too thick and the trees overall are too bright, I think. Gavin said they look too much like broccoli stalks.
Some more stills showing the latest look, sans paint effects trees because they don't render in mental ray as of yet:
Well, that's the update, for now. I'm starting to integrate some of the effects into my animated shots, which is a big step. Now my biggest worry is the dragon, which Adrian says will be finished by this week. I then have to rig and do blendshapes for the dragon's face, and animate it as well. That's the biggest hold up, causing me to basically ignore the second half of my animation. If the dragon is finished within this week, hopefully by next week I will have near finalized animation in all shots, and at least a first attempt at a beauty pass for every shot in the piece. Besides the dragon, I'm feeling pretty confident about the rest of the animation. But ask me again in a couple weeks, we'll see ...
Monday, October 22, 2007
Snowballin away
Wow, I can't believe that October is nearly over. It's about time to get into full-on panic mode.
Updates since the beginning of the month:
-finalized reference structure
-redid the final ingredient bottle
-outsourced animation and modeling
-animation splining for shots 1-7
-underwater fluid effect
-tests for fire and splash effects
-moon texture
-well procedural shader
I'm feeling pretty good about where I am -- I tend to alternate between feeling like I have all the time in the world and feeling like I have no time at all, which I guess on average means I'm about on track. The piece is about 35% done at this point, which is behind since technically I should be about 50% done. Half done! We should be half done!
The bane that was last week at least allowed me to break through one major problem: referencing. I love referencing when it works, but I hate setting it up. I have about 35 different shots, and each shot is getting its own scene file. Each scene file references the elements it needs from a modeling folder I labeled FINAL_MODELS.
So I had basically two choices: assemble the reference files from scratch and import my character animation from the 3D animatic files, or re-reference within the 3D animatic files and save out new reference files. After a couple of shots spent searching through nearly 100 channels of animation and trying to paste them onto Laddoo, I figured it would be easier to use the 3D animatic files.
This turned out to be a great decision -- at least I think it was. It traded a complicated but potentially shorter process (getting animation curves off the old Laddoo and onto the new one) for a long, tedious, but predictable one, and long and tedious beats complicated.
I also decided to place all the ingredients (moving objects) in one reference file, all the environment objects (non-moving set pieces) in another, and the character in a third. At this point, the only model still using the old 3D animatic geometry is the manual, which has fast turned back into a plain book to simplify its issues.
So then it was a matter of going into each 3D animatic file and changing the references, as well as the namespaces for the references. It's probably something I could have scripted, since all the files are .ma files. However, the files live on a network, and I'm much more used to doing this kind of scripting in Unix / Mac OS X. I didn't want to chance network problems, or my lack of scripting experience, screwing up the animation files.
So I opened each file (a minute to load references), changed the references (another minute), plus overhead (a minute of daydreaming, waiting for things to load, cutting and pasting paths to the reference files, the new save files, and the old animatic files).
While this seems easy, it really was a pain in the ass. It took a good day and a half to get everything transferred over, but now everything is working perfectly. The animation on Laddoo transferred over perfectly, which I'm super happy about. The bottle animation didn't transfer, but the geometry is so different that was to be expected.
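Since .ma files are just ASCII, the scripted route I chickened out of would have amounted to a find-and-replace over the reference paths -- something like this sketch (the paths and namespace are made up for illustration, and a real pass should back up every file first):

```python
def retarget_references(ma_text, path_map):
    """Swap old reference paths for new ones in the text of a Maya
    ASCII (.ma) scene file."""
    for old, new in path_map.items():
        ma_text = ma_text.replace(old, new)
    return ma_text

scene = 'file -r -ns "laddoo" "//gramercy/animatic/laddoo.ma";'
updated = retarget_references(scene, {
    "//gramercy/animatic/laddoo.ma": "//gramercy/FINAL_MODELS/character.ma",
})
print(updated)  # file -r -ns "laddoo" "//gramercy/FINAL_MODELS/character.ma";
```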
I redid the geometry of the final ingredient to look like a small scotch bottle. Another implicit joke: that the final ingredient is alcohol. This will remain implicit, however; I will still label the bottle FINAL INGREDIENT. The rig is still the same, and it looks like I will definitely be able to make use of the meniscus rig. Gavin suggested the possibility of using booleans, but I don't think that works with the number of surfaces in a dielectric interface system. Each surface has an in-ior and an out-ior, depending on the materials. For the bottle, the interfaces are glass to air, glass to water, and water to air. The outside surface gets glass to air, the inside top surface gets glass to air, the inside bottom surface gets glass to water, and the meniscus itself gets water to air. This is how Boaz taught us to do dielectric interfacing, and it seems to work, so I'll stick to that. Regarding the bottle, the next thing to figure out is caustics and where to include them within my composite layers.
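For my own reference, those interface assignments boil down to a simple table. Sketched in Python, with the usual textbook index-of-refraction values (not measured ones):

```python
# Each bottle surface maps to an (inside material, outside material)
# pair, which in turn gives the in-ior / out-ior for the dielectric.
INTERFACES = {
    "outer_surface": ("glass", "air"),
    "inner_top": ("glass", "air"),
    "inner_bottom": ("glass", "water"),
    "meniscus": ("water", "air"),
}
IOR = {"air": 1.0, "water": 1.33, "glass": 1.5}

for surface, (inside, outside) in INTERFACES.items():
    print(f"{surface}: ior={IOR[inside]}, outside ior={IOR[outside]}")
```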
John Tarnoof and Rachel Tiep-Daniels from Dreamworks came to give a presentation at CADA. The one thing I got from that presentation was a reiteration of how good you really have to be to get a good full-time job in this industry. Rachel mentioned that she modeled a really nice looking noodle cart in a day, half the day for research and half the day for modeling. That thing would probably take me a month to model, two weeks if I was lucky. I got some info on what Dreamworks is looking for in an effects artist, scripting, API programming, the ability to troubleshoot coding for a renderer. Very intimidating.
In that spirit I started doing tutorials on python. Python is a really weird language. It looks like visual BASIC to me, but feels like a flat version of Perl, if that makes any sense. Whereas Perl is quirky, Python seems to be its organized but informal brother. The only thing I don't like about Python so far is its lack of braces to define blocks of code (it uses tabs/white space instead), and the lack of for(i=0,i<10,i++) syntax. But it seems pretty powerful and pretty easy to code, and I've checked the interfacing with programs like Realflow and it seems pretty straightfoward.
Vanessa Weller is doing my foley sound effects, and Germono Bryant is doing my score. Germono got the first version of my score to me, and it was really nice. Whereas I expected just a sequence of notes, it was a full-on operatic piece. Did not fit completely, but he's really talented -- I was impressed.
Cidalia Costa is on board for textures for my manual, and I'll discuss those with her some time this week. Some of the animation 2 people have expressed interest in helping me animate.
I've started doing some splining and second pass animation for the beginning shots, and rendered out tests in mental ray. The stars are too bright and the well water waves are moving too fast. Everything looks pretty good, though. I showed it to Michael Hosenfeld, another professor, and he said the colors were too saturated for a night shot. The moon is also a strange color, it doesn't fit the palate. I think I'll make it a little more yellow, less orange.
And then, the effects tests. Effects are my focus so I have to start getting all this stuff planned out. Here are some of the tests I've been working on. They look better in motion, I swear.
This is a particle explosion I may use to simulate pieces of burning pages coming off the manual after the dragon burns it. It mixes the result of an rgbPP ramp with a rotating crater-based volume texture.
The fire is also a partile effect that uses a shader similar to the one above. The speed is controlled with scripting and a turbulence field creates the spreading out effect at the end.
This potion blur is actually a 2D fluid oriented to the camera angle. The fluid emitter is animated to camera. This is after I tried to do it with a 3D fluid and the computer failed. Adam Burr said the 2D fluid was a perfectly legitimate solution, and suggested the use of animated bounding planes to serve as collisions in the fluid system to represent the sides of the glass bottle.
This splash is controlled with forces. The shader is a facing ratio glow with geometry hidden on blobby surfaces.
Next to come: better animation, finalized textures, paint effects, finalized book
Updates since the beginning of the month:
-finalized reference structure
-redid the final ingredient bottle
-outsourced animation and modeling
-animation splining for shots 1-7
-underwater fluid effect
-tests for fire and splash effects
-moon texture
-well procedural shader
I'm feeling pretty good about where I am -- I tend to alternate between feeling like I have all the time in the world and feeling like I have no time at all, which I guess on average means I'm about on track. The piece is about 35% done at this point, which is behind since technically I should be about 50% done. Half done! We should be half done!
The bane that was last week at least allowed me to break through one major problem: referencing. I love referencing when it works, but I hate setting it up. I have about 35 different shots, and each shot is getting its own scene file. Each scene file references the elements it needs from a modeling folder I labeled FINAL_MODELS.
So I had basically two choices: assemble the reference files from scratch and import my character animation from the 3D animatic files, or re-reference within the 3D animatic files and save out new reference files. After a couple of shots spent searching through nearly 100 channels of animation and trying to paste them onto Laddoo, I figured it would be easier to use the 3D animatic files.
This turned out to be a great decision, at least I think it was. It traded a complicated but potentially shorter process for a long, tedious one -- and long and tedious wins here, because the shorter route (getting animation curves off old Laddoo and onto new Laddoo) really was a lot more complicated.
I also decided to place all the ingredients (moving objects) in one reference file, all the environment objects (non-moving set pieces) in another, and the character in a third. At this point, the only model still using the old 3D animatic geometry is the manual, which has quickly turned back into a plain book to keep problems down.
So then it was a matter of going into each 3D animatic file and changing the references, as well as the namespaces for the references. It's probably something I could have scripted, since all the files are .ma files. However, the files live on a network, and I'm much more used to doing this kind of scripting in Unix / Mac OS X. I didn't want to chance network problems or my lack of scripting experience screwing up the animation files.
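Had I scripted it, the core would just be text munging, since ASCII .ma files store their reference paths as plain quoted strings. A minimal sketch of the idea (the folder names and the simplified file line are hypothetical, not my actual paths):

```python
def retarget_ma_references(ma_text, old_dir, new_dir):
    """Point every reference path in ASCII Maya scene (.ma) text at a new folder.

    Because .ma is plain text, retargeting is just a substitution on the
    old directory name wherever it appears in a path.
    """
    return ma_text.replace(old_dir, new_dir)

# Hypothetical, simplified reference line from a .ma file:
scene = 'file -r -ns "laddoo" "//server/OLD_MODELS/laddoo.ma";'
print(retarget_ma_references(scene, "OLD_MODELS", "FINAL_MODELS"))
# file -r -ns "laddoo" "//server/FINAL_MODELS/laddoo.ma";
```

Namespaces would take a little more care than a blind replace, but the path part really is this mechanical.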
So I opened each file (a minute to load references), changed the references (another minute), plus overhead (a minute of daydreaming, waiting for things to load, cutting and pasting paths to the reference files, the new save files, and the old animatic files).
While this seems easy, it really was a pain in the ass. It took a good one and a half days to get everything transferred over, but now everything is working perfectly. The animation on Laddoo transferred over perfectly, which I'm super happy about. The bottle animation didn't transfer, but the geometry is so different that this was to be expected.
I redid the geometry of the final ingredient to look like a small scotch bottle. Another implicit joke: the final ingredient is alcohol. It will remain implicit; however, I will still label the final ingredient as FINAL INGREDIENT. The rig is still the same, and it looks like I will definitely be able to make use of the meniscus rig. Gavin suggested another possibility, using booleans, but I don't think that works with the number of surfaces in a dielectric interface system. Each surface has an in-ior and an out-ior, depending on the materials. For the bottle, the interfaces are glass to air, glass to water, and water to air. The outside surface gets glass to air, the inside top surface gets glass to air, the inside bottom surface gets glass to water, and the meniscus itself gets water to air. This is how Boaz taught us to do dielectric interfacing, and it works, so I'll stick with it. Regarding the bottle, the next thing to figure out is caustics and where to include them within my composite layers.
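For my own reference, the four surface assignments above written out as a little table, with the textbook IOR values I'm assuming (air 1.0, water 1.33, glass 1.5) that would feed the in-ior/out-ior attributes:

```python
# Standard indices of refraction (assumed textbook values)
IOR = {"air": 1.0, "water": 1.33, "glass": 1.5}

# Surface -> (material on the inside of the boundary, material on the outside),
# matching the four bottle surfaces described above.
interfaces = {
    "outside":       ("glass", "air"),
    "inside_top":    ("glass", "air"),
    "inside_bottom": ("glass", "water"),
    "meniscus":      ("water", "air"),
}

for surface, (inner, outer) in interfaces.items():
    # These two numbers are what get typed into the shader's
    # in-ior / out-ior attributes for that surface.
    print(surface, "in-ior:", IOR[inner], "out-ior:", IOR[outer])
```

Which direction counts as "in" versus "out" depends on the surface normals, so that part of the table is my assumption, not gospel.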
John Tarnoof and Rachel Tiep-Daniels from Dreamworks came to give a presentation at CADA. The one thing I took from that presentation was a reiteration of how good you really have to be to get a good full-time job in this industry. Rachel mentioned that she modeled a really nice-looking noodle cart in a day: half the day for research and half for modeling. That thing would probably take me a month to model, two weeks if I was lucky. I also got some info on what Dreamworks looks for in an effects artist: scripting, API programming, and the ability to troubleshoot code for a renderer. Very intimidating.
In that spirit, I started doing tutorials on Python. Python is a really weird language. It looks like Visual Basic to me, but feels like a flat version of Perl, if that makes any sense. Whereas Perl is quirky, Python seems to be its organized but informal brother. The only things I don't like about Python so far are its lack of braces to define blocks of code (it uses indentation instead) and the lack of for(i=0; i<10; i++) syntax. But it seems pretty powerful and pretty easy to code, and I've checked its interfacing with programs like Realflow: it seems pretty straightforward.
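For comparison, here's what the C-style counter loop becomes in Python -- blocks are marked by indentation instead of braces, and range() stands in for the i=0; i<10; i++ machinery:

```python
# Sum the numbers 0 through 9, the Python way.
total = 0
for i in range(10):   # i runs 0..9; no braces, no i++
    total += i        # the indented lines are the loop body
print(total)          # 45
```

Once you get past the whitespace thing, it is admittedly less typing.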
Vanessa Weller is doing my foley sound effects, and Germono Bryant is doing my score. Germono got the first version of the score to me, and it was really nice. Where I expected just a sequence of notes, it was a full-on operatic piece. It didn't fit completely, but he's really talented -- I was impressed.
Cidalia Costa is on board for textures for my manual, and I'll discuss those with her some time this week. Some of the animation 2 people have expressed interest in helping me animate.
I've started doing some splining and second-pass animation for the beginning shots, and rendered out tests in mental ray. The stars are too bright and the well water waves are moving too fast, but everything else looks pretty good. I showed it to Michael Hosenfeld, another professor, and he said the colors were too saturated for a night shot. The moon is also a strange color; it doesn't fit the palette. I think I'll make it a little more yellow, less orange.
And then, the effects tests. Effects are my focus so I have to start getting all this stuff planned out. Here are some of the tests I've been working on. They look better in motion, I swear.
This is a particle explosion I may use to simulate pieces of burning pages coming off the manual after the dragon burns it. It mixes the result of an rgbPP ramp with a rotating crater-based volume texture.
The fire is also a particle effect that uses a shader similar to the one above. The speed is controlled with scripting, and a turbulence field creates the spreading-out effect at the end.
This potion blur is actually a 2D fluid oriented to the camera angle. The fluid emitter is animated to camera. This is after I tried to do it with a 3D fluid and the computer failed. Adam Burr said the 2D fluid was a perfectly legitimate solution, and suggested the use of animated bounding planes to serve as collisions in the fluid system to represent the sides of the glass bottle.
This splash is controlled with forces. The shader is a facing ratio glow with geometry hidden on blobby surfaces.
Next to come: better animation, finalized textures, paint effects, finalized book
Tuesday, October 9, 2007
It's Comin Together!
Just a few updates to report. Most significantly, we had our first thesis critique from CADA faculty, and mine was surprisingly a great success! More on that later, but first:
I added more to the environment, including throwing a (bad) texture onto the road. That will have to be changed. I still like my ground shader, though, and it is really easy to render. The trees still need to be tweaked. I want a nice cartoony moon in the sky; I've been playing with filters, trying to get a nice look, but haven't had much success. I watched some old TV episodes of Aladdin, and there are some great shots of gigantic, stylized moons against a bright blue Arabian night sky. The secret seems to be mixing the bluish light of the sky and the yellowish glow of the moon without ending up with an unsightly shade of green. I'll play with that a bit more this week.
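One idea I want to test for that blue-to-yellow mix (my own guess, not something I pulled from the Aladdin shots): blend the hues the "warm" way around the color wheel, so the transition passes through magenta instead of green. A sketch in terms of hue angles:

```python
def warm_mix_hue(h_a, h_b, t):
    """Interpolate hue (in degrees) from h_a to h_b going up through
    magenta/red (240 -> 300 -> 0 -> 60) rather than down through green."""
    if h_b < h_a:
        h_b += 360.0          # force the increasing (warm) direction
    return (h_a + t * (h_b - h_a)) % 360.0

# Blue sky (240 deg) blended halfway toward yellow moon glow (60 deg):
print(warm_mix_hue(240, 60, 0.5))   # 330.0 -- magenta, not green
```

Whether a magenta-ish midpoint actually reads well for a night sky is something I'd have to eyeball in comp, but at least it dodges the green.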
I fixed the dragon model up a bit, smoothing out the patches and converting to polygons, but I have since deferred to Adrian de la Mora for the modeling details. One of the major comments I got at the thesis critique was how and when to pass off work. Also, I'm having Cidalia Costa do the textures for the book. She wants to do oil paintings, which will end up looking fantastic. As soon as I nail my style of the book, I know it will look great.
I spent a couple days playing with the Laddoo rig, understanding how it works and how the nodes are connected. Just for reference, this is the rig itself in the hypergraph, which was what I was trying to understand:
There are several blendshape nodes that feed into one master blendshape called parallelBlender. This node controls the face and eyebrows. The parallelBlender then feeds into the joint clusters, which makes sense, since the face deformation should occur before joint deformation. All this is piped through a smooth node and then output to the actual geometry. Of course, I didn't notice that this smooth node had its attributes connected to a control, and tried to create my own smooth attribute. I smoothed all the geo, then attempted to write an expression to control the number of divisions in those smooths. As it turns out, just being able to select all those new smooth nodes was a pain. It took an unfortunate amount of time to come up with the following script:
int $i;
for ($i = 35; $i < 45; $i++) {
    string $pick = "polySmoothFace" + $i;
    select -tgl $pick;
}
I then used a driven key to connect a smooth attribute on the master control to the division levels of the selected smooth nodes. Only after I did this did I realize there is a big S control curve behind laddoo that does this very same thing:
Luckily, the practice of adding a smooth control was not in vain, because using the rig's built-in smooth control destroyed the UV transfer map I created to prevent the UVs from swimming. This is because the rigged geometry takes its UVs from the texture reference and doesn't know how to interpret them when it is smoothed and the texture reference isn't. The solution, of course, was to smooth the texture reference simultaneously. I used the driven key to drive the smooth of the texture reference and, lo and behold, it smoothed the UVs. I was really happy, because I had no idea whether that would actually work; Maya can be very picky when it comes to UVs.
Other Laddoo rig issues: there were a lot of blendshapes that looked like part of the modeling history rather than anything actually built into the rig. Most were duplicates, so I ended up deleting a bunch of head models. That removed almost a third of the entire file size, cutting it from 30 MB down to about 22 MB. I also learned how the eyebrows work: they get deformations both from the blendshape nodes and from extra cluster nodes used to deform the eyebrows independently. It's quite weird. There's also some different geo for the hair, but I don't know how to make the hair use those pieces of geometry; my guess is they haven't actually been built into the rig. The hair dynamics are cute but slightly overkill, in my opinion -- not sure if I'll use them.
In summary, laddoo has a lot of extra weight in the form of extraneous blendshapes, a weird eyebrow setup, and ONLY THREE FRICKIN FINGERS, and some of the attributes are unnecessarily clamped (though the keys can be pushed in the graph editor), but is otherwise quite a versatile rig that can achieve a decent range of poses and expressions. I'm probably going to add only a single blendshape to the mix, a squint blendshape to allow laddoo to make a "what-the-hell-is-going-on" face. As you can see, it's not exactly wtf-ish, as of yet:
Up next is nCloth. Ah, what can I say. nCloth is totally cool. It's really easy to learn, too. After going through Duncan Brinsmead's tutorial a couple times, I got the hang of it: http://area.autodesk.com/index.php/blogs_duncan/blog_detail/animating_a_book_with_ncloth/
Right now, I'm using a lot of transform constraints to open the cover and turn a page. I'm getting some nice animation, except in the binding area of the book. There are also dmap shadow problems; I assume it's a self-shadowing issue because my bias is too low. But it's exciting to see the pages turn and the binding of the book actually follow where the page goes.
The nCloth cache is also really easy to use, just like fluids. Now if only I could remember how to do it for particles -- I remember there is an extra step somewhere. Oh well, I'll get to particles soon enough.
I've been showing the animatic to more and more people, and getting better and better reactions. I showed it to my parents, who represent to me the target audience: non-industry folk seeing the piece for the first time. Everything really has to be spoonfed, the story may seem clear to me but be confusing for someone watching for the first time. And most of the people who see it will only see it once.
Most importantly, I showed it to Patricia Heard-Greene, the other thesis advisor, and she had some good suggestions on cutting it down. She had a lot of lighting tips, and a lot of comments on the animation which was funny because I really haven't spent much more than a couple weeks animating.
So that brings me to the thesis panel. I'll document everything they said because I feel it's pretty important: it's like reading through the comments on your first exam in a class. My panel included Boaz Livny, my lighting instructor (and all-around super genius), Adam Meyers (genius), Myles Tanaka (genius), and Patricia (genius). A great panel, all with very worthwhile comments:
-Adam says less is more, and clarity is much more significant than razzle dazzle. Subtle effects are much more impressive than big, over the top effects.
-Pan and scan is a definite possibility for background matte paintings
-Boaz stressed screen gamma, especially from the projector, which tends to make everything dark and flat, and clips the brightest whites and darkest blacks. The projector render thus needs to be super hi res, super-saturated, slightly brighter, within a range of 15% gray to 85% gray. UGH!
-Oh, and also another gentle reminder from Boaz, regular renders from Maya are physically wrong because they output texture colors in linear space. DOUBLE UGH!
-Adam mentioned that viewers are stupid, and if you bet on your audience to cleverly get the idea of your piece you're probably gonna lose.
-Patricia did not outright say it but implicitly suggested that I need a much better thesis statement and synopsis, one that I can read off a piece of paper to the panel.
-Boaz told me to get help with animation, since it isn't my focus and without good animation the piece will fall flat, regardless of how good the effects are.
-Myles mentioned something I have written down as "ad cuts": little graphic blurbs used to sell services, like the ACME company from Looney Tunes. Something to look up, I guess.
-Boaz strongly suggested DragonHeart as reference for some of the dragon and water interaction.
-Patricia reminded me to strongly prioritize my time. Spend the majority of it making the visual effects look really spot on.
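Boaz's two color warnings above boil down to a pipeline note for me: the final frames need a display gamma correction and then a squeeze into the projector-safe value window so nothing clips. A rough sketch of the output end, assuming a simple 2.2 display gamma and the 15%-85% gray range he quoted:

```python
def to_projector(linear_value, gamma=2.2, lo=0.15, hi=0.85):
    """Gamma-correct a linear 0..1 pixel value, then remap it into the
    projector-safe gray window so blacks and whites don't clip."""
    corrected = linear_value ** (1.0 / gamma)   # linear -> display gamma
    return lo + corrected * (hi - lo)           # squeeze into [lo, hi]

print(to_projector(0.0))   # 0.15 -- darkest black parks at 15% gray
print(to_projector(1.0))   # ~0.85 -- brightest white at 85% gray
```

A real pipeline would do this per channel on image buffers (and the gamma value itself should come from profiling the actual projector), but the remap is this simple.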
The rest of the comments were directed specifically at the work I had done in my piece, which I've organized by person:
Patricia:
-first few shots could be cut in favor of foley sounds that occur during the title shot.
-Light the well during the book shots darker on the left side if you don't want the audience to read through the whole text
-Make the book a brighter, more obvious color than the well so the audience is focused on the book
-The overhead street light is fine
-Add a moon to get nice rim lighting on the dragon, glint of moonlight on the book cover from bump
-Watch out for the beginning with action starting and stopping -- try to cut on action a little better
-Step 3, add fennel, start with the page already half turned, and hold on the Step 2 image a little longer
-Need 3 ingredients for the montage sequence: steps 3, 6, 10?
-The warning page is really boring
-Patricia likes the 3D animatic bottles!
-The explosion out of well shot needs to be much more dramatic -- consider having the dragon fly toward the camera a little bit
-Play with the zdepth of the dragon, it's a little too static during the upward motion
-Drag on the wings and tail, possibly nCloth for the wings
-Maybe the shadow of the dragon falls over Billy's face when it cuts to Billy surprised
-For light come back on shot, again play with dragon's zdepth
-For the great dragon is tiny shot, Billy should be leaning back or stepping back or doing something, not just standing
-Tree leaves need to be rustling. Since I'm presenting as a visual effects artist, this will get pointed out if it's overlooked.
Myles:
-Give the dragon some teeth and angry eyes, exaggerate it for the big shot where he comes out of the well
-Angle the camera to see the dragon higher more quickly. Have the dragon come out toward the camera, filling up more than the frame: tips of wings, some of head, and legs and tail out of frame (saves on animation, yay!)
-Possibly during that shot switch to an aerial view above with the dragon surging toward the frame. You want its head to look big, mean, and scary
-Play up the warning in the shot where Billy reads about the final ingredient, maybe slightly zoom toward the warning. A subtle camera move.
Adam:
-Treat the title better, do something with it. Integrate it into the piece instead of just title and fade up from black.
-Be more aggressive with the title font.
-A lot is going to depend on how the book is stylized. Textures need to be spot on.
-You can use Billy's finger to animate and show how fast and where he is reading on the book.
-The background is a bit flat and could use some depth of field blur and fog effects
-Texture the lightfog to give the whole scene a really eerie look.
Boaz:
-Watch the smoothness of Laddoo's hair. That's easy to fix because Laddoo was unsmoothed at the time I rendered him for the animatic.
-For the bubble effect you can use the bubble texture not only as displacement but also as a mask for the glow
-You can use a lens shader to distort shot 23 when the dragon comes out of the well. You can also use a 2D filter.
-The dragon texture is non-existent. (So is the final model, I wanted to say, but didn't).
-The dragon should be reflected in Laddoo's eyes. You can fake this with a reflection map, if necessary.
-Motion blur is super necessary, but don't be a fool and try to get mental ray to render motion blur and depth of field at the same time; it will kill your render times. Instead, render out a separate z-depth pass and composite in z-depth fog -- an effect I am quite familiar with, having done it twice for his assignments. It's a great effect!
-Read Boaz's book chapter on paint effects for the trees. Yay, Boaz wrote a whole Maya book chapter on paint effects. Guess what's just moved to the front of my reading list.
-Use the rasterizer to render motion blur. I'm a little bit hesitant on this one because I do want raytraced reflections in the well water. And the rasterizer scrunches up its face and gets constipated whenever it tries to render a raytraced scene. Plus I've never really gotten a good render out of the rasterizer, even without raytracing. It's just my incompetence, I'm sure, but it's another thing I have to learn if I use it.
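The z-depth fog comp Boaz recommends above is cheap to sketch: turn the rendered depth pass into a fog factor and blend each pixel toward the fog color in 2D. A minimal per-pixel version (the fog color and density numbers are placeholders I made up):

```python
import math

def zdepth_fog(pixel_rgb, depth, fog_rgb=(0.5, 0.55, 0.6), density=0.05):
    """Blend a rendered pixel toward a fog color by its z-depth.

    Fog amount follows the usual exponential falloff 1 - e^(-density * z):
    near pixels keep their color, far pixels dissolve into the fog.
    """
    f = 1.0 - math.exp(-density * depth)
    return tuple(c * (1.0 - f) + fc * f for c, fc in zip(pixel_rgb, fog_rgb))

print(zdepth_fog((1.0, 0.0, 0.0), depth=0.0))     # (1.0, 0.0, 0.0) -- untouched
print(zdepth_fog((1.0, 0.0, 0.0), depth=1000.0))  # ~fog color
```

In practice this runs per pixel over the beauty and depth passes in the compositor, which is exactly why it's so much cheaper than asking mental ray to raytrace depth of field.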
Whew!
Onward to the next task: somehow figuring out how to transfer the animation from my 3D animatic over to the final shot scenes.
Yay, I'm starting to animate! Yay, I'm on schedule! Mostly, probably, sort of, hopefully ... ?
I added more to the environment, including throwing a (bad) texture onto the road. That will have to be changed. I still like my ground shader, though, and it is really easy to render. Trees still need to be tweaked. I want to have a nice cartoony moon in the sky, but I've been playing with filters and trying to get a nice look but haven't had much success. I watched some old tv episodes of Aladdin, and there are some great shots of gigantic, stylized moons against a bright blue Arabian night sky. The secret seems to be mixing the bluish light of the sky and the yellowish glow of the moon without ending up with an unsightly shade of green. I'll play with that a bit more this week.
I fixed the dragon model up a bit, smoothing out the patches and converting to polygons, but I have since deferred to Adrian de la Mora for the modeling details. One of the major comments I got at the thesis critique was how and when to pass off work. Also, I'm having Cidalia Costa do the textures for the book. She wants to do oil paintings, which will end up looking fantastic. As soon as I nail my style of the book, I know it will look great.
I spent a couple days playing with the Laddoo rig, understanding how it works and how the nodes are connected. Just for reference, this is the rig itself in the hypergraph, which was what I was trying to understand:
There are several blendshape nodes that feed into one master blendshape called parallelBlender. This node controls the face and eyebrows. The parallelBlender then feeds into the joint clusters, which makes sense since the face deformation should occur before joint deformation. All this is piped through a smooth node, and then outputted to the actual geometry. Of course, I didn't notice that this smooth node had it's attributes connected to a control, and tried to create my own smooth attribute. I smoothed all the geo, then attempted to write an expression to control the number of divisions in those smooths. As it turns out, just being able to select all those new smooths was a pain. It took an unfortunate amount of time to come up with the following script:
int $i;
for ($i = 35; $i < 45; $i++) {
$pick = "polySmoothFace" + $i;
select -tgl $pick;
}
I then used a driven key to connect a smooth attribute on the master control to the division levels of the selected smooth nodes. Only after I did this did I realize there is a big S control curve behind laddoo that does this very same thing:
Luckily, this practice of adding a smooth control was not in vain, because using the rig's built in smooth control destroyed the UV transfer map I created to prevent the UVs from swimming. This was because the rigged geometry takes its UVs from the texture reference, and doesn't know how to interpret the UVs when it is smoothed and the texture reference isn't. So the solution, of course, was to smooth the texture reference simultaneously. I used the driven key to drive the smooth of the texture reference, and, lo and behold, it smoothed the UVs. I was really happy because I had no idea whether that would actually work, Maya can be very picky when it comes to UVs.
Other laddoo rig issues: there were a lot of blendshapes that looked like they were part of the modeling history and not really built into the rig. Most of them were duplicates, so I ended up deleting a bunch of head models. This ended up removing almost a third of the entire file size, cutting it down from 30 MB to about 22 MB. I learned how the eyebrows work: getting deformations both from the blendshape nodes as well as extra cluster nodes used to deform the eyebrows independently. It's quite weird. There's also some different geo for the hair, but I don't know how to change the hair to use those pieces of geometry. My guess is they haven't actually been built into the rig. The hair dynamics is cute but slightly overkill, in my opinion, not sure if I'll use it.
In summary, laddoo has a lot of extra weight in the form of extraneous blendshapes, a weird eyebrow setup, and ONLY THREE FRICKIN FINGERS, and some of the attributes are unnecessarily clamped (though the keys can be pushed in the graph editor), but is otherwise quite a versatile rig that can achieve a decent range of poses and expressions. I'm probably going to add only a single blendshape to the mix, a squint blendshape to allow laddoo to make a "what-the-hell-is-going-on" face. As you can see, it's not exactly wtf-ish, as of yet:
Up next is nCloth. Ah, what can I say. nCloth is totally cool. It's really easy to learn, too. After going through Duncan Brinsmead's tutorial a couple times I got the hang if it: http://area.autodesk.com/index.php/blogs_duncan/blog_detail/animating_a_book_with_ncloth/
Right now, I'm using a lot of transform constraints to open the cover and turn a page. I'm getting some nice animation, except in the binding area of the book. There's also dmap shadow problems, I assume its a self-shadowing issue because my bias is too low. But it's exciting to see the pages turn and the binding of the book actually follow where the page goes.
The nCloth cache is also really easy to use, just like fluids. Now if only I could remember how to do it for particles -- I remember there is an extra step somewhere. Oh well, I'll get to particles soon enough.
I've been showing the animatic to more and more people, and getting better and better reactions. I showed it to my parents, who represent to me the target audience: non-industry folk seeing the piece for the first time. Everything really has to be spoonfed, the story may seem clear to me but be confusing for someone watching for the first time. And most of the people who see it will only see it once.
Most importantly, I showed it to Patricia Heard-Greene, the other thesis advisor, and she had some good suggestions on cutting it down. She had a lot of lighting tips, and a lot of comments on the animation which was funny because I really haven't spent much more than a couple weeks animating.
So that brings me to the thesis panel, I'll document everything they said because I feel it's pretty important: it's like reading through the comments you get on your first exam in a class. My panel included Boaz Livny, my lighting instructor (and all-around super genius), Adam Meyers (genius), Myles Tanaka (genius), and Patricia (genius). A great panel, all with very worthwhile comments:
-Adam says less is more, and clarity is much more significant than razzle dazzle. Subtle effects are much more impressive than big, over the top effects.
-Pan and scan is a definite possbility for background matte paintings
-Boaz stressed screen gamma, especially from the projector, which tends to make everything dark and flat, and clips the brightest whites and darkest blacks. The projector render thus needs to be super hi res, super-saturated, slightly brighter, within a range of 15% gray to 85% gray. UGH!
-Oh, and also another gentle reminder from Boaz, regular renders from Maya are physically wrong because they output texture colors in linear space. DOUBLE UGH!
-Adam mentioned that viewers are stupid, and if you bet on your audience to cleverly get the idea of your piece you're probably gonna lose.
-Patricia did not outright say it but implicitly suggested that I need a much better thesis statement and synopsis, one that I can read off a piece of paper to the panel.
-Boaz told me to get help with animation, since it isn't my focus and without good animation the piece will fall flat, regardless of how good the effects are.
-Myles mentioned something that I have written down as "ad cuts" little graphic blurbs used to sell services, like ACME company from looney toons. Something to look up, I guess.
-Boaz strongly suggested DragonHeart as reference for some of the dragon and water interaction.
-Patricia reminded me to strongly prioritize my time. Spend the majority of it making the visual effects look really spot on.
The rest of the comments were directed specifically at the work I had done in my piece, which I've organized by person:
Patricia:
-first few shots could be cut in favor of foley sounds that occur during the title shot.
-Light the well during the book shots darker on the left side if you don't want the audience to read through the whole text
-Make the book a brighter, more obvious color than the well so the audience is focused on the book
-The overhead street light is fine
-Add a moon to get nice rim lighting on the dragon, glint of moonlight on the book cover from bump
-Watch out for the beginning with action starting and stopping -- try to cut on action a little better
-Step 3, add fennel, start with the page already half turned, and hold on the Step 2 image a little longer
-Need 3 ingredients for the montage sequence: steps 3, 6, 10?
-The warning page is really boring
-Patricia likes the 3D animatic bottles!
-The explosion out of well shot needs to be much more dramatic -- consider having the dragon fly toward the camera a little bit
-Play with the z-depth of the dragon; it's a little too static during the upward motion
-Drag on the wings and tail, possibly nCloth for the wings
-Maybe the shadow of the dragon falls over Billy's face when it cuts to Billy surprised
-For light come back on shot, again play with dragon's zdepth
-For the "great dragon is tiny" shot, Billy should be leaning back or stepping back or doing something, not just standing
-Tree leaves need to be rustling. As a visual effects artist this will be pointed out if it is overlooked.
Myles:
-Give the dragon some teeth and angry eyes, exaggerate it for the big shot where he comes out of the well
-Angle the camera to see the dragon higher more quickly. Have the dragon come out toward the camera, filling up more than the frame: tips of wings, some of head, and legs and tail out of frame (saves on animation, yay!)
-Possibly during that shot switch to an aerial view above with the dragon surging toward the frame. You want its head to look big, mean, and scary
-Play up the warning in the shot where Billy reads about the final ingredient, maybe slightly zoom toward the warning. A subtle camera move.
Adam:
-Treat the title better, do something with it. Integrate it into the piece instead of just title and fade up from black.
-Be more aggressive with the title font.
-A lot is going to depend on how the book is stylized. Textures need to be spot on.
-You can use Billy's finger to animate and show how fast and where he is reading on the book.
-The background is a bit flat and could use some depth of field blur and fog effects
-Texture the lightfog to give the whole scene a really eerie look.
Boaz:
-Watch the smoothness of Laddoo's hair. That's easy to fix because Laddoo was unsmoothed when I rendered him for the animatic.
-For the bubble effect you can use the bubble texture not only as displacement but also as a mask for the glow
-You can use a lens shader to distort shot 23 when the dragon comes out of the well. You can also use a 2D filter.
-The dragon texture is non-existent. (So is the final model, I wanted to say, but didn't).
-The dragon should be reflected in Laddoo's eyes. You can fake this with a reflection map, if necessary.
-Motion blur is super necessary, but don't be a fool and try to get mental ray to render motion blur and depth of field at the same time; it will kill your render times. Instead, render out a separate z-depth pass and composite in z-depth fog -- an effect I am quite familiar with, having done it twice for his assignments. It's a great effect!
-Read Boaz's book chapter on paint effects for the trees. Yay, Boaz wrote a whole Maya book chapter on paint effects. Guess what's just moved to the front of my reading list.
-Use the rasterizer to render motion blur. I'm a little bit hesitant on this one because I do want raytraced reflections in the well water. And the rasterizer scrunches up its face and gets constipated whenever it tries to render a raytraced scene. Plus I've never really gotten a good render out of the rasterizer, even without raytracing. It's just my incompetence, I'm sure, but it's another thing I have to learn if I use it.
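Since the z-depth fog trick came up again, here's a minimal per-pixel sketch of the idea, assuming a normalized depth pass where 0.0 is near and 1.0 is far. The function name, fog color, and density parameter are hypothetical; a real comp would do this on whole image planes, not single pixels:

```python
# Hedged sketch of compositing z-depth fog: blend each pixel's color
# toward a fog color based on its value in the z-depth pass. Assumes
# depth is already normalized so 0.0 = near camera and 1.0 = far.

def apply_depth_fog(color, depth, fog_color=(0.5, 0.55, 0.6), density=1.0):
    """Blend an RGB pixel toward fog_color by its z-depth."""
    t = min(max(depth * density, 0.0), 1.0)  # fog amount, clamped to [0, 1]
    return tuple(c * (1.0 - t) + f * t for c, f in zip(color, fog_color))

near = apply_depth_fog((1.0, 0.0, 0.0), depth=0.0)  # -> (1.0, 0.0, 0.0), untouched
far = apply_depth_fog((1.0, 0.0, 0.0), depth=1.0)   # -> pure fog color
```

The nice part, as Boaz pointed out, is that this happens entirely in 2D after the render, so the expensive 3D pass never has to compute atmospherics.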
Whew!
Onward to the next task: somehow figuring out how to transfer the animation from my 3D animatic over to the final shot scenes.
Yay, I'm starting to animate! Yay, I'm on schedule! Mostly, probably, sort of, hopefully ... ?