I am halfway through Droidmaker, the book by Michael Rubin that traces the beginnings of CG, starting in the 50’s with George Lucas’ childhood.
As a child of the 80’s, I found it fun to be given material that connected the dots for me.
All of my memories of the computers, movies, and electronics I was exposed to as a very small child had very little relation to each other, except that I look back on them fondly (ahhh… video arcades, how I miss you so!).
I had no idea how all of the technology from the 70’s and 80’s wove together. I did know that Lucas had a major impact on CG, but I had no idea he drove industry research for several decades, or that his team made many of the discoveries we now take for granted (I think of this book every time I use a Blinn shader).
I have zipped through Marin County several times when traveling (I used to live in Los Angeles and loved visiting the redwoods in Northern California) and always wondered about the mysterious Skywalker Ranch. I guess it’s good I never tried to drive by and see the place, in true Star-Wars-fan fashion, because the book mentions that there are actually several locations where Lucas’ great works were masterminded and brought to life.
Droidmaker presents the history of computer-generated visual effects in a way that allows the reader to understand the beginnings of the field and to better appreciate how visual effects are created today.
There are striking similarities between the way these images were generated thirty years ago and how they are created today.
For example, I found the use of wireframes and polygons to model surfaces an odd concept, until I realized that the earliest computer-generated images, going back to the 50’s, were polygon based and built from mathematical equations, just as they are today.
In the 70’s, Ed Catmull and his fellow students at the University of Utah were making hands and faces with these wireframe polygons. They were able to manipulate the images so that even flat polygon surfaces could appear curved, as in the case of the famous teapot, giving us the modeling style we have today.
Rendering still takes a long time, though nowhere near as long as it did in the 70’s and 80’s. Still, I can relate to what the artists on Wrath of Khan must have felt when they rendered the Genesis Effect over the Christmas holiday, only to come back and discover that the camera in the shot was moving through the mountains, and that they had to redo the entire sequence. Last year I rendered sequences for my 2D thesis on my clunky, three-year-old MacBook; a sequence could take an entire day, and sometimes it came back with very obvious miscalculations that required fixing and re-rendering. Most recently, I rendered an animation for this semester’s pipeline class and then had to fix and re-render the clip because the animation was off, even though it looked good in the pre-render preview.
Today, computers are obviously more powerful and rendering takes much less time, but sometimes the pre-render preview plays slowly in Maya, or I miss a tiny but important element that I only spot in the render (like my character not moving as naturally as I thought he was), and I have to go back and correct it.
Finally, I have come to understand, not only through Droidmaker but also through the pipeline class, why there are so many names in the visual effects credits of movies. I always knew this work was very precise and labor intensive, but I didn’t understand the process a team goes through from start to finish. My experience in pipeline has helped me understand this, but so has the history of the field: these departments emerged over the course of visual effects history to meet specific needs at specific moments, like texturing the teapot or compositing the cave in Wrath of Khan.