Unreal

I had been using Unity 3D for many years. My own 3D software and apps were developed in Unity 3D, and I had built up a substantial codebase of Unity 3D C# code.

But I threw it all away. UNREAL!

Or, rather, I moved to the Unreal Engine.


The awkwardly wrong path of digital SLR as digital cinema

Last week, Scott Squires retweeted a link to a Digital SLR (DSLR) review for Digital Cinematographers and filmmakers.

The linked article eventually gets back to its source at 4K Shooters.

I sort of went off.  Not at Scott.  Or at least not intentionally at Scott.  But rather, at the whole of the Digital SLR Cinematography movement.

My general gist being:


Global Storage 2 VFS

So, I’ve been quietly working on Global Storage, version 2.

Version 1 never really went beyond a "neat thing" I made.  I evaluated it for use on Terrence Malick’s "Tree of Life" and decided it wasn’t yet mature enough to trust production assets to.  I also evaluated GlusterFS and MooseFS at the same time and came to the same conclusion regarding those offerings at that time.  Instead, I went with a venerable NFS cross-mounting solution, leveraging the power of my network switch and the intelligence of my artists, rather than the filesystem, to handle the workload.  Once I set down ground rules and taught my team to follow them, all was well.  That solution doesn’t really scale.  But it was fine for that show.

However, I’ve recently found myself really missing the functionality that Global Storage was meant to bring, as I’ve been working a lot in Unity3D and other digital-asset-heavy environments.  So I have gone back in and started a rewrite from scratch.  The base tenets of the system have not changed.  But I decided to break backwards compatibility in order to address a lot of the "bigger facility" issues I’ve been exposed to during my time as the Director of Software and Pipeline at Method Studios.

Some major changes:

  • An all-SSH back end and communication layer, for security
  • Bazaar rather than Subversion as the base-level revision system.  This allows a more distributed asset management system, just like Bazaar allows a more distributed development system than Subversion does.
  • A Virtual File System via FUSE.  It looks exactly like a native file system but, on closer inspection, it isn’t.  It’s way more.

That last item, the VFS, is one that I may hold off on until version 2.5.  The system is designed with it in mind.  It’s just that I may finish everything but that, and push it into use with symlinks as a glue mechanism, before I dive into the FUSE module.  We’ll see.

Currently Global Storage 2 (a.k.a. gls) is awake.  It can log into a minimally configured server and set up some base-level storage buckets.  It can manage SSH and RSA passwordless login for a user (somewhat important to a system premised entirely on SSH network communications).  I’m implementing the Bazaar storage bucket types right now, actually.  Hopefully it will do something impressive enough to show off very soon.
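For the curious, that passwordless-login piece boils down to the usual OpenSSH two-step.  Here is a rough sketch of what it amounts to; this is not the actual gls code, and the function name and key path are purely illustrative:

    import subprocess
    from pathlib import Path

    def ensure_passwordless_login(user, host, key_path=Path.home() / ".ssh" / "id_rsa"):
        """Sketch only: make sure an RSA keypair exists locally, then install
        the public key on the remote server so SSH stops asking for a password."""
        if not key_path.exists():
            # -N "" generates the key without a passphrase
            subprocess.run(
                ["ssh-keygen", "-t", "rsa", "-b", "4096", "-N", "", "-f", str(key_path)],
                check=True,
            )
        # ssh-copy-id appends the public key to ~/.ssh/authorized_keys on the server
        subprocess.run(["ssh-copy-id", "-i", str(key_path), f"{user}@{host}"], check=True)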

I guess the other piece of this puzzle is as follows:

gls is likely going to be the basis of all my licensable technology.  I’m going to use it as my distribution, licensing, and updating mechanism.

In a lot of ways, this is turning into version 2.0 of the "Tree of Life" pipeline.  I figure if I can solve the scalability and multi-site concerns of Method, along with the budget, image size, and quality issues of "Tree of Life," then I’ve got something pretty compelling.

bash tab complete

random note on something that was hard to figure out:

When you set up bash tab completion using:

complete -C [commandtorun] [commandtocomplete] 

bash will run the "commandtorun" command with three arguments:

  1. commandtocomplete
  2. the incomplete arg token at the prompt or "" for an empty token
  3. the arg token just before the incomplete token… which may be commandtocomplete if there are no intermediate args. 

I had to write an executable that dumps the incoming args to a text file to figure this out.  The docs are a bit thin on the matter.
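If anyone else wants to poke at it, the dump program doesn’t need to be a compiled exe.  A few lines of Python will do; this isn’t my original program, just an illustrative stand-in (the log path and command names are arbitrary):

    #!/usr/bin/env python
    # dump_args.py: make it executable, then register it with
    #   complete -C /path/to/dump_args.py mycommand
    # and hit tab after typing "mycommand ..." to see what bash sends.
    import sys

    with open("/tmp/complete_args.txt", "a") as log:
        # argv[1] = the command being completed
        # argv[2] = the partial token under the cursor ("" if empty)
        # argv[3] = the token just before it
        log.write(repr(sys.argv[1:]) + "\n")

    # Whatever this script prints to stdout becomes the completion candidates;
    # print nothing and bash simply offers no completions.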

Staff at Method Studios

Actually, this is old news.  As of January 2010, I took a staff position at Method Studios.

For FIE, this means the consulting and software facets of my company are going into a bit of a frozen state.  I will continue to develop software and tools in some capacity outside of Method, but will most likely limit that work to open source tools.

The creative development half of the company is still moving forward, however slowly…

Nuke 5.2 GPU Support

I found it somewhat unintuitive and unpublished, so I thought I’d post my findings here.

Nuke’s brand-new GPU support is currently set up to run GPU operations only for the viewport.  It will not return the results to the next node in the chain.

This is across the board in all 3 hidden GPU nodes that come with Nuke 5.2 and appears to be built into the IOp calling mechanism (the closed-source part of Nuke).  So for all those with visions of GPU-accelerated compositing… not yet.

I cannot really intuit the design intentions to the level that I can say if this will change in future versions.  The one really odd part is the hidden node: BlockGPU.  It makes no sense to have a node that blocks the usage of the GPU engine in a tree, if the only way to use the GPU engine is to view a node that uses the GPU engine directly.  And secondarily… it doesn’t block the node if you view it directly.

Anyhow, to me it looks like they got just enough code in there to accelerate their new GPU LUT functionality, and left it at that.  That’s probably why the GPU nodes are hidden rather than exposed.  Maybe in Nuke 6.0?

Iron Man Released

Iron Man finally hit theaters, so I finally get to talk in the open about it and what I did on it.

If you watch the credits, under Pixel Liberation Front, you’ll find me credited as Brad Friedman, Technical Artist.  This is actually a mistake.  My credit is supposed to be Technical Lead, as my supervisor submitted it.  I’ll assume it was a transcription error and not a willful demotion.  If I’m really lucky it might be fixed for the DVD release, but that’s pretty unlikely.

So, what exactly did I do on the film?  What does previs do on films like that?  How does it relate to FIE?  How will it help my clients?

Well, I’ll answer the last questions first.  On Iron Man, I developed a brand new previsualization pipeline for Pixel Liberation Front, in Autodesk Maya.  PLF has, in the past, been primarily an XSI-based company.  Though they have worked in Maya before, it’s usually for short jobs (commercials and the like) or working within someone else’s custom Maya pipeline (such as working with Sony Imageworks on Spider-Man 2).  This was the first time PLF was going to really take on a modern feature film as the lead company, completely under its own sail in Maya.  And it was my job to make it possible.

The tools and techniques I developed were the backbone of the previsualization department on Iron Man for the two years PLF was actively on the project, even after I left the film.  The pipeline also moved over to “The Incredible Hulk” as we had a second team embedded in that production.  I’ve actually been talking to PLF about further expansion and will probably be expanding their pipeline in the coming months for upcoming projects.  All those designs and ways of working are currently being updated and reimplemented into my licensable toolkit.  They will be available to my clients.

I’ll get into the specifics of the tools and designs as I describe the challenges of the production further.

When we were first approached by Marvel and the production, one of the requirements for being awarded the job was that we work in Maya.  We quickly evaluated whether we could do it.  And the answer was “yes.”  Very quickly we were awarded the job and we sent our first team to the production offices, embedded in Marvel’s offices in Beverly Hills.  The team consisted of Kent Seki, the previsualization supervisor; Mahito Mizobuchi, our asset artist and junior animator; and myself, the technical director and artist.

My first major task was to help the designers in the art department with the Iron Man suit design.  At the time, there was no visual effects vendor.  And the hope was that I could do visual effects development and vetting with the art department, avoiding the need to hire a VFX vendor early in the production.  John Nelson, the VFX supervisor, was particularly concerned with the translation of the designs on paper to moving, three-dimensional armor.  He didn’t want obvious interpenetrations and obvious mechanical impossibilities to be inherent in the design.  He wanted the range of motion evaluated and enhanced.  He needed us to be able to build the art department designs in 3D and put them through movement tests.  He needed us to suggest solutions to tricky problems.  All that weight fell on my shoulders, with the assistance of Mahito’s modeling skills.

For a number of months, we rebuilt the key trouble areas again and again as the design was iterated back and forth between us and the art department.  The abdomen and the shoulder/torso/arm areas were where we focused most of our time. I rigged a fully articulated version (metal plate for metal plate) of the trouble areas about once a week, trying to solve new and old problems different ways each time.

Finally, after a few months of work, we arrived at what has stuck as the final articulation and engineering of the suit.  The suit has gone through many superficial redesigns and reproportions since then.  However, its mechanical design was sound and survived all the way to the screen.  The design eventually went to Stan Winston Digital for further work and to be built as a practical suit.  It eventually went to ILM for final VFX work, once they were awarded the show.

I think what this aspect of the project shows is a very smart VFX Supervisor and VFX Producer figuring out how best to utilize a top-notch previs team to save money and get experts in the right places, without having to pay a premium for a full VFX company early in the production process.  This allowed them to go through a full bidding process for the VFX vendors, without being rushed into an early decision, as their VFX needs were met by us.  To give you an idea of how early in the process this was: Robert Downey Jr. was not cast at this point.  The production designer started work a week after we did.  The script was not complete.  The Director of Photography was not chosen.  The main villain in the film was NOT Obadiah Stane at the time.  This is really early, and ultimately the final decisions on VFX vendors didn’t come until about a year later.

Once our initial suit design chores were handled, I moved onto the main previs effort and we started working on sequences of the film.

Previs has some very special requirements.  It has to be done fast.  It has to look good.  It has to be cheap.  I think it’s the most demanding animation project type there is these days.  On the triangle of necessity, it hits all three points: Fast, Good, and Cheap.  Usually you can pick only two of those to accomplish at the expense of the third.

Historically, previs got away with skimping on the “Good.”  Characters would slide along the ground, making no effort to pump their legs.  It was pure blocking.  It was fine though, because the previs was a technical planning tool.  The aesthetics were irrelevant as long as the previs was technically accurate enough to plan with.  However, as previs has become more of an aesthetic tool and less of a purely technical tool, the quality of animation has had to increase exponentially.  It has gotten to the point that on a lot of shows, we’re doing first-pass animation.  And in the extreme, even first-pass animation is not enough to convey real human acting.  We are expected to be able to generate real human motion without giving up on speed or price.  Some might find this antithetical to the concept of previs (in fact a lot of people do), but our reality as a previs company is that “the customer is always right,” and if the customer decides they want it, we need to be able to provide it.

So to sum it up, I was faced with being the only experienced Maya user on the team.  I needed to hit all three points in the triangle of necessity: fast, cheap, and (exceptionally) good.

By the end of my time on Iron Man and Hulk, I had created the following:

  • Multi Rig Marionette System
    • One character can have any number of control rigs in a given scene.  The character can blend from rig to rig as the animator pleases.  Addition of more rigs and characters to the scene is handled via an integrated GUI.  Control rigs can be of any type and therefore can use any animation for their source, from hand animation to mocap.  Think hard about what doors it opens up.  A rig devoted to a walk cycle.  A rig devoted to a run cycle and a rig to animate on by hand.  All connected to the same character with the freedom to add as many other rigs as you need to make the scene work.
    • Accelerated skinning based on a geometry influence system integrated into the Marionette.
  • Motion capture Integration
    • Accelerated pipeline for moving raw motion capture from MotionBuilder to any number of previs characters in a batch-oriented fashion.
    • Motion Capture editing rig that takes mocap as a source and gives the animator an array of FK and IK offset controls with which to modify the mocap.
  • Auto Playblast/Capture/Pass System
    • A fully functional settings system that allows assets to be populated with “terminal strips” and passes to be populated with “settings” for the terminal strips that control aspects of the asset via connections, expressions, etc.  Activating a pass applies the settings to any matching terminal strips in the scene.
    • Passes can be members of any number of shots in the scene.
    • Passes can optionally activate associated render layers.  In this way, I’ve incorporated the existing render layers system into my own (if the new render layers ever become stable enough to use in production, I might actually condone their use)
    • A shot references a camera, a number of passes and keeps information on Resolution, Aspect ratio, shot, sequence, version, framerange, etc.
    • Smart playblast commands capture the selected shots, the selected passes within shots, etc.  One command and it sets up the playblasts itself and fires them off one by one.  Go get coffee.
  • Guide and AutoRig solutions
    • A custom biped guide allows easy joint placement and alignment.
    • A very comprehensive rig based on the ZooCST scripts is automatically generated from the guide.
    • Motion capture rigs suitable for moving into MotionBuilder are generated off of the guide.
    • Motion capture editing rigs suitable for use in maya are generated off of the guide.
  • An exceptional Goat rig
    • I made a really good goat rig.  Which didn’t really end up getting used.  But it was a spectacular rig.  Really.

Beyond the tools I built directly for the previs jobs, Iron Man also opened the door for us when it came to motion capture.  The production had scheduled a series of motion capture shoots with which to test out the suit designs and see how they moved.  We made it a point to capture some action for the previs while we were there.  Our previs characters were already loaded into the systems, since that was what the production was using to test out the suit design.  We used some of the mocap for the previs on Iron Man in a more traditional MotionBuilder pipeline that was functional, but slow.  I immediately realized that mocap could be the solution to the better-faster-cheaper problem, with the bottlenecks sorted out by custom tech.  And I moved in that direction.  Within a year, PLF had bought a Vicon mocap system.  I set to the task of making it work for previs, and within the year we were in full swing doing previs mocap for “The Incredible Hulk,” due out later this summer.

The remaining question is: do these technologies have relevance outside of previs?  And the answer is yes.  All of these technologies were built to dual-target previs animation and production animation.  They’re generic in nature.  We have used all of these systems on finished render jobs, game trailers and the like.  What I’ve built is a better way to animate that is rig agnostic.  All the old animation techniques and rigs still work.  I’ve just incorporated them all into a higher level system.  I then built a high-volume motion capture pipeline as one potential animation source in that higher level system.

But it doesn’t stop there.  In reality, I only really got to develop about half of the systems and tools I felt should be a part of the pipeline.  In their new incarnations, the tools will be even more functional.

I saw the movie the other night and it was a real thrill to see all our shots and sequences finished and put together.  A number of my own shots made it through the gauntlet and into the final film, which was especially satisfying.  More so than that, it was great to see people enjoying it.  It felt good knowing that I had, in a very real way, built a large section of the foundation for it.  My tools allowed Jon Favreau to work on the film in a more fluid and intuitive manner through the previs team, which created an unprecedented amount of previs at an astounding quality.  The more previs we were able to do, the more revisions and iterations Jon could make before principal photography.  And the film is better for it.

Introducing GlobalStorage

GlobalStorage solves a lot of problems I’ve been pondering for a long time.  Here are some of those problems.

  1. You’re not really supposed to work directly on the file server in most production environments.
    1. It clogs the network for everyone.
    2. Network fabric is slower than internal data buses like SATA.  You get better performance on a local drive.
    3. Redundancy can save you.  If you break your local copy, you can pull the original from the server to fix it.
  2. But there are advantages to working on the server
    1. Organizationally, it’s easier to work on a common filesystem.  Changes others make to the filesystem are immediately seen by you.  You and your fellow artists won’t miss each other’s changes, as they live in the same place.
    2. You don’t have to manage which files you have "changed" and which you have not when publishing your changes back to the server, since changes land on the server immediately and… that’s it.
    3. Absolute filepaths that are part of the files you are working on (references from one file to another) are not an issue if everyone maps the shared directory to the same place.  Even the renderfarm can work directly with the filesystem this way.  Otherwise, you have to manage absolute paths and artists mess this up all the time.
    4. You won’t as often "forget" to publish new files to the server that you had locally on your hard drive, as your first instinct will be to save directly to the server.
  3. There are things that neither solution fixes
    1. Without actually locking the files you are working on when they are loaded into memory you risk two artists changing the same file and overwriting each other’s changes when they publish (or write) them back.
    2. When files are eventually published to the server, they are overwritten and the old version is lost.  So if you mess something up, you’re out of luck.

An experienced artist will look at my list of issues and immediately start listing application features, workflows and tools to deal with the problems one by one.  And let’s not be unreasonable here.  Most medium-sized facilities have, at the very least, solutions, standards and practices that mitigate a lot of these problems to varying degrees.  Here are a few:

  1. Alienbrain
  2. Perforce
  3. Versioning files manually
    1. Never create myfile.txt
    2. Always make myfile_v01_01.txt
  4. Use "incremental save" features in your 3d app
  5. Keep your whole project in a Subversion or CVS repository
  6. Make artists responsible for individual assets, reducing the number of people who may be working on a file.
  7. Use the verbal Checkout/Checkin system (i.e. "I’m Checking Out SC_02!")

The better solutions listed here fall into the category of Source Control Management (SCM) systems.  SCMs solve a lot of problems.  They were created many years ago to manage the first real digital assets: computer source code.  Modern SCMs manage locking and versioning of complex directory trees.  They can manage collisions down to the file level and, if your files are text based, they can often manage them down to the line level.

Perforce and Alienbrain have been optimized to work with digital media assets (which are usually characterized as being big binary files rather than text files).  They are, however, proprietary and expensive.  If you choose one of these solutions (and many digital media production facilities do), you will be stuck licensing each artist seat, or buying a rather hefty site license.  They are proprietary and therefore closed source.  And as much as they can provide plugin APIs, anything that’s closed source is more difficult to customize than an open source solution.

Subversion on the other hand, is open source and has a large volume of support.  Subversion has been my favorite SCM for years and I’ve used it in production a number of times.

However, Subversion is not the end solution to the problem.  All SCMs I’ve worked with, including Subversion, have a few problems.

  1. You don’t necessarily want to version every file in your tree.  Some files are meant to be replaced.  Especially large files that are generated from small files.  It’s probably good enough to generate them and push them to the server.  There’s no need to track their every incarnation over time.  It’s wasteful of space and processing power.
  2. If you accidentally commit large amounts of data to the server, it’s often quite hard to get rid of it for good.  It’s part of the history, and SCM systems are kind of built NOT to lose historical data.
  3. Archiving granularity is an issue.  You can create a repository per project, but then the projects are separate.  Or you can keep everything under one tree, but it becomes hard to delete a project after archiving it.  Also, when archiving a project, you may want to keep versioning information for some parts but only the latest version for others.  This is even more complex, if not impossible.

Anyhow, what I’ve built and have running in alpha right now is what I’m terming GlobalStorage.  It’s a suite of tools that uses Subversion to implement a more robust SCM that’s tailored better to production.  Basically, it’s a system built on top of Subversion and more common filesystem tools, acting as the single storage solution for a digital production studio.
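To make that concrete, here is a stripped-down sketch of what an SVN-backed publish/update cycle looks like under the hood.  The function names are mine for illustration, not the actual GlobalStorage API:

    import subprocess

    def publish(asset_dir, message):
        """Sketch: pick up any new files in the asset's working copy,
        then commit the whole thing back to the repository."""
        # --force adds unversioned files and quietly skips already-versioned ones
        subprocess.run(["svn", "add", "--force", "."], cwd=asset_dir, check=True)
        subprocess.run(["svn", "commit", "-m", message], cwd=asset_dir, check=True)

    def update(asset_dir):
        """Sketch: pull the latest published version of an asset into the working copy."""
        subprocess.run(["svn", "update"], cwd=asset_dir, check=True)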

Here are some features in no particular order:

  1. Generic storage solution.  Even the production accountant can use it as his/her data store, regardless of his/her completely different tools and workload.  Producers can use it for their storage needs.  It’s not 3D or video specific in nature.
  2. Written mostly in Python and therefore able to integrate into leading digital production packages directly and easily.
  3. Assets can be SVN backed or FLAT. So they are either under full historical version control, or just a flat copy on the server, depending on the appropriate storage for the asset.
  4. Assets can show up multiple times in the directory tree.  A single asset (say, HDRI Skies) can be in the textures directory for an XSI project, Maya project, and a central asset library, all without making redundant copies of the asset.
  5. Dependencies.  Assets can be set to be dependent on other assets.  Dependencies can optionally be updated and committed in lockstep with one another from a single call on the top level asset.
  6. Disconnected Mode.  When the system is disconnected from its server, it can create and work with new assets locally, as if they were on a server.  When you reconnect to the server, these assets can then be transferred to it.
  7. Assets can have their history deleted when it’s time to save space.
  8. Assets can be filtered at the path level, allowing the permanent deletion of parts of an asset’s history without affecting the history of the rest of the asset.
  9. Assets are easily copied and moved from server to server for archival purposes.
  10. Assets are stored via hashcode and will never collide at the storage level (sketched just below).  The entire history of your production at the company will be able to live on a single storage system, if it’s big enough.  Historical projects can be brought back into an existing server without worry of data loss.
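To illustrate item 10, here is a minimal sketch of what hash-based (content-addressed) storage amounts to.  The store path and function name are hypothetical, not the real GlobalStorage layout:

    import hashlib
    import shutil
    from pathlib import Path

    STORE = Path("/var/globalstorage/objects")  # hypothetical repository root

    def store_file(src):
        """Copy a file into the store under its own content hash.  Identical
        content always lands at the same path, so nothing collides or gets
        duplicated, and merging two archives is just copying directories."""
        digest = hashlib.sha1(Path(src).read_bytes()).hexdigest()
        dest = STORE / digest[:2] / digest
        if not dest.exists():  # skip the copy if this exact content is already stored
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
        return digest  # assets reference their files by this id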

The 900-pound gorilla in the room has the word "Scalability" shaved into his chest hair.  This is, of course, a serious issue and the cause of many growing pains.

There are a few ways to deal with this.

Firstly, I’m going to add a "round robin" load balancing system into GlobalStorage, where a newly created asset is put on a randomly selected server from a pool.  Assets will also be able to be created on specific servers at the user’s request.  And assets will be able to be moved to specific servers at a user’s request.  GlobalStorage will magically merge the assets into a directory structure on the user’s machine when they are checked out.  Their location on the network is irrelevant as long as the machine has access to the repositories.

The round robin solution is pretty powerful and will probably meet the needs of a large number of facilities.  With the application of minimal brain power by the artists, assets will move to unencumbered servers every once in a while.
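A minimal sketch of that placement policy, with made-up server names and function signature:

    import random

    SERVERS = ["store01", "store02", "store03"]  # hypothetical repository pool

    def pick_server(preferred=None):
        """Place a new asset: honor an explicit server request if one was made,
        otherwise scatter new assets across the pool so no one box fills up first."""
        return preferred if preferred is not None else random.choice(SERVERS)

Random placement is just the simplest load-spreading policy; a strict round robin or a least-full heuristic would slot into the same function without changing anything else.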

However, what you’d really want is what’s known as a clustered file system, where it appears a single server does all the work and it runs really fast. In reality, a cluster of servers is moving data around and load balancing in a logical data driven manner. You also would want redundancy at every step of the way to avoid having a single point of failure to keep your uptime in the 99.9% range even when you have a bad week and 4 drives and a network switch fail on you.

Clustered file systems are a pain to set up and usually quite expensive to license.  However, one of my goals over the next few months is to put together a set of virtual machines and infrastructure to make the deployment of a clustered filesystem based on commodity hardware and open source software a simple matter.

GlobalStorage is designed to work just as well in a Clustered environment as in a round robin environment.   But there’s no doubt that at some size, you’ll really want to put a storage cluster into your facility rather than maintain many individual servers.