Unreal

I had been using Unity 3D for many years. My own 3D software and apps were developed in Unity 3D. I had built up a substantial codebase of Unity 3D C# code.

But I threw it all away. UNREAL!

Or, rather, I moved to the Unreal Engine.

Composer 2 enters public beta

I’ve been working very hard on a new version of Composer for a long time.  I’m excited to announce that it is now open for public beta.

In a huge change, there is now a Desktop version of Composer available for OSX.  You can download the beta for free from the product page.

The desktop version isn’t just an afterthought.  Most of the redesign of Composer’s underlying systems is meant to turn it into a unique kind of dual-targeted app.  It’s meant to be the same application on both desktop and mobile touch platforms, and to be great to use on both, rather than kludgy on one or the other.

There are also closed beta tests running for both iPad and Android tablets.  You can contact us to ask to join them.

There are a lot of exciting things coming up with regard to Composer.  This version 2 release cycle is important.  The platform expansion alone should be huge, and the addition of new lighting controls and shadows should be a big draw as well.

However, there are even bigger things coming down the pipe.  Even if you decide to just mess around with the free desktop version, I hope you’ll keep an eye on Composer going forward.

mDAG site launched

I’m launching mDAG, a project I’ve been working on at home on weekends while working at Method.

I’ve open sourced it under the Artistic License 2.0 for a few reasons.

First, I think it’s a tool sorely needed in the visual effects community in general.

Second, I’d like to see it adopted and generally supported.

Third, I’d like to start using it at Method, and open sourcing it felt like the clearest and cleanest path to Method being able to use it.

My hope is that as it grows, it will become a standard at many studios and facilities, and that many visual effects and animation artists will grow to love it and adopt it as a tool in their arsenal.

WingIDE for Python

Thought I’d put in a quick shout-out for my favorite Python coding tool ever: WingIDE from WingWare.  It’s by far the best Python coding environment I’ve ever used.

I know the first question that comes to mind when looking at the price tag:  "With all the free Python IDEs and script editors, why bother buying one?  They’re all about the same."  Well, that’s mostly true.  Most Python script editors I’ve used are about the same.  They provide some mediocre code completion and code folding.  Not bad… just not as good as they could be.

For me, it’s all about code completion.  Smart code completion.  The kind that reads APIs on the fly, knows what kind of object you’re working with, and tells you what is possible with that object.  It’s sort of a combination of code completion and an object browser.  Visual Studio is renowned for its ability to do this on the fly.  Most good Python script editors attempt this level of completion, but they’re confounded by Python’s dynamic typing.  I’ll give an example.

[code]

import xml.dom.minidom

def myFunction(doc, element):
    # doc and element could be objects of any type at all; the editor
    # has no static way to know what they are.
    pass

[/code]

So, here’s the question.  Since Python has dynamic, loose typing (the opposite of static typing), when I try to code with the objects doc and element, how is the editor to know what types of objects they are so it can tell me what I can do with them?  It might be able to look at the code that calls the function, but that’s backwards.  A function can be called multiple times from anywhere, perhaps with completely different object types, and it’s possible all of those calls are valid.  The same problem shows up when trying to figure out what type of object a function returned.  There’s no rule that a function always has to return the same kind of object.  So how could the system know?
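
To put a finer point on the return-type half of the problem, here’s a contrived example of my own (not from WingIDE’s documentation) where no editor could pin the type down statically:

[code]

# A contrived illustration: the return type depends on the argument,
# so nothing short of running the code can tell you what comes back.
import xml.dom.minidom

def loadDocument(source):
    if source.lstrip().startswith("<"):
        return xml.dom.minidom.parseString(source)   # a Document
    return source                                    # a plain str

result = loadDocument("<scene/>")
# Is result a Document or a str?  Only the runtime knows, which is
# exactly where generic completion gives up.

[/code]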

It’s at this point that most script editors give up.  Code completion stops working the moment you get outside the scope of objects you create yourself within a single function.

With WingIDE, you can hint the system and get your code completion back.  All you have to do is put in a particular type of assert statement.  For example:

[code]

import FIE.Constraints

def myFunction(obj, const):
    # The assert both hints WingIDE that const is a ParentConstraint
    # (restoring completion below) and fails fast if it isn't.
    assert isinstance(const, FIE.Constraints.ParentConstraint)

[/code]

From the assert statement on down, code completion works again.  There’s also an added benefit: the script will throw an exception should the assertion fail.  Without that check, my Python script could go another 20 lines working on the wrong type of object and give me a vague error; the assert cuts straight to the heart of the matter.

WingIDE will also parse source files for documentation and display it for you as you code, eliminating the need to constantly look up the API docs yourself.
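
In Python, that documentation is mostly just docstrings.  With a module like the hypothetical one below (the names are mine, purely for illustration), WingIDE can show the docstring as soon as you start typing the call:

[code]

# riggingHelpers.py: a hypothetical module, just for illustration.
def bakeConstraint(node, start, end):
    """Bake the constraint driving 'node' to keyframes over start..end.

    Returns the number of keys written.
    """
    return max(0, end - start + 1)

[/code]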

Now, I know there’s a hardcore base of programmers out there who say all they need is a text editor and be damned with all these fancy IDEs and their crutches.  Well, I simply disagree.  If you are a coder with maybe 2 APIs to work with on a regular basis, perhaps that is all you need.  But in my job, I am required to learn a new API within a few hours and repeat that as often as necessary.  That can sometimes be 2-3 APIs a day.  Do I know the full API?  No.  I know enough to get the job done.  And that’s what I’m paid to do.  For that kind of coding (and scripting, I think, lends itself to that kind of coding more than development does) there is no better tool than WingIDE.  Call me a weak coder if you wish.  I’ll just keep coding, getting the job done faster and better, and keep getting paid to do it.  I have a job to do.

Kinearx is Coming: Humble Processing

One of the design departures that separates Kinearx from the competition is its approach to processing.  I’ve seen a number of motion capture solutions in multiple software packages, and I’ve identified what I believe to be the single trait that holds most of them back: arrogance.  Don’t get me wrong, I’m pretty arrogant myself. :)  But I’m talking about a very specific kind of arrogance.  It’s arrogant to assume that one’s algorithm is going to be so good that it will be able to make sense of mocap data on its own, in a single pass, and be right about it.

Now you might be thinking, "Brad, that’s silly.  All these programs let you go back and edit their decisions.  They all let you manually fix marker swaps and such.  They’re not assuming anything.  You’re blowing things out of proportion."  Ah, but then I ask you, why should an algorithm make that decision at all?  Why should you need to fix a marker swap that the algorithm has already committed as fact?

Kinearx takes what I would term a "humbled" approach to processing mocap data.  Kinearx knows what it doesn’t know.  The design acknowledges that everything is a guess, and it’s completely willing to give up on assumptions should evidence point to the contrary.  The basic data structure that operators work on is one of recommendations, statistics, and heuristics, rather than one of "the current state of the data and what I’m going to change about it."  A typical labeling process can consist of running some highly trusted heuristic algorithms that make recommendations on labeling at points of high confidence.  It can also consist of applying less trusted heuristics that are wider in temporal scope.  The recommendations are weighted accordingly when they are eventually committed to the data as decisions.  Algorithms can peek at the existing recommendations to hint them along.  Manual labeling operations can be added to the list of recommendations with extremely high confidence.  Algorithms can even go as far as to cull recommendations.  The difference between Kinearx and other mocap apps is that this recommendation data lives from operation to operation.  It lives through manual intervention and, as such, is open to being manipulated by the user, either procedurally or manually.
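
To make the idea concrete, here’s a rough sketch of what such a recommendation store might look like.  The names, fields, and weights are hypothetical, my own shorthand rather than Kinearx’s actual API:

[code]

# Hypothetical sketch of a recommendation-based labeling store.
from dataclasses import dataclass

@dataclass
class LabelRecommendation:
    marker_id: int      # unlabeled trajectory this applies to
    label: str          # proposed marker name, e.g. "LWristOut"
    frame: int          # frame the recommendation applies to
    confidence: float   # 0..1, how much the source trusts itself
    source: str         # "kinematic_heuristic", "manual", ...

class RecommendationStore:
    def __init__(self):
        self.recs = []

    def add(self, rec):
        # Nothing is decided here; the guess just joins the pool.
        self.recs.append(rec)

    def cull(self, predicate):
        # Operators (or the user) can throw out recommendations they distrust.
        self.recs = [r for r in self.recs if not predicate(r)]

    def commit(self, marker_id, frame):
        # Only at commit time does a guess become a decision: the
        # highest-confidence recommendation for this marker/frame wins.
        candidates = [r for r in self.recs
                      if r.marker_id == marker_id and r.frame == frame]
        return max(candidates, key=lambda r: r.confidence, default=None)

store = RecommendationStore()
store.add(LabelRecommendation(7, "LWristOut", 120, 0.85, "kinematic_heuristic"))
store.add(LabelRecommendation(7, "LHandOut", 120, 0.40, "wide_temporal_heuristic"))
store.add(LabelRecommendation(7, "LWristOut", 120, 0.99, "manual"))
print(store.commit(7, 120).label)   # the manual label wins on confidence

[/code]

The point of the sketch is simply that the guesses accumulate and survive from operation to operation; committing them is a separate, weighted step that any later operation, or the user, can revisit.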

The power of this system will become apparent when looking at the pipeline system, which provides a streamlined environment for applying processing operations to the data both procedurally and manually.

Kinearx is Coming: Influences

Kinearx will be FIE’s flagship motion capture software offering.  It’s currently in development and not quite ready for show.  However, it is taking strong shape and becoming functional, and I’d like to explain the goals and underpinnings of the system.  So, I’ll start with a blog entry about influences.

Software isn’t created in a vacuum of knowledge.  In this period of particularly vicious intellectual property warfare, it might even be dangerous to acknowledge any influence whatsoever.  That would not sit right with me in the long run, however.  Also, I think acknowledging and explaining influences can keep design and goals on track.  So here they are, in no particular order:

  • Natural Point’s commoditization of the motion capture hardware market:  The commoditization of the hardware means one of two things.  Either a wide portion of the consumer market will have NP MC hardware, or a wide portion of the consumer market will have some form of competing commodity MC hardware.  Either way, the market will grow.  And more importantly, the size of the new market will so overshadow the old market that previously existing systems will become mostly irrelevant.  So it will basically wipe the slate clean.  This means Kinearx is in a good position to come in and take a large chunk of the new commodity MC software market that will accompany the hardware market.  It’s important not to lose focus on this target market.
  • Vicon IQ’s generalization and configurability:  IQ doesn’t recognize humans; it recognizes kinematic mechanical models.  It then has useful templates of those models for humans.
    Generally, every operation exposes just about every parameter you could imagine, and many you just don’t want to know about, while still providing reasonable defaults.
  • Vicon IQ’s flexible processing:  While I think there are some serious flaws in their design, the overall concept of the post-processing pipeline is a good one.  IQ stops short where it should expand further.
  • Vicon Blade:  Blade is a combination of IQ and HOM Diva.  The Diva half of it, I’m sure, is fine.  The IQ half of it is really disappointing.  In combining it into Blade, they’ve lost sight of what was good about IQ, and in turn lost functionality.  I’ll be using IQ on Vicon system sessions for some time to come, as I suspect a large portion of their existing customer base will be.
  • Giant’s Autonomous Reliability:  Giant’s software systems are proprietary, so you’d have to spend some time on set with them to observe this.  I had the opportunity to spend weeks at a time on The Incredible Hulk with one of their crews/systems.  The specifics of the software aside, the fact of the matter is, they were able to generate extremely clean data from arbitrarily complex takes overnight with minimal human intervention.  What does this look like?  Well, from what I could tell, the operator would scroll through the take, letting the system try to ID the markers (which it kinda did right on its own most of the time anyway).  When they found a frame it got right, they marked it and sent it off for batch processing.  The system took care of the rest.  I’d come back in the morning and they’d offload a day’s worth of cleaned-up mocap directly to my hard drive.  Were there points where it interpolated the fingers?  Sure.  Did it matter?  No.  To put this in perspective, it was able to do this with two guys freestyle sparring Greco-Roman wrestling style.  So you know, rolling around on the floor grappling.
  • Softimage XSI user interface: XSI has problems.  Believe me, I can list them.  But their UI is something special.
  • Motion Builder’s Ubiquity:  Motion Builder is an amazing piece of software, but it’s meant to come into the mocap process too late and isn’t really trying to solve the hard problems early in the process.  Optical cleanup is missing half the tools it should have.  However, in the industry, Motion Builder is synonymous with motion capture.  Unless you are having your mocap vendor do all the cleanup and retargeting, you kind of need it (really big jobs can license software from Giant in some circumstances, and Blade seeks to take MB’s place, but those solutions have not solidified as general solutions just yet).
  • Motion Builder’s Bad Reputation:  The fact of the matter is, not many people like Motion Builder.  When I mention it, I get sneers.  I’ve had a lot of trouble reconciling the volume of sneers I get from otherwise reasonable professionals with the software and its production capabilities that I’ve come to know and rely on.  It is my opinion that MB’s bad reputation comes from the complete and utter lack of production-worthy training and demo material.  No one knows how to use it or what is actually in there.  It’s all rather well hidden, and unless someone trains you and shows you through it, you won’t know it’s there.  Also, at this point it should be a no-brainer to plug MB directly into a Maya character animation job.  However, I had to develop a ton of custom tools and workflows to accomplish it for PLF.  Add that to the fact that the 3D industry tends to be more religious and less scientific about its tools and techniques, and, well, you end up in a bad spot.