I had been using Unity 3D for many years. My own 3D software and apps were developed in Unity 3D, and I had built up a substantial code base of Unity C# code.
But I threw it all away. UNREAL!
Or, rather, I moved to the Unreal Engine.
I’ve spent a lot of time working with different approaches to Application Frameworks in Unity.
For example, my first 1.x releases of Composer and PoseMe were based on a data binding model, designed in a way similar to (and highly inspired by) the Model-View-ViewModel (MVVM) pattern.
I’ve published my simple Unity Build System to help support subsequent releases. You can check it out here.
Simply put: I've been using it internally for development, and now that I'm about to publish things that depend on it, I need to publish it as well.
I’m elated to announce the release of FIE Composer 2.
Composer 2 is a free upgrade for iPad users, and we've now released Composer for Android tablets as well.
You can find Composer in Apple's App Store or Google's Play Store.
I've been working very hard on a new version of Composer for a long time. I'm excited to announce that it is now open for public beta.
In a huge change, there is now a desktop version of Composer available for OS X. You can download the beta for free from the product page.
The desktop version isn't just an afterthought. Most of the redesign of Composer's underlying systems is meant to turn it into a unique kind of dual-targeted app. It's meant to be the same application on both desktop and mobile touch platforms. It's meant to be great to use on both, rather than kludgy on one or the other.
There are also closed beta tests running for both iPad and Android tablets. You can contact us to ask to be in those closed beta tests.
There are a lot of exciting things coming up with regard to Composer. This version 2 release cycle is important. I'd hope the platform expansion alone would be huge enough, and the addition of new lighting control and shadows should be big as well.
However, there are even bigger things coming down the pipe. Even if you decide to just mess around with the free desktop version, I hope you’ll keep an eye on Composer going forward.
I'm launching mDAG, a project I've been working on at home on weekends while working at Method.
I've open sourced it under the Artistic License 2.0 for a few reasons.
First, I think it's a tool sorely needed in the visual effects community in general.
Second, I'd like to see it adopted and generally supported.
Third, I'd like to start using it at Method, and open sourcing it felt like the clearest and cleanest path to Method being able to use it.
My hope is that as it grows, it will become a standard at many studios and facilities, and that many visual effects and animation artists will grow to love it and adopt it as a tool in their arsenal.
Thought I'd just put in a quick shout-out for my favorite Python coding tool ever: WingIDE from WingWare. WingIDE is by far the best Python coding environment I've ever used.
I know the first question that comes to mind when looking at the price tag: "With all the free Python IDEs and script editors, why bother buying one? They're all about the same." Well, that's mostly true. Most Python script editors I've used are about the same. They provide some mediocre code completion and code folding. Not bad… just not as good as it could be.
For me, it's all about code completion. Smart code completion. The kind that reads APIs on the fly, knows what kind of object you're working with, and tells you what is possible with that object. It's sort of a combination of code completion and an object browser. Visual Studio is renowned for its ability to do this on the fly. Most good Python script editors attempt this level of completion, but they're confounded by Python's dynamic typing. I'll give an example.
[code]
import xml.dom.minidom
def myFunction(doc, element):
    pass
[/code]
So, here's the question. Since Python has dynamic, loose typing (the opposite of static typing), when I try to code with the objects doc and element, how is the editor to know what types of objects they are so it can tell me what I can do with them? It might be able to look at the code that is calling the function, but that's backwards. A function can be called multiple times from anywhere, perhaps with completely different object types, and it's possible both of those calls could be valid. The same problem shows up when trying to figure out what type of object a function returned. There's no rule that a function always has to return the same kind of object. So how could the system know?
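To make that concrete, here's a contrived illustration (the function and the calls below are made up purely for the example):
[code]
import xml.dom.minidom

def process(doc):
    # What completions should the editor offer on doc here?
    pass

# Both of these calls are perfectly legal Python, with completely different types:
process(xml.dom.minidom.parseString("<root/>"))  # a Document object
process("<root/>")                                # a plain string
[/code]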
It's at this point that most script editors give up. Code completion stops working the moment you get outside the scope of objects you create yourself within a single function.
With WingIDE, you can hint the system and you get your code completion back. All you have to do is put in a particular type of assert statement. For example:
[code]
import FIE.Constraints
def myFunction(obj, const):
    assert isinstance(const, FIE.Constraints.ParentConstraint)
[/code]
From the assert statement on down, code completion now works again. There's also an added benefit, in that the script will throw an exception should the assertion fail. In Python, without that check my script could go on for another 20 lines working on the wrong type of object before giving me a vague error; the assert cuts straight to the heart of the matter.
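Here's a toy illustration of that fail-fast benefit (the functions below are made up for the example and aren't part of any real API):
[code]
def average(values):
    # Pass the wrong type of object and the failure shows up somewhere
    # inside the function as a vague TypeError, far from the real mistake.
    return sum(values) / len(values)

def averageChecked(values):
    # The assert fails immediately and names exactly what was expected.
    assert isinstance(values, (list, tuple))
    return sum(values) / len(values)
[/code]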
WingIDE will also parse source files for documentation and display it for you as you code, eliminating the need to constantly look up the API docs yourself.
Now, I know there's a hardcore base of programmers out there who say all they need is a text editor and be damned with all these fancy IDEs and their crutches. Well, I simply disagree. I'm sure if you are a coder who has maybe 2 APIs to work with on a regular basis, perhaps that is all you need. But in my job, I am required to learn a new API within a few hours and repeat that as much as necessary. That can sometimes be 2-3 APIs a day. Do I know the full API? No. I know enough to get the job done. And that's what I'm paid to do. For that kind of coding (and scripting, I think, lends itself to that kind of coding more than development does) there is no better tool than WingIDE. Call me a weak coder if you wish. I'll just keep coding, getting the job done faster and better, and keep getting paid to do it. I have a job to do.
One of the design departures that separates Kinearx from the competition is its approach to processing. I've seen a number of motion capture solutions in multiple software packages, and I've identified what I believe to be the single trait that holds most of them back: arrogance. Don't get me wrong, I'm pretty arrogant myself; I'm talking about a very specific kind of arrogance. It's arrogant to assume that one's algorithm is going to be so good that it will be able to make sense of mocap data on its own, in a single pass, and be right about it.
Now you might be thinking, "Brad, that's silly. All these programs let you go back and edit their decisions. They all let you manually fix marker swaps and such. They're not assuming anything. You're blowing things out of proportion." Ah, but then I ask you, why should an algorithm make a decision at all? Why should you need to fix a marker swap that the algorithm decided was reality in the first place?
Kinearx processes mocap data in what I would term a "humbled" way. Kinearx knows what it doesn't know. The design acknowledges that everything is a guess, and it's completely willing to give up on assumptions should evidence point to the contrary. The basic data structure that operators work on is one of recommendations, statistics, and heuristics, rather than one of "the current state of the data and what I'm going to change about it." A typical labeling process can consist of running some highly trusted heuristic algorithms that make recommendations on labeling at points of high confidence. It can also consist of applying less trusted heuristics that are wider in temporal scope. The recommendations are weighted accordingly when they are eventually committed to the data as decisions. Algorithms can peek at the existing recommendations to hint them along. Manual labeling operations can be added to the list of recommendations as having extremely high confidence. Algorithms can even go as far as to cull recommendations. The difference between Kinearx and other mocap apps is that this recommendation data lives from operation to operation. It lives through manual intervention, and as such is open to being manipulated by the user, either procedurally or manually.
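To make the idea a bit more concrete, here's a rough sketch of what a pool of recommendations could look like. None of these names or structures come from Kinearx itself; they're assumptions purely for illustration:
[code]
class LabelRecommendation(object):
    """One algorithm's (or one user's) opinion about a marker's label on a frame."""
    def __init__(self, frame, markerId, label, confidence, source):
        self.frame = frame
        self.markerId = markerId
        self.label = label
        self.confidence = confidence   # manual labeling carries extremely high confidence
        self.source = source           # which heuristic or user produced the opinion

class RecommendationPool(object):
    """A pool of opinions that persists from operation to operation."""
    def __init__(self):
        self.recommendations = []

    def add(self, rec):
        self.recommendations.append(rec)

    def cull(self, predicate):
        # Operators may discard recommendations they no longer trust.
        self.recommendations = [r for r in self.recommendations if not predicate(r)]

    def peek(self, frame, markerId):
        # Later algorithms can inspect existing opinions to hint themselves along.
        return [r for r in self.recommendations
                if r.frame == frame and r.markerId == markerId]

    def commit(self, frame, markerId):
        # Only when a decision is finally committed are the opinions weighed.
        candidates = self.peek(frame, markerId)
        if not candidates:
            return None
        return max(candidates, key=lambda r: r.confidence).label
[/code]
The point is simply that the opinions outlive any single operation; decisions are made as late as possible, and they can always be revisited.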
The power of this system will become apparent when looking at the pipeline system, which allows a streamlined processing environment in which to apply processing operations to data both procedurally and manually.
Kinearx will be FIE's flagship motion capture software offering. It's currently in development and not quite ready for show. However, it is taking a strong shape and is becoming functional, and I'd like to explain the goals and underpinnings of the system. So, I'll start with a blog entry about influences.
Software isn't created in a vacuum of knowledge. In this period of particularly vicious intellectual property warfare, it might even be dangerous to acknowledge any influence whatsoever. That would not sit right with me in the long run, however. Also, I think acknowledging and explaining influences can keep design and goals on track. So here they are, in no particular order: