I have wanted to play with Unreal Engine rigging for so long, and I have finally made some time to explore it. It’s a game engine, and I had to treat it as such.
In Unreal there is a Construction Event node, and then there is the Forward Solve, which seems to do the same thing. I got confused, so I looked it up:
The Construction Event initializes the components, while Forward Solve runs on Begin Play. Okay! Knowing this, I moved forward into crafting a simple FK Control Rig that uses the IKTwoBone, IKThreeBone, and BasicIK nodes to drive the leg puppetry. The real trick was figuring out the IK pole vectors by matching the correct axes: you first find out what the original vector axes are for the bone and compare them against the created IK pole vector control node. I put that information into the Primary Axis and it works!
I’ve created a five-minute video of my rigging process. Updating the mesh is as straightforward as it gets inside Unreal: just replace the Skeletal Mesh inside “Preview Mesh” in the Control Rig. Wow!
I’ve spent some time crafting a Blender add-on that saves animation data to a JSON file and reads that data back onto a Blender object.
Finishing the add-on was an interesting experience, and I learned a few things: 1. Blender handles matrix data differently from Maya. 2. I now have some understanding of how Blender handles its UI class objects; the data is shared through convoluted class objects. 3. Before any animation can be created, Action data-blocks need to be created (Actions – Blender 4.1 Manual), which I liken to the MFnAnimCurve node class in Maya: individual object attributes connected to a type-specific node (Maya API: MFnAnimCurve Class Reference, autodesk.com).
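The save/load round trip itself can be sketched with Python’s json module. The channel layout below is hypothetical (my add-on’s actual JSON schema differs in the details), but it shows the idea:

```python
import json

# Hypothetical keyframe data: each channel maps to [frame, value] pairs.
# The add-on's real JSON layout may differ; this only sketches the round trip.
anim_data = {
    "object": "Cube",
    "channels": {
        "location_x": [[1, 0.0], [12, 2.5], [24, 0.0]],
        "location_z": [[1, 0.0], [12, 4.0], [24, 0.0]],
    },
}

def write_anim_json(data, path):
    """Serialize the animation data dictionary to a JSON file."""
    with open(path, "w") as f:
        json.dump(data, f, indent=4)

def read_anim_json(path):
    """Read the animation data dictionary back from a JSON file."""
    with open(path) as f:
        return json.load(f)
```

On the Blender side, each `[frame, value]` pair would then be fed into an Action’s F-Curves, e.g. via `fcurve.keyframe_points.insert(frame, value)`.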
I wanted to showcase how to write an animation importer/exporter in Python with OpenMaya 1.0.
To start, I will be using my own methods to achieve this, specifically animation_utils.py. What is important to know about OpenMaya 1.0 is that it correlates closely with C++ types and arguments. For OpenMaya 1.0, I’ve created a ScriptUtil class to get “storage boxes” that are meant to be passed into functions to collect information. Here’s an example:
# import modules
from maya import OpenMaya
from maya import OpenMayaAnim
from maya_utils import object_utils

# let's grab tangent information from an MFnAnimCurve function set
plug_node = object_utils.get_plug(object_name, attribute_name)
anim_fn = OpenMayaAnim.MFnAnimCurve(plug_node)

# create a double-pointer storage
weight = object_utils.ScriptUtil(as_double_ptr=True)
angle = OpenMaya.MAngle()

# grab the key information at this index
anim_node_index = 0

# store the information into the variables "angle" and "weight.ptr"
anim_fn.getTangent(anim_node_index, angle, weight.ptr, True)

# grab the stored key's information
radians_angle = angle.asRadians()
weight_dbl = weight.get_double()
print("--> ", radians_angle, weight_dbl)
# ____________________________________________
# get_tangent_angle.py
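If you want to feel what those “storage boxes” are doing without Maya, the C++ out-parameter pattern they wrap can be illustrated in plain Python with ctypes. This is an analogy only, not OpenMaya code:

```python
import ctypes

# A C++-style "out parameter": the caller allocates storage, the callee
# writes into it through a pointer, and the caller reads the result back.
# This is the same idea behind ScriptUtil's ptr / get_double() round trip.
def get_tangent_weight(out_weight):
    """Write a result through a double pointer, as a C++ API would."""
    out_weight.contents.value = 0.75

weight_storage = ctypes.c_double()           # the "storage box"
get_tangent_weight(ctypes.pointer(weight_storage))
weight = weight_storage.value
```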
And that’s really all you need to know about gathering information from OpenMaya functions. Be sure to also check OpenMaya’s C++ documentation to see what types of information are available:
from importlib import reload
import sys, os
pipe_path = "C:/Work/pipeline/Maya/Python"
if pipe_path not in sys.path:
    sys.path.append(pipe_path)
from maya_utils import animation_utils as au
from maya_utils import object_utils
from maya_utils import atom_utils
reload(atom_utils)
reload(au)
# Maya ATOM
atom_utils.export_atom()
atom_utils.import_atom()
# Maya CMD
au.read_anim_data_cmd()
au.write_anim_data_cmd()
# Open Maya
au.read_anim_data()
au.write_anim_data()
I explain how all this works on my YouTube channel here:
Here is the bouncy-ball animation file and a link to my GitHub repository:
I decided to practice Maya Python with Maya Bifrost, and settled on a mirror-vector exercise as a practice test and as a workaround to writing Maya plugins. It took a while to get around Maya’s commands for creating and connecting Bifrost nodes, as I needed to figure out the rules necessary to implement a working compound node and connect it to the two locator nodes:
I’ve learned that NURBS curves cannot be drawn in Bifrost, but you can manipulate existing curves in the scene. So the curves you see in the picture are a custom visualization to demonstrate the mirror functionality only; they are not generated by Bifrost.
I’ve learned a lot from this small project, mainly that the Bifrost nodes you create are compiled like C++ at graph-compile time, and their values cannot be queried via code from inside the Bifrost graph.
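For clarity, here is the mirror math in plain Python. This is a sketch of what I believe the compound computes (mirroring a point across a plane through the origin with a unit normal), not the Bifrost graph itself:

```python
# Mirroring a point p across a plane through the origin with unit normal n:
#     p' = p - 2 * (p . n) * n
def dot(a, b):
    """Dot product of two 3-component vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def mirror_across_plane(point, normal):
    """Reflect a point across a plane (unit normal, passing through origin)."""
    d = 2.0 * dot(point, normal)
    return tuple(p - d * n for p, n in zip(point, normal))
```

For example, mirroring across the YZ plane (normal pointing down +X) flips the X component and leaves Y and Z alone.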
Here is the link to the repository of my Maya Python study:
This was during the time after the conclusion of the writers’ strikes and my many futile attempts at finding work in the fall and winter of 2023. I love programming in Python for Maya, and I felt that this certificate would come as close as anything to a testament to all the years I’ve worked with the language; I needed something to show for it: enter the PCAP Certified Associate in Python Programming exam.
I practiced what was needed for the exam, as if I were preparing for school again. I was excited when exam day came. I went through all the technical checkups leading up to the launch of their OnVUE exam application. Then the proctors, whose main responsibility is to watch the students taking the examination, had “video issues on their end” and had me restart the testing application, as many times as they told me I could, until I couldn’t begin the exam anymore. By that time, I was starting to feel upset.
I contacted the Pearson VUE customer support help line and went to Pearson’s chat detailing the problem, all the while thoughts of being ripped off for minimal effort by the company came and went through my mind. After all, nobody in their right mind would purchase a USD $295 exam certificate that holds little weight in the industry (the act of taking the examination being the only weight here!). A certificate that felt as useless as the twelve years of Python programming experience I’ve accumulated working in the VFX and animation industry, most of it at around PCAP examination level. The tools written at a studio generally rely on Python’s modular nature the majority of the time.
There is one silver lining to this: a support ticket has been created at Pearson HQ:
Dear Alexei,
We recently assisted you with the following inquiry to Pearson Customer Support:
Discussion Thread:
Subject: Pearson Vue – The test has already started and no longer can continue relaunching the application
Contacted Customer Support Via: Chat
Date/Time Opened: Fri Dec 22 21:18:57 GMT 2023
If this issue is not resolved to your satisfaction, you may reopen it within the next 14 days by either contacting us and referencing your case number, or by replying to this email.
Thank You,
Pearson Support
I realized that there is a difference between the Pearson VUE and Pearson departments, but what the hey, they both share the same name. There was nothing to do at this point but wait, so I went back to my computer and got to work on my Maya tool, so I could put it on the market at discover.gumroad.com and monetize it. I figured that was a better use of my programming skills from this point forward.
/Rant over
— December 24, 2023 —
I’ve gotten a fairly nice response from Pearson VUE directly: a reinstated discount voucher code, which I manually entered to reschedule my exam as an in-person exam at a testing center on Wednesday, December 27, 2023:
I’ve learned that there is an actual place that administers specialized tests in numbered booths: Unit 410 – 1190 Melville Street, 4th Floor, Vancouver, British Columbia V6E 3W1, Canada
Dear Alexei,
Thank you for contacting Pearson VUE regarding Case 10709479.
We understand you are anxiously awaiting our reply and we kindly ask for your continued patience. We are diligently researching this matter and hope to have additional information for you soon.
We apologize for any inconvenience and will be in touch soon!
Thank You,
Ankur T
Americas Support Team
Customer Support Specialist
— December 27, 2023 —
I P-A-S-S-E-D the exam! Over the course of the examination, I got increasingly nervous. No cellphones were allowed to ‘double-check’ anything, no watches, and nothing in your pockets was allowed either. You walk in and do the exam as-is. I am proud to have achieved this:
P.S.
What I found interesting is that by the week’s end, my LinkedIn profile of 668 connections and its Post Analytics gave me the following data:
Locations of Viewers:
Job titles of viewers; it is interesting to note that the job title of ‘Recruiter’ is missing:
I’ll be following the Post Analytics of everything I post on LinkedIn from now on, to better understand who and what my target audiences are.
Building tools is always fun because of how much overlap there is among the tools needed to support the main tool. I figured it was time to redesign my current builder with a new one, and hence this project was born.
When I design a complex window, I find it is always good practice to break the widgets up into separate classes. In this case, my window only contains two widgets, ModuleForm and InformationForm, and PySide makes combining them easy:
class MainWindow(QtWidgets.QMainWindow):
    HEIGHT = 400
    WIDTH = 400
    # the main build blueprint to construct;
    # every time a module is added to the module form, this updates the blueprint dictionary
    INFORMATION = {}
    module_form = None
    information_form = None

    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        # add the widgets to the layouts
        self.main_widget = QtWidgets.QWidget(self)
        self.setCentralWidget(self.main_widget)
        self.setSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
        self.main_layout = QtWidgets.QHBoxLayout(self.main_widget)
        # add the two widgets to the main layout through a splitter
        horizontal_split = QtWidgets.QSplitter(QtCore.Qt.Horizontal)
        self.main_layout.addWidget(horizontal_split)
        self.module_form = ModuleForm(parent=self)
        self.information_form = InformationForm(parent=self)
        horizontal_split.addWidget(self.module_form)
        horizontal_split.addWidget(self.information_form)
As you can guess, the ModuleForm is where each rig module class is stored; when a module is selected, it triggers an InformationForm refresh showing the necessary information for that module. It works like so:
This project is still in its infancy; I still need to add a blueprint file feature that saves and loads the module configuration per rig specification. My main purpose for this project is so that I can build creature rigs cleanly, including creature face work. I find that the upkeep for creating faces is high, so having a module-based builder keeps scene files nice and tidy.
I am not worried about the aesthetics of the tool for now, just its modular utility:
From what I can see, PySide offers a lot of flexibility in UI tool design. For example, I found out that you can add separate widgets to a QListWidgetItem like so:
@add_module_decorator
def add_module(self, *args):
    """
    adds the module to the module form's list widget.
    :param args: <tuple> the module name is the first argument.
    :return: <ModuleWidget> the created module widget.
    """
    module_name = args[0]
    item = QtWidgets.QListWidgetItem()
    widget = ModuleWidget(module_name=module_name, list_widget=self.module_form.list, item=item, parent=self)
    item.setSizeHint(widget.sizeHint())
    # add a widget to the list
    self.module_form.list.addItem(item)
    self.module_form.list.setItemWidget(item, widget)
    return widget
And that QListWidgetItem contains a widget with a QLabel showing a colored QtGui.QPixmap, which can be changed just by assigning a different QtGui.QPixmap with these two lines of code:
def change_status(self, color="green"):
    """
    change the status of the widget to "built"
    """
    self.q_pix = QtGui.QPixmap(buttons[color])
    self.icon.setPixmap(self.q_pix)
Today I will explain how a blend-shape based face rig works in Autodesk Maya. Understand that a blendShape is an additive mesh deformer: one shape after another gets activated, and a single controller with values from 0.0 to 1.0 can drive the shapes: shape0 + shape0_5 + shape1_0.
I had trouble finding a face mesh to work with, so I headed over to AnimSchool, downloaded their Malcolm rig, and extracted the head mesh for me to work on:
While the setup of the controllers is rather simple (each controller has to have a maximum value of 1), there is work that needs to be done in creating the shapes themselves. For this reason, I chose to play with the OpenMayaAnim.MFnBlendShapeDeformer class to add and remove shape targets.
To initialize a blend-shape node without targets (an important step):
# module-level constants for the create() call
origin = OpenMayaAnim.MFnBlendShapeDeformer.kLocalOrigin
normal_chain = OpenMayaAnim.MFnBlendShapeDeformer.kNormal

def create_blendshape(mesh_objects, name=""):
    """
    creates a new blendShape from the array of mesh objects provided.
    :param mesh_objects: <tuple> array of mesh shapes.
    :param name: <str> name of the blendshape.
    :return: <OpenMayaAnim.MFnBlendShapeDeformer>
    """
    blend_fn = OpenMayaAnim.MFnBlendShapeDeformer()
    if isinstance(mesh_objects, (str, unicode)):
        mesh_obj = object_utils.get_m_obj(mesh_objects)
        blend_fn.create(mesh_obj, origin, normal_chain)
    elif len(mesh_objects) > 1 and isinstance(mesh_objects, (tuple, list)):
        mesh_obj_array = object_utils.get_m_obj_array(mesh_objects)
        blend_fn.create(mesh_obj_array, origin, normal_chain)
    else:
        raise ValueError("Could not create blendshape.")
    if name:
        object_utils.rename_node(blend_fn.object(), name)
    return blend_fn
Maya stores blend-shape targets on inputTargetItem indices 5000 through 6000, where a target at weight w sits at index w * 1000 + 5000. To get the existing weight indices, we need to use OpenMaya.MIntArray(); please understand that the API requires an MIntArray and not a plain Python list of integers, because otherwise Maya will not accept them:
def get_weight_indices(blend_name=""):
    """
    get the weight indices from the blendShape name provided.
    :param blend_name: <str> the name of the blendShape node.
    :return: <OpenMaya.MIntArray>
    """
    blend_fn = get_deformer_fn(blend_name)
    int_array = OpenMaya.MIntArray()
    blend_fn.weightIndexList(int_array)
    return int_array
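That index convention can be expressed as a tiny helper. Here is a sketch of the mapping, assuming the standard weight * 1000 + 5000 formula:

```python
# Maya stores blend-shape targets on inputTargetItem indices 5000-6000:
# a target (or in-between) at weight w lives at index w * 1000 + 5000.
def weight_to_item_index(weight):
    """Convert a target weight (0.0-1.0) to an inputTargetItem index."""
    return int(round(weight * 1000)) + 5000

def item_index_to_weight(index):
    """Convert an inputTargetItem index back to its target weight."""
    return (index - 5000) / 1000.0
```

So a full-weight target lands on index 6000, and a 0.25 in-between lands on index 5250.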
Now we can add shape targets using the following code; the objects are accepted from targets_array and added at the specified index:
def add_target(targets_array, blend_name="", weight=1.0, index=0):
    """
    adds a new target with the weight to this blend shape.
    Maya has a fail-safe to get the inputTargetItem from 6000-5000.
    :param targets_array: <tuple> array of mesh shapes designated as targets.
    :param blend_name: <str> the blendShape node to add targets to.
    :param weight: <float> append this weight value to the target.
    :param index: <int> specify the index in which to add a target to the blend node.
    :return: <bool> True for success.
    """
    blend_fn = get_deformer_fn(blend_name)
    base_obj = get_base_object(blend_name)[0]
    if isinstance(targets_array, (str, unicode)):
        targets_array = targets_array,
    targets_array = object_utils.get_m_shape_obj_array(targets_array)
    length = targets_array.length()
    if not index:
        index = get_weight_indices(blend_fn.name()).length() + 1
    # step = 1.0 / length - 1
    for i in xrange(0, length):
        # weight_idx = (i * step) * 1000/1000.0
        blend_fn.addTarget(base_obj, index, targets_array[i], weight)
    return True
One after another, we can add all the shapes by code in whatever order of targets we want, adding in-betweens from 0.0 to 1.0. I always choose to go in quarter steps (0.0, 0.25, 0.5, 0.75, 1.0), because I don’t really need to go any more complicated than that.
Over the course of constructing the blend-shape based rig, you need two base meshes: one for deformation and the other for duplicating mesh objects for sculpting. I used abSymMesh for mesh mirroring because the tool already exists and I did not need to reinvent it. All in all, I think I’ve done a good job with my face rig:
The complete module I used in this construction can be found at my GitHub page:
Okay, following the previous vector posts, I decided to plunge ahead and create a plugin that capitalizes on that knowledge.
Previously, in OpenMaya 1.0, an MPxLocator was drawn by overriding the draw method with Open Graphics Library (OpenGL) calls. Maya’s architecture has since been updated, and I used the OpenMayaRender.MUIDrawManager class to do the drawing, which makes things straightforward. Here, I draw a circle, a rectangle, and a line; I spent way too much time figuring out the nuances of this plugin.
At first, finding an MPxLocator example file that works right out of the box was an issue, but I finally found this one: uiDrawManager/uiDrawManager.cpp
In addition to finding out how the 2.0 plugins work, I also wanted to learn a bit more reflection math, and I put that to use in this plugin:
R = 2 * (N · L) * N - L
Which, using Maya’s Python API, looks like this:
# define the normal vector at the origin
normal = OpenMaya.MVector(0.0, 1.0, 0.0)
# reflect the input point about the normal: R = 2 * (N . L) * N - L
# (MVector * MVector is a dot product; MVector * scalar is a scale)
opposing_vector = normal * (2 * (normal * input_point))
opposing_vector -= input_point
# now multiply it by the scalar value
opposing_vector *= scale
if as_vector:
    return opposing_vector
else:
    return opposing_vector.x, opposing_vector.y, opposing_vector.z
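The same formula can be sanity-checked outside Maya in plain Python, with MVector’s overloaded operators replaced by explicit arithmetic:

```python
# R = 2 * (N . L) * N - L, with N a unit normal and L the input vector.
# With N pointing up (+Y), a vector pointing up-and-right reflects to
# up-and-left, like a light ray bouncing off a floor.
def reflect(l, n):
    """Reflect vector l about unit normal n."""
    d = 2.0 * (n[0] * l[0] + n[1] * l[1] + n[2] * l[2])
    return tuple(d * nc - lc for nc, lc in zip(n, l))
```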
The Maya viewport handles the drawing through the draw manager, like this:
# example values for the variables used below
line_width = 1.0
plane_color = (1.0, 1.0, 0.0, 1.0)
rect_scale_x = 1.0
rect_scale_y = 1.0
is_filled = False
position = OpenMaya.MPoint(0, 0, 0)
normal = OpenMaya.MVector(0, 0, 1)
up = OpenMaya.MVector(0, 1, 0)

drawManager.beginDrawable()
drawManager.setLineWidth(line_width)
drawManager.setLineStyle(drawManager.kSolid)
drawManager.setColor(OpenMaya.MColor(plane_color))
# for a 3d rectangle, the up vector should not be parallel with the normal vector
drawManager.rect(position, normal, up, rect_scale_x, rect_scale_y, is_filled)
drawManager.endDrawable()
The reason I dived into Maya’s viewport drawing is that I was following Chad Vernon’s excellent C++ series, and his MPxLocator example no longer works in the current Maya 2020 version. The full working code can be found on my GitHub page:
Alright, this one is also lots of fun. We are going to create a NURBS curve using OpenMaya, with a degree of 2 (quadratic). Remember in the previous post how I calculated the vectors between the two locator positions? This time we are going to do the same, but to create a nurbsCurve, because each CV needs a position from a vector array.
So above is just a point-array collector that recalculates positions from an existing array of positions, like selected locators or joints, preferably in world-space coordinates. We then feed this recalculated positional array into the OpenMaya.MFnNurbsCurve.create function. I wrote the create_curve_from_points function below, which uses this:
def create_curve_from_points(points_array, degree=2, curve_name="", equal_cv_positions=False):
    """
    create a nurbs curve from points.
    :param points_array: <tuple> positional points array.
    :param degree: <int> curve degree.
    :param curve_name: <str> the name of the curve to create.
    :param equal_cv_positions: <bool> if True, create CV's at equal positions.
    :return: <str> maya curve name.
    """
    knot_length = len(points_array)
    knot_array = get_knot_sequence(knot_length, degree)
    m_point_array = get_point_array(points_array, equal_distance=equal_cv_positions)
    # curve_data = OpenMaya.MFnNurbsCurveData().create()
    curve_fn = OpenMaya.MFnNurbsCurve()
    curve_fn.create(m_point_array, knot_array, degree,
                    OpenMaya.MFnNurbsCurve.kOpen,
                    False, False)
    m_path = OpenMaya.MDagPath()
    curve_fn.getPath(m_path)
    if curve_name:
        parent_obj = object_utils.get_parent_obj(m_path.partialPathName())[0]
        object_utils.rename_node(parent_obj, curve_name)
        return curve_name
    return curve_fn.name()
In the function above there is a boolean parameter, equal_cv_positions, which defaults to False. The result is that CV’s are created at the locators’ positions, like so:
And if the equal_cv_positions is set to True, this is the result:
As you can see, this utility tool is going to become immediately useful. You can already guess at my plans to use it!
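For reference, a minimal clamped version of the get_knot_sequence helper used above could look like this (the version in my repository may differ slightly):

```python
# MFnNurbsCurve.create expects (number of CVs) + degree - 1 knots.
# A clamped sequence repeats the first and last knot 'degree' times so the
# curve passes through its end CVs.
def get_knot_sequence(num_cvs, degree):
    """Build a clamped knot sequence for the given CV count and degree."""
    spans = num_cvs - degree
    knots = [0.0] * degree                        # clamp the start
    knots += [float(i) for i in range(1, spans)]  # interior knots
    knots += [float(spans)] * degree              # clamp the end
    return knots
```

For five CVs at degree 2, this yields six knots: [0, 0, 1, 2, 3, 3].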
I love math. Everything in life can change (your interests, your job, outside influences), but not math. Math never changes, and I love that very much.
Today, let’s go over why Maya’s MVector class is so much fun: it’s a point in space (with a direction), and we can add, subtract, and multiply it against another MVector or a scalar value.
Right now, let’s deal with multiplying MVectors against scalar values.
Here we have two locators in space. Let’s have some fun with them. We will collect and manipulate information about these vectors using some Maya Python scripting. First, let’s look at some code:
from maya.OpenMaya import MVector
from maya import cmds


class Vector(MVector):
    RESULT = ()

    def __init__(self, *args):
        super(Vector, self).__init__(*args)

    def do_division(self, amount=2.0):
        """
        divide the vector into sections.
        :param amount: <float> divide the vector by this scalar amount.
        :return: <tuple> section vector.
        """
        self.RESULT = self.x / amount, self.y / amount, self.z / amount,
        return self.RESULT

    def do_multiply(self, amount=2.0):
        """
        multiply the vector by the amount.
        :param amount: <float> multiply the vector by this scalar amount.
        :return: <tuple> section vector.
        """
        self.RESULT = self.x * amount, self.y * amount, self.z * amount,
        return self.RESULT

    def get_position(self):
        self.RESULT = self.x, self.y, self.z,
        return self.RESULT

    @property
    def result(self):
        return self.RESULT

    @property
    def position(self):
        return self.get_position()


def get_vector_position_2_points(position_1, position_2, divisions=2):
    """
    calculates the world space vectors between the two positions.
    :param position_1: <tuple> list vector.
    :param position_2: <tuple> list vector.
    :param divisions: <int> calculate the vectors by this many divisions.
    :return: <tuple> vector positions.
    """
    positions = ()
    for i in xrange(1, divisions):
        vec_1 = Vector(*position_1)
        vec_2 = Vector(*position_2)
        new_vec = Vector(vec_1 - vec_2)
        div_vec = Vector(new_vec * (float(i) / float(divisions)))
        result_vec = Vector(*div_vec.position)
        positions += Vector(result_vec + vec_2).position,
    return positions


def get_vector_positon_2_objects(object_1, object_2, divisions=2):
    """
    calculates the world space vectors between the two objects.
    :return: <tuple> vector positions.
    """
    vector_1 = cmds.xform(object_1, ws=1, t=1)
    vector_2 = cmds.xform(object_2, ws=1, t=1)
    return get_vector_position_2_points(vector_1, vector_2, divisions)
So this is a module I created for getting point positions between two vectors. Let’s go through the get_vector_position_2_points function step by step, ignoring everything but the math:
1.) We define the two vector positions.
2.) We subtract the second vector from the first to create a third vector at the origin.
3.) We loop through the number of divisions, dividing each index by the total number of divisions to give us the fraction we multiply by (1/4, 2/4, 3/4).
4.) We add the second vector to the resulting origin vector to place it relative to the second vector’s position.
5.) Finally, we use these vector points to place our locators using the code below:
We are going to divide the space between the locators into four sections (divisions=4). Let’s go into Maya, load up the Script Editor, and paste this code there:
from maya_utils import math_utils
from maya_utils import object_utils
import maya.cmds as cmds
reload(math_utils)

positions = math_utils.get_vector_positon_2_objects('locator1', 'locator2', divisions=4)
for v_pos in positions:
    locator = cmds.createNode('locator')
    cmds.xform(object_utils.get_parent_name(locator), t=v_pos)
As we can see, we have divided the space between the two original locators into four equal sections and created locators at the calculated vector positions. This is useful in much of my rigging work, like creating springs, wires, and folding wings.
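The math in steps 2 through 4 can be verified outside Maya with a few lines of plain Python:

```python
# The division math from the steps above, without Maya: interior points
# between p1 and p2 at fractions i/divisions, measured from p2 (matching
# the result + vec_2 step in get_vector_position_2_points).
def divide_between(p1, p2, divisions=4):
    """Return the interior division points between two 3d positions."""
    points = []
    for i in range(1, divisions):
        t = float(i) / divisions
        points.append(tuple(b + t * (a - b) for a, b in zip(p1, p2)))
    return points
```

Dividing the span from the origin to (4, 0, 0) into four sections yields the three interior points (1, 0, 0), (2, 0, 0), and (3, 0, 0).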
It is important to be precise when creating any useful tool, so that we can eliminate any uncertainty in our work.