
(2) MotionBuilder Plugin

Plugin installation

  • Download the latest plugin installer "Nokov-MobuPlugin.XXX.exe"; this plugin version is compatible with MotionBuilder 2018-2022. Unzip the plugin installation package, double-click the unzipped installer, and install the plugin into the software's installation directory (17.4.1). After installation completes, click Finish, and the plugin is installed (driving the glove human body in MotionBuilder works the same way as described below).

Settings and usage of MotionBuilder

  • Open MotionBuilder, click "Resources—Asset Browser—Devices", find "Nokov-Optical Device", and drag it into the Viewer view.
  • Click the Online button in "Navigator—I/O devices—I/O Seeker-Optical Device" at the bottom of the interface so that the device enters the Live state. Click "Model binding", select the Create button, and click "Generate a new optical model" under "Optical model". After starting playback, uncheck Live and then check it again. The human body in the Live software then starts to move, and the model in MotionBuilder is driven synchronously (a scripted sketch of these steps follows this list).
  • Select "Define—Skeleton" in the right window. Click the "Define" button in the popped-up window to create a skeleton. Select a joint in the scene, select the corresponding joint in the skeleton in the right window, right-click and click "Assign Skeleton Bone" to complete the binding. After all joints are bound, click the lock icon (Lock Character), and select "biped" to complete the characterisation (17.4.4).
  • In MotionBuilder, import the model to be driven and characterize it in the same way as in the previous step.
  • In the "Character Controls" window, select the imported model character in the Character column and select the AI Mocap human skeleton character in the Source column. Let the human movement in the Live software drive the model.

MotionBuilder's automatic binding function

  • When retargeting motion capture data, the automatic binding function in MotionBuilder can be used for one-click binding.
  • First, connect to the human body data: in the Live software, open the settings, set the network card IP address to "10.1.1.198", and check the "SDK" option. Then connect to the human body data in MotionBuilder; for the specific steps, refer to "Settings and usage of MotionBuilder" above.
  • In the "Navigator" tab at the bottom left of MotionBuilder, expand "I/O Devices", click "I/O Seeker-Optical Device", check the "Use Tpose" checkbox in "Information" in the middle on the right, and click the "Characterize" button.
  • At this point the motion capture human skeleton is automatically bound in MotionBuilder. Expand "Characters" to see the bound human body name, which is the same as the human body name in the motion capture software (see the sketch below).
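
To confirm that automatic binding created the character, the characters in the scene can be listed from a script and compared against the motion capture body name. A minimal sketch, assuming MotionBuilder's Python environment (pyfbsdk); "Body3" is only the example body name used in this guide.

```python
# List scene characters to verify that the auto-characterized body exists.
from pyfbsdk import FBSystem

expected_name = "Body3"  # the human body name shown in the motion capture software (example)

for character in FBSystem().Scene.Characters:
    print(character.Name)
    if character.Name == expected_name:
        print("Auto-binding created a character matching the mocap body name.")
```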

MotionBuilder character requirements

  • When characterizing motion capture data in MotionBuilder, the imported model's skeleton should match the Live human body skeleton as closely as possible, and the model's bone hierarchy must match the bone hierarchy of the Live human body; in the bone names, the prefix (for example, "Body3") is the Live human body name (see the sketch after this list).
  • After importing the model in MotionBuilder, adjust the model's pose to the T-pose and make it consistent with the T-pose of the XINGYING human body before driving.
  • The model's bone axes need to be consistent with the bone axes of the XINGYING human body (17.4.10).
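
A quick way to compare the two hierarchies is to print the bone tree under each skeleton root and check names and parent/child order side by side. A minimal sketch, assuming pyfbsdk; the root joint names are placeholders, not names defined by the plugin.

```python
# Print a skeleton hierarchy so the model's bone names and ordering can be
# compared against the Live human body skeleton (prefix = Live body name).
from pyfbsdk import FBFindModelByLabelName

def print_hierarchy(model, indent=0):
    """Recursively print a model subtree with indentation."""
    print(" " * indent + model.Name)
    for child in model.Children:
        print_hierarchy(child, indent + 2)

# Placeholder names -- replace with your Live body root and imported model root.
for root_name in ("Body3_Hips", "MyModel_Hips"):
    root = FBFindModelByLabelName(root_name)
    if root:
        print_hierarchy(root)
```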

MotionBuilder rigid body creation

  • Create a rigid body in Live and open the MotionBuilder software. Drag the plugin into the scene, then click the Online button in "Navigator—I/O devices—I/O Seeker-Optical Device" at the bottom of the interface so that the device enters the Live state. Click "Model binding", select the Create button, and click "Generate a new optical model" under "Optical model". Start playback in the XINGYING software, and Marker points are displayed in the MotionBuilder scene.
  • In the "Navigator" tab at the bottom left of MotionBuilder, expand "I/O Devices", click "I/O Seeker-Optical Device", and after clicking "Create RigidBody" in the "Information" in the middle on the right, the rigid body is successfully created.
  • In the "Navigator" tab, expand "Scene". In "Scene", expand "Nokov-Optical Device: Optical". Scroll down to the bottom, and you can see the name of the rigid body created. The name is consistent with the rigid body name in Live. Double-click the rigid body name, and the rigid body in the scene will show a connection line. The selected rigid body will also be highlighted and changed from red to green (17.4.13).

Display of unnamed points

  • After obtaining motion capture data through the MotionBuilder plugin, any unnamed points in the data are also displayed in the MotionBuilder scene. Unnamed points are purple by default, and named points are blue. To adjust the color of unnamed points in the scene: in the "Navigator" tab, expand "I/O Devices" and select "I/O Nokov-Optical Device", then select Properties in the lower right corner of the MotionBuilder interface and click the "..." button below Default. In the Color window that pops up, you can modify the color of unnamed points (a scripted sketch follows this list).
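
The same color can be changed from a script by looking the property up on the device's property list. A minimal sketch, assuming pyfbsdk; the property label ("Default") is taken from the UI described above and is an assumption that may differ between plugin versions.

```python
# Change the unnamed-point color on the optical device via its property list.
from pyfbsdk import FBSystem, FBColor

for device in FBSystem().Scene.Devices:
    if "Optical Device" in device.Name:
        color_prop = device.PropertyList.Find("Default")  # assumed property name from the UI
        if color_prop:
            color_prop.Data = FBColor(1.0, 0.5, 0.0)  # e.g. orange for unnamed points
```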

Scene refresh

  • After obtaining the Live human body data, if the Live template changes, select the "Information" tab in the middle at the bottom of MotionBuilder and click the "Refresh" button to refresh the MotionBuilder scene. This prevents the human body skeleton from remaining stuck in the scene after the Live human body template changes.

Use Tpose

  • After characterizing both the Live human body data and the model to be driven, the created character and its source must be selected. Before making this assignment, check the "Use Tpose" checkbox to force the Live human body skeleton in the scene into the standard T-pose; models imported into MotionBuilder are usually in the standard T-pose, so the human body skeleton should match the model's pose. Then select the character and the source. In this way, when the Live human body data drives the model in real time, the model's motion stays consistent with the Live human body (a scripted sketch follows this list).
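
Selecting the character and source in Character Controls corresponds to setting the character's input in the SDK. A minimal sketch, assuming pyfbsdk; the two character names are placeholders for the characterized mocap skeleton and the imported model.

```python
# Retarget: drive the imported model's character from the mocap character,
# the scripted equivalent of picking Character and Source in Character Controls.
from pyfbsdk import FBSystem, FBCharacterInputType

def find_character(name):
    """Return the scene character with the given name, if any."""
    for character in FBSystem().Scene.Characters:
        if character.Name == name:
            return character
    return None

source = find_character("Body3")          # characterized mocap skeleton (placeholder name)
target = find_character("ImportedModel")  # characterized imported model (placeholder name)

if source and target:
    target.InputCharacter = source
    target.InputType = FBCharacterInputType.kFBCharacterInputCharacter
    target.ActiveInput = True  # start driving the model from the mocap character
```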