WEEK 5: 3D Equalizer

3D Equalizer is without doubt the world's leading 3D tracking solution for merging live-action footage with digital visual effects. Through this week's study I gained a preliminary understanding of the 3D Equalizer interface and of how to track camera movement, although I ran into an error when exporting the node to Nuke. Because of language limitations, following the course is a bit difficult for me: I always need extra time to understand what the teacher is saying and then to look for the corresponding icon. But Dom gives us plenty of time to absorb the material, and I am very satisfied with this class. New knowledge always needs repeated practice, so in the time after class I followed Dom's course again, which helped me discover some knowledge points I had missed.

At first, during the first hour of Dom's course, I didn't understand the purpose of tracking or the principle behind it. However, after looking at the 3D view in 3D Equalizer, I suddenly realised: the purpose of tracking in the two-dimensional view is to find the corresponding three-dimensional coordinates of each point and the position of the camera in three-dimensional space.
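
To make this concrete for myself, here is a rough Python sketch of the idea (my own illustration, not anything from 3D Equalizer): if we know a 3D point and the camera position, we can project it forward into a 2D image position, and solving a track is essentially doing this in reverse from many 2D points.

```python
# Illustrative only: a minimal pinhole-camera projection, not 3D Equalizer's solver.
import numpy as np

def project(point_3d, camera_pos, focal_length):
    """Project a 3D point (camera assumed to look down +Z) onto the 2D image plane."""
    rel = np.asarray(point_3d, dtype=float) - np.asarray(camera_pos, dtype=float)
    # Perspective divide: features further away move less in the image.
    return focal_length * rel[:2] / rel[2]

# A hypothetical marker five units in front of the camera, slightly up and to the right.
print(project([0.5, 0.3, 5.0], [0.0, 0.0, 0.0], focal_length=35.0))
```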

The following are some of the details I have understood that deserve attention in 3D Equalizer. These small details are the key to success.

First of all, during the import process we can change the gamma to adjust the brightness of the footage; typical values are 2 or 2.2. A similar gamma adjustment is often used on roughness maps in Maya.
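
As I understand it, the gamma value is simply an exponent applied to the pixel values, so a value of 2.2 lifts the mid-tones of the footage noticeably. A tiny sketch of that idea (my own illustration, not what 3D Equalizer does internally):

```python
# Illustrative only: gamma adjustment as a simple power curve on normalised pixel values.
def apply_gamma(value, gamma=2.2):
    """Brighten or darken a pixel value in the 0..1 range by raising it to 1/gamma."""
    return value ** (1.0 / gamma)

for v in (0.1, 0.5, 0.9):
    print(v, "->", round(apply_gamma(v), 3))
```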

Before tracking, we should also export buffer compression files, so that the footage is cached and plays back smoothly.

As for the most important work, it is creating points. How we choose a point matters a great deal: above all, it must be unique. The software tracks by analysing the colour changes of the pixels in the selected area, so once we understand this working principle, it becomes relatively easy to create a good point.
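
My mental model of "analysing the colour changes of the pixels" is pattern matching: the tracker remembers a small patch of pixels around the point and, in the next frame, searches for the position where the pixels differ the least. A rough sketch of that idea (my own illustration, not 3D Equalizer's actual algorithm):

```python
# Illustrative only: find a small patch in the next frame by minimising the
# sum of squared pixel differences over a search window.
import numpy as np

def match_patch(patch, frame, search_top_left, search_size):
    """Return the (y, x) position in `frame` that best matches `patch`.
    Assumes the search window stays inside the frame."""
    ph, pw = patch.shape
    y0, x0 = search_top_left
    best_pos, best_score = None, float("inf")
    for y in range(y0, y0 + search_size):
        for x in range(x0, x0 + search_size):
            candidate = frame[y:y + ph, x:x + pw]
            score = np.sum((candidate.astype(float) - patch) ** 2)
            if score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos
```

This is also why a unique, high-contrast patch tracks well: if every nearby position looks almost the same, many candidates score almost equally and the track can slip.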

When dealing with points that approach the edge of the frame, we should step through frame by frame and press the E key to end the track near the edge. This process takes time, but it is critical.

Secondly, we should create as many points as possible, spread across the foreground, mid-ground and background. That way the position and movement of the camera can be solved more accurately.

Sometimes we encounter footage with low contrast, where there is little colour difference, and in that case it is harder to create a point. For this we can open the Image Controls window and adjust the colour of the footage, which is very similar to the image adjustment functions in Photoshop. After adjusting, we can more easily find distinctive marks to use as tracking points.
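
The Image Controls adjustments seem to boil down to remapping pixel values, much like Levels in Photoshop. A simple contrast-stretch sketch (my own illustration, just to show why a faint mark becomes easier to pick):

```python
# Illustrative only: stretch a narrow range of pixel values to fill 0..1,
# which exaggerates small colour differences around a tracking mark.
def stretch_contrast(value, low=0.4, high=0.6):
    """Map values between low and high onto 0..1, clamping everything outside."""
    return min(1.0, max(0.0, (value - low) / (high - low)))

# Two pixels that were nearly identical become clearly separated.
print(stretch_contrast(0.48), stretch_contrast(0.55))
```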

Now, about how to improve the accuracy of the camera track: in addition to constantly creating more points, we have two methods.

One way is to press ALT+C to recalculate, and then check the result in the deviation curve in the Deviation Browser.

Another way is to look for the points with the largest deviation values and adjust them, deleting them if necessary, so that the green average curve gets closer to zero.
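
The way I picture the deviation value: for each frame, measure how far the solved point's reprojected position is from the tracked 2D position, then average over the shot. A sketch of ranking points by that error (my own illustration with made-up point names; 3D Equalizer computes this for us in the Deviation Browser):

```python
# Illustrative only: rank tracking points by their average reprojection error (deviation).
import math

def point_deviation(tracked_2d, reprojected_2d):
    """Root-mean-square distance, in pixels, between tracked and reprojected positions."""
    errors = [math.dist(t, r) for t, r in zip(tracked_2d, reprojected_2d)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical per-point deviations; the worst offenders are the ones to fix or delete.
deviations = {"wall_corner": 0.4, "car_roof": 2.7, "kerb_mark": 0.9}
for name, dev in sorted(deviations.items(), key=lambda item: item[1], reverse=True):
    print(name, dev)
```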

In addition, we can also reduce the Deviation value by adjusting the camera parameters.

First of all, we can look up the exact parameters of the camera that was used and enter them, for example the Filmback Height. Second, we can turn on the adjust mode for the distortion parameter.

Then click Adaptive All in the parameter adjustment menu and run the adjustment. After recalculating the camera and lens, we will find that the deviation value has been reduced.
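
Lens distortion itself can be pictured as a small radial shift of every pixel that grows towards the edges of the frame, which is what those distortion parameters describe. A deliberately simplified one-parameter sketch (my own illustration; 3D Equalizer's real distortion models are more sophisticated):

```python
# Illustrative only: a simple one-parameter radial distortion model.
def distort(x, y, k1=-0.05):
    """Shift a normalised image coordinate (centre at 0,0) radially by k1 * r^2."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# A point near the centre barely moves; one near the corner moves noticeably.
print(distort(0.1, 0.1))   # close to the optical centre
print(distort(0.9, 0.9))   # near the corner of the frame
```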

In short, all of this work is in service of more precise tracking. Details make a difference.
