Overview Object Tracking Node

The Object Tracking Node tracks objects that were previously detected by the Object Detection Node across multiple frames.


Input and Output
  1. Input: Output of object detection node.
  2. Output: MQTT message containing the tracking results and result stream.
  3. Supported architecture: Currently supported on amd64 devices.
Output Format
  {
    tid: Tracker ID
    rect: [x, y, w, h]
    trace: list of past rectangles, e.g. [[x, y, w, h], [x, y, w, h], ...]
    label: Object class label
    color: Box color
    confidence: Detection confidence
    roi_id: ID of the region of interest (ROI) the object belongs to
  }
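The fields above arrive as the payload of the MQTT result message. A minimal sketch in Python of parsing one such message (the payload values are illustrative, not taken from a real device):

```python
import json

# Example payload following the output format above (values are illustrative)
payload = json.dumps({
    "tid": 17,
    "rect": [120, 80, 64, 128],
    "trace": [[118, 82, 64, 128], [119, 81, 64, 128]],
    "label": "person",
    "color": "#00ff00",
    "confidence": 0.91,
    "roi_id": 0,
})

result = json.loads(payload)
x, y, w, h = result["rect"]
print(f"Tracker {result['tid']}: {result['label']} at ({x}, {y}), "
      f"size {w}x{h}, trace length {len(result['trace'])}")
```

In a real flow the payload would come from the `on_message` callback of your MQTT client rather than a local string.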

Node Sections
The Object Tracking node consists of three main parts:
  1. Tracking Settings: Select from a list of tracking algorithms, set the tracking quality and tracking cycles.
  2. Re-identification Settings: Select the algorithm used for re-identification to find the same objects across frames.
  3. General Settings: Set the colors of the detection output boxes and define if the output boxes and trace lines should be shown or not.
Tracking refers to predicting the new bounding box in the next frame from its previous location. In contrast, re-identification refers to extracting the features of a detected object and then finding the same object in other frames. If you are using counting with re-identification, we suggest disabling any tracking options in the object detection node.

Node Parameters
The following parameters are used in the Object Tracking node.

Name: Input the node name used in a specific flow.
  1. default: object-tracking
  2. type: string
Tracking algorithm: Defines the tracking algorithm to be used in your application.
  1. available values: DLIB, MOSSE, CSRT
  2. default: DLIB
  3. type: string
Tracking quality threshold: The quality threshold for keeping tracked objects.
  1. range: [0.1, 1.0]
  2. 1.0: strict
  3. 0.1: not strict, but low accuracy
  4. default: 0.7
  5. type: float
Tracking cycle: The number of frames from one detection event to the next (frame position).
  1. default: 1 (1 = tracking is disabled)
  2. type: integer
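The cycle determines on which frames a full detection runs, with the tracker bridging the frames in between. A minimal sketch of this scheduling (the node's internal logic is an assumption; this only illustrates the parameter's meaning):

```python
def detection_frames(total_frames, tracking_cycle):
    """Return the frame indices on which a full detection would run.

    With tracking_cycle = 1 every frame is a detection frame, i.e.
    tracking is effectively disabled; with a larger cycle the tracker
    covers the frames in between. Sketch only -- the node's actual
    scheduling is not documented here.
    """
    return [f for f in range(total_frames) if f % tracking_cycle == 0]

print(detection_frames(10, 1))  # every frame -> tracking disabled
print(detection_frames(10, 3))  # detect on frames 0, 3, 6, 9; track the rest
```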
Re-Identification algorithm: Lets you select from the available re-identification algorithms. Currently supported:
  1. DeepSORT
  2. Geo Distance
  3. Default: DeepSORT
You can link your own custom model if DeepSORT is selected. Enter the model URL in the node, and the model will be downloaded and executed on your edge device.

Re-Identification trace length: The number of past rectangles kept in the trace of each tracked object.
  1. default: 10
  2. range: [1, 30]
  3. type: integer
Re-Identification cos distance: Cosine distance threshold used when matching appearance features across frames.
  1. range: [0.0, 1.0]
  2. default: 0.7
  3. type: float
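DeepSORT-style re-identification compares appearance feature vectors by cosine distance; candidates whose distance exceeds the threshold are rejected. A minimal sketch (the feature vectors are illustrative; real features come from the re-identification model):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

feat_prev = [0.10, 0.90, 0.30]   # illustrative feature of a known track
feat_new  = [0.12, 0.88, 0.31]   # illustrative feature of a new detection
d = cosine_distance(feat_prev, feat_new)
# With the default threshold of 0.7, only pairs with d <= 0.7 can match
print(f"cosine distance: {d:.4f}, candidate match: {d <= 0.7}")
```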
Re-Identification IoU distance: Intersection-over-union (IoU) distance threshold used when matching bounding boxes across frames.
  1. default: 0.7
  2. range: [0.0, 1.0]
  3. type: float
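IoU distance is 1 minus the intersection over union of two boxes, so identical boxes have distance 0 and disjoint boxes have distance 1. A minimal sketch using the node's [x, y, w, h] rectangle format:

```python
def iou(box_a, box_b):
    """Intersection over union of two [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap rectangle, clamped to zero when the boxes are disjoint
    inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def iou_distance(box_a, box_b):
    return 1.0 - iou(box_a, box_b)

print(iou_distance([0, 0, 10, 10], [0, 0, 10, 10]))  # 0.0 -> same box
print(iou_distance([0, 0, 10, 10], [5, 0, 10, 10]))  # half-width overlap
```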
Max Age: The number of consecutive frames an object can stay unmatched before its track is dropped.
  1. default: 5
  2. range: [1, 10]
  3. type: integer
Geo Distance: Distance threshold for the Geo Distance re-identification algorithm.
  1. range: [0.0, 1.0]
  2. default: 0.7
  3. type: float
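The article does not define the Geo Distance metric precisely. A plausible sketch, assuming it is the Euclidean distance between box centers normalized by the frame diagonal so that it falls in [0, 1] (both the metric and the normalization are assumptions):

```python
import math

def geo_distance(box_a, box_b, frame_w, frame_h):
    """Euclidean distance between the centers of two [x, y, w, h] boxes,
    normalized by the frame diagonal. NOTE: this definition is an
    assumption for illustration; the node's exact metric is not
    specified in the article."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    center_a = (ax + aw / 2, ay + ah / 2)
    center_b = (bx + bw / 2, by + bh / 2)
    dist = math.hypot(center_a[0] - center_b[0], center_a[1] - center_b[1])
    return dist / math.hypot(frame_w, frame_h)

# A small shift in a Full HD frame yields a small normalized distance
print(geo_distance([100, 100, 50, 50], [110, 100, 50, 50], 1920, 1080))
```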


    • Related Articles
    • Overview Object Detection Node
    • Overview Object Flow Node
    • Overview Object Counting Node
    • Overview Object Segmentation Node
    • Overview Image Classification Node