Build your own custom module, node and container

All application logic and services running on edge devices are encapsulated in Docker containers. These containers are configured in Viso Builder, the visual programming interface of Viso Suite.

Viso Builder is built on top of Node-RED, which makes it simple to create your own modules and nodes or to integrate existing nodes built by the community.

In this getting started guide, we will walk you through creating a custom Person Detection Docker container and a custom Node-RED node to configure this container in Viso Builder.

There are two main types of Node-RED nodes that can be used with Viso Suite:
  1. General nodes provided by Node-RED, by other third parties, or developed by yourself: Node-RED community nodes and freely available third-party nodes can be used on Viso Suite without any modification. If the functionality you need is not available, you can develop Node-RED nodes for any purpose by following this guide and import them into Viso Suite. If the tasks required by your computer vision application can be done directly in the node, this is the best approach!

  2. Complex nodes, e.g. to perform computer vision tasks: If your use case is more complicated and difficult to handle in a single node, you may want to develop a module that uses one or more Docker containers. In this case, your Node-RED node is used to define and configure the parameters of your containers.

This tutorial is for the complex nodes and assumes knowledge of the following:

  1. Node-RED and its admin API
  2. Docker
  3. Redis & MQTT

Step 1: Creating Custom Module and Node

We will use a real world example to create a custom module for Person Detection.
  1. Design your node: Follow this guide to create your custom node. Note that your node requires proper versioning (Major.Minor.Patch, e.g. 3.5.12) so that it can be imported to Viso Builder; if the versioning is not correct, the import will fail (see the example package.json after this list).
  2. Set the node name as person-detection
  3. Add some parameters (e.g. threshold, model name, etc.) to the configuration popup so that we can use these parameters in the container.
  4. Add the person-detection node to Viso Builder: Follow this guide to add your new node to Viso Builder.
To add your node to Viso Builder, compress your source files into a single zip file. Make sure that the source files are in the root directory of the zip file.
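
Node-RED nodes are packaged as npm modules, so the Major.Minor.Patch version and the node registration normally live in the package.json inside your zip file. A minimal sketch of what it might look like for this example (the package name and file name are illustrative, and we assume Viso Builder reads the version from this file):

{
  "name": "node-red-contrib-person-detection",
  "version": "1.0.0",
  "node-red": {
    "nodes": {
      "person-detection": "person-detection.js"
    }
  }
}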

Step 2: Creating Custom Person Detection Docker Container

Once the custom module is imported into Viso Suite, your Docker container for person detection has to follow a couple of rules. The next steps are:

Parsing the person-detection node configuration: We added some parameters (threshold, model name, etc.) to the person-detection node in Step 1. The first thing the container should do is retrieve these configurations from the deployed Node-RED flow. All Viso Suite devices run a local Node-RED server with the same flow that you created on Viso Suite. We can use the Node-RED GET /flows API to parse the deployed flow and obtain the configuration of our person-detection node. The parsed flow is a JSON array of node objects that contains all the parameters.

From this JSON you can find the node by its type ("type": "person-detection") and read the other configuration parameters, including name, detection_threshold, etc., for use by the container (a Python sketch follows the example flow below):
[
    {
        "disabled": false,
        "id": "47ed5c46.b85244",
        "info": "",
        "label": "Flow 1",
        "type": "tab"
    },
    {
        "id": "813f8d7f.be7c1",
        "name": "my person detector",
        "type": "person-detection",
        "detection_threshold": 0.5,
        "model_name": "ssd_mobilenet_v1_coco_2018_01_28",
        "wires": [["5c828fca.9dc5"]],
        "x": 570,
        "y": 460,
        "z": "47ed5c46.b85244"
    },
    {
        "id": "eb2a3e76.aeb2b",
        "name": "web camera",
        "type": "camera-feed",
        "camera_source": "/dev/video0",
        "camera_type": "web",
        "frame_height": 480,
        "frame_rate": 30,
        "frame_width": 640,
        "wires": [["813f8d7f.be7c1"]],
        "x": 240,
        "y": 300,
        "z": "47ed5c46.b85244"
    },
    {
        "active": true,
        "complete": "false",
        "console": false,
        "id": "5c828fca.9dc5",
        "name": "",
        "statusType": "auto",
        "statusVal": "",
        "tosidebar": true,
        "tostatus": false,
        "type": "debug",
        "wires": [],
        "x": 860,
        "y": 320,
        "z": "47ed5c46.b85244"
    }
]
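
As an illustration, here is a minimal Python sketch of how the container could fetch this flow and read its own configuration. It assumes the local Node-RED server listens on the default port 1880 and that the requests package is available in the container; adjust both to your device setup.

import requests

NODE_RED_URL = "http://localhost:1880"   # assumption: default local Node-RED port

def get_flow():
    # GET /flows returns the deployed flow as a JSON array of node objects
    return requests.get(f"{NODE_RED_URL}/flows").json()

def get_node_config(flow, node_type):
    # Return the first node whose 'type' matches, e.g. 'person-detection'
    return next((node for node in flow if node.get("type") == node_type), None)

flow = get_flow()
config = get_node_config(flow, "person-detection")
threshold = config["detection_threshold"]    # 0.5 in the example flow
model_name = config["model_name"]            # "ssd_mobilenet_v1_coco_2018_01_28"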

Find the node that is streaming video frames to the person-detection node: As you can see in the flow above, our person-detection node is connected to a Camera Feed node, which supplies the video frames from a camera. Reviewing the JSON configuration, we can see that Node-RED manages node connections through the wires field. In other words, the id of the person-detection node appears in the wires field of the Camera Feed node.

Retrieve the id value of your Person Detection node by extracting it from the wires block of the Camera Feed node section in flows.json.
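
Continuing the sketch above, one way to find the upstream node is to look for the node whose wires contain the id of our person-detection node (find_upstream_node is an illustrative helper, not part of any Viso Suite API):

def find_upstream_node(flow, target_id):
    # The 'wires' field lists, per output, the ids of the nodes that output feeds into
    for node in flow:
        for output_wires in node.get("wires", []):
            if target_id in output_wires:
                return node
    return None

camera_node = find_upstream_node(flow, config["id"])
print(camera_node["type"], camera_node["id"])    # camera-feed eb2a3e76.aeb2b in the example flow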


Grab the video frame from the Camera Feed node to perform Person Detection:
All Viso Suite containers use Redis to transfer video frames: the Camera Feed node reads the video frame from the camera and writes it to the Redis key redis_<VideoFeed Node ID>_<output number>. In our flow, the Camera Feed node has only one output, so the Redis key is redis_eb2a3e76.aeb2b_0.
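
For example, continuing the sketches above and assuming Redis is reachable at its default local address (host and port are assumptions; use the values configured on your device), the key can be built from the id of the Camera Feed node found earlier:

import redis

r = redis.Redis(host="localhost", port=6379)    # assumption: local Redis on the default port
redis_key = f"redis_{camera_node['id']}_0"      # redis_eb2a3e76.aeb2b_0 in the example flow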

The person-detection node can grab the video frame by using this key. Here is a sample Python code snippet to retrieve it and convert it to a NumPy frame:

import base64

import cv2
import numpy as np


def get_frame_from_redis(self, redis_key):
    """Read the base64-encoded frame from Redis and decode it into a NumPy image."""
    base_str = self.r.get(redis_key)        # self.r is a redis.Redis client (see the connection sketch above)
    if base_str is None or len(base_str) == 0:
        return None
    str_frame = base64.b64decode(base_str)  # base64 decode
    numpy_frame = cv2.imdecode(np.frombuffer(str_frame, dtype=np.uint8), -1)
    return numpy_frame
Execute the person detection engine and send the results to other modules: Now you can execute the person detection engine to detect people and send the results to other modules. Use TensorFlow, PyTorch, OpenVINO, OpenCV, etc. to detect persons in the grabbed video frame.
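
As a stand-in for your own model, here is a minimal sketch that runs OpenCV's built-in HOG people detector on the frame returned by get_frame_from_redis. In practice you would load the model named in the node configuration (e.g. ssd_mobilenet_v1_coco_2018_01_28) with TensorFlow, PyTorch, or OpenVINO and drop detections whose score is below detection_threshold.

import cv2
import numpy as np

# Stand-in detector: OpenCV's pre-trained HOG + linear SVM people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    # Returns a list of {"bbox": [x, y, w, h], "score": s} detections
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return [
        {"bbox": [int(x), int(y), int(w), int(h)], "score": float(s)}
        for (x, y, w, h), s in zip(boxes, np.ravel(weights))
    ]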

Send the results to other modules: We normally use MQTT to send the results to other modules. The MQTT result topic should be viso/mqtt_<node id>.
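
A minimal sketch of publishing the detections with paho-mqtt, continuing the examples above (the broker address, payload layout, and the paho-mqtt dependency are assumptions for illustration):

import json
import paho.mqtt.publish as publish

topic = f"viso/mqtt_{config['id']}"                          # viso/mqtt_813f8d7f.be7c1 in the example flow
payload = json.dumps({"detections": detect_people(frame)})   # frame from get_frame_from_redis(redis_key)
publish.single(topic, payload, hostname="localhost")         # assumption: local MQTT broker on the default port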

Add a Dockerfile: Add a Dockerfile to your project to build your container and push it to a Docker registry.
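
A minimal Dockerfile sketch for a Python-based container like the one above (the base image and file names are illustrative):

FROM python:3.9-slim
WORKDIR /app
# requirements.txt would list e.g. requests, redis, paho-mqtt, opencv-python-headless, numpy
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "person_detection.py"]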

Step 3: Link the Container With Your Module

Once the module is imported, you need to link the container to it. Prepare the full URI of your container on Docker Hub or AWS ECR, and navigate to the imported module to link the container to your node. A detailed guide can be found here.

Step 4: Add the module to your Apps

Now you can use this module in Viso Builder: add it to any of your applications, connect it to other nodes, and start building your own powerful computer vision applications.

Related Articles

    • Link a Docker Container With Your Nodes
    • Add a New Module
    • Object Detection: Using a custom model
    • Overview Video Input Tools Module
    • Overview Object Counting Node