Model Training

EdgeFirst Studio supports training of Vision models using ModelPack for image-based object detection. EdgeFirst Studio also supports training of "Fusion" models for detecting an object's position in the field. At this stage, datasets with 2D annotations (segmentation masks and 2D bounding boxes) are used to train ModelPack, while 3D annotations (3D bounding boxes) are used to train Fusion models.

Vision augmentation techniques are also applied during training to improve the model's robustness across a variety of conditions and to increase the effective number of training samples.
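As a rough illustration of what such augmentation looks like, the sketch below applies a few common image-space transforms (flips, lighting changes, mild geometric jitter) with the albumentations library. The specific transforms and parameters are assumptions for illustration only, not the exact augmentation pipeline used by EdgeFirst Studio.

```python
# Illustrative only: common image augmentations similar in spirit to those
# applied during training. Transform choices and parameters are assumptions,
# not the actual EdgeFirst Studio pipeline.
import albumentations as A
import cv2

train_transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),                              # mirror the scene
        A.RandomBrightnessContrast(p=0.3),                    # lighting variation
        A.Affine(scale=(0.9, 1.1), rotate=(-10, 10), p=0.5),  # mild geometric jitter
    ],
    # Keep 2D bounding-box annotations consistent with the transformed image.
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

image = cv2.imread("sample.jpg")
bboxes = [(40, 60, 200, 220)]  # (x_min, y_min, x_max, y_max)
augmented = train_transform(image=image, bboxes=bboxes, class_labels=["person"])
aug_image, aug_bboxes = augmented["image"], augmented["bboxes"]
```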

ModelPack

To train Vision models that detect objects in an image, follow the tutorials for Training ModelPack in EdgeFirst Studio. EdgeFirst Studio supports Vision models that perform object detection with bounding boxes, segmentation masks, or both (multi-task).
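The sketch below shows a hypothetical structure for a multi-task training sample that pairs an image with both kinds of 2D annotations, a set of bounding boxes with class labels and a per-pixel segmentation mask. The field names and shapes are illustrative assumptions, not the ModelPack dataset format.

```python
# Hypothetical multi-task training sample: field names and shapes are
# illustrative, not the ModelPack dataset format.
import numpy as np

sample = {
    "image": np.zeros((480, 640, 3), dtype=np.uint8),           # RGB frame
    "boxes": np.array([[40, 60, 200, 220]], dtype=np.float32),  # [x_min, y_min, x_max, y_max]
    "labels": np.array([1], dtype=np.int64),                    # class index per box
    "mask": np.zeros((480, 640), dtype=np.uint8),               # per-pixel class IDs
}

# A detection-only model consumes "boxes"/"labels", a segmentation-only model
# consumes "mask", and a multi-task model is trained on both.
```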

Fusion

To train Spatial Perception models that detect an object's position in the field, follow the tutorials for Training Fusion Models in EdgeFirst Studio. EdgeFirst Studio trains these models by fusing sensor outputs (Camera and Radar) as model inputs, which is why they are known as Fusion models. EdgeFirst Studio provides options for toggling each sensor on or off at the start of training; by default, training uses both the Camera and the Radar outputs.
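The following is a minimal conceptual sketch of how per-sensor toggles can gate which inputs are fused, mirroring the idea of enabling or disabling the Camera or Radar at the start of training. The toy architecture, input shapes, and output head are assumptions for illustration and do not represent the actual Fusion model or the EdgeFirst Studio API.

```python
# Conceptual sketch of sensor fusion with per-sensor toggles. This is a toy
# model illustrating the Camera/Radar on-off idea, not the Fusion architecture.
import torch
import torch.nn as nn

class ToyFusionModel(nn.Module):
    def __init__(self, use_camera: bool = True, use_radar: bool = True):
        super().__init__()
        assert use_camera or use_radar, "at least one sensor must be enabled"
        self.use_camera = use_camera
        self.use_radar = use_radar
        self.camera_encoder = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # RGB image
        self.radar_encoder = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # radar map (assumed 2D projection)
        in_channels = 16 * (int(use_camera) + int(use_radar))
        self.head = nn.Conv2d(in_channels, 7, kernel_size=1)  # e.g. 3D box parameters per cell

    def forward(self, camera: torch.Tensor, radar: torch.Tensor) -> torch.Tensor:
        features = []
        if self.use_camera:
            features.append(self.camera_encoder(camera))
        if self.use_radar:
            features.append(self.radar_encoder(radar))
        # Fuse whichever sensor features are enabled by channel concatenation.
        return self.head(torch.cat(features, dim=1))

# By default both sensors are enabled, matching the Studio default.
model = ToyFusionModel(use_camera=True, use_radar=True)
out = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
```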