Maivin Workflow
In this workflow, you will record MCAPs on the Maivin and use them to create and annotate datasets. Once the dataset has been annotated, you will train and validate a Vision model using the captured data. Finally, you will deploy the model back to the Maivin for inference.
Capture with an EdgeFirst Platform
If you have an EdgeFirst Platform, follow this tutorial to see how to capture and upload datasets into EdgeFirst Studio. Use your browser to connect to the Web UI of the remote device by entering the following URL: https://<hostname>/
Note
Replace <hostname> with the hostname of your device.
You will be greeted with the Maivin Web UI Main Page.
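Before opening the browser, you can optionally confirm the Web UI is reachable from your PC. The following is a minimal sketch, assuming the device serves HTTPS with a self-signed certificate (hence `verify=False`); `maivin.local` is a placeholder for your device's hostname.

```python
# Minimal reachability check for the device Web UI.
# Assumption: the device uses a self-signed TLS certificate, so certificate
# verification is disabled; "maivin.local" is a placeholder hostname.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

HOSTNAME = "maivin.local"  # replace with your device's hostname

response = requests.get(f"https://{HOSTNAME}/", verify=False, timeout=10)
print(f"Web UI responded with HTTP {response.status_code}")
```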

Record MCAP
MCAP recordings can be started and stopped using the Recording button in the device's top navbar. It sits to the left of the MCAP Details hamburger button, which opens the MCAP Details Modal.

Both of these buttons are available on every page of the Maivin, Raivin, and other edge devices running the EdgeFirst middleware.
Note
You must close all modals before you can click the "Recording" and "Details" buttons.
For more information about recording MCAPs, please read the MCAP Recording section.
Start Recording
To start a recording, click the "Recording" button.

Note
It may take up to 30 seconds for a recording to start, depending on the topics being tracked.
Low Disk Space
If there is not enough room on the drive to record an MCAP, recording will automatically stop and you will get a "Low Disk Space" error.
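If you have shell access to the device, a quick check of the remaining space can help you avoid hitting this error mid-recording. Below is a minimal sketch, assuming Python is available on the device; the recording path is a placeholder and should be adjusted to wherever your device stores MCAPs.

```python
# Minimal free-space check before starting a long recording.
# Assumption: RECORDING_PATH is a placeholder; point it at the directory
# where your device stores MCAP recordings.
import shutil

RECORDING_PATH = "/"

usage = shutil.disk_usage(RECORDING_PATH)
free_gib = usage.free / (1024 ** 3)
print(f"Free space: {free_gib:.1f} GiB")
if free_gib < 1.0:
    print("Warning: low disk space; recording may stop automatically.")
```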
If you open the MCAP Details Modal while recording, you will see a new MCAP file in the MCAP list.

Stop Recording
To stop recording, click the "Recording" button a second time.
Download MCAP
To download an MCAP recording from the device, click the MCAP Details hamburger button to open the MCAP Details Modal.

Use the "Download" button in the row containing the name of the MCAP file you want to download, which will download the MCAP file to your local machine.
You can also use an SSH client (for example, `scp`) to copy files off the device.
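After downloading, you can sanity-check the recording locally using the open-source `mcap` Python package (`pip install mcap`). This is a minimal sketch; the filename is a placeholder for your downloaded recording.

```python
# Minimal sanity check of a downloaded MCAP recording.
# "recording.mcap" is a placeholder for the file you downloaded.
from mcap.reader import make_reader

with open("recording.mcap", "rb") as f:
    reader = make_reader(f)
    summary = reader.get_summary()
    if summary and summary.statistics:
        stats = summary.statistics
        duration_s = (stats.message_end_time - stats.message_start_time) / 1e9
        print(f"{stats.message_count} messages over {duration_s:.1f} s")
    if summary:
        # List the topics captured in the recording
        for channel in summary.channels.values():
            print("topic:", channel.topic)
```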
Upload MCAP
To upload an MCAP recording into EdgeFirst Studio, first log in to EdgeFirst Studio, then navigate to "Data Snapshots" under the tool options.

Note
A project intended for object detection has already been created; this step is covered in Getting Started.
Once you are on the "Data Snapshots" page, upload the recorded MCAP by clicking "From File", which opens a file selection dialog for choosing the MCAP downloaded to your PC.
EdgeFirst Datasets
You can also drag and drop EdgeFirst Dataset Zip and Arrow files onto the "Data Snapshots" page.

Once the MCAP file is selected, the upload to EdgeFirst Studio will begin. The upload may take several minutes depending on the size of the MCAP. Once the upload is complete, the status will be shown as in the figures below.
Figures: Upload Progress and Completed Upload.
Restore Snapshot
The snapshot restoration process involves several dataset transformations, such as frame-rate specification, depth map generation, and auto-annotation. More information can be found in Studio.
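As a rough illustration of the frame-rate step, the sketch below decimates camera frames to a target rate using the `mcap` package. Studio performs this transformation server-side during restore; the topic name and target rate here are assumptions for illustration only.

```python
# Illustrative frame-rate decimation over an MCAP recording.
# Assumptions: "/camera/h264" is a hypothetical topic name and 2.0 fps a
# hypothetical target rate; Studio performs the real transformation itself.
from mcap.reader import make_reader

CAMERA_TOPIC = "/camera/h264"
TARGET_FPS = 2.0
min_gap_ns = int(1e9 / TARGET_FPS)

kept, last_kept = 0, None
with open("recording.mcap", "rb") as f:
    reader = make_reader(f)
    for schema, channel, message in reader.iter_messages(topics=[CAMERA_TOPIC]):
        # Keep a frame only if it arrives at least 1/TARGET_FPS after the last kept one.
        if last_kept is None or message.log_time - last_kept >= min_gap_ns:
            kept += 1
            last_kept = message.log_time

print(f"Would keep {kept} frames at roughly {TARGET_FPS} fps")
```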
COCO Annotations
The labels supported during the auto-annotation process for Fully Automatic Ground Truth Generation are the COCO labels listed.
The created snapshots can be found under "Data Snapshots".

To restore the snapshot, click on the snapshot context menu and select "Restore".

Restoring a snapshot creates an entirely new dataset, complete with annotations. Specify the project that will contain the new dataset, along with the dataset's name and description. Then enable the "AI Ground Truth Generation" toggle to auto-annotate the dataset samples. The rest of the settings can be kept at their defaults for this tutorial. Click "Restore" to start the restoration process.

The progress of the snapshot restore can be monitored under the project datasets.

Once completed, the dataset will contain the annotations produced by the auto-annotation process.
Next, navigate to the dataset gallery by clicking the gallery button. To correct any mistakes from the auto-annotation process, follow the tutorials described under manual annotations.
Finally, split the dataset into training and validation groups.
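Conceptually, the split partitions the dataset samples into two groups, commonly around 80% for training and 20% for validation. The sketch below illustrates the idea with placeholder sample IDs; in practice, Studio manages these groups for you.

```python
# Conceptual 80/20 train/validation split over placeholder sample IDs.
# Studio manages the actual dataset groups; this only illustrates the idea.
import random

sample_ids = [f"sample_{i:04d}" for i in range(100)]  # placeholder IDs
random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(sample_ids)

split = int(0.8 * len(sample_ids))
train_ids, val_ids = sample_ids[:split], sample_ids[split:]
print(f"train: {len(train_ids)}  validation: {len(val_ids)}")
```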
Train a Vision Model
Now that you have a fully annotated dataset split into training and validation samples, you can start training a Vision model. This section briefly shows the steps for training a model; for an in-depth tutorial, please see Training ModelPack.
From the "Projects" page, click on "Model Experiments" of your project.

Create a new experiment by clicking "New Experiment" in the top right corner. Enter the name and description of this experiment. Click "Create New Experiment".

Navigate to the "Training Sessions".

Create a new training session by clicking the "New Session" button in the top right corner.

Follow the settings indicated in red and keep the rest of the settings at their defaults. Click "Start Session" to start the training session.

The session progress will be shown as below.

Once completed, the session card will appear as shown below.

On the training session card, expand the session details.

The trained models will be listed under "Artifacts".
Figures: Session Details and Artifacts.
Validate Vision Model
Now that you have trained a Vision model, you can start validating it. This section briefly shows the steps for validating a model; for an in-depth tutorial, please see Validating ModelPack.
On the training session card, expand the session details.

Click the "Validate" button.

Specify the name of the validation session, along with the model and the dataset to validate. The rest of the settings can be kept at their defaults. Click "Start Session" at the bottom to start the validation session.

The validation session progress will appear in the "Validation" page as shown below.

Once completed, the session card will appear as shown below.

The validation metrics are displayed as charts, which can be opened by clicking the validation charts button.


Deploy the Model
Once you have validated your trained model, let's take a look at how this model can be deployed on your PC by following the tutorial Deploying to the PC.
If you have an NXP i.MX 8M Plus EVK you can also run your model directly on the device using the EdgeFirst Middleware by following the tutorial Deploying to Embedded Targets.
Additional Platforms
Support for additional platforms beyond the NXP i.MX 8M Plus will be available soon. Let us know which platform you'd like to see supported next!
If you have an EdgeFirst Platform such as the Maivin or Raivin then you can deploy and run the model using the bundled EdgeFirst Middleware by following the tutorial Deploying to EdgeFirst Platforms.
No Studio Costs
Deploying Vision models does not consume any Studio credits.
To deploy Vision models on a Maivin, please see the ModelPack Deployment instructions.

To deploy Fusion models on a Raivin, please see the Fusion Deployment instructions.
