Maivin Workflow

In this workflow, you will explore recording MCAPs on the Maivin, which are then used to create and annotate datasets. Once the dataset has been annotated, you will train and validate a Vision model using the captured data. Finally, you will deploy the model back to the Maivin for inference.

Capture with an EdgeFirst Platform

If you have an EdgeFirst Platform, follow this tutorial to see how to capture and upload datasets into EdgeFirst Studio. Use your browser to connect to the Web UI of the remote device by entering the following URL: https://<hostname>/.

Note

Replace <hostname> with the hostname of your device.

You will be greeted with the Maivin Web UI Main Page.

Web UI Main Page
Web UI Main Page

Record MCAP

MCAP recordings can be started and stopped using the Recording Button in the device's top navbar. It is to the left of the MCAP Details hamburger button, which opens the MCAP Details Modal.

MCAP Recording and Details Buttons
MCAP Recording and Details Buttons

Both of these buttons are available on every page of the Maivin, Raivin, and other edge devices running the EdgeFirst middleware.

Note

You must close all modals to be able to click the "Recording" and "Details" Buttons.

For more information about recording MCAPs, please read the MCAP Recording section.
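As an aside, an MCAP recording can be identified programmatically by the 8-byte magic sequence defined in the MCAP specification. A minimal stdlib-only sketch (the helper name is illustrative, not part of any EdgeFirst API):

```python
MCAP_MAGIC = b"\x89MCAP0\r\n"  # 8-byte magic from the MCAP specification


def is_mcap(header: bytes) -> bool:
    """Return True if the buffer begins with the MCAP magic bytes."""
    return header[:8] == MCAP_MAGIC


# A valid MCAP file begins (and ends) with this magic sequence.
print(is_mcap(MCAP_MAGIC + b"rest-of-file"))  # True
print(is_mcap(b"not an mcap"))                # False
```

This is handy as a quick sanity check that a downloaded file really is an MCAP before uploading it to Studio.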

Start Recording

To start a recording, simply click the "Recording" button to begin capturing data.

MCAP Recording
MCAP Recording

Note

It may take up to 30 seconds for a recording to start, depending on the topics being tracked.

Low Disk Space

If there is not enough room on the drive to record an MCAP, the recording will automatically stop and you will see a "Low Disk Space" error.

MCAP Low Disk Space Warning
MCAP Low Disk Space Warning
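If you want to check the available space yourself before starting a long recording, a quick stdlib sketch can report it. The mount point and threshold below are assumptions for illustration, not the values the middleware actually uses:

```python
import shutil

# Hypothetical threshold; the middleware's actual limit may differ.
MIN_FREE_BYTES = 500 * 1024 * 1024  # 500 MB

usage = shutil.disk_usage("/")  # assumed recording mount point
print(f"free: {usage.free / 1e9:.2f} GB")
if usage.free < MIN_FREE_BYTES:
    print("Low disk space: an MCAP recording would stop automatically")
```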

If you open the MCAP Details Modal while recording, you will see a new MCAP file in the MCAP list.

MCAP Modal While Recording
MCAP Modal While Recording

Stop Recording

To stop recording, click the "Recording" button a second time.

Download MCAP

To download an MCAP recording from the device, click the MCAP Details hamburger button to open the MCAP Details Modal.

MCAP Recorder Interface
MCAP Modal

Click the "Download" button in the row containing the name of the MCAP file you want, and the file will be downloaded to your local machine.

You can also use an SSH client to copy files off the Raivin.
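For example, you might fetch a recording with scp. The sketch below only assembles the command; the user, hostname, and recording path are all hypothetical placeholders that depend on your device configuration:

```python
# Build an scp command for copying an MCAP off the device.
# All values below are placeholders; substitute your own.
user = "torizon"                     # hypothetical login user
host = "raivin.local"                # hypothetical device hostname
remote = "/path/to/recording.mcap"   # hypothetical recording location

cmd = f"scp {user}@{host}:{remote} ."
print(cmd)
```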

Upload MCAP

To upload an MCAP recording into EdgeFirst Studio, first log in to EdgeFirst Studio. Once logged in, navigate to "Data Snapshots" under the tool options.

Data Snapshots
Data Snapshots

Note

A project intended for object detection has already been created. This step is covered in Getting Started.

Once you are on the "Data Snapshots" page, upload the recorded MCAP by clicking "From File", which opens a file dialog for selecting the MCAP downloaded to your PC.

EdgeFirst Datasets

You can also drag and drop EdgeFirst Dataset Zip and Arrow files into "Data Snapshots".

Upload MCAP
Upload MCAP

Once the MCAP file is selected, the upload to EdgeFirst Studio begins. The upload may take several minutes depending on the size of the MCAP. Once the upload is complete, the status will be shown as in the figure below.

Upload Progress
Completed Upload Progress

Restore Snapshot

The snapshot restoration process applies several dataset transformations, such as frame rate specification, depth map generation, and auto-annotation. More information can be found in Studio.

COCO Annotations

The labels supported during the auto-annotation process for the Fully Automatic Ground Truth Generation are the COCO labels listed.

The created snapshots can be found under "Data Snapshots".

Data Snapshots
Data Snapshots

To restore the snapshot, click on the snapshot context menu and select "Restore".

Restore Snapshots
Restore Snapshots

Restoring a snapshot creates an entirely new dataset with annotations. Specify the project that will contain this new dataset, along with the dataset's name and description. Toggle "AI Ground Truth Generation" to auto-annotate the dataset samples. The rest of the settings can be left at their defaults for this tutorial. Click "Restore" to start the restoration process.

Restore Snapshots Fields
Restore Snapshots Fields

The snapshot restore progress can be found under the project datasets.

Restore Snapshots Progress
Restore Snapshots Progress

Once completed, the dataset will contain the annotations produced by the auto-annotation process.

Next, navigate to the dataset gallery by clicking the gallery button. To correct any mistakes from the auto-annotation process, follow the tutorials described under manual annotations.

Finally, split the dataset into training and validation groups.
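For intuition, a train/validation split can be sketched as a shuffled partition of the sample list. The file names and the 80/20 ratio below are illustrative assumptions, not necessarily what Studio uses:

```python
import random

# Illustrative sample names; a real dataset would list actual frames.
samples = [f"frame_{i:04d}" for i in range(100)]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(samples)

cut = int(0.8 * len(samples))  # 80/20 train/validation split
train, val = samples[:cut], samples[cut:]
print(len(train), len(val))  # 80 20
```

Shuffling before cutting matters: it keeps consecutive (and therefore visually similar) frames from all landing in the same group.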

Train a Vision Model

Now that you have a fully annotated dataset split into training and validation samples, you can start training a Vision model. This section briefly shows the steps for training a model; for an in-depth tutorial, please see Training ModelPack.

From the "Projects" page, click on "Model Experiments" of your project.

Model Experiments Page
Model Experiments Page

Create a new experiment by clicking "New Experiment" in the top right corner. Enter the name and description of this experiment, then click "Create New Experiment".

Model Experiments Page
Model Experiments Page

Navigate to the "Training Sessions".

Training Sessions
Training Sessions

Create a new training session by clicking the "New Session" button in the top right corner.

New Session Button
New Session Button

Follow the settings indicated in red and keep the rest of the settings at their defaults. Click "Start Session" to start the training session.

Start Training Session
Start Training Session

The session progress will be shown as below.

Training Session Progress
Training Session Progress

Once completed, the session card will appear as shown below.

Completed Session
Completed Session

On the train session card, expand the session details.

Training Details
Training Details

The trained models will be listed under "Artifacts".

Session Details Artifacts
session artifacts

Validate Vision Model

Now that you have trained a Vision model, you can start validating it. This section briefly shows the steps for validating a model; for an in-depth tutorial, please see Validating ModelPack.

On the train session card, expand the session details.

Training Details
Training Details

Click the "Validate" button.

Create Validation Session
Create Validation Session

Specify the name of the validation session, the model, and the dataset to validate against. The rest of the settings can be left at their defaults. Click "Start Session" at the bottom to start the validation session.

Start Validation Session
Start Validation Session

The validation session progress will appear in the "Validation" page as shown below.

Validation Progress
Validation Progress

Once completed, the session card will appear as shown below.

Completed Session
Completed Session

The validation metrics are displayed as charts, which can be viewed by clicking the validation charts button.

Validation Charts Button
Validation Charts Button
Validation Charts
Validation Charts

Deploy the Model

Once you have validated your trained model, take a look at an example of how it can be deployed on your PC by following the tutorial Deploying to the PC.

If you have an NXP i.MX 8M Plus EVK you can also run your model directly on the device using the EdgeFirst Middleware by following the tutorial Deploying to Embedded Targets.

Additional Platforms

Support for additional platforms beyond the NXP i.MX 8M Plus will be available soon. Let us know which platform you'd like to see supported next!

If you have an EdgeFirst Platform such as the Maivin or Raivin then you can deploy and run the model using the bundled EdgeFirst Middleware by following the tutorial Deploying to EdgeFirst Platforms.

No Studio Costs

Deployment of Vision models will not cost any credits from Studio.

To deploy Vision models on a Maivin, please see the ModelPack Deployment instructions.

Segmentation Sample
Preview: Segmentation Inference

To deploy Fusion models on a Raivin, please see the Fusion Deployment instructions.

Fusion Sample
Preview: Fusion Inference