Web Workflow

In this workflow, you will record a video or capture images using a mobile device, upload the captured data into EdgeFirst Studio for annotation, and then train, validate, and deploy a model from a PC. This workflow requires that you have signed up and logged in to EdgeFirst Studio and followed the initial steps described in the EdgeFirst Studio Quickstart.

Capture with a Phone

The examples below show how to record a short video and capture images of coffee cups using a phone for training a Vision model that detects coffee cups. However, you can choose any type of object for your dataset.

Data Usage

It is recommended to use a phone connected to a Wi-Fi network. A device on mobile data may incur heavy data usage when uploading files, since video and image files can be large. In the examples below, the video file was ~15 MB and the image files were ~2 MB each.
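Before uploading over mobile data, you can estimate the total transfer with simple arithmetic. The sketch below is a minimal Python example using the file sizes observed above; the image count is hypothetical.

```python
# Back-of-the-envelope upload size estimate, using the file sizes
# observed in this example (~15 MB per video, ~2 MB per image).
# The counts below are hypothetical; substitute your own.
video_mb = 15
image_mb = 2
num_videos = 1
num_images = 20

total_mb = num_videos * video_mb + num_images * image_mb
print(f"Estimated upload: {total_mb} MB")  # 1*15 + 20*2 = 55 MB
```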

Record Video

Using a smartphone, record a video of 30 seconds or more with the camera application, showing coffee cups in various orientations. Typically, the video recording is started by pressing the red circular button and stopped by pressing the same button again.

Android Mobile Video Capture

Capture Images

You can also capture individual images as shown below. Take image snapshots from the camera by pressing the white circular button.

Android Mobile Image Capture

Leveraging Videos

It is recommended to use videos rather than individual images, because the Automatic Ground Truth Generation (AGTG) feature leverages SAM-2 with tracking information, so a single annotation is enough to annotate all frames. Individual images require more effort, since each image must be annotated separately.
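To make the effort difference concrete, the sketch below contrasts the two workflows. It is purely illustrative; annotate and propagate are hypothetical placeholders, not the EdgeFirst Studio or SAM-2 API.

```python
# Purely illustrative effort comparison; annotate() and propagate() are
# hypothetical placeholders, not the EdgeFirst Studio or SAM-2 API.

def annotate(item):
    """Stand-in for a human drawing one annotation."""
    return {"item": item, "label": "coffee cup"}

def propagate(seed, items):
    """Stand-in for SAM-2-style tracking that extends one annotation."""
    return [dict(seed, item=i) for i in items]

frames = [f"frame_{i:02d}" for i in range(30)]  # one 30 second video at 1 FPS
images = [f"image_{i:02d}" for i in range(30)]  # 30 unrelated still images

# Video: one manual annotation, the rest propagated automatically.
seed = annotate(frames[0])
video_labels = [seed] + propagate(seed, frames[1:])

# Images: every image needs its own manual annotation.
image_labels = [annotate(img) for img in images]

print(len(video_labels), "frame labels from 1 manual annotation")
print(len(image_labels), "image labels from 30 manual annotations")
```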

Limited Datasets

Throughout the demos, the dataset is kept small. However, training on limited datasets results in poor model performance when the model is deployed under conditions that differ from the dataset samples. To train a more robust model, increase the amount of training data across various conditions and backgrounds.

Create Dataset

Once you have a video recording or sample images for your dataset, create a dataset container in EdgeFirst Studio to hold your video frames or images and their annotations.

Open a web browser and log in to EdgeFirst Studio. Once logged in, navigate to your project; in this case the project name is "Object Detection". Click on the "Datasets" button indicated in red below.

Object Detection Project

This will bring you to the "Datasets" page of the selected project. Create a new dataset container by clicking the "New Dataset" button that is indicated in red.

New Dataset

Fill in the dataset and annotation container names, labels, and dataset description as indicated by the fields below. The values are up to you; you do not have to strictly follow the example shown below. Click the "Create" button once the fields have been filled.

Dataset Fields

Your created dataset will look as follows.

Created Dataset

Upload Video

Video files can be uploaded into any dataset container in EdgeFirst Studio. Choose the dataset container to upload the video file into. In this case, the dataset is called "Coffee Cup". Click on the dataset context menu (three dots) and select "Import".

Dataset Import Option

This will bring you to the "Import Dataset" page.

Dataset Import

Click on the "Import Type" drop-down, select "Videos", and then click "Done" as shown below.

Dataset Video Import

Now that the import type is set to "Videos", click on "select files" as indicated.

Select Video File

On an Android device, this will bring up the option to specify the location of the files.

Android Select File Options

In my current setup, I selected "My Files" from the options above and then "Videos", which lets me pinpoint the location of the video I recorded.

Android File Manager

Once the video file has been selected, the FPS (frames per second) is set to 1 by default, but you can change it to your desired FPS; for example, a 30 second video imported at 1 FPS yields roughly 30 frames. Finally, click the "Start Import" button to start importing the video file.

Import Fields
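If you want to preview locally how many frames an import at a given FPS will produce, the following sketch performs a comparable extraction with OpenCV. This is an illustration only; the actual extraction happens server-side in EdgeFirst Studio, and the file name is hypothetical.

```python
# Extract frames from a video at a target FPS with OpenCV
# (pip install opencv-python). This approximates a 1 FPS import locally;
# EdgeFirst Studio performs the actual extraction server-side.
import cv2

video_path = "coffee_cups.mp4"   # hypothetical file name
target_fps = 1.0

cap = cv2.VideoCapture(video_path)
native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
step = max(int(round(native_fps / target_fps)), 1)

saved = 0
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:                        # keep roughly 1 frame per second
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} frames")                   # ~30 frames for a 30 second video
```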

Import Duration

Importing a 30 second video could take up to 6 minutes.

This will start the import process, and once it is completed, you should see the number of images in the dataset increase. If you do not see any changes, refresh the browser.

Imported Video

Upload Images

HEIC is not fully supported

Apple devices capture images in the HEIC format by default. This format is not yet fully supported in EdgeFirst Studio. Please make sure to use JPEG, JPG, or PNG files when uploading images to EdgeFirst Studio.
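If your photos are already in HEIC, one option is to convert them to JPEG before uploading. The sketch below uses Pillow with the pillow-heif plugin; the photos folder name is hypothetical.

```python
# Convert HEIC photos to JPEG before uploading
# (pip install pillow pillow-heif; "photos" is a hypothetical folder).
from pathlib import Path

from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # lets Pillow open .heic files

for heic in Path("photos").glob("*.heic"):
    img = Image.open(heic).convert("RGB")  # JPEG has no alpha channel
    img.save(heic.with_suffix(".jpg"), "JPEG", quality=90)
    print(f"Converted {heic.name}")
```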

Image files can be uploaded into any dataset container in EdgeFirst Studio. Choose the dataset container to upload the image files into. In this case, the dataset is called "Coffee Cup". Click on the dataset context menu (three dots) and select "Import".

Dataset Import Option

This will bring you to the "Import Dataset" page.

Dataset Import

Click on "select files". This will bring up the option to specify the location of the files.

Android Mobile Media Picker

In my current setup, I selected "Photos & Videos" from the options above and then multi-selected the images I want to import by pressing and holding on a single image to enable multi-select. To import, I pressed "Select".

Android Multi-select Images

Once the image files have been selected, the progress for the image import will be shown.

Image Import Progress

Once it completes, you should see the number of images in the dataset increase by the number of selected images. If you do not see any changes, refresh your browser.

Imported Images

Next, view the dataset gallery to confirm that all the captured data has been uploaded. You should see the imported video file and images in the gallery. Note that videos appear as sequences with a play button overlay on the preview thumbnail.

Coffee Cup Gallery

Once all the captured data has been uploaded to the dataset container, assign groups to split the data into training and validation sets. Follow the tutorial for creating groups with an 80% partition for training and a 20% partition for validation. The final outcome for the groups should look as follows.

Dataset Groups

Now that you have imported captured images or videos into EdgeFirst Studio and split the captured data into training and validation partitions, you can start annotating your data as shown in the next section.

Annotate Dataset

Now that you have a dataset in your project, you can start annotating it. This section briefly shows the steps for annotating the dataset; for an in-depth tutorial on the annotation process, please see Dataset Annotations.

To annotate a dataset, first create an annotation set on the dataset card.

Annotation Set

A new annotation set called "new-annotations" was created.

New Annotation Set

Next, open the dataset gallery by clicking on the gallery button Gallery Button on the top left of the dataset card. The dataset will contain sequences (videos) Sequences Icon and images. Click on any sequence card to start annotating sequences.

Coffee Cup Gallery

On the top navbar, switch to the correct annotation set.

Switch Annotation Set

Start the AGTG server by clicking on the "AI Segment Tool" and follow the prompts as indicated.

Auto Segment Mode

Once the AGTG server has started, go ahead and annotate the starting frame.

AGTG Initial Prompts

Once the starting frame has been annotated, go ahead and propagate the annotations throughout the rest of the frames.

Propagation Process

Once the propagation completes, click "Save Annotations" to save the propagated annotations.

Propagation Completed

Repeat these steps for all the sequences in the dataset. For individual images, the same steps apply except there is no propagation step. You can still use the AGTG feature to quickly annotate images as shown in Add 2D annotations.

Audit Dataset

After the annotation process, review each individual frame or image in the dataset that was auto-annotated. This step is also known as the audit process, which is crucial for verifying that the dataset has been properly annotated and is ready for training.

To view the dataset and the annotations, click on the dataset gallery.

Dataset Gallery

The auditing step may require adding new annotations for objects that were missed during the AGTG process, or removing annotations for objects that were improperly annotated. Lastly, for annotations that require minor adjustments, EdgeFirst Studio has features for adjusting annotations. Please click on the links provided for further instructions on each of these features.

Once you have audited the dataset and verified that it's properly annotated, split the dataset into training and validation groups.

Split Dataset

Partitioning the dataset reserves one portion of the data for training and another for validation, which is used to assess the performance of the model. In EdgeFirst Studio, the default partitions are 80% for training and 20% for validation. This operation randomly shuffles the data prior to assigning samples to the specified groups.
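Conceptually, the operation is a random shuffle followed by an 80/20 slice, along the lines of the sketch below. This is illustrative only; Studio performs the actual split for you, and the sample names are hypothetical.

```python
# Conceptual 80/20 shuffle-split, similar in spirit to what Studio does.
import random

samples = [f"image_{i:03d}.jpg" for i in range(100)]  # hypothetical sample names

random.seed(42)          # fixed seed so the example is reproducible
random.shuffle(samples)  # shuffle before partitioning

cut = int(len(samples) * 0.8)
train, val = samples[:cut], samples[cut:]
print(len(train), len(val))  # 80 20
```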

Warning

The dataset needs to be re-split whenever new sample images or frames are added to it. Newly added samples are not automatically added to any existing group.

Consider the following dataset without any groups reserved.

No Groups

To create the dataset groups, click on the "+" button in the "Groups" field.

Add Groups

This will open a new dialog to specify the percentage of samples belonging to the "Training" and "Validation" groups. By default, 80% of the samples will be dedicated to training and the remaining 20% to validation.

Groups Field

Once the groups are specified, click "Split" to create them. This will automatically divide the samples in the dataset based on the percentage specified for each group.

Dataset Groups

Train a Vision Model

Now that you have a fully annotated dataset split into training and validation samples, you can start training a Vision model. This section briefly shows the steps for training a model; for an in-depth tutorial, please see Training Vision Models.

Navigate back to the "Projects" page by clicking the Apps Menu waffle button on the top right of the navigation bar. Click the first selection to go to the "Projects" page.

Apps Menu

From the "Projects" page, click on "Model Experiments" of your project.

Model Experiments Page

Create a new experiment by clicking "New Experiment" on the top right corner. Enter the name and the description of this experiment. Click "Create New Experiment".

Model Experiments Page

Navigate to the "Training Sessions" page.

Training Sessions

Create a new training session by clicking on the "New Session" button on the top right corner.

New Session Button

Follow the settings indicated and keep the rest of the settings at their defaults. Click "Start Session" to start the training session.

Session Name

Do not include any forward slash "/" in session names, as this can result in missing model artifacts.

Start Training Session

No Datasets Available

If no datasets are visible in the dropdown (3), please refresh your browser.

The session progress will be shown as follows.

Training Session Progress

Once completed, the session card will appear as follows.

Completed Session

On the training session card, expand the session details.

Training Details

The trained models will be listed under "Artifacts".

Session Artifacts

Validate Vision Model

Now that you have trained a Vision model, you can start validating it. This section briefly shows the steps for validating a model; for an in-depth tutorial, please see Validating Vision Models.

On the training session card, expand the session details.

Training Details

Click the "Validate" button.

Create Validation Session

Specify the name of the validation session, the model, and the dataset for validation. Keep the rest of the settings at their defaults. Click "Start Session" at the bottom to start the validation session.

Start Validation Session

No Datasets Available

If no datasets are visible in the dropdown, please refresh your browser.

The validation session progress will appear in the "Validation" page as shown below.

Validation Progress

Once completed, the session card will appear as follows.

Completed Session

The validation metrics are displayed as charts, which can be found by clicking the validation charts button.

Validation Charts Button
Validation Charts

Deploy the Model

Once you have validated your trained model, take a look at examples of deploying it across different platforms. The checklist of supported devices is shown below. Validation on specific targets and live video inference applications are supported, while certain platforms are still under development.

Supported platforms (status columns: On Target Validation, Live Video, In Development):

PC / Linux
Mac / macOS
i.MX 8M Plus EVK
NVIDIA Orin
Kinara ARA-2
Raivin Radar Fusion
i.MX 95 EVK

If you wish to run validation on a device, please follow the instructions below.

Additional Platforms

Support for additional platforms beyond those listed will be available soon. Let us know which platform you'd like to see supported next!

Next Steps

Explore more features by following the Maivin Workflow.