Web-Based Workflow
In this workflow, we will record a video or capture images using a mobile device, upload the captured data into EdgeFirst Studio for annotation, and then train, validate, and deploy a model using a PC. This workflow requires that you have signed up and logged in to EdgeFirst Studio and followed the initial steps described in the EdgeFirst Studio Quickstart.
Warning
It is recommended to use a mobile device connected to a Wi-Fi network. A device on mobile data may incur heavy data usage when uploading, since video and image files can be large. In the examples below, the video file was ~15 MB and each image file was ~2 MB.
1. Capture Data Using a Mobile Device
In this workflow, we will capture samples of coffee cups to train a detection model that detects coffee cups. You can choose any type of object for your dataset, but make sure the label for each object remains consistent across all samples.
We will record a 5-second video of coffee cups as shown below.

We will also capture individual images of coffee cups as shown below.

2. Visit EdgeFirst Studio
Once you have captured your video and some sample images for your dataset on your mobile device, navigate to a web browser on your mobile device and log in to EdgeFirst Studio. Once logged in, navigate to the "Object Detection" project that was created in the EdgeFirst Studio Quickstart and click the datasets button indicated in red.

This will bring you to the Datasets page of the selected project. Create a new dataset container by clicking on "NEW DATASET".

Add the dataset and annotation container name, labels, and dataset description as indicated by the fields below. Adding a dataset description is optional, but it will be useful when accessing the dataset through the edgefirst-client API. This information can be specified by the user and does not have to strictly follow the example shown below. Click the "CREATE" button once the fields have been filled.

Your created dataset will look as follows.

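As noted above, the dataset description entered here can later be retrieved programmatically through the edgefirst-client API. The snippet below is only a rough sketch of that idea; the class and method names (`Client`, `login`, `datasets`) are assumptions for illustration and may not match the actual edgefirst-client interface, so consult the client's documentation for the exact calls.

```python
# Illustrative sketch only: the Client class, login call, and datasets()
# method are assumed names, not a verified edgefirst-client API.
from edgefirst_client import Client  # assumed import path

client = Client()                     # connect to EdgeFirst Studio
client.login("username", "password")  # credentials from your account

# List datasets in a project and print each name with its description.
for dataset in client.datasets(project="Object Detection"):  # assumed method
    print(dataset.name, "-", dataset.description)
```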
3. Upload Data to EdgeFirst Studio
Once the dataset container has been created, click the dataset's extended menu (three dots) and select "Import".

This will bring you to the "Import Dataset" page.

First, we will import the video recording from step 1. Click the "Select an Import Type" dropdown, choose "Video", and then click "Done" as shown below.

Now that the import type is set to video, click "Select File" as indicated.

On an Android device, this will bring up the option to specify the location of the files.

In my current setup, I selected "My Files" from the options above and then "Videos", which allows me to pinpoint the location of the video I recorded.

Warning
Only one video can be imported at a time.
Once the video file has been selected, set the desired FPS (frames per second) extraction rate, and then click "START IMPORT" to begin importing the video file.

This starts the import process. Once it completes, you should see the number of images in the dataset increase. If you do not see any changes, refresh the browser.

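The FPS setting controls how many frames are extracted from the video into the dataset: roughly, the number of frames is the video duration in seconds multiplied by the FPS. The sketch below illustrates that estimate for the 5-second example video; the exact count produced by the importer may differ by a frame or two.

```python
# Estimate how many images a video import will add to the dataset.
# The actual importer may round differently, so treat this as a rough guide.
def estimated_frames(duration_s: float, fps: float) -> int:
    return int(duration_s * fps)

for fps in (1, 2, 5, 10):
    print(f"5 s video at {fps} FPS -> ~{estimated_frames(5, fps)} frames")
```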
Next, we will import the images captured in step 1. Navigate back to the "Import Dataset" page (refer to the steps above).

Click on "Click to select images". This will bring up the option to specify the location of the files.

In my current setup, I selected "Media Picker" from the options above and multi-selected the images I wanted to import by pressing and holding on a single image to enable multi-select. To import, I pressed "Select".

Once the image files have been selected, the progress for the image import will be shown.

Once it completes, you should see the number of images in the dataset increase by the number of selected images. If you do not see any changes, refresh your browser.

Next, view the gallery of the dataset to confirm that all of the captured data has been uploaded. You should see the imported video and images in the gallery. Note that videos appear as sequences with a play-button overlay on the preview thumbnail.

Tip
We recommend using videos rather than individual images because Automatic Ground Truth Generation (AGTG) leverages SAM-2 with tracking information, which needs only a single annotation to annotate all frames. Individual images, in contrast, must each be annotated separately.
Once all the captured data has been uploaded to the dataset container, we will assign groups to split the data into training and validation sets. Follow the tutorial for creating groups with an 80% partition for training and a 20% partition for validation. The final outcome for the groups should look as follows.

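Conceptually, this grouping step is the same as randomly partitioning the sample IDs 80/20. The sketch below shows that split in plain Python; EdgeFirst Studio performs the equivalent assignment for you through the groups UI, so this is only an illustration of what the split means.

```python
import random

# Conceptual illustration of an 80/20 train/validation split over sample IDs.
# EdgeFirst Studio assigns these groups through its UI; this is not its code.
sample_ids = [f"image_{i:03d}" for i in range(50)]
random.shuffle(sample_ids)

split = int(len(sample_ids) * 0.8)
train_ids = sample_ids[:split]   # 80% -> training group
val_ids = sample_ids[split:]     # 20% -> validation group

print(len(train_ids), "training samples,", len(val_ids), "validation samples")
```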
Now that we have imported data into EdgeFirst Studio and split it into training and validation partitions, we can start annotating, as shown in the next section.
4. Annotate Data in EdgeFirst Studio
In this step, we will use a personal computer with internet access to log in to EdgeFirst Studio and annotate the captured data. We will use AI assistance to generate the ground truth, automatically producing segmentation masks and bounding boxes for the objects in each frame. Once logged in to EdgeFirst Studio, follow the Auto Annotations instructions to auto-annotate the imported video sequence, or follow the Audit 2D Annotations instructions to annotate the captured images.
A complete annotation will have a segmentation mask and a bounding box for each object in the frame. Shown below is an example.

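To make "segmentation mask plus bounding box" concrete, the sketch below shows one way a single annotated object could be represented in code. The field names and coordinate convention are illustrative only and are not EdgeFirst Studio's annotation or export schema.

```python
from dataclasses import dataclass, field

# Illustrative representation of a complete annotation for one object.
# Field names and conventions are examples, not EdgeFirst Studio's schema.
@dataclass
class Annotation:
    label: str                              # e.g. "coffee cup"
    box: tuple[float, float, float, float]  # normalized (x, y, w, h)
    mask: list[tuple[float, float]] = field(default_factory=list)  # polygon points

cup = Annotation(
    label="coffee cup",
    box=(0.42, 0.31, 0.18, 0.25),
    mask=[(0.42, 0.31), (0.60, 0.31), (0.60, 0.56), (0.42, 0.56)],
)
print(cup)
```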
5. Train a Model from the Annotated Data
Once you have a dataset that is fully annotated and split into training and validation groups, you can start training your Vision model. For instructions on training a Vision model, please refer to the Training ModelPack Tutorial.
A completed training session will look like the following figure.

6. Validate the Trained Model
Once the model is trained, you can validate its performance to verify whether it is ready for deployment.
For instructions on validating a Vision model, please refer to the Validating ModelPack Tutorial.
Once the validation session completes, the metrics will be displayed as shown in the following figure.

7. Deploy the Model
Once you have validated your trained model, take a look at an example of how it can be deployed on your PC by following the Deploying Vision Models tutorial.