Getting Started
Welcome to EdgeFirst Studio! This Quickstart walks you through onboarding from start to finish.
Sign Up
If you haven't already created an EdgeFirst Studio account, start by creating one. If you already have an account but have forgotten your password, follow the password-reset link for instructions.
When creating your account, fill in the required fields (marked with an asterisk *) and then submit the form to create your account.
Create a New Account
When you first sign up to EdgeFirst Studio, an organization is automatically created for you; you can set its name in the "Organization" field during sign-up. An organization allows multiple users to access your projects. At first you will be its only member, but you can create additional users/profiles in your organization to enable collaboration across your team. You can find more information on managing your organization in the documentation.
A new organization is granted $20.00 worth of credits for trying out the features in EdgeFirst Studio. These credits are shared among the members of the organization. You can find more information in the billing details.
A verification email will be sent to the address you provided. Click the link in that email to verify your account.
Email Verification
Once the email is verified, you can log in to EdgeFirst Studio.
Log In
When logging in, enter the username and password you specified during sign-up, then click the "Sign In" button.
Login Page
Once logged in to EdgeFirst Studio, you will be greeted by the Projects page shown below.
Projects Splash Page
Create Project
From the Projects (main) page, create your first project by clicking the "New Project" button in the top-right corner of the page.
The location of the "New Project" button
Provide a name and a description for the project as shown in the example below, then click the "Create" button to create your new project.
Project Details
Your new project will appear as in the example below.
Both Projects
The next sections walk you through the end-to-end workflow: recording a video or capturing images with a phone, uploading the captured data to EdgeFirst Studio for annotation, and then training, validating, and deploying a model on various platforms.
Warning
It is recommended to use a phone connected to a Wi-Fi network. A device on mobile data may incur heavy data usage when uploading, since video and image files can be large. In the examples below, the video file was ~15 MB and each image file was ~2 MB.
Capture with a Phone
The examples below show capturing image samples of coffee cups with a phone to train a Vision model that detects coffee cups; however, you can choose any type of object for your dataset.
To capture samples of coffee cups, you can record a video as shown below. In this example, a five-second video was recorded.

You can also capture individual images of coffee cups, as shown below.

Tip
It is recommended to use videos rather than individual images, because the Automatic Ground Truth Generation (AGTG) feature leverages SAM-2 with tracking information, which needs only a single annotation to annotate all frames. Individual images, by contrast, must each be annotated separately, which requires more effort.
Create a Dataset
Once you have captured your video and some sample images for your dataset, open a web browser on your mobile device and log in to EdgeFirst Studio. Once logged in, navigate to the "Object Detection" project you created and click on the datasets button indicated in red.

This will bring you to the "Datasets" page of the selected project. Create a new dataset container by clicking the "New Dataset" button indicated in red.

Fill in the dataset and annotation container names, labels, and dataset description as indicated by the fields below. You do not have to follow the example exactly; the values are up to you. Click the "Create" button once the fields have been filled.

Your created dataset will look as follows.

Upload Videos or Images
Once the dataset container has been created, click on the dataset's extended menu (three dots) and select "Import".

This will bring you to the "Import Dataset" page.

First, import the video recording from step 1. Click the "Select an Import Type" dropdown, choose "Video", and then click "Done" as shown below.

Now that the import type is set to "Video", click on "Select File" as indicated.

On an Android device, this will bring up options for specifying the location of the files.

In this example, "My Files" was selected from the options above, then "Videos", to pinpoint the location of the recorded video.

Warning
Only one video can be imported at a time.
Once the video file has been selected, set the desired FPS (frames per second) rate, then click the "Start Import" button to begin importing the video file.
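Conceptually, the FPS setting controls how densely the video is sampled into dataset images: a lower FPS keeps fewer, more widely spaced frames. Here is a minimal sketch of that downsampling logic (an illustration only, not EdgeFirst Studio's actual importer):

```python
def frames_to_keep(video_fps: float, target_fps: float, total_frames: int) -> list[int]:
    """Return the indices of frames kept when downsampling a video
    recorded at video_fps so that roughly target_fps frames/second remain."""
    if target_fps >= video_fps:
        return list(range(total_frames))  # keep every frame
    step = video_fps / target_fps  # source frames per kept frame
    kept, next_keep = [], 0.0
    for i in range(total_frames):
        if i >= next_keep:
            kept.append(i)
            next_keep += step
    return kept

# A five-second clip recorded at 30 fps, imported at 5 fps:
print(len(frames_to_keep(30, 5, 150)))  # 25 frames
```

So a short phone clip can still yield a few dozen training images; raising the FPS yields more frames, at the cost of more near-duplicate images.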

This will start the import process. Once it completes, you should see the number of images in the dataset increase. If you do not see any changes, refresh the browser.

Next, if you captured images in step 1, import them as well. Navigate back to the "Import Dataset" page (see above).

Click on "Click to select images". This will bring up the option to specify the location of the files.

In this example, "Media Picker" was selected from the options above. Press and hold on a single image to enable multi-select, select the images you want to import, then press "Select".

Once the image files have been selected, the progress for the image import will be shown.

Once it completes, you should see the number of images in the dataset increase by the number of selected images. If you do not see any changes, refresh your browser.

Next, view the dataset gallery to confirm that all the captured data has been uploaded. You should see the imported video file and images in the gallery. Note that videos appear as sequences, with a play button overlay on the preview thumbnail.

Once all the captured data has been uploaded to the dataset container, assign groups to split the data into training and validation sets. Follow the tutorial for creating groups, partitioning 80% of the data to training and 20% to validation. The final groups should look as follows.
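If you are curious what an 80/20 group assignment amounts to, here is a minimal sketch (illustrative only; the grouping feature in EdgeFirst Studio handles this for you, and the function and field names here are hypothetical):

```python
import random

def assign_groups(sample_ids, train_frac=0.8, seed=42):
    """Randomly partition sample ids into 'train' and 'val' groups."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(sample_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return {"train": ids[:cut], "val": ids[cut:]}

groups = assign_groups(range(100))
print(len(groups["train"]), len(groups["val"]))  # 80 20
```

The split is random rather than sequential so that consecutive (and therefore visually similar) video frames do not all land in the same partition.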

Now that you have imported your captured images or videos into EdgeFirst Studio and split the data into training and validation partitions, you can start annotating your data, as described in the next section.
Annotate the Dataset
In this step, you will need a personal computer (PC) with Wi-Fi access to log in to EdgeFirst Studio for annotating the dataset. When annotating, you will use AI assistance to reduce effort by automatically generating segmentation masks and bounding boxes for the objects in each frame. Once logged in to EdgeFirst Studio, follow the Auto Annotations instructions to auto-annotate the video sequence that was imported. Otherwise, follow the Audit 2D Annotations instructions to annotate the captured images.
A complete annotation has a segmentation mask and a bounding box for each object in the frame, as shown in the example below.
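To illustrate how the two annotation types relate, the bounding box of an object is simply the tightest axis-aligned rectangle around its segmentation outline, i.e. the extremes of the outline's vertices. A small sketch with a made-up polygon (hypothetical pixel coordinates, not real EdgeFirst data):

```python
def bbox_from_polygon(points):
    """Compute the axis-aligned bounding box (x_min, y_min, x_max, y_max)
    enclosing a segmentation polygon given as (x, y) vertices."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical outline of a coffee cup in pixel coordinates:
outline = [(120, 80), (180, 85), (185, 160), (115, 155)]
print(bbox_from_polygon(outline))  # (115, 80, 185, 160)
```

This is why mask-first tools can fill in boxes automatically: once a good segmentation exists, the box falls out for free.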

Train a Vision Model
Once you have a proper dataset that is fully annotated and split into training and validation groups, you can now start training your Vision model. For instructions to train a Vision model, please refer to the Training ModelPack Tutorial.
A completed training session will look like the following figure.

Validate the Trained Model
Once the model is trained, you can validate its performance to verify that it is ready for deployment.
For instructions to validate a Vision model, please refer to the Validating ModelPack Tutorial.
Once the validation session completes, the metrics will be displayed as in the following figure.
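Detection metrics of this kind are generally built on intersection-over-union (IoU), which scores how well a predicted box overlaps a ground-truth box. A minimal sketch of the standard IoU formula (a generic illustration, not EdgeFirst Studio's internal code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
```

A prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold (often 0.5), which is what aggregate scores like mAP@0.5 summarize.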

Deploy the Model
Once you have validated your trained model, take a look at an example of how it can be deployed on your PC by following the Deploying to the PC tutorial.
If you have an NXP i.MX 8M Plus EVK you can also run your model directly on the device using the EdgeFirst Middleware by following the tutorial Deploying to Embedded Targets.
Note
Support for additional platforms beyond the NXP i.MX 8M Plus will be available soon. Let us know which platform you'd like to see supported next!
If you have an EdgeFirst Platform such as the Maivin or Raivin then you can deploy and run the model using the bundled EdgeFirst Middleware by following the tutorial Deploying to EdgeFirst Platforms.
In this Quickstart guide, you created your EdgeFirst Studio account, logged in, created your very first project, and ran your first experiment: capturing images and videos, annotating a dataset, training a Vision model, validating the trained model, and deploying it to a PC, an EdgeFirst Platform, or the i.MX 8M Plus EVK.
Next Steps
Before moving on, it is recommended to be familiar with the concepts and UI elements in EdgeFirst Studio as described in EdgeFirst Studio: Overview. From there, you are invited to follow the various User Workflows, which are tailored to different hardware requirements and the resources available to you.
Need Help?
📬 Have questions or ran into an issue?
Feel free to email our support team — we’re here to help!