
Tourist Plus Workflow

In this workflow, you will copy the Coffee Cup dataset from the Sample Project and use it to explore the annotation process. Once the dataset has been annotated, you can use it to train and validate a model, and then finally deploy the model on your PC.

Copy Dataset

To copy a dataset, navigate to the dataset you would like to copy. On the dataset card, select "Copy Dataset" from the dataset options as shown below.

Copy Dataset

This will open a dialog where you specify the "Destination", the location of the copied dataset. The "Source" defaults to the dataset card you selected, but you can also modify it here. In the example below, the "Source" is the original "Coffee Cup" dataset from the "Sample Project", and the copied dataset will be placed in the location specified in the "Destination" fields. By default, a new dataset container is created in the specified project. However, you can also create a dataset container before copying and specify it in the "Destination" fields.

Copy Dataset Options

Once the options are specified, click "Apply" at the bottom right to start the copy process. The copy progress will be shown on the new dataset card created in the specified destination project.

Copy Dataset Progress

Once the copy process completes, the frames and the annotations will have been copied.

Original Dataset and Copied Dataset

Annotate Dataset

Now that you have a dataset in your project, you can start annotating it. This section briefly shows the steps for annotating the dataset; for an in-depth tutorial on the annotation process, please see Dataset Annotations.

To annotate a dataset, first create an annotation set on the dataset card.

Annotation Set

A new annotation set called "new-annotations" was created.

New Annotation Set

Next, open the dataset gallery by clicking the gallery button Gallery Button at the top left of the dataset card. The dataset contains sequences (video) Sequences Icon and images. Click on any sequence card to start annotating sequences.

Coffee Cup Gallery

On the top navbar, switch to the correct annotation set (the one you just created).

Switch Annotation Set

Start the AGTG server by clicking the "AI Segment Tool" and follow the prompts as indicated.

Auto Segment Mode

Once the AGTG server has started, annotate the starting frame.

AGTG Initial Prompts

Once the starting frame has been annotated, propagate the annotations throughout the rest of the frames.

Propagation Process

Repeat these steps for all the sequences in the dataset. For individual images, the same steps apply except there is no propagation step. More details are provided in the Manual Annotations tutorial.

Train a Vision Model

Now that you have a fully annotated dataset that is split into training and validation samples, you can start training a Vision model. This section briefly shows the steps for training a model; for an in-depth tutorial, please see Training ModelPack.
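
Conceptually, the split simply assigns each annotated sample to either the training group or the validation group. The sketch below illustrates the idea with a simple random 80/20 split; the ratio, seed, and splitting method here are illustrative assumptions only, since Studio manages the split for you.

```python
# Illustrative sketch of a train/validation split (not the Studio
# implementation): each sample is randomly assigned to one of two groups.
import random

def split_samples(sample_ids, train_ratio=0.8, seed=42):
    """Return (train_ids, val_ids) using a simple random split."""
    rng = random.Random(seed)
    shuffled = sample_ids[:]
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * train_ratio)
    return shuffled[:cutoff], shuffled[cutoff:]

# Example with 10 hypothetical frame identifiers.
train, val = split_samples([f"frame_{i:03d}" for i in range(10)])
print("train:", train)
print("val:  ", val)
```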

From the "Projects" page, click on "Model Experiments" of your project.

Model Experiments Page

Create a new experiment by clicking "New Experiment" in the top right corner. Enter the name and description of this experiment. Click "Create New Experiment".

Model Experiments Page

Navigate to "Training Sessions".

Training Sessions

Create a new training session by clicking the "New Session" button in the top right corner.

New Session Button

Follow the settings indicated in red and keep the rest of the settings at their defaults. Click "Start Session" to start the training session.

Start Training Session

The session progress will be shown as in the example below.

Training Session Progress

Once completed, the session card will appear as shown below.

Completed Session

On the training session card, expand the session details.

Training Details

The trained models will be listed under "Artifacts".

Session Details Artifacts

Validate Vision Model

Now that you have trained a Vision model, you can start validating it. This section briefly shows the steps for validating a model; for an in-depth tutorial, please see Validating ModelPack.

On the training session card, expand the session details.

Training Details

Click the "Validate" button.

Create Validation Session

Specify the name of the validation session, along with the model and the dataset for validation. The rest of the settings can be kept at their defaults. Click "Start Session" at the bottom to start the validation session.

Start Validation Session

The validation session progress will appear on the "Validation" page as shown below.

Validation Progress

Once completed, the session card will appear as shown below.

Completed Session

The validation metrics are displayed as charts, which can be viewed by clicking the validation charts button.

Validation Charts Button
Validation Charts
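
As a rough illustration of the kind of metric such charts typically summarize for vision models (a conceptual sketch, not the Studio implementation), the snippet below computes the intersection-over-union (IoU) between a predicted and a ground-truth bounding box, a common building block for detection and segmentation metrics.

```python
# Minimal sketch of intersection-over-union (IoU), a metric commonly
# reported when validating vision models. Illustration only; not the
# Studio implementation.

def iou(box_a, box_b):
    """Compute IoU between two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area (zero if the boxes do not overlap).
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    # Union area = sum of areas minus the intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0

# Example: compare a hypothetical prediction against its ground truth.
ground_truth = (50, 50, 150, 150)
prediction = (60, 60, 160, 160)
print(f"IoU: {iou(ground_truth, prediction):.3f}")
```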

Deploy the Model

Once you have validated your trained model, let's take a look at an example of how this model can be deployed on your PC by following the tutorial Deploying to the PC.
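
As a minimal sketch of what running a downloaded model artifact on a PC can look like, the snippet below loads a model with the TFLite interpreter and runs a single inference pass. The export format, the filename modelpack.tflite, and the single-image input are assumptions for illustration only; follow Deploying to the PC for the supported workflow.

```python
# Minimal inference sketch for a downloaded model artifact on a PC.
# Assumptions (not specified on this page): the artifact was exported as a
# TFLite file named "modelpack.tflite" and accepts a single image tensor.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or: tf.lite.Interpreter

# Load the model and allocate its tensors.
interpreter = Interpreter(model_path="modelpack.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape and dtype.
shape = input_details[0]["shape"]   # e.g. [1, H, W, 3]
dtype = input_details[0]["dtype"]
dummy_image = np.zeros(shape, dtype=dtype)

# Run a single inference pass and print the raw output shapes.
interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()
for out in output_details:
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```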

If you have an NXP i.MX 8M Plus EVK, you can also run your model directly on the device using the EdgeFirst Middleware by following the tutorial Deploying to Embedded Targets.

Additional Platforms

Support for additional platforms beyond the NXP i.MX 8M Plus will be available soon. Let us know which platform you'd like to see supported next!

If you have an EdgeFirst Platform such as the Maivin or Raivin, then you can deploy and run the model using the bundled EdgeFirst Middleware by following the tutorial Deploying to EdgeFirst Platforms.

No Studio Costs

Deployment of Vision models will not cost any credits from Studio.

Next Steps

Explore more features by following the Web Workflow.