Deploying to Embedded Targets
This guide walks you through installing the EdgeFirst Middleware in "user-mode" on a target device. If you're using a Maivin or Raivin, refer to the Deploying to the Maivin guide instead.
Installation
Warning
This workflow is currently only supported on NXP i.MX 8M Plus EVK platforms running the NXP Yocto BSP.
The EdgeFirst Middleware can be installed on embedded targets using the Python package manager, which provides the middleware binaries and sample models. Run the following command to install the middleware.
pip3 install edgefirst
This can also be done through a Python virtual environment to keep all the packages in a user-defined location.
pip3 install virtualenv
python3 -m virtualenv venv
source venv/bin/activate
pip3 install edgefirst
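After installation you can sanity-check that the package is visible to the Python environment. The sketch below only inspects package metadata; the package name `edgefirst` comes from the install command above.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string of a pip package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("edgefirst"))
```

If this prints `None`, the package is not installed in the interpreter you are running (check that the virtual environment is activated).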
Download the Model
Next, download the model from EdgeFirst Studio onto the device. There are two methods: download the model from EdgeFirst Studio to your PC and then SCP the model file to the device, or use the EdgeFirst Client to download the model directly on the device.
As mentioned under the Training Outcomes section, the trained models can be downloaded by clicking the "View Additional Details" button on the training session card in EdgeFirst Studio.
This will open the session details, and the models are listed under the "Artifacts" tab as shown below. Click the downward arrow indicated in red to download the models to your PC. In this example, you will deploy the TFLite model on the device.
| Session Details | Artifacts |
|---|---|
| ![]() | ![]() |
Once the model is downloaded to your PC, you can SCP it to the device using this command template.
scp <path to the downloaded TFLite model> <destination path>
An example command is shown below.
scp modelpack.tflite torizon@verdin-imx8mp-15140753:~
This second method expects you to have already connected to the device via SSH. The EdgeFirst Client can be installed with `pip3 install edgefirst-client`. You can verify the installation with the `version` command.
$ edgefirst-client version
EdgeFirst Studio Server: 3.7.8-a50429e Client: 1.3.3
Next, log in to EdgeFirst Studio with the following command.
$ edgefirst-client login
Username: user
Password: ****
You can download the model on the device using the `download-artifact` command as shown below.
edgefirst-client download-artifact <session ID> <model name>
For example, `edgefirst-client download-artifact 3928 'Coffee Cup Detection-t-f58.tflite'`. This command downloads the model to the current working directory.
The `download-artifact` command expects two arguments.

- session ID: Pass the trainer or validation integer session ID associated with the models.
- model name: Pass the filename of the specific model to download to the device, for example `mymodel.tflite`.
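When scripting a deployment, the two arguments can be assembled into the client invocation programmatically. This is only a sketch: the session ID and model name are the example values from this guide, and the command is only printed by default rather than executed.

```python
import subprocess

def download_artifact(session_id, model_name, dry_run=True):
    """Build (and optionally run) the edgefirst-client download command."""
    cmd = ["edgefirst-client", "download-artifact", str(session_id), model_name]
    if dry_run:
        return cmd  # inspect the command without executing it
    subprocess.run(cmd, check=True)  # runs on the device, downloads to CWD
    return cmd

print(" ".join(download_artifact(3928, "Coffee Cup Detection-t-f58.tflite")))
```

Passing `dry_run=False` executes the command on the device, which requires the client to be installed and logged in as shown above.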
You can find more information on using the EdgeFirst Client from the command line in the EdgeFirst Client guide.
Usage
Once installed, the applications can be launched using the `edgefirst` launcher, which handles running the various services. The applications can also be run directly, for example `edgefirst-camera` to run the camera service. The launcher is an optional but convenient way to run all the applications in "user-mode", in contrast to "system-mode", where the middleware is integrated into the system and the applications are managed by systemd.
Live Camera Mode
The live camera mode is launched using the following command. By default it runs a person detection and segmentation model; if you wish to run your own model, use the `--model mymodel.tflite` parameter and provide the path to your TFLite model. Currently only ModelPack for Detection & Segmentation is supported by the launcher.
edgefirst live
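If you script the launch, for example to always point at your own model, the invocation can be assembled the same way. This is a sketch: `edgefirst live` and the `--model` parameter are the launcher usage described above, and `mymodel.tflite` is a placeholder path.

```python
import subprocess

def live_command(model=None):
    """Assemble the `edgefirst live` invocation with an optional model override."""
    cmd = ["edgefirst", "live"]
    if model:
        cmd += ["--model", model]  # path to your TFLite model
    return cmd

print(" ".join(live_command(model="mymodel.tflite")))
# On the device: subprocess.run(live_command(model="mymodel.tflite"), check=True)
```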
Note
The live camera mode uses the camera found at `/dev/video3` by default. If your camera is on another device entry, you can use the `--camera /dev/videoX` parameter to select the correct camera, replacing `X` with the correct camera index.
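To find which index your camera is on, you can list the V4L2 device nodes before choosing the `--camera` parameter. A minimal sketch (on a PC without cameras this list may be empty):

```python
import glob

def video_devices():
    """Return the V4L2 device nodes present on the system, sorted by name."""
    return sorted(glob.glob("/dev/video*"))

print(video_devices())
```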
Once launched, you can access the live camera view and model overlays by pointing your browser to your device's IP address using `https://MY_DEVICE_IP`. You'll see something like the following.
If you are testing a model trained on a few images collected with a phone, you will likely notice poor performance. With small datasets it is especially important to include samples captured with the target device's camera, which is fortunately easy to do from the web interface: a toggle button in the top-right of the screen starts an MCAP recording.
You can also access the MCAP recordings from the working directory where the `edgefirst live` command was run.
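For example, to locate the most recent recording in the working directory before copying it to your PC, the following sketch can be used (it assumes recordings use the `.mcap` extension, as described above):

```python
import glob
import os

def latest_recording(directory="."):
    """Return the path of the newest .mcap file in directory, or None if there are none."""
    recordings = glob.glob(os.path.join(directory, "*.mcap"))
    return max(recordings, key=os.path.getmtime) if recordings else None

print(latest_recording())
```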
Once you have downloaded your MCAP recording to your PC, you can upload it to EdgeFirst Studio to annotate and further train your model. Uploading MCAP recordings is done through the Snapshot Dashboard.
Next Steps
In this tutorial, you fetched the trained and validated model from EdgeFirst Studio, copied it to the EVK, and ran the model live using the EdgeFirst Live application and the EVK's camera.
See our developer guide for examples to query the model outputs using Rust or Python.
For more examples on deploying ModelPack on other platforms, see Model Deployment.