
Track human poses in real-time with Tinker Edge T


Requirements
- Tinker Edge T
- Image version >= Tinker_Edge_T-Mendel-Chef-V1.0.0-20200221 [link]
- USB camera

We have built Google Coral PoseNet into the Tinker Edge T image. The following introduction shows how to use it.

The PoseNet we used here is posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite.
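
As a quick sanity check before starting, you can confirm that the project and its model files are present on the image. The /usr/share/project-posenet path matches the commands used later in this post; exactly where the .tflite files sit inside it is an assumption, which is why the sketch below searches for them instead of listing a fixed subdirectory.

# List the PoseNet project shipped with the image
$ ls /usr/share/project-posenet
# Locate the bundled .tflite model files (their subdirectory layout may differ)
$ find /usr/share/project-posenet -name "*.tflite"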

First of all, you need to have a USB camera connected to the Tinker Edge T, such as the following one.

[Image: iRlM5Ef.png]

Then, power on the Tinker Edge T and use the mouse to launch the terminal console by clicking the item highlighted by the red box in the following figure.

[Image: oYjyly1.png]

In the terminal console, issue the following commands to go to the directory /usr/share/project-posenet and run a simple camera example that streams the camera image through PoseNet and draws the detected pose on top as an overlay.

$ cd /usr/share/project-posenet
$ python3 pose_camera.py --videosrc=/dev/video2


The --videosrc argument specifies the device to stream the camera image from; here, the node /dev/video2 corresponds to the USB camera we just connected (if you are not sure which node your camera uses, see the sketch after the help output below). To see the full list of arguments, run the script with -h:
 

$ python3 pose_camera.py -h
usage: pose_camera.py [-h] [--mirror] [--model MODEL]
                      [--res {480x360,640x480,1280x720}] [--videosrc VIDEOSRC]
                      [--h264]

optional arguments:
  -h, --help            show this help message and exit
  --mirror              flip video horizontally (default: False)
  --model MODEL         .tflite model path. (default: None)
  --res {480x360,640x480,1280x720}
                        Resolution (default: 640x480)
  --videosrc VIDEOSRC   Which video source to use (default: /dev/video0)
  --h264                Use video/x-h264 input (default: False)
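
If you are not sure which /dev/videoN node your USB camera got, you can check it from the same terminal. The commands below are a minimal sketch: ls works out of the box, while v4l2-ctl comes from the v4l-utils package and is only assumed to be installed on the image, so skip it if it is not available.

# Show all video device nodes
$ ls /dev/video*
# Optional: list devices with their names (requires the v4l-utils package)
$ v4l2-ctl --list-devices

Pass the node that belongs to your USB camera to --videosrc, as in the example above.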


In the end, you should see something like the screenshots below, which means you have successfully run the example.

You can also switch to a different pre-trained model by following the step below. More pre-trained models are available at the following links:

https://github.com/google-coral/project-posenet/tree/master/models/resnet

https://github.com/google-coral/project-posenet/tree/master/models/mobilenet

$ python3 pose_camera.py --videosrc=/dev/video2 --model={choose the pre-trained model you want to use}
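
For example, with the MobileNet model already named at the top of this post, the full command looks like the sketch below. The relative models/mobilenet path is an assumption about how the .tflite files are laid out under /usr/share/project-posenet, so check it with ls and adjust it to the model you actually chose.

$ cd /usr/share/project-posenet
# The model path below only illustrates the syntax; replace it with the model you picked
$ python3 pose_camera.py --videosrc=/dev/video2 \
    --model=models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite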


[Image: xX9aUTr.png]

[Image: AzE1fY0.png]  [Image: eVzgGZ8.png]
For more information, please refer to https://github.com/google-coral/project-posenet.
