1996scarlet/faster-mobile-retinaface: [CVPR 2020] Reimplementation of RetinaFace ...


Open-source project name (OpenSource Name):

1996scarlet/faster-mobile-retinaface

Open-source project URL (OpenSource Url):

https://github.com/1996scarlet/faster-mobile-retinaface

Open-source language (OpenSource Language):

Python 100.0%

Open-source project introduction (OpenSource Introduction):

Face Detection @ 500-1000 FPS

[Demo image · badges: Language grade: Python, License, CVPR]

A 100% Python3 reimplementation of RetinaFace, a solid single-shot face localisation framework presented at CVPR 2020.

  • Replaced CUDA-based anchor generator functions with NumPy APIs.
  • Stored runtime anchors in a dict to avoid recomputing them (see the sketch after this list).
  • Optimized the NMS algorithm through vectorised calculation.
  • Reduced FPN layers and anchor density for middle-to-close-range detection.
  • Used low-level MXNet APIs to speed up the inference process.
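
For illustration, dict-based caching of runtime anchors might look like the minimal sketch below; the function name, cache key, and layout are hypothetical, not the repository's actual code.

import numpy as np

# Hypothetical cache: one anchor grid per (feature-map height, width, stride) key.
_anchor_cache = {}

def get_runtime_anchors(height, width, stride, base_anchors):
    """Return the anchor grid for a feature map, generating it only once."""
    key = (height, width, stride)
    if key not in _anchor_cache:
        # Shift the base anchors across every cell of the feature map.
        shift_x = np.arange(width) * stride
        shift_y = np.arange(height) * stride
        xs, ys = np.meshgrid(shift_x, shift_y)
        shifts = np.stack([xs, ys, xs, ys], axis=-1).reshape(-1, 1, 4)
        _anchor_cache[key] = (base_anchors.reshape(1, -1, 4) + shifts).reshape(-1, 4)
    return _anchor_cache[key]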

Getting Started

Requirements

  • Install GStreamer for reading videos (optional)
  • MXNet >= 1.5.0 (preferably a CUDA-based package)
  • Python >= 3.6
  • opencv-python

While not required, a CUDA-enabled GPU is highly recommended for optimal performance.
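
The repository does not pin an installation command, but a typical setup might look like the following; mxnet-cu102 is shown only as an example and should match your CUDA version (plain mxnet works for CPU-only runs):

pip3 install opencv-python
pip3 install mxnet-cu102   # example only: pick the mxnet-cuXXX build for your CUDA version, or plain mxnet for CPU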

Running for Video Files

gst-launch-1.0 -q filesrc location=$YOUR_FILE_PATH !\
  qtdemux ! h264parse ! avdec_h264 !\
  video/x-raw, width=640, height=480 ! videoconvert !\
  video/x-raw, format=BGR ! fdsink | python3 face_detector.py

Real-Time Capturing via Webcam

gst-launch-1.0 -q v4l2src device=/dev/video0 !\
  video/x-raw, width=640, height=480 ! videoconvert !\
  video/x-raw, format=BGR ! fdsink | python3 face_detector.py
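
In both commands, fdsink writes raw BGR frames to stdout, and the shell pipe feeds them into the Python process's stdin. The following is a minimal sketch of such a consumer, assuming a 640x480 BGR stream; the actual frame handling lives in the repository's face_detector.py and may differ.

import sys
import numpy as np

WIDTH, HEIGHT, CHANNELS = 640, 480, 3        # must match the caps in the gst-launch command
FRAME_SIZE = WIDTH * HEIGHT * CHANNELS       # bytes per raw BGR frame

while True:
    raw = sys.stdin.buffer.read(FRAME_SIZE)
    if len(raw) < FRAME_SIZE:                # stream ended or pipe closed
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)
    # frame is an HxWx3 BGR image, ready to be passed to the detector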

Some Tips

  • Be careful about ! and | in the pipeline.
  • Decoding the H.264 (or other format) stream on the CPU can be costly. I'd suggest using your NVIDIA GPU for decoding acceleration (see the sketch after this list). See Issues#5 and nvbugs for more details.
  • For Jetson Nano, follow Install MXNet on a Jetson to prepare your environment.
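
As a rough illustration of the GPU-decoding tip above, a hardware-decoded file pipeline on Jetson might look like the following; it assumes the nvv4l2decoder and nvvidconv elements shipped with NVIDIA's L4T GStreamer plugins and is not taken from the repository:

gst-launch-1.0 -q filesrc location=$YOUR_FILE_PATH !\
  qtdemux ! h264parse ! nvv4l2decoder !\
  nvvidconv ! video/x-raw, format=BGRx ! videoconvert !\
  video/x-raw, format=BGR ! fdsink | python3 face_detector.py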

Methods and Experiments

For middle-to-close-range face detection, appropriately removing FPN layers and reducing the density of anchors cuts down the overall computational complexity. In addition, low-level APIs are used at the preprocessing stage to bypass unnecessary format checks. During inference, runtime anchors are cached to avoid repeated calculations. Moreover, a considerable speed-up is obtained through vectorised calculation and an improved NMS algorithm at the post-processing stage.
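
As an illustration of the vectorised post-processing, below is a minimal pure-NumPy NMS sketch under the usual [x1, y1, x2, y2, score] box convention; it shows the general technique, not the repository's exact implementation.

import numpy as np

def nms(dets, threshold=0.4):
    """Greedy NMS: dets is an (N, 5) array of [x1, y1, x2, y2, score]."""
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]           # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Vectorised IoU of the top-scoring box against all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + 1) * np.maximum(0.0, yy2 - yy1 + 1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes whose overlap with the current box is low enough.
        order = order[1:][iou <= threshold]
    return keep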

Experiments were carried out on a GTX 1660 Ti with CUDA 10.2 on KDE Ubuntu 19.10.

Scale | RetinaFace | Faster RetinaFace | Speed Up
0.1   | 2.854 ms   | 2.155 ms          | 32%
0.4   | 3.481 ms   | 2.916 ms          | 19%
1.0   | 5.743 ms   | 5.413 ms          | 6.1%
2.0   | 22.351 ms  | 20.599 ms         | 8.5%

Results at several scale factors of VGA resolution show that our method yields up to a 32% speed-up. As the effective resolution increases, feature extraction accounts for a significantly larger share of the measured time, which dilutes the acceleration effect.

Platform         | Inference | Postprocess | Throughput Capacity
9750HQ + 1660 Ti | 0.9 ms    | 1.5 ms      | 500~1000 fps
Jetson Nano      | 4.6 ms    | 11.4 ms     | 80~200 fps

Theoretically, throughput capacity reaches its peak when the frame queue is large enough.

Citation

@inproceedings{deng2019retinaface,
    title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
    author={Deng, Jiankang and Guo, Jia and Zhou, Yuxiang and Yu, Jinke and Kotsia, Irene and Zafeiriou, Stefanos},
    booktitle={arxiv},
    year={2019}
}


