vikrant7/mobile-vod-bottleneck-lstm: Implementation of Mobile Video Object Detection with Temporally-Aware Feature Maps


Open-source project name:

vikrant7/mobile-vod-bottleneck-lstm

Open-source project URL:

https://github.com/vikrant7/mobile-vod-bottleneck-lstm

Open-source programming language:

Python 100.0%

Open-source project introduction:

Mobile Video Object Detection

Code for the Paper

Mobile Video Object Detection with Temporally-Aware Feature Maps, Mason Liu and Menglong Zhu, CVPR 2018

[link][bibtex]

Introduction

This paper introduces an online model for object detection in videos designed to run in real time on low-powered mobile and embedded devices. The proposed approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interweaved recurrent-convolutional architecture.

Additionally, the authors propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. The network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames.
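
Concretely, the network is stateful across frames: each convolutional LSTM layer carries its hidden and cell states from one frame to the next, so the current frame's feature maps are refined with temporal context. Below is a minimal sketch of that idea at inference time, where detector stands in for an SSD-plus-LSTM network whose forward pass takes and returns the LSTM states (the interface is illustrative, not this repo's actual API):

import torch

def detect_video(detector, frames):
    # states holds the hidden/cell states of every LSTM layer; None means
    # "start fresh" on the first frame of the sequence.
    states = None
    results = []
    with torch.no_grad():
        for frame in frames:                        # frame: tensor of shape (1, 3, H, W)
            boxes, scores, states = detector(frame, states)
            results.append((boxes, scores))         # states carry over to the next frame
    return results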

This approach is substantially faster than existing methods for detection in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. The model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.

Dependencies

  1. Python 3.6+
  2. OpenCV
  3. PyTorch 1.0 or 0.4+
  4. torchvision

Dataset

Download the ImageNet VID 2015 dataset from [link]. This link points to ILSVRC2017, as the link for ILSVRC2015 appears to be down now.

To get the lists of training, validation, and test data (make sure to change the dataset path in the scripts):

  • for basenet training, run the datasets/get_VID_list.py script.
  • for sequential training of the LSTM layers, run the datasets/get_VID_seqs_list.py script.

Note: the output of these scripts is already in the repo, so there is no need to run them again.

Two custom PyTorch Dataset classes are implemented in datasets/vid_dataset.py; they ingest this dataset and provide random batches or the complete data during training and validation. One class is for basenet training, while the other is for sequential training, in which the LSTM unroll length is 10 and 10 consecutive frames from a video sequence are provided as a single input. We unroll for 10 steps, as mentioned in the paper.
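
A rough sketch of what the sequential class does, assuming a flat list of frame paths from a single video (illustrative only; the real classes in datasets/vid_dataset.py handle the actual ILSVRC annotations and transforms):

import torch
from torch.utils.data import Dataset

class ToyVIDSequenceDataset(Dataset):
    # Illustrative stand-in: each item is a clip of `unroll` consecutive frames
    # plus their per-frame targets, matching the 10-step unrolling in the paper.
    def __init__(self, frame_paths, targets, unroll=10):
        self.frame_paths = frame_paths
        self.targets = targets
        self.unroll = unroll

    def __len__(self):
        return len(self.frame_paths) - self.unroll + 1

    def __getitem__(self, idx):
        frames = [self._load(p) for p in self.frame_paths[idx:idx + self.unroll]]
        return torch.stack(frames), self.targets[idx:idx + self.unroll]

    def _load(self, path):
        # Placeholder for reading the image with OpenCV and applying the SSD
        # preprocessing transform; a dummy (3, 300, 300) tensor is returned here.
        return torch.zeros(3, 300, 300)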

Training

Make sure you are in a Python 3.6+ environment with all the dependencies installed.

As described in section 4.2 of the paper, the model has two types of LSTM layers: the Bottleneck LSTM layer, which scales the number of channels down by a factor of 0.25, and the normal Conv LSTM, whose output has the same number of channels as its input.
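
As a rough illustration of the channel bookkeeping this implies (not the repo's exact code):

def lstm_out_channels(in_channels, bottleneck=True, factor=0.25):
    # A Bottleneck LSTM shrinks the feature map to a quarter of the incoming
    # channels; a normal Conv LSTM keeps the channel count unchanged.
    return int(factor * in_channels) if bottleneck else in_channels

# e.g. with width_mult = 1, MobileNet's Conv13 outputs 1024 channels,
# so a Bottleneck LSTM placed there would keep a 256-channel state:
assert lstm_out_channels(1024) == 256
assert lstm_out_channels(256, bottleneck=False) == 256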

Training of the multiple Conv LSTM layers is done in sequential order, i.e., fine-tune and then fix all the layers before the newly added LSTM layer.

Make sure to keep the batch size the same across lstm1, lstm2, lstm3, lstm4, and lstm5 training, as the sizes of the hidden and cell states of the LSTM layers must stay consistent while training. Also, make sure to keep the width multiplier the same.

By default, the GPU is used for training. The freeze_net command-line argument freezes the model as described in the paper.

Before a checkpoint is saved, the model is validated on the validation set. All checkpoint models are saved in the models directory.
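
In outline, each training script's epoch loop behaves roughly like the sketch below, written with generic SSD-style conventions rather than the scripts' literal code:

import os
import torch

def train(net, train_loader, val_loader, optimizer, criterion, num_epochs, save_dir="models"):
    os.makedirs(save_dir, exist_ok=True)
    for epoch in range(num_epochs):
        net.train()
        for images, boxes, labels in train_loader:
            optimizer.zero_grad()
            confidence, locations = net(images)                       # SSD-style heads
            loss = criterion(confidence, locations, labels, boxes)
            loss.backward()
            optimizer.step()

        # Validate before writing the checkpoint, as described above.
        net.eval()
        val_loss, batches = 0.0, 0
        with torch.no_grad():
            for images, boxes, labels in val_loader:
                confidence, locations = net(images)
                val_loss += criterion(confidence, locations, labels, boxes).item()
                batches += 1
        val_loss /= max(batches, 1)

        # Checkpoints land in the models directory, one per validated epoch.
        path = os.path.join(save_dir, f"checkpoint-epoch-{epoch}-val-{val_loss:.4f}.pth")
        torch.save(net.state_dict(), path)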

Basenet

The basenet is MobileNet V1 with SSD. Train the basenet by executing the following command:

python train_mvod_basenet.py --datasets {path to ILSVRC2015 root dir} --batch_size 60 --num_epochs 30 --width_mult 1

If you want to train with any other width multiplier, change the width_mult command-line argument accordingly.
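
For example, to train the width_mult = 0.5 variant that appears in the results table below:

python train_mvod_basenet.py --datasets {path to ILSVRC2015 root dir} --batch_size 60 --num_epochs 30 --width_mult 0.5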

For more help on command line args, execute the following command:

python train_mvod_basenet.py --help

Basenet with 1 Bottleneck LSTM

As described in section 4.2 of the paper, the first Bottleneck LSTM layer is placed after the Conv13 layer, and we freeze all layers up to and including Conv13. To train the model with one Bottleneck LSTM layer, execute the following command:

python train_mvod_lstm1.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained basenet model} --width_mult 1 --freeze_net

Refer to the script docstring and inline comments in train_mvod_lstm1.py to understand the execution.
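
Conceptually, freezing here means disabling gradients for everything up to and including Conv13 so that only the new Bottleneck LSTM and the SSD heads are updated; a minimal sketch of that pattern (the submodule name base_net is illustrative, not necessarily the script's):

def freeze_base_through_conv13(net):
    # Assumes the MobileNet layers Conv1..Conv13 live under a submodule named
    # `base_net` (hypothetical); those parameters stop receiving gradients.
    for name, param in net.named_parameters():
        if name.startswith("base_net"):
            param.requires_grad = False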

Basenet with 2 Bottleneck LSTMs

As described in section 4.2 of the paper, the second Bottleneck LSTM layer is placed after the Feature Map 1 layer, and we freeze all layers up to and including Feature Map 1. To train the model with two LSTM layers, execute the following command:

python train_mvod_lstm2.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 1} --width_mult 1 --freeze_net 

Refer to the script docstring and inline comments in train_mvod_lstm2.py to understand the execution.

Basenet with 3 Bottleneck LSTMs

As described in section 4.2 of the paper, the third Bottleneck LSTM layer is placed after the Feature Map 2 layer, and we freeze all layers up to and including Feature Map 2. To train the model with three Bottleneck LSTM layers, execute the following command:

python train_mvod_lstm3.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 2} --width_mult 1 --freeze_net

Refer to the script docstring and inline comments in train_mvod_lstm3.py to understand the execution.

Basenet with 3 Bottleneck LSTMs and 1 LSTM

As described in section 4.2 of the paper, an LSTM layer is placed after the Feature Map 3 layer, and we freeze all layers up to and including Feature Map 3. To train the model with 3 Bottleneck LSTM layers and 1 LSTM layer, execute the following command:

python train_mvod_lstm4.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 3} --width_mult 1 --freeze_net

Refer to the script docstring and inline comments in train_mvod_lstm4.py to understand the execution.

Basenet with 3 Bottleneck LSTMs and 2 LSTMs

As described in section 4.2 of the paper, the second normal LSTM layer is placed after the Feature Map 4 layer, and we freeze all layers up to and including Feature Map 4. To train the model with 3 Bottleneck LSTM layers and 2 LSTM layers, execute the following command:

python train_mvod_lstm5.py --datasets {path to ILSVRC2015 root dir} --batch_size 10 --num_epochs 30 --pretrained {path to pretrained lstm 4} --width_mult 1 --freeze_net

Refer to the script docstring and inline comments in train_mvod_lstm5.py to understand the execution.

Evaluation

The evaluation script evaluate.py reports validation accuracy (mAP). For more info, execute this command:

python evaluate.py --help
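
For reference, the mAP@0.5 figure reported below is the mean of per-class average precisions, where a detection counts as correct when its IoU with a ground-truth box is at least 0.5. A toy illustration of that final averaging step (not evaluate.py's actual code; the class names and AP values are made up):

def mean_average_precision(ap_per_class):
    # ap_per_class maps class name -> average precision computed at IoU >= 0.5
    return sum(ap_per_class.values()) / len(ap_per_class)

print(mean_average_precision({"airplane": 0.72, "bicycle": 0.48, "zebra": 0.61}))  # ~0.603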

Results

Main Results according to the paper:

Model                                 Training data        Testing data               mAP@0.5   Params (M)   MAC (B)
Bottleneck LSTM (width_mult = 1)      ImageNet VID train   ImageNet VID validation    54.4      3.24         1.13
Bottleneck LSTM (width_mult = 0.5)    ImageNet VID train   ImageNet VID validation    43.8      0.86         0.19

Reported metrics:

TODO: Train the model and report metric scores. Due to limited GPU resources and the huge size of the ImageNet VID 2015 dataset, training the model is taking a huge amount of time. I will report the metric scores here once training is done. Update: I have trained the basenet, and training of lstm1 is now in progress.

References

  1. PyTorch Docs: http://pytorch.org/docs/master
  2. PyTorch SSD: https://github.com/qfgaohao/pytorch-ssd
  3. LSTM Object Detection: https://github.com/tensorflow/models/tree/master/research/lstm_object_detection

Contributors

Thanks a lot to [Pichao Wang] for training the model and suggesting several changes.

License

BSD



