Open-source project: nnom
Repository: https://gitee.com/RT-Thread-Mirror/nnom

# Neural Network on Microcontroller (NNoM)

NNoM is a high-level inference Neural Network library designed specifically for microcontrollers.

## Highlights
The structure of NNoM is shown below. More detail is available in the Development Guide. Discussion is welcome via issues, and pull requests are welcome. QQ/TIM group: 763089399.

## Latest Updates - v0.4.x

- **Recurrent Layers (RNN) (0.4.1):** Recurrent layers (Simple RNN, GRU, LSTM) are implemented in version 0.4.1.
- **New Structured Interface (0.4.0):** NNoM provides a new layer interface called the Structured Interface.
- **Per-Channel Quantisation (0.4.0):** The new structured API supports per-channel quantisation (per-axis) and dilations for convolutional layers.
- **New Scripts (0.4.0):** From 0.4.0, NNoM uses the structured interface by default when generating the model header.

## Licenses

NNoM has been released under the Apache License 2.0 since nnom-v0.2.0. License and copyright information can be found within the code.

## Why NNoM?

NNoM aims to provide a lightweight, user-friendly and flexible interface for fast deployment on MCUs. Nowadays, neural networks are wider, deeper, and denser.
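Per-channel quantisation picks one scale per output channel instead of a single scale for the whole weight tensor, so channels with small weights keep more fractional precision. A minimal sketch of how per-channel power-of-two (Q-format) shifts could be chosen; this is illustrative only, not NNoM's actual converter code, and the function name and data layout are assumptions:

```python
import math

def per_channel_shifts(weights, bits=8):
    """Pick a power-of-two right-shift (Q-format fraction bits) per
    output channel so the channel's largest weight magnitude still
    fits into a signed `bits`-bit integer."""
    shifts = []
    for channel in weights:  # weights: one list of floats per channel
        max_abs = max(abs(w) for w in channel)
        if max_abs == 0:
            # An all-zero channel can use the maximum fraction bits.
            shifts.append(bits - 1)
            continue
        # Integer bits needed to represent the magnitude (at least 0).
        int_bits = max(0, math.ceil(math.log2(max_abs)))
        # Remaining bits (minus the sign bit) become fraction bits.
        shifts.append(bits - 1 - int_bits)
    return shifts

# Channel 0 fits in Q0.7 (7 fraction bits); channel 1 needs one
# integer bit, so it only gets 6 fraction bits.
print(per_channel_shifts([[0.5, -0.3], [2.0, 1.1]]))  # [7, 6]
```

With per-tensor quantisation, the 2.0 in the second channel would force both channels down to 6 fraction bits; per-channel scaling keeps 7 bits of precision for the first.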
Since 2014, neural network development has focused more on structural optimisation to improve efficiency and performance, which matters even more on small-footprint platforms such as MCUs. However, the available NN libraries for MCUs are too low-level, which makes them very difficult to use with these complex structures. Therefore, we built NNoM to help embedded developers deploy NN models to MCUs faster and more simply.
## Documentation

Guides: RT-Thread-MNIST example (Chinese)

## Performance

Several articles have compared NNoM with other well-known MCU AI tools such as TensorFlow Lite and STM32Cube.AI. Raphael Zingg et al. from Zurich University of Applied Sciences compared NNoM with tflite, cube, and e-AI in their paper "Artificial Intelligence on Microcontrollers" (blog: https://blog.zhaw.ch/high-performance/2020/05/14/artificial-intelligence-on-microcontrollers/). Butt Usman Ali from Politecnico di Torino made the comparison below in the thesis "On the deployment of Artificial Neural Networks (ANN) in low-cost embedded systems". Both articles show that NNoM is not only comparable with other popular NN frameworks, but often delivers faster inference and sometimes a smaller memory footprint. Note: these graphs and tables are credited to their authors; please refer to the original papers for details and copyright.

## Examples

Documented examples: please check the examples and choose one to start with.

## Available Operations
### Core Layers
### RNN Layers
### Activations

An activation can be used as a standalone layer, or attached to the previous layer as an "actail" to reduce memory cost. There is currently no structured API for activations, since they are not usually used as standalone layers.
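The memory saving from an "actail" comes from running the activation in place on the previous layer's output buffer, rather than allocating a separate output buffer for an activation layer. A hedged Python sketch of the idea (NNoM itself implements this in C; the function name here is illustrative):

```python
def relu_actail(buffer):
    """Apply ReLU in place on the previous layer's output buffer,
    so no separate activation buffer is allocated."""
    for i, v in enumerate(buffer):
        if v < 0:
            buffer[i] = 0
    return buffer

out = [-3, 1, -2, 5]   # pretend this is the previous layer's output
relu_actail(out)
print(out)             # [0, 1, 0, 5] -- same buffer, modified in place
```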
### Pooling Layers
### Matrix Operations Layers
## Dependencies

NNoM now uses the local pure-C backend implementation by default, so no special dependency is needed. However, you will need to enable an optimized backend yourself if you want one.

## Optimization

CMSIS-NN/DSP is an optimized backend for ARM Cortex-M4/7/33/35P. You can select it for up to 5x performance compared to the default C backend. NNoM will use the equivalent CMSIS-NN method when the conditions are met. Please check the Porting and Optimising Guide for details.

## Known Issues

The converter does not support implicitly defined activations. The script currently does not support a fused activation argument:

```python
x = Dense(32, activation="relu")(x)
```

Use an explicit activation layer instead:

```python
x = Dense(32)(x)
x = ReLU()(x)
```

## Tips - improving accuracy
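As an illustration of the fixed-point arithmetic that both the default C backend and CMSIS-NN rely on: after a convolution or dense layer accumulates into a wide integer, the result is scaled back to int8 by a power-of-two shift and then saturated. A sketch under those assumptions, not NNoM's or CMSIS-NN's actual code:

```python
def requantize_int8(acc, shift):
    """Scale a wide accumulator back to int8 range by an arithmetic
    right shift (power-of-two scale), then saturate to [-128, 127]."""
    v = acc >> shift  # Python's >> is an arithmetic (sign-preserving) shift
    return max(-128, min(127, v))

print(requantize_int8(40, 2))     # 10   (fits, no saturation)
print(requantize_int8(1000, 2))   # 127  (saturated high)
print(requantize_int8(-520, 2))   # -128 (saturated low)
```

Saturation rather than wraparound is what keeps quantised inference numerically stable when an accumulator occasionally overflows the int8 range.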
## Contacts

Jianjia Ma ([email protected]). You can also contact me for field support.

## Citation is required in publications

Please contact me using the details above if you have any problem. Example:

```
@software{jianjia_ma_2020_4158710,
  author    = {Jianjia Ma},
  title     = {{A higher-level Neural Network library on Microcontrollers (NNoM)}},
  month     = oct,
  year      = 2020,
  publisher = {Zenodo},
  version   = {v0.4.2},
  doi       = {10.5281/zenodo.4158710},
  url       = {https://doi.org/10.5281/zenodo.4158710}
}
```