Journal of Henan Agricultural Sciences ›› 2022, Vol. 51 ›› Issue (1): 162-170. DOI: 10.15933/j.cnki.1004-3268.2022.01.020

• Agricultural Information and Engineering · Agricultural Product Processing •

Research on Depth Model Compression Method Based on Joint Pruning for Seed Sorting

DONG Yan, LI Huanyu, LI Weijie, LI Chunlei, LIU Zhoufeng

  1. (School of Electrical and Information Engineering, Zhongyuan University of Technology, Zhengzhou 450007, Henan, China)
  • Received: 2021-07-05; Published: 2022-01-15; Online: 2022-03-18
  • Corresponding author: LI Chunlei (1979-), male, born in Zhoukou, Henan; professor, Ph.D.; mainly engaged in research on pattern recognition and computer vision. E-mail: lichunlei1979@zut.edu.cn
  • About the author: DONG Yan (1977-), female, born in Tianshui, Gansu; associate professor, M.S.; mainly engaged in research on computer vision. E-mail: dy@zut.edu.cn
  • Funding:
    Henan Joint Fund Project of the National Natural Science Foundation of China (U1804157); General Program of the National Natural Science Foundation of China (2072489, 61772576); Science and Technology Innovation Team Project of the Education Department of Henan Province (21IRTSTHN013)

Abstract: Existing deep-learning-based seed grading and sorting methods have achieved satisfactory recognition performance. However, their performance depends on the width and depth of the convolutional neural network (CNN) model, which leads to a significant increase in the number of model parameters and makes the models difficult to deploy on the resource-constrained edge devices used in practical applications. To address this issue, a depth model compression method based on joint pruning of channels and convolutional layers was proposed. In the channel pruning stage, the BN-layer parameters learned under sparsity regularization were used as the index of channel importance, so that channels could be pruned to the greatest possible extent without losing model accuracy. Then, a layer pruning method based on linear probes was proposed to compress the model while reducing memory access, thereby improving inference speed. Finally, knowledge distillation was used to transfer knowledge to the pruned network and compensate for the accuracy loss caused by pruning. The results showed that, on the red kidney bean and maize seed datasets, the proposed method reduced model computation by 86.55% and 91.55% and improved actual inference speed by 2.1 and 2.8 times, respectively, while still maintaining good recognition accuracy (97.38% and 96.56%). This provides technical support for deploying the model in practical seed sorting systems.

Key words: Seed classification, Computer vision, Deep learning, Model compression, Layer pruning, Knowledge distillation

CLC number:
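
Illustrative sketch (Python/PyTorch, assumed environment): the abstract names two standard building blocks, channel selection by BN scaling factors learned under sparsity regularization and knowledge distillation for recovering accuracy after pruning. The snippet below shows one common way these are implemented; all function names, the L1 penalty coefficient, pruning ratio, temperature, and loss weighting are assumptions for illustration, not the authors' released implementation, and the linear-probe layer-pruning step is not reproduced here.

    # Minimal sketch of BN-gamma channel importance and a distillation loss (assumptions, not the paper's code).
    import torch
    import torch.nn as nn

    def add_bn_sparsity_grad(model: nn.Module, penalty: float = 1e-4) -> None:
        """Add the subgradient of an L1 penalty on BN scaling factors (gamma).
        Call after loss.backward() and before optimizer.step() so that
        unimportant channels are driven toward zero during training."""
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.weight.grad.add_(penalty * torch.sign(m.weight.data))

    def channel_prune_threshold(model: nn.Module, prune_ratio: float) -> float:
        """Collect all BN gamma magnitudes and return the global threshold
        below which channels are treated as prunable."""
        gammas = torch.cat([m.weight.data.abs().flatten()
                            for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
        k = int(gammas.numel() * prune_ratio)
        return torch.sort(gammas)[0][k].item()

    def channel_masks(model: nn.Module, threshold: float) -> dict:
        """Per-BN-layer boolean masks of channels to keep (gamma above threshold)."""
        return {name: (m.weight.data.abs() > threshold)
                for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}

    def distillation_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.7):
        """Standard knowledge-distillation loss: softened KL term from the unpruned
        teacher plus hard cross-entropy term (T and alpha are assumed values)."""
        soft = nn.functional.kl_div(
            nn.functional.log_softmax(student_logits / T, dim=1),
            nn.functional.softmax(teacher_logits / T, dim=1),
            reduction="batchmean") * (T * T)
        hard = nn.functional.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

In a typical pipeline of this kind, add_bn_sparsity_grad() would be called at each training step after backpropagation, the masks would guide construction of a thinner network, and distillation_loss() would be used when fine-tuning the pruned student against the unpruned teacher.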