
A Campus Obstacle Detection Method Based on Improved YOLOv3 and Stereo Vision
DOI:
CSTR:
Author:
Affiliation:

College of Electronic Information and Automation, Civil Aviation University of China

Author biography:

Corresponding author:

CLC number:

TP391.41

Fund project:

Tianjin Science and Technology Plan Project (17ZXHLGX00120)


Hu Dandan, Zhang Lisha, Zhang Zhongting

Abstract:

To address the low accuracy and insufficient real-time performance of obstacle detection for unmanned vehicles in campus scenes, an obstacle detection method based on improved YOLOv3 (You Only Look Once) and stereo vision, YOLOv3-CAMPUS, is proposed. The structure of the Darknet-53 feature extraction network is improved to reduce forward-inference time and thereby raise detection speed, and an additional feature-fusion scale improves detection precision and target localization. The localization loss function is improved by introducing GIOU (Generalized Intersection over Union), and an improved k-means algorithm reduces the clustering deviation caused by the choice of initial cluster centers, further improving detection precision. Finally, the depth of the predicted bounding-box center is obtained from a stereo camera to determine the distance between the obstacle and the unmanned vehicle. Experimental results show that, compared with the original model, the proposed method improves average precision by 4.19% and detection speed by 5.1 fps on a mixed campus dataset (KITTI + PennFudanPed), and reaches 98.57% average precision on a self-built campus dataset (HD-Campus), satisfying real-time requirements in both cases.

Abstract:

In order to solve the problems of low accuracy and insufficient real-time performance of unmanned-vehicle obstacle detection in campus scenes, an obstacle detection method based on improved YOLOv3 (You Only Look Once) and stereo vision, YOLOv3-CAMPUS, was proposed. The forward-inference time was reduced, and detection speed correspondingly increased, by improving the structure of the Darknet-53 feature extraction network; detection accuracy and target-localization accuracy were improved by adding a feature-fusion scale. Meanwhile, the target-localization loss function was improved by using GIOU (Generalized Intersection over Union), and the enhanced k-means algorithm reduced the clustering deviation caused by the initial cluster centers, further improving detection accuracy. In addition, the depth of the predicted bounding box's center point was obtained by a stereo camera, from which the distance between the obstacle and the unmanned vehicle was measured. Experimental results show that the proposed method increases average precision by 4.19% and detection speed by 5.1 fps over the original model on the mixed campus dataset (KITTI + PennFudanPed); on the self-built campus dataset (HD-Campus), average precision reaches 98.57%, satisfying real-time requirements.
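The paper's exact loss formulation is not reproduced on this page, but the GIOU term the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the corner-format `(x1, y1, x2, y2)` box representation are assumptions:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes given as (x1, y1, x2, y2).

    GIoU = IoU - (area(C) - area(union)) / area(C), where C is the
    smallest box enclosing both inputs; it ranges over (-1, 1].
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union
    # Smallest enclosing box C.
    area_c = ((max(ax2, bx2) - min(ax1, bx1))
              * (max(ay2, by2) - min(ay1, by1)))
    return iou - (area_c - union) / area_c


def giou_loss(pred, target):
    # Used in place of a plain-IoU localization term.
    return 1.0 - giou(pred, target)
```

Unlike plain IoU, the enclosing-box penalty stays informative even when predicted and ground-truth boxes do not overlap, which is why it improves localization training.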
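The abstract's improved k-means step is not specified in detail here. One common variant, assumed below, replaces purely random initialization with k-means++-style distance-weighted seeding, using 1 − IoU between (width, height) pairs as the distance, as in standard YOLO anchor computation; all names are illustrative:

```python
import random


def iou_wh(wh, anchor):
    # IoU of two boxes aligned at a common corner (width/height only).
    inter = min(wh[0], anchor[0]) * min(wh[1], anchor[1])
    union = wh[0] * wh[1] + anchor[0] * anchor[1] - inter
    return inter / union


def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) box sizes into k anchors with distance d = 1 - IoU."""
    rng = random.Random(seed)
    centers = [list(rng.choice(boxes))]
    while len(centers) < k:
        # Seed the next center proportionally to its distance from the
        # nearest existing center (k-means++-style), which reduces the
        # clustering bias caused by unlucky initial centers.
        dists = [min(1.0 - iou_wh(b, c) for c in centers) for b in boxes]
        r = rng.uniform(0.0, sum(dists))
        acc = 0.0
        for b, d in zip(boxes, dists):
            acc += d
            if acc >= r:
                centers.append(list(b))
                break
        else:  # float round-off fallback
            centers.append(list(boxes[-1]))
    for _ in range(iters):
        # Lloyd iterations: assign each box to the most-overlapping center.
        clusters = [[] for _ in range(k)]
        for b in boxes:
            j = max(range(k), key=lambda i: iou_wh(b, centers[i]))
            clusters[j].append(b)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster goes empty
                centers[i] = [sum(w for w, _ in cl) / len(cl),
                              sum(h for _, h in cl) / len(cl)]
    return sorted(centers, key=lambda c: c[0] * c[1])
```

Anchors are returned sorted by area, ready to be split across the detection scales.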
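For the distance step, the abstract reads the stereo depth at the predicted box center. With a calibrated, rectified stereo pair this is the standard pinhole relation Z = f·B/d; working from a raw disparity map (rather than a camera SDK's depth output) and the function names below are assumptions:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth: Z = f * B / d, in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


def obstacle_distance(bbox, disparity_map, focal_px, baseline_m):
    # bbox = (x1, y1, x2, y2); sample the disparity at the box center,
    # matching the abstract's use of the predicted bounding-box center.
    cx = (bbox[0] + bbox[2]) // 2
    cy = (bbox[1] + bbox[3]) // 2
    return depth_from_disparity(disparity_map[cy][cx], focal_px, baseline_m)
```

For example, with a 700 px focal length and a 12 cm baseline, a 35 px disparity corresponds to 700 × 0.12 / 35 = 2.4 m. In practice a median over a small window around the center is more robust than a single pixel.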

Cite this article:

Hu Dandan, Zhang Lisha, Zhang Zhongting. A campus obstacle detection method based on improved YOLOv3 and stereo vision [J]. Computer Measurement & Control, 2021, 29(9): 54-60.

History
  • Received: 2021-03-01
  • Revised: 2021-03-18
  • Accepted: 2021-03-19
  • Published online: 2021-09-23
  • Publication date: