
YOLOv5 helmet wearing detection method based on Swin Transformer

DOI:
CSTR:
Author: Zheng Chuwei (鄭楚偉), Lin Hui (林輝)
Affiliation: School of Intelligent Engineering, Shaoguan University
Corresponding author:
CLC number: TP391
Fund project: Special Fund for the Cultivation of Scientific and Technological Innovation of Guangdong College Students (pdjh2022b0470)


Abstract:

Aiming at the problems of difficult detection of occluded targets and high false- and missed-detection rates in current helmet detection methods on construction sites, an improved YOLOv5 helmet detection method is proposed. First, the K-means++ clustering algorithm is used to redesign the prior anchor box sizes to match the helmet dataset. Second, Swin Transformer is used as the backbone network of YOLOv5 to extract features; its multi-head self-attention mechanism based on shifted windows models the dependencies between features at different spatial locations, effectively captures global context information, and provides better feature extraction capability. Third, a C3-Ghost module is proposed, which improves the C3 module of YOLOv5 based on the Ghost Bottleneck and aims to generate more valuable redundant feature maps through low-cost operations, effectively reducing model parameters and computational complexity. Fourth, a new cross-scale feature fusion module is proposed based on the structural advantages of the bidirectional feature pyramid network (BiFPN), which better adapts to object detection tasks at different scales. Experimental results show that, compared with the original YOLOv5, the improved YOLOv5 raises the mAP@0.5:0.95 metric on the helmet detection task by 2.3 percentage points, meeting the accuracy requirements for helmet wearing detection in complex construction scenarios.
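As a rough illustration of the first step, the sketch below clusters the (width, height) pairs of ground-truth helmet boxes with K-means++ initialisation to obtain nine prior anchors, three per YOLOv5 detection scale. It is a minimal Python sketch using scikit-learn; the synthetic box data and the helper name cluster_anchors are illustrative assumptions, not the authors' implementation.

# Minimal sketch of anchor redesign via K-means++ (illustrative, not the paper's code).
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(box_wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """box_wh: (N, 2) array of ground-truth box widths and heights in pixels."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(box_wh)
    anchors = km.cluster_centers_
    # YOLOv5 assigns three anchors per detection scale, ordered smallest to largest.
    return anchors[np.argsort(anchors.prod(axis=1))]

if __name__ == "__main__":
    wh = np.random.uniform(8, 256, size=(1000, 2))  # stand-in for real helmet labels
    print(cluster_anchors(wh).round(1))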

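The shifted-window self-attention that the backbone relies on can be sketched as follows. This is a simplified PyTorch illustration of the Swin mechanism only: window partitioning, window-local multi-head attention, and the cyclic shift that lets neighbouring windows exchange information. The attention mask Swin applies to shifted windows and the relative position bias are omitted, and none of this is the paper's backbone code.

# Simplified sketch of (shifted) window multi-head self-attention, Swin-style.
import torch
import torch.nn as nn

class ShiftedWindowAttention(nn.Module):
    def __init__(self, dim=96, heads=3, window=7, shift=0):
        super().__init__()
        self.window, self.shift = window, shift
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, H, W, C), H and W divisible by window
        B, H, W, C = x.shape
        if self.shift:                          # cyclic shift used in the SW-MSA block
            x = torch.roll(x, shifts=(-self.shift, -self.shift), dims=(1, 2))
        w = self.window
        # partition into non-overlapping w x w windows -> (B * num_windows, w*w, C)
        xw = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)
        out, _ = self.attn(xw, xw, xw)          # self-attention within each local window
        # reverse the window partition and the shift (Swin's shifted-window mask is omitted here)
        out = out.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        if self.shift:
            out = torch.roll(out, shifts=(self.shift, self.shift), dims=(1, 2))
        return out

x = torch.randn(1, 56, 56, 96)
y = ShiftedWindowAttention(shift=3)(x)          # shift = window // 2 in Swin
print(y.shape)                                  # torch.Size([1, 56, 56, 96])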
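The C3-Ghost module is built on the Ghost idea: a fraction of the output channels comes from an ordinary convolution and the rest from a cheap depthwise convolution, which is what lowers the parameter count and FLOPs relative to the standard C3 bottleneck. The sketch below follows the generic GhostNet structure and is only an assumption about the shape of the module, not the authors' exact C3-Ghost.

# Rough Ghost-style convolution and bottleneck (assumed structure, not the paper's code).
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_ = c_out // 2
        # ordinary convolution producing the "intrinsic" half of the channels
        self.primary = nn.Sequential(nn.Conv2d(c_in, c_, k, s, k // 2, bias=False),
                                     nn.BatchNorm2d(c_), nn.SiLU())
        # cheap depthwise 5x5 convolution producing the "ghost" half
        self.cheap = nn.Sequential(nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),
                                   nn.BatchNorm2d(c_), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class GhostBottleneck(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Sequential(GhostConv(c, c), GhostConv(c, c))

    def forward(self, x):
        return x + self.conv(x)                 # residual connection

print(GhostBottleneck(64)(torch.randn(1, 64, 80, 80)).shape)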

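The cross-scale fusion module draws on BiFPN's weighted feature fusion, in which feature maps from different levels are combined with learnable, ReLU-normalised weights rather than a plain sum or concatenation. A minimal sketch of that fusion step, assuming the inputs have already been resized to a common resolution:

# Fast normalised fusion in the style of BiFPN (illustrative sketch only).
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # one learnable weight per input level
        self.eps = eps

    def forward(self, feats):            # feats: list of tensors with identical shapes
        w = torch.relu(self.w)
        w = w / (w.sum() + self.eps)     # "fast normalised fusion" from the BiFPN paper
        return sum(wi * f for wi, f in zip(w, feats))

fuse = WeightedFusion(2)
p4 = torch.randn(1, 256, 40, 40)         # hypothetical neck features at one scale
print(fuse([p4, torch.randn_like(p4)]).shape)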
Cite this article:

Zheng Chuwei, Lin Hui. YOLOv5 helmet wearing detection method based on Swin Transformer [J]. Computer Measurement & Control, 2023, 31(3): 15-21.

History
  • Received: 2022-07-09
  • Revised: 2022-08-15
  • Accepted: 2022-08-16
  • Published online: 2023-03-15
  • Publication date: