YOLOv10-MCS: A Lightweight Negative Obstacle Detection Algorithm Based on Multimodal Collaborative Sensing
DOI:
CSTR:
Author:
Affiliation:

School of Computing and Information Science, Fuzhou Institute of Technology

Author biography:

Corresponding author:

CLC number:

TP391

Fund projects:

1. General Project of the Education and Scientific Research Program for Young and Middle-aged Teachers of Fujian Province (JAT241223); 2. Key Project of the Institute-level Research Fund of Fuzhou Institute of Technology (FTKY2024012)



Abstract:

To address the significant feature ambiguity, high multi-scale missed-detection rate, and insufficient robustness to environmental interference in negative obstacle detection under complex environments, an improved multimodal collaborative sensing model, YOLOv10-MCS, is proposed on the basis of the YOLOv10 framework. In the backbone, a Receptive-Field Attention Convolution (RFAConv) module replaces the conventional convolution operations, using dynamic multi-branch receptive fields and a spatial attention mechanism to strengthen the extraction of low-contrast edge features. A Context Guided Block (CGB) performs adaptive fusion of global semantics and local details, effectively resolving missed detections of negative obstacles caused by boundary ambiguity. A Cross-Scale Feature Fusion Module (CCFM) reconstructs the neck network with a channel-unification strategy and cross-layer concatenation, improving multi-scale feature consistency while keeping the architecture lightweight, and channel-spatial recalibration by a Global Attention Mechanism (GAM) further suppresses background interference. Experimental results show that YOLOv10-MCS reaches 88.13% precision, 85.80% mean Average Precision (mAP), and a computational cost of 5.7 GFLOPs; compared with the baseline model, this corresponds to a 5.96% gain in precision, a 3.3% gain in mAP, and a 32.1% reduction in computation. By exploiting cross-modal feature interaction, YOLOv10-MCS tackles the core difficulties of negative obstacle detection, and its high-precision, lightweight design offers a new technical route for object detection in complex scenes, with practical engineering value for autonomous driving perception systems and dynamic obstacle avoidance in robotics.
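The abstract names four plug-in modules (RFAConv, CGB, CCFM, GAM) without showing how such a block is wired into the detector. As a concrete illustration, the PyTorch sketch below gives one plausible reading of the GAM-style channel-spatial recalibration used to suppress background interference. It is a minimal sketch of the published Global Attention Mechanism, not the authors' implementation; the class name GAMAttention and the reduction ratio of 4 are assumptions made here for illustration.

# Minimal, hypothetical sketch of GAM-style channel-spatial recalibration
# (not the paper's code; module name and reduction ratio are assumed).
import torch
import torch.nn as nn

class GAMAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        # Channel sub-module: an MLP applied to each spatial position's channel vector.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        # Spatial sub-module: 7x7 convolutions that squeeze and then restore the channel count.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Channel attention: sigmoid-gated MLP output reweights the channels at every location.
        attn_c = self.channel_mlp(x.permute(0, 2, 3, 1).reshape(-1, c))
        attn_c = torch.sigmoid(attn_c.reshape(b, h, w, c).permute(0, 3, 1, 2))
        x = x * attn_c
        # Spatial attention: a sigmoid-gated convolutional map suppresses background regions.
        attn_s = torch.sigmoid(self.spatial(x))
        return x * attn_s

if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)   # e.g. one neck feature map
    out = GAMAttention(64)(feat)
    print(out.shape)                    # torch.Size([1, 64, 80, 80])

Because the module preserves the input tensor shape, a block of this kind can be dropped behind a neck feature map without altering the surrounding layers.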

Cite this article:

FAN Yimei, JIANG Cunhao, FAN Xianjie, WU Caixia, ZHOU Liqiang. YOLOv10-MCS: a lightweight negative obstacle detection algorithm based on multimodal collaborative sensing[J]. Computer Measurement & Control, 2025, 33(6): 288-297.
History
  • Received: 2025-05-15
  • Last revised: 2025-05-30
  • Accepted: 2025-05-30
  • Available online: 2025-06-18
  • Published: