Rigid Object Pose Tracking Network Based on Synthetic Data
Fund Project: National Natural Science Foundation of China (61601213)




    Abstract:

    Existing rigid-body pose estimation methods suffer from data scarcity, low robustness in complex scenes, and poor real-time performance. To address these problems, a rigid object pose tracking network based on synthetic data is proposed. A spatiotemporal feature fusion technique captures temporal and spatial information and generates spatiotemporally sensitive feature maps; residual connections are used to learn richer, more abstract, high-quality features, improving tracking accuracy; and the scarce data are augmented to generate complex synthetic data consistent with real physical properties, which are used to train the deep learning model and improve its generalization. Real-time pose tracking experiments on seven objects selected from the YCB-Video dataset show that, compared with similar methods, the proposed method estimates rigid-body poses more accurately in complex scenes and achieves the best real-time estimation efficiency.

Cite this article:

刘千山, 林雪剑, 朱枫, 李佩东. 基于合成数据的刚体目标姿态追踪网络[J]. 计算机测量与控制, 2024, 32(5): 282-289.

History
  • Received: 2024-03-15
  • Revised: 2024-03-28
  • Accepted: 2024-03-28
  • Published online: 2024-05-22