
CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection


Motivation

In this setting, extracting radar-based features requires a precise correspondence between each radar detection and the center of its object in the image. Accurate association between radar detections and objects in the scene is therefore essential for effective feature alignment.

A simple fusion idea: to achieve this, a straightforward method maps each radar detection point onto the image plane and associates it with an object only when the projected point lies within that object's 2D bounding box.
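This naive association can be sketched as follows. The function name, intrinsic matrix, and box format are illustrative assumptions, not the paper's API; the projection is a standard pinhole model.

```python
import numpy as np

def associate_radar_to_box(radar_point_3d, K, box_2d):
    """Project a radar detection into the image with the pinhole model
    and test whether it falls inside an object's 2D bounding box.

    radar_point_3d: (x, y, z) in the camera frame (meters), illustrative.
    K:              3x3 camera intrinsic matrix.
    box_2d:         (x1, y1, x2, y2) in pixels.
    """
    p = K @ np.asarray(radar_point_3d, dtype=float)
    if p[2] <= 0:                       # point behind the camera: no match
        return False
    u, v = p[0] / p[2], p[1] / p[2]     # perspective division to pixels
    x1, y1, x2, y2 = box_2d
    return x1 <= u <= x2 and y1 <= v <= y2
```

As the paper notes, this heuristic breaks down when boxes overlap or when the radar point hits a different part of the object than its center, which motivates the frustum-based association below.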

Network Architecture


My understanding: on top of the attributes output by CenterNet, a 3D frustum is generated in the perspective view, then transformed into the BEV view and fused with the radar data.

Frustum Association


When several radar detections fall within this region of interest (ROI) frustum, the nearest one is chosen as the corresponding detection, a reasonable heuristic in driving scenarios.

This significantly reduces the number of radar detections that require association checks, since points outside the frustum are discarded. The estimated depth may still be inaccurate at this stage, because the object's depth is derived solely from image features; enlarging the frustum with an expansion parameter increases the likelihood that the corresponding radar detection falls inside it despite minor depth errors. If CenterNet misses an object or mis-estimates its box, that detection is obviously lost; the frustum only serves to refine the 3D bounding boxes of detected objects in subsequent steps.
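A minimal sketch of this selection step, assuming the candidate radar detections have already been filtered to those projecting inside the object's ROI. The additive margin `delta` stands in for the paper's frustum-expansion parameter (the exact expansion rule here is my simplification):

```python
def select_radar_detection(radar_depths, d_hat, delta=1.5):
    """Keep radar detections whose depth lies in the expanded frustum
    range [d_hat - delta, d_hat + delta], where d_hat is the
    image-based depth estimate, then return the nearest one.

    Returns None if no detection falls inside the frustum.
    """
    in_frustum = [d for d in radar_depths
                  if d_hat - delta <= d <= d_hat + delta]
    return min(in_frustum) if in_frustum else None
```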

Pillar Expansion

Every radar point is expanded to a fixed-size pillar, as shown in Fig. 4. The pillars offer a more accurate representation of the physical objects detected by the radar, since these detections now have an extent in 3D space rather than being dimensionless points.
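The expansion itself is a simple geometric operation, sketched below as an axis-aligned box around the point. The pillar dimensions and the coordinate convention (y up from the point) are illustrative assumptions, not the paper's exact values:

```python
def expand_to_pillar(radar_point, width=0.5, height=1.5):
    """Expand a dimensionless radar point into a fixed-size pillar,
    i.e. an axis-aligned 3D box centered on the point in the x/z
    ground plane and extending upward in y.

    radar_point: (x, y, z) in meters; sizes are illustrative defaults.
    Returns (min_corner, max_corner).
    """
    x, y, z = radar_point
    half_w = width / 2.0
    min_corner = (x - half_w, y, z - half_w)
    max_corner = (x + half_w, y + height, z + half_w)
    return min_corner, max_corner
```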


The radar detection points are converted directly into 3D pillars and then overlaid using the frustum. The comparison figure shows two different overlays: a simple overlay (middle) and the frustum-based association, which produces a better-structured result (bottom).

For each radar detection associated with an object, three heatmap channels are generated inside the object's 2D bounding box (as shown in Fig. 4), centered on the box. The heatmap width and height are proportional to the size of the 2D bounding box, controlled by a parameter α.

For each object associated with a radar detection, centered at the object's center point and bounded by its predicted 2D box, three heatmaps are extracted whose values are the normalized depth d and the radial velocity components vx and vy. The three-channel design allows the radar features to be fused directly with the RGB image features.
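A sketch of generating these three channels for one object. The function name, the α default, and the normalizers `d_max`/`v_max` are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def radar_feature_maps(img_h, img_w, box_2d, depth, vx, vy,
                       alpha=0.5, d_max=100.0, v_max=20.0):
    """Build the three radar feature channels (normalized depth, vx, vy)
    filled inside a region centered on the object's 2D box, whose size
    is the box size scaled by alpha."""
    feat = np.zeros((3, img_h, img_w), dtype=np.float32)
    x1, y1, x2, y2 = box_2d
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    hw, hh = alpha * (x2 - x1) / 2.0, alpha * (y2 - y1) / 2.0
    xa, xb = int(max(cx - hw, 0)), int(min(cx + hw, img_w))
    ya, yb = int(max(cy - hh, 0)), int(min(cy + hh, img_h))
    feat[0, ya:yb, xa:xb] = depth / d_max   # normalized radial depth
    feat[1, ya:yb, xa:xb] = vx / v_max      # normalized velocity component x
    feat[2, ya:yb, xa:xb] = vy / v_max      # normalized velocity component y
    return feat
```

The resulting (3, H, W) tensor has the same spatial layout as the image features, so the two can be concatenated channel-wise for the fusion head.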

CenterNet

The core of this work builds on the objects-as-points model. Unlike traditional bounding-box-based methods, it requires no prior knowledge of object shape to perform detection. Concretely, it predicts a center point for each object and then regresses the remaining attributes (such as orientation and depth) from features at that point.
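A toy decoding of such a center-point head, reading attributes at heatmap peaks. The map names, layout, and attribute set are illustrative assumptions, not CenterNet's actual output format:

```python
import numpy as np

def decode_centers(heatmap, depth_map, rot_map, k=1):
    """Take the top-k peaks of the center heatmap and read the
    regressed attributes (depth, orientation) at those locations."""
    h, w = heatmap.shape
    order = np.argsort(heatmap, axis=None)[::-1][:k]  # flat indices, best first
    dets = []
    for idx in order:
        cy, cx = divmod(int(idx), w)                  # flat index -> (row, col)
        dets.append({
            "center": (cx, cy),
            "score": float(heatmap[cy, cx]),
            "depth": float(depth_map[cy, cx]),
            "rot": float(rot_map[cy, cx]),
        })
    return dets
```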
