Apollo Autonomous Driving - Perception - Lidar Detection: lidar_detection

Feature Overview
- Obstacle Detection with Deep Learning
- Building Object Polygons

Core Code Walkthrough
- How to Detect Obstacles with Deep Learning
- How to Build Object Polygons

Results Showcase
- Predictions and Point Cloud Distribution Inside Objects
Feature Overview
This component receives the point cloud produced by the lidar and performs obstacle detection on it. Its purpose is to determine the type, size, and confidence of each detected object. Note that object velocity is not available at this stage; it is obtained later in the lidar_tracking module. The implementation has two main steps: first, a deep learning algorithm detects the obstacles; then a polygon model is built to describe each object's shape.
Apollo currently uses the CenterPoint deep learning algorithm for detection. Most of the heavy lifting happens at training time: the component loads the trained model and its parameter files, and once deployed, inference identifies obstacle information quickly.
To help readers follow the material, a dedicated lesson on CenterPoint training is planned. Note that when deep learning is not used, this part can be replaced by traditional detection methods such as clustering.
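For reference, below is a minimal sketch of that traditional route using PCL's Euclidean cluster extraction (the same pcl library the down-sampling step builds on). The parameter values are illustrative assumptions, not Apollo's configuration.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <vector>

// Cluster an (already ground-removed) cloud into candidate obstacles.
std::vector<pcl::PointIndices> ClusterObstacles(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.5);   // neighbors within 0.5 m join a cluster (illustrative)
  ec.setMinClusterSize(3);       // drop tiny clusters, cf. min_points_threshold below
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(clusters);          // each PointIndices is one obstacle candidate
  return clusters;
}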
Obstacle Detection with Deep Learning
The code supports four deep learning detection schemes, with CenterPoint as the default. The detection task identifies each obstacle's position, size, and confidence along with its type. The workflow consists of the following steps:
1) Parse the deep learning model and its parameter files, and initialize the model through the framework's API;
2) After receiving lidar data, preprocess it: first apply the ROI (region of interest) to keep the relevant points, then down-sample the cloud;
3) Run inference, i.e. the forward pass of the network: using the pre-loaded model parameters, it performs the recognition task and outputs the key detection results, including object class, size, and confidence score;
4) Post-process the detections, which mainly involves two sub-steps:
a) Object filtering: discard low-confidence candidates by thresholding;
b) Coordinate conversion: convert the detected object coordinates into the unified coordinate frame used downstream.
Filtering is confidence-based: any object whose score falls below the preset threshold is discarded;
Every detected object carries 7 key attributes: the center coordinates x, y, z locate the object; the extents dx, dy, dz define its spatial size; the heading angle encodes its orientation. Each object additionally carries a confidence score and its class and sub-class, and the lidar points are assigned precisely to their corresponding objects;
Objects containing fewer than 3 lidar points are removed, and non-maximum suppression further prunes the candidates to keep only high-quality objects;
Non-ground points inside the map region are collected as the data source for background objects.
Building Object Polygons
This step is comparatively simple: it post-processes the objects obtained above and implements two main functions:
- building the object's 2D polygon in the xy plane (an axis-aligned box is used as the fallback for very small clouds);
- recomputing the object's position, size, and orientation from the lidar points assigned to it.
Core Code Walkthrough
For more detail on this module, refer to the companion material, which contains a clearly annotated version with debug prints to aid understanding.
How to Detect Obstacles with Deep Learning
1) Load the model definition file and the trained parameter file, and initialize the model through the framework's API. The focus is the Init method of center_point_detection.cc, which implements the following:
- parsing the deep learning model and its parameter files;
- building and initializing the inference engine;
- configuring a down-sampling plugin for the later point cloud processing;
Tip: the AWARN log lines (prefixed autodrv-) were added for debugging; they print data not directly visible in the config files, such as the parsed blob names, to the terminal.
bool CenterPointDetection::Init(const LidarDetectorInitOptions &options) {
// Read the center_point_param.pb.txt config from the data directory
std::string config_file = GetConfigFile(options.config_path, options.config_file);
ACHECK(apollo::cyber::common::GetProtoFromFile(config_file, &model_param_));
const auto &model_info = model_param_.info();
std::string model_path = GetModelPath(model_info.name());
// Network files
// Get the paths of the model file and the weight file; both are produced by offline training
std::string proto_file = GetModelFile(model_path, model_info.proto_file().file());
std::string weight_file = GetModelFile(model_path, model_info.weight_file().file());
AWARN << "autodrv-CenterPointDetection::Init-proto_file:" << proto_file << ",weight_file:" << weight_file;
// Network input and output names
// Read the input and output blob names from the config file
input_blob_names_ = inference::GetBlobNames(model_info.inputs());
output_blob_names_ = inference::GetBlobNames(model_info.outputs());
AWARN << "autodrv-CenterPointDetection::Init-input_blob_names_:";
for (int i = 0; i < input_blob_names_.size(); i++) {
AWARN << input_blob_names_[i] << ", ";
}
AWARN << "autodrv-CenterPointDetection::Init-output_blob_names_:";
for (int i = 0; i < output_blob_names_.size(); i++) {
AWARN << output_blob_names_[i] << ", ";
}
// Create the inference engine, which runs the network's forward pass; per the config file, the framework used here is PaddlePaddle
inference_.reset(inference::CreateInferenceByName(
model_info.framework(), proto_file, weight_file, output_blob_names_, input_blob_names_));
std::map<std::string, std::vector<int>> shape_map;
inference::AddShape(&shape_map, model_info.inputs());
inference::AddShape(&shape_map, model_info.outputs());
AWARN << "autodrv-CenterPointDetection::Init-inference_->Init-begin";
// shape_map key: the blob name, e.g. the output name _generated_var_11 from the config file
// shape_map value: the shape of that blob, i.e. its dimensions
if (!inference_->Init(shape_map)) {
return false;
}
AWARN << "autodrv-CenterPointDetection::Init-inference_->Init-end";
// enable_ground_removal: false
if (model_param_.preprocess().enable_ground_removal()) {
z_min_range_ = std::max(z_min_range_, static_cast<float>(model_param_.preprocess().ground_removal_height()));
}
nms_strategy_ = model_param_.nms_strategy();
diff_class_iou_ = model_param_.diff_class_iou();
diff_class_nms_ = model_param_.diff_class_nms();
cone_score_threshold_ = model_param_.postprocess().cone_score_threshold();
ped_score_threshold_ = model_param_.postprocess().ped_score_threshold();
cyc_score_threshold_ = model_param_.postprocess().cyc_score_threshold();
small_mot_score_threshold_ = model_param_.postprocess().small_mot_score_threshold();
big_mot_score_threshold_ = model_param_.postprocess().big_mot_score_threshold();
// Create the down-sampling plugin
if (model_param_.has_plugins()) {
const auto &plugin = model_param_.plugins();
const auto &name = plugin.name();
down_sample_ = apollo::cyber::plugin_manager::PluginManager::Instance()->CreateInstance<BaseDownSample>(
ConfigUtil::GetFullClassName(name));
if (!down_sample_) {
AINFO << "Failed to find down_sample plugin: " << name << ", skipped";
return false;
}
DownSampleInitOptions option;
option.config_path = plugin.config_path();
option.config_file = plugin.config_file();
if (!down_sample_->Init(option)) {
AINFO << "Failed to init down_sample plugin: " << name << ", skipped";
return false;
}
}
// if (diff_class_nms_) {
// nms_strategy_table_ = {
// {base::ObjectType::BICYCLE, {base::ObjectType::PEDESTRIAN,
// base::ObjectType::UNKNOWN}},
// {base::ObjectType::PEDESTRIAN, {base::ObjectType::UNKNOWN}},
// {base::ObjectType::VEHICLE, {base::ObjectType::BICYCLE,
// base::ObjectType::PEDESTRIAN, base::ObjectType::UNKNOWN}}
// };
// }
AWARN << "autodrv-CenterPointDetection::Init-end";
return true;
}
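
To make the fields read in Init concrete, here is a hypothetical excerpt of center_point_param.pb.txt. The field names follow the accessors used above (info, preprocess, postprocess, plugins); the values are placeholders, not Apollo's shipped configuration.

info {
  name: "center_point_paddle"              # model directory name (placeholder)
  framework: "PaddleNet"
  proto_file { file: "centerpoint.pdmodel" }
  weight_file { file: "centerpoint.pdiparams" }
  inputs { name: "data" shape: -1 shape: 4 }
  outputs { name: "_generated_var_11" }    # bboxes
  outputs { name: "_generated_var_12" }    # scores
  outputs { name: "_generated_var_13" }    # labels
}
preprocess {
  gpu_id: 0
  num_point_feature: 4
  enable_downsample_pointcloud: true
  normalizing_factor: 1
}
postprocess {
  num_output_box_feature: 7
  ped_score_threshold: 0.25                # thresholds are placeholders
}
diff_class_nms: true
diff_class_iou: 0.25
filter_by_points: true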

The initialization of the inference engine lands in this function, which sets up the deep learning framework itself. In particular:
- predictor_ = paddle_infer::CreatePredictor(config); builds the predictor, which runs the network's forward pass to produce the detection results; this is a framework API, declared in paddle_inference_api.h.
- auto blob = std::make_shared<apollo::perception::base::Blob<float>>(shape.second); creates the Blob data blocks.
bool PaddleNet::Init(const std::map<std::string, std::vector<int>>& shapes) {
AWARN << "autodrv-PaddleNet::Init-begin";
paddle_infer::Config config;
// Pass the model file and the weight file to the framework
config.SetModel(model_file_, params_file_);
if (gpu_id_ >= 0) {
config.EnableUseGpu(MemoryPoolInitSizeMb, gpu_id_);
}
// false
// Whether to use the TensorRT inference optimizer
if (FLAGS_use_trt) {
paddle::AnalysisConfig::Precision precision;
if (FLAGS_trt_precision == 0) {
precision = paddle_infer::PrecisionType::kFloat32;
} else if (FLAGS_trt_precision == 1) {
precision = paddle_infer::PrecisionType::kInt8;
} else if (FLAGS_trt_precision == 2) {
precision = paddle_infer::PrecisionType::kHalf;
} else {
AERROR << "Tensorrt type can only support 0, 1, 2, but received is " << FLAGS_trt_precision << "\n";
return false;
}
config.EnableTensorRtEngine(1 << 30, 1, 3, precision, FLAGS_trt_use_static, FLAGS_use_calibration);
if (FLAGS_collect_shape_info) {
config.CollectShapeRangeInfo(FLAGS_dynamic_shape_file);
}
if (FLAGS_use_dynamicshape)
config.EnableTunedTensorRtDynamicShape(FLAGS_dynamic_shape_file, true);
}
config.EnableMemoryOptim();
config.SwitchIrOptim(true);
AWARN << "autodrv-PaddleNet::Init-CreatePredictor";
// Create the predictor; this is a framework API, declared in paddle_inference_api.h
predictor_ = paddle_infer::CreatePredictor(config);
if (predictor_ == nullptr) {
AWARN << "autodrv-PaddleNet::Init-predictor_ is nullptr";
return false;
}
// Check input and output
// Get the input and output tensor names from the network
auto input_names = predictor_->GetInputNames();
auto output_names = predictor_->GetOutputNames();
AWARN << "autodrv-PaddleNet::Init-input_names:";
for (int i = 0; i < input_names.size(); i++) {
AWARN << input_names[i] << ",";
}
AWARN << "autodrv-PaddleNet::Init-output_names:";
for (int i = 0; i < output_names.size(); i++) {
AWARN << output_names[i] << ",";
}
// Keep only the config-file input names that also exist in the network
std::vector<std::string> exist_names;
for (const std::string& name : input_names_) {
if (std::find(input_names.begin(), input_names.end(), name) != input_names.end()) {
exist_names.push_back(name);
}
}
input_names_ = exist_names;
// Keep only the config-file output names that also exist in the network
exist_names.clear();
for (const std::string& name : output_names_) {
if (std::find(output_names.begin(), output_names.end(), name) != output_names.end()) {
exist_names.push_back(name);
}
}
output_names_ = exist_names;
// add blobs
// Add blobs; they will hold the input and output data
for (const auto& shape : shapes) {
// Initialize a Blob with the shape of the data block
// At init time shape.second carries no actual data; the blob is reshaped again in the per-frame processing
auto blob = std::make_shared<apollo::perception::base::Blob<float>>(shape.second);
blobs_.emplace(shape.first, blob);
}
return true;
}
template <typename Dtype>
void Blob<Dtype>::Reshape(const std::vector<int>& shape) {
// shape usually has 2 elements: the number of points and the number of features per point (4: x, y, z, intensity)
CHECK_LE(shape.size(), kMaxBlobAxes);
count_ = 1;
shape_.resize(shape.size());
if (!shape_data_ || shape_data_->size() < shape.size() * sizeof(int)) {
// SyncedMemory manages memory allocation and synchronization between the CPU and the GPU
shape_data_.reset(new SyncedMemory(shape.size() * sizeof(int), use_cuda_host_malloc_));
}
int* shape_data = static_cast<int*>(shape_data_->mutable_cpu_data());
for (size_t i = 0; i < shape.size(); ++i) {
CHECK_GE(shape[i], 0);
if (count_ != 0) {
CHECK_LE(shape[i], std::numeric_limits<int>::max() / count_)
<< "blob size exceeds std::numeric_limits<int>::max()";
}
// Accumulate the total element count
count_ *= shape[i];
// Record the shape in the Blob class
shape_[i] = shape[i];
// Record the shape in the SyncedMemory buffer
shape_data[i] = shape[i];
}
// If the new count exceeds the current capacity, allocate space for the data
if (count_ > capacity_) {
capacity_ = count_;
data_.reset(new SyncedMemory(capacity_ * sizeof(Dtype), use_cuda_host_malloc_));
}
}
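
PaddleNet::Infer itself is not reproduced in this article. As a minimal sketch, a forward pass with the public Paddle Inference API (the same paddle_inference_api.h referenced above) looks roughly as follows; error handling is omitted, and the tensor layout assumes N points with 4 features.

#include "paddle_inference_api.h"

#include <vector>

// Sketch: feed an N x 4 point array to the first input, fetch the first output.
std::vector<float> RunForward(paddle_infer::Predictor *predictor,
                              const std::vector<float> &points, int num_points) {
  auto input_names = predictor->GetInputNames();
  auto input = predictor->GetInputHandle(input_names[0]);
  input->Reshape({num_points, 4});      // N points, 4 features (x, y, z, intensity)
  input->CopyFromCpu(points.data());    // copy host data into the input tensor

  predictor->Run();                     // the forward pass

  auto output_names = predictor->GetOutputNames();
  auto output = predictor->GetOutputHandle(output_names[0]);
  std::vector<int> shape = output->shape();
  int count = 1;
  for (int d : shape) count *= d;
  std::vector<float> result(count);
  output->CopyToCpu(result.data());     // copy the output back to the host
  return result;
}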

2) Run the obstacle detection, which covers all the remaining steps. In particular:
- down_sample_->Process(down_sample_options, cur_cloud_ptr_); performs the down-sampling, implemented on top of the pcl library;
- input_data_blob->Reshape(points_shape); reshapes the input Blob, allocating memory for the input data and recording its dimensions;
- inference_->Infer(); runs inference to determine each object's type and spatial extent and to compute its confidence;
- FilterDiffScore(output_bbox_blob, output_label_blob, output_score_blob, &out_detections, &out_labels, &out_scores); filters the candidates by their scores (a sketch of the idea follows the code below);
- GetObjects(frame->lidar2world_pose, out_detections, out_labels, out_scores, &frame->segmented_objects); builds the detected objects and stores them in frame->segmented_objects;
- FilterObjectsbyPoints(&frame->segmented_objects); filters and denoises the objects in frame->segmented_objects;
- FilterObjectsbyClassNMS(&frame->segmented_objects); applies non-maximum suppression across the results;
- FilterForegroundPoints(&frame->segmented_objects); extracts from the cloud the background points that satisfy the map constraints.
bool CenterPointDetection::Detect(const LidarDetectorOptions &options, LidarFrame *frame) {
// check input
if (frame == nullptr) {
AERROR << "Input null frame ptr.";
return false;
}
if (frame->cloud == nullptr) {
AERROR << "Input null frame cloud.";
return false;
}
if (frame->cloud->size() == 0) {
AERROR << "Input none points.";
return false;
}
// record input cloud and lidar frame
original_cloud_ = frame->cloud;
original_world_cloud_ = frame->world_cloud;
lidar_frame_ref_ = frame;
// check output
frame->segmented_objects.clear();
if (cudaSetDevice(model_param_.preprocess().gpu_id()) != cudaSuccess) {
AERROR << "Failed to set device to gpu " << model_param_.preprocess().gpu_id();
return false;
}
Timer timer;
int num_points;
// Initialize from the raw cloud (frame->cloud): allocate new memory and create the smart pointer cur_cloud_ptr_
cur_cloud_ptr_ = std::shared_ptr<base::PointFCloud>(new base::PointFCloud(*original_cloud_));
// enable_roi_outside_removal: false
if (model_param_.preprocess().enable_roi_outside_removal()) {
cur_cloud_ptr_->CopyPointCloud(*original_cloud_, frame->roi_indices);
}
// down sample the point cloud through filtering beams
// enable_downsample_beams: false
if (model_param_.preprocess().enable_downsample_beams()) {
base::PointFCloudPtr downsample_beams_cloud_ptr(new base::PointFCloud());
if (DownSamplePointCloudBeams(
cur_cloud_ptr_, downsample_beams_cloud_ptr, model_param_.preprocess().downsample_beams_factor())) {
cur_cloud_ptr_ = downsample_beams_cloud_ptr;
} else {
AWARN << "Down-sample beams factor must be >= 1. Cancel down-sampling."
" Current factor: "
<< model_param_.preprocess().downsample_beams_factor();
}
}
// down sample the point cloud through filtering voxel grid
// enable_downsample_pointcloud: true
// Down-sample using a voxel grid
AWARN << "autodrv-CenterPointDetection::Detect-cur_cloud_ptr_size:" << cur_cloud_ptr_->size();
if (model_param_.preprocess().enable_downsample_pointcloud()) {
DownSampleOptions down_sample_options;
ACHECK(down_sample_ != nullptr);
down_sample_->Process(down_sample_options, cur_cloud_ptr_);
}
downsample_time_ = timer.toc(true);
num_points = cur_cloud_ptr_->size();
AINFO << "num points before fusing: " << num_points;
// fuse clouds of preceding frames with current cloud
cur_cloud_ptr_->mutable_points_timestamp()->assign(cur_cloud_ptr_->size(), 0.0);
// enable_fuse_frames: false
if (model_param_.preprocess().enable_fuse_frames() && model_param_.preprocess().num_fuse_frames() > 1) {
// before fusing
while (!prev_world_clouds_.empty()
&& frame->timestamp - prev_world_clouds_.front()->get_timestamp()
> model_param_.preprocess().fuse_time_interval()) {
prev_world_clouds_.pop_front();
}
// transform current cloud to world coordinate and save to a new ptr
base::PointDCloudPtr cur_world_cloud_ptr = std::make_shared<base::PointDCloud>();
for (size_t i = 0; i < cur_cloud_ptr_->size(); ++i) {
auto &pt = cur_cloud_ptr_->at(i);
Eigen::Vector3d trans_point(pt.x, pt.y, pt.z);
trans_point = lidar_frame_ref_->lidar2world_pose * trans_point;
PointD world_point;
world_point.x = trans_point(0);
world_point.y = trans_point(1);
world_point.z = trans_point(2);
world_point.intensity = pt.intensity;
cur_world_cloud_ptr->push_back(world_point);
}
cur_world_cloud_ptr->set_timestamp(frame->timestamp);
// fusing clouds
for (auto &prev_world_cloud_ptr : prev_world_clouds_) {
num_points += prev_world_cloud_ptr->size();
}
FuseCloud(cur_cloud_ptr_, prev_world_clouds_);
// after fusing
while (static_cast<int>(prev_world_clouds_.size()) >= model_param_.preprocess().num_fuse_frames() - 1) {
prev_world_clouds_.pop_front();
}
prev_world_clouds_.emplace_back(cur_world_cloud_ptr);
}
fuse_time_ = timer.toc(true);
// shuffle points and cut off
// enable_shuffle_points: false
if (model_param_.preprocess().enable_shuffle_points()) {
num_points = std::min(num_points, model_param_.preprocess().max_num_points());
std::vector<int> point_indices = GenerateIndices(0, num_points, true);
base::PointFCloudPtr shuffle_cloud_ptr(new base::PointFCloud(*cur_cloud_ptr_, point_indices));
cur_cloud_ptr_ = shuffle_cloud_ptr;
}
shuffle_time_ = timer.toc(true);
// points_array[x, y, z, i, timestamp, ...]
// num_point_feature: 4 (coordinates and reflection intensity)
std::vector<float> points_data(num_points * model_param_.preprocess().num_point_feature());
// normalizing_factor: 1
// Copy the point cloud into the points_data buffer
CloudToArray(cur_cloud_ptr_, points_data.data(), model_param_.preprocess().normalizing_factor());
cloud_to_array_time_ = timer.toc(true);
// Look up the inference engine's input and output Blobs by the names given in the config file
auto input_data_blob = inference_->get_blob(input_blob_names_.at(0));
// _generated_var_11 stores the detected bounding boxes
auto output_bbox_blob = inference_->get_blob(output_blob_names_.at(0));
// _generated_var_12 stores the detection scores
auto output_score_blob = inference_->get_blob(output_blob_names_.at(1));
// _generated_var_13 stores the detection labels
auto output_label_blob = inference_->get_blob(output_blob_names_.at(2));
// Build points_shape with 2 elements: the number of points fed to inference and the feature count (4)
std::vector<int> points_shape{num_points, model_param_.preprocess().num_point_feature()};
AWARN << "autodrv-CenterPointDetection::Detect-num_points:" << num_points << ",points_shape:" << points_shape.size()
<< "," << points_shape[0] << "," << points_shape[1];
// Allocate memory for the input blob; Init only created the Blob object itself
input_data_blob->Reshape(points_shape);
float *data_ptr = input_data_blob->mutable_cpu_data();
// Copy the actual data into the newly allocated memory
memcpy(data_ptr, points_data.data(), points_data.size() * sizeof(float));
// Run inference, the most critical step of the whole detection algorithm
// Everything before this point is preprocessing; everything after is postprocessing
inference_->Infer();
inference_time_ = timer.toc(true);
// paddle inference
std::vector<float> out_detections;
std::vector<int64_t> out_labels;
std::vector<float> out_scores;
// FilterScore(output_bbox_blob, output_label_blob, output_score_blob,
// model_param_.postprocess().score_threshold(), &out_detections,
// &out_labels, &out_scores);
// Filter the detections by their scores
FilterDiffScore(output_bbox_blob, output_label_blob, output_score_blob, &out_detections, &out_labels, &out_scores);
// Build the objects and store them in frame->segmented_objects
// frame->lidar2world_pose: the transform from the lidar frame to the world frame
GetObjects(frame->lidar2world_pose, out_detections, out_labels, out_scores, &frame->segmented_objects);
std::stringstream sstr;
sstr << "[CenterPointDetection BeforeNMS] " << std::to_string(frame->timestamp)
<< " objs: " << frame->segmented_objects.size() << std::endl;
for (auto obj : frame->segmented_objects) {
sstr << "id = " << obj->id << ": " << obj->center(0) << ", " << obj->center(1) << ", " << obj->center(2) << ", "
<< obj->size(0) << ", " << obj->size(1) << ", " << obj->size(2) << ", " << obj->theta << ", "
<< static_cast<int>(obj->type) << ", " << obj->confidence << std::endl;
}
ADEBUG << sstr.str();
// filter_by_points: true
if (model_param_.filter_by_points()) {
// min_points_threshold: 3
// Remove objects with fewer than 3 points
FilterObjectsbyPoints(&frame->segmented_objects);
}
PERF_BLOCK("class_nms")
// diff_class_nms: true (NMS: non-maximum suppression)
if (diff_class_nms_) {
FilterObjectsbyClassNMS(&frame->segmented_objects);
}
PERF_BLOCK_END
nms_time_ = timer.toc(true);
// filter semantic if it's not unknown
// filter_by_semantic_type: false
if (model_param_.filter_by_semantic_type()) {
FilterObjectsbySemanticType(&frame->segmented_objects);
}
// Extract the points of background objects: inside the map, non-ground, and not inside any detected object
FilterForegroundPoints(&frame->segmented_objects);
// For each object, count the points that lie both inside the object and inside the map
SetPointsInROI(&frame->segmented_objects);
AWARN << "autodrv-CenterPointDetection::Detect-segmented_objects:" << frame->segmented_objects.size();
for (auto obj : frame->segmented_objects) {
double x = obj->center(0);
double y = obj->center(1);
double z = obj->center(2);
double dx = obj->size(0);
double dy = obj->size(1);
double dz = obj->size(2);
AWARN << "@segmented_objects@" << std::setprecision(10) << obj->id << ": " << obj->center(0) << ", "
<< obj->center(1) << ", " << obj->center(2) << ", " << obj->size(0) << ", " << obj->size(1) << ", "
<< obj->size(2) << ", " << obj->theta << ", " << static_cast<int>(obj->type) << "Cuboid vertex:("
<< x - 0.5 * dx << "," << y - 0.5 * dy << "," << z - 0.5 * dz << "," << x + 0.5 * dx << ","
<< y - 0.5 * dy << "," << z - 0.5 * dz << "," << x + 0.5 * dx << "," << y + 0.5 * dy << ","
<< z - 0.5 * dz << "," << x - 0.5 * dx << "," << y + 0.5 * dy << "," << z - 0.5 * dz << ","
<< x - 0.5 * dx << "," << y - 0.5 * dy << "," << z + 0.5 * dz << "," << x + 0.5 * dx << ","
<< y - 0.5 * dy << "," << z + 0.5 * dz << "," << x + 0.5 * dx << "," << y + 0.5 * dy << ","
<< z + 0.5 * dz << "," << x - 0.5 * dx << "," << y + 0.5 * dy << "," << z + 0.5 * dz;
for (int i = 0; i < obj->lidar_supplement.point_ids.size(); i++) {
int index = obj->lidar_supplement.point_ids[i];
AWARN << "@obj_cloud@:" << i << "," << std::setprecision(10) << frame->cloud->points()[index].x << ","
<< frame->cloud->points()[index].y << "," << frame->cloud->points()[index].z << ","
<< frame->cloud->points()[index].intensity;
}
}
postprocess_time_ = timer.toc(true);
AINFO << "CenterPoint: "
<< "\n"
<< "down sample: " << downsample_time_ << "\t"
<< "fuse: " << fuse_time_ << "\t"
<< "shuffle: " << shuffle_time_ << "\t"
<< "cloud_to_array: " << cloud_to_array_time_ << "\t"
<< "inference: " << inference_time_ << "\t"
<< "postprocess: " << postprocess_time_ << "\t"
<< "nms: " << nms_time_ << "\t";
std::stringstream ssstr;
ssstr << "[CenterPointDetection AfterNMS] " << std::to_string(frame->timestamp)
<< " objs: " << frame->segmented_objects.size() << std::endl;
for (auto obj : frame->segmented_objects) {
ssstr << "id = " << obj->id << ": " << obj->center(0) << ", " << obj->center(1) << ", " << obj->center(2)
<< ", " << obj->size(0) << ", " << obj->size(1) << ", " << obj->size(2) << ", " << obj->theta << ", "
<< static_cast<int>(obj->type) << std::endl;
}
ADEBUG << ssstr.str();
AWARN << "autodrv-CenterPointDetection::Detect" << ssstr.str();
return true;
}
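
The body of FilterDiffScore is not shown above. The sketch below illustrates the underlying idea, per-class score thresholding, assuming each detection is compared against the class-specific threshold loaded in Init; it is a simplification for illustration, not the exact Apollo implementation.

#include <cstdint>
#include <vector>

// Keep a detection only if its score clears the threshold of its own class.
// The mapping from label to threshold is an assumption for illustration.
void FilterByClassScore(const std::vector<float> &bbox,            // 7 floats per box
                        const std::vector<int64_t> &label,
                        const std::vector<float> &score,
                        const std::vector<float> &class_thresh,    // indexed by label
                        std::vector<float> *out_bbox,
                        std::vector<int64_t> *out_label,
                        std::vector<float> *out_score) {
  const int kBoxFeature = 7;
  for (size_t i = 0; i < score.size(); ++i) {
    if (score[i] < class_thresh[static_cast<size_t>(label[i])]) {
      continue;                                  // drop low-confidence candidates
    }
    out_bbox->insert(out_bbox->end(), bbox.begin() + i * kBoxFeature,
                     bbox.begin() + (i + 1) * kBoxFeature);
    out_label->push_back(label[i]);
    out_score->push_back(score[i]);
  }
}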

Objects are then extracted from the network predictions. Each prediction carries 7 attributes: x, y, z give the object's center; dx, dy, dz give its length, width, and height; yaw gives its heading (note the code converts it with yaw = -yaw - π/2 to match the lidar frame convention). The class and confidence are attached as well. Finally, GetBoxIndices(num_objects, detections, box_corner, box_rectangular, objects) is called to collect the point cloud inside each object.
void CenterPointDetection::GetObjects(
const Eigen::Affine3d &pose,
const std::vector<float> &detections,
const std::vector<int64_t> &labels,
const std::vector<float> &scores,
std::vector<std::shared_ptr<Object>> *objects) {
// num_output_box_feature: 7
// number of bbox attributes
int num_objects = detections.size() / model_param_.postprocess().num_output_box_feature();
objects->clear();
// Allocate the object instances
base::ObjectPool::Instance().BatchGet(num_objects, objects);
for (int i = 0; i < num_objects; ++i) {
auto &object = objects->at(i);
// Assign the object id
object->id = i;
// no velocity
// 7 attributes: x, y, z is the object center, dx, dy, dz are its length/width/height, yaw is its heading
float x = detections.at(i * model_param_.postprocess().num_output_box_feature() + 0);
float y = detections.at(i * model_param_.postprocess().num_output_box_feature() + 1);
float z = detections.at(i * model_param_.postprocess().num_output_box_feature() + 2);
float dx = detections.at(i * model_param_.postprocess().num_output_box_feature() + 4);
float dy = detections.at(i * model_param_.postprocess().num_output_box_feature() + 3);
float dz = detections.at(i * model_param_.postprocess().num_output_box_feature() + 5);
float yaw = detections.at(i * model_param_.postprocess().num_output_box_feature() + 6);
yaw = -yaw - M_PI / 2;
object->size(0) = dx;
object->size(1) = dy;
object->size(2) = dz;
object->center(0) = x;
object->center(1) = y;
object->center(2) = z;
// directions
object->theta = yaw;
object->direction[0] = cosf(yaw);
object->direction[1] = sinf(yaw);
object->direction[2] = 0;
// Mark the object's orientation as determined
object->lidar_supplement.is_orientation_ready = true;
// The object's confidence
object->confidence = scores.at(i);
// centerpoint-model original output
object->lidar_supplement.detections.resize(7);
object->lidar_supplement.detections[0] = x;
object->lidar_supplement.detections[1] = y;
object->lidar_supplement.detections[2] = z;
object->lidar_supplement.detections[3] = dx;
object->lidar_supplement.detections[4] = dy;
object->lidar_supplement.detections[5] = dz;
object->lidar_supplement.detections[6] = yaw;
// compute vertexes of bounding box and transform to world coordinate
object->lidar_supplement.on_use = true;
// 标记不是背景目标,应该属于前景目标
object->lidar_supplement.is_background = false;
// classification
// There are 6 object classes; initialize all probabilities to 0.0
object->lidar_supplement.raw_probs.push_back(
std::vector<float>(static_cast<int>(base::ObjectType::MAX_OBJECT_TYPE), 0.f));
// Record the classification method used: "CenterPointDetection"
object->lidar_supplement.raw_classification_methods.push_back(Name());
// Get the sub-class
object->sub_type = GetObjectSubType(labels.at(i));
// Map the sub-class to its parent class
object->type = base::kSubType2TypeMap.at(object->sub_type);
if (object->sub_type == base::ObjectSubType::TRAFFICCONE) {
object->type = base::ObjectType::UNKNOWN;
}
// raw_probs holds the parent-class probabilities: the classified class gets 1, all others get 0
object->lidar_supplement.raw_probs.back()[static_cast<int>(object->type)] = 1.0f;
// copy to type
// Copy the parent-class probabilities into object->type_probs
object->type_probs.assign(
object->lidar_supplement.raw_probs.back().begin(), object->lidar_supplement.raw_probs.back().end());
}
// The 4 corner vertices of each object in the xy plane
std::vector<float> box_corner(num_objects * 8);
// Axis-aligned min/max of x and y in the xy plane
std::vector<float> box_rectangular(num_objects * 4);
// Compute the 4 corners and the axis-aligned x/y extrema in the xy plane
GetBoxCorner(num_objects, detections, box_corner, box_rectangular);
// Assign the lidar points to their corresponding objects
GetBoxIndices(num_objects, detections, box_corner, box_rectangular, objects);
AINFO << "[CenterPoint] we get " << num_objects << " objs";
}
void CenterPointDetection::GetBoxIndices(
int num_objects,
const std::vector<float> &detections,
const std::vector<float> &box_corner,
const std::vector<float> &box_rectangular,
std::vector<std::shared_ptr<Object>> *objects) {
for (size_t point_idx = 0; point_idx < original_cloud_->size(); ++point_idx) {
const auto &point = original_cloud_->at(point_idx);
// filter_ground_points: false (whether to filter out ground points)
if (model_param_.filter_ground_points()
&& original_cloud_->points_label(point_idx) == static_cast<uint8_t>(LidarPointLabel::GROUND)) {
continue;
}
float px = point.x;
float py = point.y;
float pz = point.z;
int box_count = 0;
// box_corner[i * 8 + 0] = left_up_x;
// box_corner[i * 8 + 1] = left_up_y;
// box_corner[i * 8 + 2] = right_up_x;
// box_corner[i * 8 + 3] = right_up_y;
// box_corner[i * 8 + 4] = right_down_x;
// box_corner[i * 8 + 5] = right_down_y;
// box_corner[i * 8 + 6] = left_down_x;
// box_corner[i * 8 + 7] = left_down_y;
// box_rectangular[i * 4] = std::min(std::min(std::min(left_up_x, right_up_x), right_down_x), left_down_x);
// box_rectangular[i * 4 + 1] = std::min(std::min(std::min(left_up_y, right_up_y), right_down_y), left_down_y);
// box_rectangular[i * 4 + 2] = std::max(std::max(std::max(left_up_x, right_up_x), right_down_x), left_down_x);
// box_rectangular[i * 4 + 3] = std::max(std::max(std::max(left_up_y, right_up_y), right_down_y), left_down_y);
for (int box_idx = 0; box_idx < num_objects; box_idx++) {
if (box_count >= model_param_.point2box_max_num()) {
break;
}
// Skip points outside the box's axis-aligned x/y extent
if (px < box_rectangular[box_idx * 4 + 0] || px > box_rectangular[box_idx * 4 + 2]) {
continue;
}
if (py < box_rectangular[box_idx * 4 + 1] || py > box_rectangular[box_idx * 4 + 3]) {
continue;
}
// z is the height of the bbox center; h is the height of the bbox
float z = detections[box_idx * model_param_.postprocess().num_output_box_feature() + 2];
float h = detections[box_idx * model_param_.postprocess().num_output_box_feature() + 5];
// Skip points below the bottom or above the top of the (enlarged) box
if (pz < (z - h / 2 - model_param_.postprocess().bottom_enlarge_height())
|| pz > (z + h / 2 + model_param_.postprocess().top_enlarge_height())) {
continue;
}
// Fetch the box's 4 corner points in the xy plane
float x1 = box_corner[box_idx * 8 + 0];
float x2 = box_corner[box_idx * 8 + 2];
float x3 = box_corner[box_idx * 8 + 4];
float x4 = box_corner[box_idx * 8 + 6];
float y1 = box_corner[box_idx * 8 + 1];
float y2 = box_corner[box_idx * 8 + 3];
float y3 = box_corner[box_idx * 8 + 5];
float y4 = box_corner[box_idx * 8 + 7];
// double angl1 = (px - x1) * (x2 - x1) + (py - y1) * (y2 - y1);
// double angl2 = (px - x2) * (x3 - x2) + (py - y2) * (y3 - y2);
// double angl3 = (px - x3) * (x4 - x3) + (py - y3) * (y4 - y3);
// double angl4 = (px - x4) * (x1 - x4) + (py - y4) * (y1 - y4);
// Compute the cross product of the point against each box edge
double angl1 = (px - x1) * (y2 - y1) - (py - y1) * (x2 - x1);
double angl2 = (px - x2) * (y3 - y2) - (py - y2) * (x3 - x2);
double angl3 = (px - x3) * (y4 - y3) - (py - y3) * (x4 - x3);
double angl4 = (px - x4) * (y1 - y4) - (py - y4) * (x1 - x4);
// If all cross products have the same sign, the point lies inside the box
if ((angl1 <= 0 && angl2 <= 0 && angl3 <= 0 && angl4 <= 0)
|| (angl1 >= 0 && angl2 >= 0 && angl3 >= 0 && angl4 >= 0)) {
// Get the object
auto &object = objects->at(box_idx);
// The point's coordinates in the world frame
const auto &world_point = original_world_cloud_->at(point_idx);
// The point's timestamp at detection time
const double timestamp = original_cloud_->points_timestamp(point_idx);
// The height in the lidar's own frame
const float height = original_cloud_->points_height(point_idx);
const int32_t beam_id = original_cloud_->points_beam_id(point_idx);
// The point's semantic label, e.g. ROI or GROUND
const uint8_t label = original_cloud_->points_label(point_idx);
const uint8_t semantic_label = original_cloud_->points_semantic_label(point_idx);
object->lidar_supplement.point_ids.push_back(point_idx);
// Store the in-object point in the lidar frame
object->lidar_supplement.cloud.push_back(point, timestamp, height, beam_id, label, semantic_label);
// Store the in-object point in the world frame
object->lidar_supplement.cloud_world.push_back(
world_point, timestamp, height, beam_id, label, semantic_label);
}
}
}
}

Non-maximum suppression is then applied to prune the candidates: the IoU between each candidate and every other candidate is computed, pairs with IoU above 0.25 are collected, the NMS logic is applied within each pairing, and the losing candidates are finally removed.
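The helper get_3dbox_iou is not included in this excerpt. As a simplified stand-in, the sketch below computes the IoU of two bird's-eye-view boxes treated as axis-aligned; the real helper also accounts for the boxes' rotation.

#include <algorithm>

// Axis-aligned BEV IoU of two boxes given center (cx, cy) and size (dx, dy).
// Simplified: ignores yaw, unlike the rotated-box IoU used by Apollo.
float BevIoU(float cx1, float cy1, float dx1, float dy1,
             float cx2, float cy2, float dx2, float dy2) {
  float x_overlap = std::max(0.f, std::min(cx1 + dx1 / 2, cx2 + dx2 / 2)
                                      - std::max(cx1 - dx1 / 2, cx2 - dx2 / 2));
  float y_overlap = std::max(0.f, std::min(cy1 + dy1 / 2, cy2 + dy2 / 2)
                                      - std::max(cy1 - dy1 / 2, cy2 - dy2 / 2));
  float inter = x_overlap * y_overlap;
  float uni = dx1 * dy1 + dx2 * dy2 - inter;
  return uni > 0.f ? inter / uni : 0.f;
}
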
void CenterPointDetection::FilterObjectsbyClassNMS(std::vector<std::shared_ptr<base::Object>> *objects) {
// Ordinary NMS strategy: returns the index of the object to filter out (the lower-confidence one)
auto ordinary_nms_strategy = [&](size_t i, size_t j, bool use_strategy = false) {
// return FILTER_INDEX
// Read the object types
size_t type_i = static_cast<size_t>(objects->at(i)->type);
size_t type_j = static_cast<size_t>(objects->at(j)->type);
// Pick the index with the lower confidence
size_t filter_index = (objects->at(i)->confidence < objects->at(j)->confidence) ? i : j;
// ordinary_nms_strategy: VEHICLE > BICYCLE > PEDESTRIAN > TRAFFICCONE
if (nms_strategy_ || use_strategy) {
return type_i > type_j ? j : i;
}
return filter_index;
};
// Special NMS strategy: reassigns the class probabilities of the kept object keep_index
auto special_nms_strategy = [&](size_t keep_index,
base::ObjectType keep_type,
size_t filter_index,
base::ObjectType filter_type,
float conf_buffer) {
if (nms_strategy_) {
return;
}
objects->at(keep_index)->lidar_supplement.raw_probs.clear();
objects->at(keep_index)->type_probs.clear();
objects->at(keep_index)
->lidar_supplement.raw_probs.push_back(
std::vector<float>(static_cast<int>(base::ObjectType::MAX_OBJECT_TYPE), 0.f));
// Sum of the filtered and the kept confidences
float sum_confidence = objects->at(keep_index)->confidence + objects->at(filter_index)->confidence;
// Slightly boost the confidence of the filtered low-confidence object
float filter_prob
= 1.0 * (objects->at(filter_index)->confidence + conf_buffer) / (sum_confidence + conf_buffer);
float keep_prob = 1.0 * objects->at(keep_index)->confidence / (sum_confidence + conf_buffer);
// Assign the probabilities of the kept object
objects->at(keep_index)->lidar_supplement.raw_probs.back()[static_cast<int>(filter_type)] = filter_prob;
objects->at(keep_index)->lidar_supplement.raw_probs.back()[static_cast<int>(keep_type)] = keep_prob;
// Copy the probabilities back into type_probs
objects->at(keep_index)
->type_probs.assign(
objects->at(keep_index)->lidar_supplement.raw_probs.back().begin(),
objects->at(keep_index)->lidar_supplement.raw_probs.back().end());
};
std::vector<bool> delete_array(objects->size(), false);
std::vector<bool> nms_visited(objects->size(), false);
std::vector<std::vector<size_t>> nms_pairs;
nms_pairs.resize(objects->size());
// different class nms
// Compute the IoU between this object and every other object, collecting pairs with IoU above 0.25
// An object that already joined a pair no longer acts as an anchor itself; one anchor may pair with several objects
for (size_t i = 0; i < objects->size(); i++) {
auto &obj_i = objects->at(i);
if (!obj_i || nms_visited[i]) {
continue;
}
for (size_t j = i + 1; j < objects->size(); j++) {
auto &obj_j = objects->at(j);
if (!obj_j) {
continue;
}
// diff_class_iou: 0.25
// Skip pairs whose 3D boxes barely overlap
if (get_3dbox_iou(obj_i, obj_j) <= diff_class_iou_) {
continue;
}
nms_visited[j] = true;
nms_pairs[i].push_back(j);
}
}
// do NMS
for (size_t i = 0; i < objects->size(); i++) {
if (nms_pairs[i].size() == 0) {
continue;
}
// The anchor formed exactly one pair
if (nms_pairs[i].size() == 1) {
// Index of the lower-confidence object
size_t filter_index = ordinary_nms_strategy(i, nms_pairs[i][0]);
// Index of the higher-confidence object
size_t keep_index = filter_index == i ? nms_pairs[i][0] : i;
delete_array[keep_index] = false;
delete_array[filter_index] = true;
// special-nms-strategy
// Parent types of the two objects
auto filter_type = objects->at(filter_index)->type;
auto reserve_type = objects->at(keep_index)->type;
// Only trafficcone reserve but bicycle filter -> add bicycle conf
// filter_type: the filtered object is a bicycle; reserve_type: the kept object is UNKNOWN
if (filter_type == base::ObjectType::BICYCLE && reserve_type == base::ObjectType::UNKNOWN) {
special_nms_strategy(
keep_index, base::ObjectType::UNKNOWN, filter_index, base::ObjectType::BICYCLE, 0.3);
}
// Only pedestrian reserve but bicycle filter -> add bicycle conf
// filter_type: the filtered object is a bicycle; reserve_type: the kept object is a pedestrian
if (filter_type == base::ObjectType::BICYCLE && reserve_type == base::ObjectType::PEDESTRIAN) {
special_nms_strategy(
keep_index, base::ObjectType::PEDESTRIAN, filter_index, base::ObjectType::BICYCLE, 0.2);
}
// Only trafficcone reserve but ped filter -> add ped conf
// if (filter_type == base::ObjectType::PEDESTRIAN &&
// reserve_type == base::ObjectType::UNKNOWN) {
// special_nms_strategy(keep_index, base::ObjectType::UNKNOWN,
// filter_index, base::ObjectType::PEDESTRIAN, 0.2);
// }
// Only ped reserve but trafficcone filter -> add trafficcone conf
// filter_type: the filtered object is UNKNOWN; reserve_type: the kept object is a pedestrian
if (filter_type == base::ObjectType::UNKNOWN && reserve_type == base::ObjectType::PEDESTRIAN) {
special_nms_strategy(
keep_index, base::ObjectType::PEDESTRIAN, filter_index, base::ObjectType::UNKNOWN, 0.2);
}
ADEBUG << "Only Two NMS: reserve " << keep_index << " filter " << filter_index;
}
// The anchor formed more than one pair
if (nms_pairs[i].size() >= 2) {
// car strategy
// Check whether any object among the pairs is a vehicle
bool have_car = false;
if (objects->at(i)->type == base::ObjectType::VEHICLE) {
have_car = true;
}
for (size_t k = 0; k < nms_pairs[i].size() && !have_car; k++) {
auto type = objects->at(nms_pairs[i][k])->type;
if (type == base::ObjectType::VEHICLE) {
have_car = true;
break;
}
}
// no car -> BICYCLE priorTo PEDESTRIAN priorTo TRAFFICCONE
// have car -> confidence
// A vehicle is present
if (have_car) {
auto keep_conf = objects->at(i)->confidence;
auto keep_index = i;
for (size_t k = 0; k < nms_pairs[i].size(); k++) {
// Keep the highest-confidence object; lower-confidence ones get delete_array set to true
if (objects->at(nms_pairs[i][k])->confidence > keep_conf) {
delete_array[keep_index] = true;
keep_conf = objects->at(nms_pairs[i][k])->confidence;
keep_index = nms_pairs[i][k];
} else {
delete_array[nms_pairs[i][k]] = true;
ADEBUG << "NMS filter_index: " << nms_pairs[i][k];
}
}
} else {  // No vehicle present
auto keep_type = static_cast<size_t>(objects->at(i)->type);
auto keep_index = i;
for (size_t k = 0; k < nms_pairs[i].size(); k++) {
// Keep the object whose type ranks higher
if (static_cast<size_t>(objects->at(nms_pairs[i][k])->type) > keep_type) {
delete_array[keep_index] = true;
keep_type = static_cast<size_t>(objects->at(nms_pairs[i][k])->type);
keep_index = nms_pairs[i][k];
} else {
delete_array[nms_pairs[i][k]] = true;
ADEBUG << "NMS filter_index: " << nms_pairs[i][k];
}
}
}
}
}
// Remove the objects flagged in delete_array
size_t valid_num = 0;
for (size_t i = 0; i < objects->size(); i++) {
auto &object = objects->at(i);
if (!delete_array[i]) {
objects->at(valid_num) = object;
valid_num++;
}
}
objects->resize(valid_num);
}

Identifying background points: the objects produced by the deep learning model are treated as foreground. Starting from the non-ground points inside the map, the points belonging to foreground objects are excluded; whatever remains is treated as background data. These indices are stored in lidar_frame_ref_->secondary_indices.
void CenterPointDetection::FilterForegroundPoints(std::vector<std::shared_ptr<Object>> *objects) {
CloudMask mask;
mask.Set(original_cloud_->size(), 0);
// roi_indices: indices of the points inside the map; non_ground_indices: indices of the non-ground points
// Set the mask to 1 for points that are inside the map and non-ground
mask.AddIndicesOfIndices(lidar_frame_ref_->roi_indices, lidar_frame_ref_->non_ground_indices, 1);
for (uint32_t i = 0; i < objects->size(); ++i) {
auto &object = objects->at(i);
base::PointIndices ind;
// Indices of the points lying inside this object
ind.indices = object->lidar_supplement.point_ids;
// Clear the mask for the points inside the object
mask.RemoveIndices(ind);
}
// The result is the set of points inside the map, non-ground, and not inside any object; such points are treated as background points
mask.GetValidIndices(&lidar_frame_ref_->secondary_indices);
}
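
CloudMask's implementation is not shown above. The following minimal stand-in captures the semantics this function relies on, assuming AddIndicesOfIndices treats its second index list as indices into the first list (which in turn indexes the full cloud):

#include <cstddef>
#include <vector>

// Minimal stand-in for the CloudMask usage in FilterForegroundPoints.
class SimpleCloudMask {
 public:
  void Set(size_t size, int value) { mask_.assign(size, value); }
  // indices_of_indices selects entries of `indices`, whose values index the full cloud.
  void AddIndicesOfIndices(const std::vector<int> &indices,
                           const std::vector<int> &indices_of_indices, int value) {
    for (int ii : indices_of_indices) mask_[indices[ii]] = value;
  }
  void RemoveIndices(const std::vector<int> &indices) {
    for (int i : indices) mask_[i] = 0;  // drop points claimed by foreground objects
  }
  void GetValidIndices(std::vector<int> *out) const {
    out->clear();
    for (size_t i = 0; i < mask_.size(); ++i) {
      if (mask_[i] != 0) out->push_back(static_cast<int>(i));  // surviving background points
    }
  }

 private:
  std::vector<int> mask_;
};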

How to Build Object Polygons
The objects obtained in the previous stage (frame->segmented_objects) are post-processed further, with two functions at the core:
- ComputePolygon2D(objects->at(i)); computes the object's 2D polygon in the xy plane;
- ComputePolygonSizeCenter(objects->at(i)); recomputes the object's position, size, and orientation.
bool ObjectBuilder::Build(const ObjectBuilderOptions& options, LidarFrame* frame) {
if (frame == nullptr) {
return false;
}
// Get the objects
std::vector<ObjectPtr>* objects = &(frame->segmented_objects);
for (size_t i = 0; i < objects->size(); ++i) {
if (objects->at(i)) {
// Assign an id to each detected object
objects->at(i)->id = static_cast<int>(i);
// Build the 2D polygon in the xy plane
ComputePolygon2D(objects->at(i));
// Recompute the object's center and size
ComputePolygonSizeCenter(objects->at(i));
// Set the object's detection timestamp
ComputeOtherObjectInformation(objects->at(i));
// Compute the object's height above the ground
ComputeHeightAboveGround(objects->at(i));
// need_judge_front_critical=false
if (FLAGS_need_judge_front_critical) {
JudgeFrontCritical(objects->at(i), frame->lidar2novatel_extrinsics);
}
}
}
return true;
}
void ObjectBuilder::ComputePolygon2D(ObjectPtr object) {
Eigen::Vector3f min_pt;
Eigen::Vector3f max_pt;
PointFCloud& cloud = object->lidar_supplement.cloud;
// Get the axis-aligned min/max extent in x, y, z
GetMinMax3D(cloud, &min_pt, &max_pt);
// Handle objects with fewer than 4 points
if (cloud.size() < 4u) {
SetDefaultValue(min_pt, max_pt, object);
return;
}
LinePerturbation(&cloud);
// Build a convex polygon from the points inside the object
algorithm::ConvexHull2D<PointFCloud, PolygonDType> hull;
hull.GetConvexHull(cloud, &(object->polygon));
}
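
The internals of algorithm::ConvexHull2D are not shown here. A generic way to compute a 2D convex hull is Andrew's monotone chain algorithm, sketched below; this is a textbook implementation, not Apollo's.

#include <algorithm>
#include <vector>

struct Pt { float x, y; };

// Cross product of OA x OB; a positive value means a left turn.
static float Cross(const Pt &o, const Pt &a, const Pt &b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain: returns the hull vertices in counter-clockwise order.
std::vector<Pt> ConvexHull(std::vector<Pt> pts) {
  std::sort(pts.begin(), pts.end(), [](const Pt &a, const Pt &b) {
    return a.x < b.x || (a.x == b.x && a.y < b.y);
  });
  size_t n = pts.size();
  if (n < 3) return pts;
  std::vector<Pt> hull(2 * n);
  size_t k = 0;
  for (size_t i = 0; i < n; ++i) {  // lower hull
    while (k >= 2 && Cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
    hull[k++] = pts[i];
  }
  for (size_t i = n - 1, t = k + 1; i-- > 0;) {  // upper hull
    while (k >= t && Cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
    hull[k++] = pts[i];
  }
  hull.resize(k - 1);  // the last point repeats the first, drop it
  return hull;
}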

Using the point cloud inside each object, the object's center and size are recomputed; the geometric routine that does the work is CalculateBBoxSizeCenter2DXY. It takes the object's points (object->lidar_supplement.cloud) together with the direction dir and fills in object->size and object->center. The key idea is a change of coordinates: the points are rotated into the object's own frame, where the axis-aligned box is cheap to compute, and the resulting center is then rotated back into the lidar frame.
void ObjectBuilder::ComputePolygonSizeCenter(ObjectPtr object) {
if (object->lidar_supplement.cloud.size() < 4u) {
return;
}
// dir is a unit vector; cloud holds the points inside the object
Eigen::Vector3f dir = object->direction;
// Compute the object's center and size (length/width/height)
algorithm::CalculateBBoxSizeCenter2DXY(object->lidar_supplement.cloud, dir, &(object->size), &(object->center));
if (object->lidar_supplement.is_background) {
float length = object->size(0);
float width = object->size(1);
Eigen::Matrix<float, 3, 1> ortho_dir(-object->direction(1), object->direction(0), 0.0);
if (length < width) {
object->direction = ortho_dir;
object->size(0) = width;
object->size(1) = length;
}
}
for (size_t i = 0; i < 3; ++i) {
if (object->size(i) < kEpsilonForSize) {
object->size(i) = kEpsilonForSize;
}
}
// Derive the object's heading angle
object->theta = static_cast<float>(atan2(object->direction(1), object->direction(0)));
}
void CalculateBBoxSizeCenter2DXY(
const PointCloudT &cloud, const Eigen::Vector3f &dir, Eigen::Vector3f *size,
Eigen::Vector3d *center,
float minimum_edge_length = std::numeric_limits<float>::epsilon()) {
// NOTE: direction should not be (0, 0, 1)
Eigen::Matrix3d projection;
Eigen::Vector3d dird(dir[0], dir[1], 0.0);
dird.normalize();
// For convenience, build a transform from the lidar frame to the object frame (rotation only)
projection << dird[0], dird[1], 0.0, -dird[1], dird[0], 0.0, 0.0, 0.0, 1.0;
constexpr double kDoubleMax = std::numeric_limits<double>::max();
Eigen::Vector3d min_pt(kDoubleMax, kDoubleMax, kDoubleMax);
Eigen::Vector3d max_pt(-kDoubleMax, -kDoubleMax, -kDoubleMax);
Eigen::Vector3d loc_pt(0.0, 0.0, 0.0);
for (size_t i = 0; i < cloud.size(); i++) {
// Transform the point into the object frame
loc_pt = projection * Eigen::Vector3d(cloud[i].x, cloud[i].y, cloud[i].z);
// Accumulate the axis-aligned bounding box
min_pt(0) = std::min(min_pt(0), loc_pt(0));
min_pt(1) = std::min(min_pt(1), loc_pt(1));
min_pt(2) = std::min(min_pt(2), loc_pt(2));
max_pt(0) = std::max(max_pt(0), loc_pt(0));
max_pt(1) = std::max(max_pt(1), loc_pt(1));
max_pt(2) = std::max(max_pt(2), loc_pt(2));
}
// Compute the size
(*size) = (max_pt - min_pt).cast<float>();
// Compute the center
Eigen::Vector3d coeff = (max_pt + min_pt) * 0.5;
coeff(2) = min_pt(2);
// Transform the center back into the lidar frame
*center = projection.transpose() * coeff;
constexpr float kFloatEpsilon = std::numeric_limits<float>::epsilon();
float minimum_size = std::max(minimum_edge_length, kFloatEpsilon);
(*size)(0) = (*size)(0) <= minimum_size ? minimum_size : (*size)(0);
(*size)(1) = (*size)(1) <= minimum_size ? minimum_size : (*size)(1);
(*size)(2) = (*size)(2) <= minimum_size ? minimum_size : (*size)(2);
}
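
As a quick sanity check of the rotate, measure, rotate-back logic above, here is a toy Eigen example: two opposite corners of a 2 m x 1 m box with a 45-degree heading, centered at (5, 5). The recovered size is (2, 1) and the center projects back to (5, 5); all values are illustrative.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
  // Heading of 45 degrees: the object frame's x axis points along dir.
  Eigen::Vector3d dir(std::cos(M_PI / 4), std::sin(M_PI / 4), 0.0);
  Eigen::Matrix3d projection;
  projection << dir[0], dir[1], 0.0, -dir[1], dir[0], 0.0, 0.0, 0.0, 1.0;

  // Two opposite corners of a 2 x 1 box in the object frame, shifted to (5, 5)
  // in the lidar frame; toy data standing in for an object's point cloud.
  Eigen::Vector3d center_lidar(5.0, 5.0, 0.0);
  std::vector<Eigen::Vector3d> cloud = {
      projection.transpose() * Eigen::Vector3d(-1.0, -0.5, 0.0) + center_lidar,
      projection.transpose() * Eigen::Vector3d(1.0, 0.5, 0.0) + center_lidar};

  Eigen::Vector3d min_pt = projection * cloud[0];
  Eigen::Vector3d max_pt = min_pt;
  for (const auto &p : cloud) {
    Eigen::Vector3d q = projection * p;  // into the object frame
    min_pt = min_pt.cwiseMin(q);
    max_pt = max_pt.cwiseMax(q);
  }
  Eigen::Vector3d size = max_pt - min_pt;  // expect (2, 1, 0)
  Eigen::Vector3d center = projection.transpose() * ((max_pt + min_pt) * 0.5);
  std::cout << "size: " << size.transpose() << "  center: " << center.transpose()
            << std::endl;  // center ~ (5, 5, 0)
  return 0;
}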

Results Showcase
Predictions and Point Cloud Distribution Inside Objects
The two figures above and below show the object from different viewpoints: the purple markers are the bounding-box vertices inferred by the deep learning model, and the blue markers show the distribution of the 3D points assigned to that object.

