Lane and Vehicle Detection for Automated Driving with MATLAB
Visual Perception Using a Monocular Camera
This example shows how to construct a monocular camera sensor simulation capable of lane boundary and vehicle detection. The sensor reports these detections in the vehicle coordinate system. In this example, you learn about the coordinate system used by Automated Driving Toolbox™ and about the computer vision techniques involved in the design of a monocular camera sensor.
Overview
Vehicles that contain ADAS features or are designed to be fully autonomous rely on multiple sensors. A monocular camera sensor can accomplish many tasks, including:
Lane boundary detection
Detection of vehicles, people, and other objects
Distance estimation from the ego vehicle to obstacles
The readings from a monocular camera sensor can then be used to issue lane departure warnings and collision warnings, or to design a lane keep assist control system. In conjunction with other sensors, they can also be used to implement an emergency braking system and other safety-critical features.
This example implements a subset of the features found in a fully developed monocular camera system: it detects lane boundaries and the backs of vehicles, and reports their locations in the vehicle coordinate system.
Define Camera Configuration
Knowing the camera's intrinsic and extrinsic calibration parameters is critical for accurate conversion between pixel and vehicle coordinates.
Start by defining the camera's intrinsic parameters. They were determined earlier using a checkerboard calibration procedure; you can use the Camera Calibrator app to obtain them for your camera. A sketch of that calibration procedure follows the parameter values below.
focalLength = [309.4362, 344.2161]; % [fx, fy] in pixel units
principalPoint = [318.9034, 257.5352]; % [cx, cy] optical center in pixel coordinates
imageSize = [480, 640]; % [nrows, mcols]
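For reference, here is a minimal sketch of how intrinsics like these could be estimated from checkerboard images. It is not part of this example's pipeline; the folder name and the square size are assumptions, so the lines are left commented out.
% images = imageDatastore(fullfile('calibrationImages','*.png')); % assumed folder of checkerboard photos
% [imagePoints, boardSize] = detectCheckerboardPoints(images.Files);
% squareSize = 0.029; % checkerboard square size in meters (assumed)
% worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% cameraParams = estimateCameraParameters(imagePoints, worldPoints, ...
%     'ImageSize', imageSize);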
Note that the lens distortion coefficients are ignored here, because there is little distortion in the data.
camIntrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);
Next, define the camera orientation with respect to the vehicle's chassis. You use this information to establish the camera extrinsics, which define the position of the 3-D camera coordinate system with respect to the vehicle coordinate system.
height = 2.1798; % mounting height in meters from the ground
pitch = 14; % pitch of the camera in degrees
The above quantities can be derived from the rotation and translation matrices returned by the extrinsics function.
Pitch specifies the tilt of the camera from the horizontal position.
For the camera used in this example, the roll and yaw of the sensor are both zero.
The entire camera configuration, combining the intrinsics and extrinsics, is stored in a single monoCamera object.
sensor = monoCamera(camIntrinsics, height, 'Pitch', pitch);
Note that the monoCamera object sets up a well-defined vehicle coordinate system, where the X-axis points forward from the vehicle, the Y-axis points to the left of the vehicle, and the Z-axis points up from the ground.

By default, the origin of the coordinate system is on the ground, below the camera center. You can move the origin by setting the SensorLocation property of the monoCamera object. Additionally, monoCamera provides imageToVehicle and vehicleToImage methods for converting between image and vehicle coordinate systems.
The conversion between coordinate systems assumes a flat road. It is based on a homography matrix that maps locations on the imaging plane to locations on the road surface. Nonflat roads introduce errors in distance computations, especially at locations that are far from the vehicle.
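As a quick, illustrative check of this flat-road mapping (the point values here are arbitrary), you can round-trip a point on the road through both conversions:
% roadPoint = [10, 0];                              % a point 10 meters directly ahead, on the ground
% imagePoint = vehicleToImage(sensor, roadPoint);   % project the road point into the image
% backToRoad = imageToVehicle(sensor, imagePoint);  % recovers approximately [10, 0]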
Load a Frame of Video
Before processing the entire video, process a single video frame to illustrate the concepts involved in the design of a monocular camera sensor.
First, create a VideoReader object that opens a video file. To be memory efficient, VideoReader loads one video frame at a time.
videoName = 'caltech_cordova1.avi';
videoReader = VideoReader(videoName);
Read an interesting frame that contains lane markers and a vehicle.
timeStamp = 0.06667; % time from the beginning of the video, in seconds
videoReader.CurrentTime = timeStamp; % point to the chosen frame
frame = readFrame(videoReader); % read frame at timeStamp seconds
imshow(frame) % display frame

This example assumes that the image has negligible lens distortion. If it did not, you could use the undistortImage function to remove the lens distortion.
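If your own camera did show noticeable distortion, the extra step might look like this sketch; the radial distortion coefficients k1 and k2 below are placeholders that would come from calibration, so the lines are left commented out.
% camIntrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize, ...
%     'RadialDistortion', [k1, k2]); % k1, k2 from calibration (placeholders)
% frame = undistortImage(frame, camIntrinsics);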
Create Bird's-Eye-View Image
There are many ways to segment and detect lane markers. One common approach involves a bird's-eye-view image transformation. Although this transformation incurs a computational cost, it offers two major advantages. In the bird's-eye view, lane markers have uniform thickness, which simplifies the segmentation process, and lane markers belonging to the same lane become parallel, which makes further analysis easier.
Given the camera setup, the birdsEyeView object transforms the original image into its bird's-eye view. This object lets you specify, in vehicle coordinates, the area that you want to transform. Note that the vehicle coordinate units were established by the monoCamera object when the camera mounting height was specified in meters. For example, if the height had been specified in millimeters, the rest of the simulation would use millimeters.
% Using vehicle coordinates, define the area to transform
distAheadOfSensor = 30; % in meters, as previously specified in monoCamera height input
spaceToOneSide = 6; % all other distance quantities are also in meters
bottomOffset = 3;
outView = [bottomOffset, distAheadOfSensor, -spaceToOneSide, spaceToOneSide]; % [xmin, xmax, ymin, ymax]
imageSize = [NaN, 250]; % output image width in pixels; height is chosen automatically to preserve units per pixel ratio
birdsEyeConfig = birdsEyeView(sensor, outView, imageSize);
Generate the bird's-eye-view image.
birdsEyeImage = transformImage(birdsEyeConfig, frame);
figure
imshow(birdsEyeImage)

The areas farther away from the sensor are blurrier, because they are represented by fewer pixels and thus require a greater amount of interpolation.
Note that you can complete the subsequent processing steps without using a bird's-eye-view image, as long as you can locate lane boundary candidate pixels in vehicle coordinates.
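For example, if some segmentation technique produced a binary mask frameBW directly in the original camera view (a hypothetical mask, not computed in this example), its candidate pixels could be mapped straight into vehicle coordinates with the monoCamera object:
% [imageX, imageY] = find(frameBW); % frameBW: hypothetical mask in the camera view
% xyBoundaryPoints = imageToVehicle(sensor, [imageY, imageX]);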
Find Lane Markers in Vehicle Coordinates
Using the bird's-eye-view image, you can use the segmentLaneMarkerRidge function to separate lane marker candidate pixels from the road surface. This technique was chosen for its simplicity and relative effectiveness. Alternative segmentation techniques exist, including semantic segmentation (deep learning) and steerable filters. You can substitute these techniques below to obtain the binary mask needed for the next stage.
Most input parameters to the function below are specified in world units, for example, the lane marker width fed into segmentLaneMarkerRidge. The use of world units lets you quickly try new sensors, even when the input image size changes, and makes the design more robust across different cameras and conditions.
% Convert to grayscale
birdsEyeImage = rgb2gray(birdsEyeImage);
% Lane marker segmentation ROI in world units
vehicleROI = outView - [-1, 2, -3, 3]; % look 3 meters to left and right, and 4 meters ahead of the sensor
approxLaneMarkerWidthVehicle = 0.25; % 25 centimeters
% Detect lane features
laneSensitivity = 0.25;
birdsEyeViewBW = segmentLaneMarkerRidge(birdsEyeImage, birdsEyeConfig, approxLaneMarkerWidthVehicle, ...
'ROI', vehicleROI, 'Sensitivity', laneSensitivity);
figure
imshow(birdsEyeViewBW)

Locate lane marker points in the vehicle coordinate system anchored to the camera sensor. This example uses a parabolic lane boundary model, ax^2 + bx + c, to represent the lane markers. Other representations, such as a third-degree polynomial or splines, are possible. Conversion to vehicle coordinates is necessary, because otherwise lane marker curvature cannot be properly represented by a parabola while it is affected by perspective distortion.
The parabolic model holds for lane markers along the vehicle's path, but not for markers running across the path or for road signs painted on the asphalt.
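To make the model concrete, evaluating a parabola with illustrative coefficients gives the lateral offset of a marker at each longitudinal distance (values below are arbitrary, so the sketch is left commented out):
% a = -0.001; b = 0.01; c = 1.8; % illustrative coefficients only
% x = 0:0.5:30;                  % longitudinal distance in meters
% y = a*x.^2 + b*x + c;          % lateral offset of the boundary in meters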
% Obtain lane candidate points in vehicle coordinates
[imageX, imageY] = find(birdsEyeViewBW);
xyBoundaryPoints = imageToVehicle(birdsEyeConfig, [imageY, imageX]);
Because the segmented points include many outliers that are not part of the actual lane markers, use the robust curve fitting algorithm based on random sample consensus (RANSAC).
The findParabolicLaneBoundaries function returns the boundaries and their parabola parameters (a, b, c) in an array of parabolicLaneBoundary objects.
maxLanes = 2; % look for maximum of two lane markers
boundaryWidth = 3*approxLaneMarkerWidthVehicle; % expand boundary width
[boundaries, boundaryPoints] = findParabolicLaneBoundaries(xyBoundaryPoints,boundaryWidth, ...
'MaxNumBoundaries', maxLanes, 'validateBoundaryFcn', @validateBoundaryFcn);
Note that findParabolicLaneBoundaries takes a function handle, validateBoundaryFcn. This example function is listed at the end of this example. Using this additional input lets you reject some curves based on the values of their a, b, c parameters. It can also be used to take advantage of temporal information over a series of frames by constraining future a, b, c values based on those from previous video frames.
Some of the curves found in the previous step might still be invalid. For example, a curve might be fit into crosswalk markers. Use additional heuristics to reject many such curves.
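As a sketch of that temporal idea (not used in this example), a validation function could also take the 'a' value fitted in the previous frame and require the new curvature to stay close to it. The tolerance below is an assumption:
% function isGood = validateBoundaryFcnTemporal(params, prevA)
% if ~isempty(params)
%     a = params(1);
%     isGood = abs(a) < 0.003 && abs(a - prevA) < 0.001; % assumed tolerance
% else
%     isGood = false;
% end
% end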
% Establish criteria for rejecting boundaries based on their length
maxPossibleXLength = diff(vehicleROI(1:2));
minXLength = maxPossibleXLength * 0.60; % establish a threshold
% Reject short boundaries
isOfMinLength = arrayfun(@(b)diff(b.XExtent) > minXLength, boundaries);
boundaries = boundaries(isOfMinLength);
Remove additional boundaries based on the strength metric computed during the RANSAC fitting step. Set a lane strength threshold based on the ROI and image size.
% To compute the maximum strength, assume all image pixels within the ROI
% are lane candidate points
birdsImageROI = vehicleToImageROI(birdsEyeConfig, vehicleROI);
[laneImageX,laneImageY] = meshgrid(birdsImageROI(1):birdsImageROI(2),birdsImageROI(3):birdsImageROI(4));
% Convert the image points to vehicle points
vehiclePoints = imageToVehicle(birdsEyeConfig, [laneImageX(:), laneImageY(:)]);
% Find the maximum number of unique x-axis locations possible for any lane
% boundary
maxPointsInOneLane = numel(unique(vehiclePoints(:,1)));
% Set the maximum length of a lane boundary to the ROI length
maxLaneLength = diff(vehicleROI(1:2));
% Compute the maximum possible lane strength for this image size/ROI size
% specification
maxStrength = maxPointsInOneLane/maxLaneLength;
% Reject weak boundaries
isStrong = [boundaries.Strength] > 0.4*maxStrength;
boundaries = boundaries(isStrong);
Classify the remaining boundaries as solid or dashed lane markings; the classification routine is included in the classifyLaneTypes helper function listed at the end of this example. Knowing the lane marker type is critical for steering the vehicle automatically. For example, crossing a solid marker is prohibited. A hypothetical usage sketch follows the call below.
boundaries = classifyLaneTypes(boundaries, boundaryPoints);
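Each returned boundary now carries a BoundaryType property that downstream logic can query, for example to suppress automated lane changes across solid markings. A minimal, hypothetical sketch:
% isSolid = arrayfun(@(b) b.BoundaryType == LaneBoundaryType.Solid, boundaries);
% A controller could then disallow lane changes toward any boundary where isSolid is true.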
% Locate two ego lanes if they are present
xOffset = 0; % 0 meters from the sensor
distanceToBoundaries = boundaries.computeBoundaryModel(xOffset);
% Find candidate ego boundaries
leftEgoBoundaryIndex = [];
rightEgoBoundaryIndex = [];
minLDistance = min(distanceToBoundaries(distanceToBoundaries>0));
minRDistance = max(distanceToBoundaries(distanceToBoundaries<=0));
if ~isempty(minLDistance)
leftEgoBoundaryIndex = distanceToBoundaries == minLDistance;
end
if ~isempty(minRDistance)
rightEgoBoundaryIndex = distanceToBoundaries == minRDistance;
end
leftEgoBoundary = boundaries(leftEgoBoundaryIndex);
rightEgoBoundary = boundaries(rightEgoBoundaryIndex);
Show the detected lane markers in the bird's-eye-view image and in the regular view.
xVehiclePoints = bottomOffset:distAheadOfSensor;
birdsEyeWithEgoLane = insertLaneBoundary(birdsEyeImage, leftEgoBoundary, birdsEyeConfig, xVehiclePoints, 'Color', 'Red');
birdsEyeWithEgoLane = insertLaneBoundary(birdsEyeWithEgoLane, rightEgoBoundary, birdsEyeConfig, xVehiclePoints, 'Color', 'Green');
frameWithEgoLane = insertLaneBoundary(frame, leftEgoBoundary, sensor, xVehiclePoints, 'Color', 'Red');
frameWithEgoLane = insertLaneBoundary(frameWithEgoLane, rightEgoBoundary, sensor, xVehiclePoints, 'Color', 'Green');
figure
subplot('Position', [0, 0, 0.5, 1.0]) % [left, bottom, width, height] in normalized units
imshow(birdsEyeWithEgoLane)
subplot('Position', [0.5, 0, 0.5, 1.0])
imshow(frameWithEgoLane)

Locate Vehicles in Vehicle Coordinates
Detecting and tracking vehicles is critical in front collision warning (FCW) and autonomous emergency braking (AEB) systems.
Load an aggregate channel features (ACF) detector that is pretrained to detect the fronts and rears of vehicles. A detector like this can handle scenarios where issuing a collision warning is important. It is not sufficient, for example, for detecting a vehicle traveling across the road in front of the ego vehicle.
detector = vehicleDetectorACF();
% Width of a common vehicle is between 1.5 to 2.5 meters
vehicleWidth = [1.5, 2.5];
Use the configureDetectorMonoCamera function to specialize the generic ACF detector to account for the geometry of the typical automotive application. With this camera configuration, the new detector searches only for vehicles along the road surface, which saves computational time and reduces the number of false positives.
monoDetector = configureDetectorMonoCamera(detector, sensor, vehicleWidth);
[bboxes, scores] = detect(monoDetector, frame);
Because this example shows how to process only a single frame for demonstration purposes, you cannot apply tracking on top of the raw detections. Adding tracking makes the returned vehicle locations more robust, because even when a vehicle is partly occluded, the tracker continues to return its location. For more information, see the Track Multiple Vehicles Using a Camera example.
Next, convert vehicle detections to vehicle coordinates. The computeVehicleLocations function, included at the end of this example, calculates the location of a vehicle in vehicle coordinates given a bounding box returned by a detection algorithm in image coordinates. It returns the center location of the bottom of the bounding box in vehicle coordinates. Because a monocular camera sensor and a simple homography are used, only distances along the road surface can be computed. Computation of an arbitrary location in 3-D space requires a stereo camera or another sensor capable of triangulation.
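Short of full tracking, one lightweight way to stabilize single-frame results is to drop low-confidence detections before converting them. The threshold of 20 below mirrors the VehicleDetectionThreshold used later in this example and is otherwise an assumption:
% minScore = 20;            % assumed score threshold; tune per detector and video
% keep = scores > minScore; % logical index of confident detections
% bboxes = bboxes(keep, :);
% scores = scores(keep);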
locations = computeVehicleLocations(bboxes, sensor);
% Overlay the detections on the video frame
imgOut = insertVehicleDetections(frame, locations, bboxes);
figure;
imshow(imgOut);

Simulate a Complete Sensor with Video Input
Now that you have an idea of the inner workings of the individual steps, put them together and apply them to a video sequence, where you can also take advantage of temporal information.
Rewind the video to the beginning, and then process it. The code below is shortened because all the key parameters were defined in the previous steps; here, they are used without further explanation.
videoReader.CurrentTime = 0;
isPlayerOpen = true;
snapshot = [];
while hasFrame(videoReader) && isPlayerOpen
% Grab a frame of video
frame = readFrame(videoReader);
% Compute the bird's-eye-view image
birdsEyeImage = transformImage(birdsEyeConfig, frame);
birdsEyeImage = rgb2gray(birdsEyeImage);
% Detect lane boundary features
birdsEyeViewBW = segmentLaneMarkerRidge(birdsEyeImage, birdsEyeConfig, ...
approxLaneMarkerWidthVehicle, 'ROI', vehicleROI, ...
'Sensitivity', laneSensitivity);
% Obtain lane candidate points in vehicle coordinates
[imageX, imageY] = find(birdsEyeViewBW);
xyBoundaryPoints = imageToVehicle(birdsEyeConfig, [imageY, imageX]);
% Find lane boundary candidates
[boundaries, boundaryPoints] = findParabolicLaneBoundaries(xyBoundaryPoints,boundaryWidth, ...
'MaxNumBoundaries', maxLanes, 'validateBoundaryFcn', @validateBoundaryFcn);
% Reject boundaries based on their length and strength
isOfMinLength = arrayfun(@(b)diff(b.XExtent) > minXLength, boundaries);
boundaries = boundaries(isOfMinLength);
isStrong = [boundaries.Strength] > 0.2*maxStrength;
boundaries = boundaries(isStrong);
% Classify lane marker type
boundaries = classifyLaneTypes(boundaries, boundaryPoints);
% Find ego lanes
xOffset = 0; % 0 meters from the sensor
distanceToBoundaries = boundaries.computeBoundaryModel(xOffset);
% Find candidate ego boundaries
leftEgoBoundaryIndex = [];
rightEgoBoundaryIndex = [];
minLDistance = min(distanceToBoundaries(distanceToBoundaries>0));
minRDistance = max(distanceToBoundaries(distanceToBoundaries<=0));
if ~isempty(minLDistance)
leftEgoBoundaryIndex = distanceToBoundaries == minLDistance;
end
if ~isempty(minRDistance)
rightEgoBoundaryIndex = distanceToBoundaries == minRDistance;
end
leftEgoBoundary = boundaries(leftEgoBoundaryIndex);
rightEgoBoundary = boundaries(rightEgoBoundaryIndex);
% Detect vehicles
[bboxes, scores] = detect(monoDetector, frame);
locations = computeVehicleLocations(bboxes, sensor);
% Visualize sensor outputs and intermediate results. Pack the core
% sensor outputs into a struct.
sensorOut.leftEgoBoundary = leftEgoBoundary;
sensorOut.rightEgoBoundary = rightEgoBoundary;
sensorOut.vehicleLocations = locations;
sensorOut.xVehiclePoints = bottomOffset:distAheadOfSensor;
sensorOut.vehicleBoxes = bboxes;
% Pack additional visualization data, including intermediate results
intOut.birdsEyeImage = birdsEyeImage;
intOut.birdsEyeConfig = birdsEyeConfig;
intOut.vehicleScores = scores;
intOut.vehicleROI = vehicleROI;
intOut.birdsEyeBW = birdsEyeViewBW;
closePlayers = ~hasFrame(videoReader);
isPlayerOpen = visualizeSensorResults(frame, sensor, sensorOut, ...
intOut, closePlayers);
timeStamp = 7.5333; % take snapshot for publishing at timeStamp seconds
if abs(videoReader.CurrentTime - timeStamp) < 0.01
snapshot = takeSnapshot(frame, sensor, sensorOut);
end
end
Display the video frame. The snapshot is taken at timeStamp seconds.
if ~isempty(snapshot)
figure
imshow(snapshot)
end

Try the Sensor Design on a Different Video
The helperMonoSensor class assembles all the steps involved in simulating the monocular camera sensor into a complete package that can be applied to any video. Because most parameters used by the sensor design are based on world units, the design is robust to changes in camera parameters, including the image size. Note that the code inside the helperMonoSensor class is different from the loop in the previous section, which was used to illustrate basic concepts.
Besides providing a new video, you must supply a camera configuration corresponding to that video. The process is shown below. Try it on your own videos.
% Sensor configuration
focalLength = [309.4362, 344.2161];
principalPoint = [318.9034, 257.5352];
imageSize = [480, 640];
height = 2.1798; % mounting height in meters from the ground
pitch = 14; % pitch of the camera in degrees
camIntrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);
sensor = monoCamera(camIntrinsics, height, 'Pitch', pitch);
videoReader = VideoReader('caltech_washington1.avi');
Create the helperMonoSensor object and apply it to the video.
monoSensor = helperMonoSensor(sensor);
monoSensor.LaneXExtentThreshold = 0.5;
% To remove false detections from shadows in this video, return only
% vehicle detections with higher scores
monoSensor.VehicleDetectionThreshold = 20;
isPlayerOpen = true;
snapshot = [];
while hasFrame(videoReader) && isPlayerOpen
frame = readFrame(videoReader); % get a frame
sensorOut = processFrame(monoSensor, frame);
closePlayers = ~hasFrame(videoReader);
isPlayerOpen = displaySensorOutputs(monoSensor, frame, sensorOut, closePlayers);
timeStamp = 11.1333; % take snapshot for publishing at timeStamp seconds
if abs(videoReader.CurrentTime - timeStamp) < 0.01
snapshot = takeSnapshot(frame, sensor, sensorOut);
end
end
Display the video frame. The snapshot is taken at timeStamp seconds.
if ~isempty(snapshot)
figure
imshow(snapshot)
end
Supporting Functions
visualizeSensorResults displays core information and intermediate results from the monocular camera sensor simulation.
function isPlayerOpen = visualizeSensorResults(frame, sensor, sensorOut, ...
intOut, closePlayers)
% Unpack the main inputs
leftEgoBoundary = sensorOut.leftEgoBoundary;
rightEgoBoundary = sensorOut.rightEgoBoundary;
locations = sensorOut.vehicleLocations;
xVehiclePoints = sensorOut.xVehiclePoints;
bboxes = sensorOut.vehicleBoxes;
% Unpack additional intermediate data
birdsEyeViewImage = intOut.birdsEyeImage;
birdsEyeConfig = intOut.birdsEyeConfig;
vehicleROI = intOut.vehicleROI;
birdsEyeViewBW = intOut.birdsEyeBW;
% Visualize left and right ego-lane boundaries in bird's-eye view
birdsEyeWithOverlays = insertLaneBoundary(birdsEyeViewImage, leftEgoBoundary, birdsEyeConfig, xVehiclePoints, 'Color', 'Red');
birdsEyeWithOverlays = insertLaneBoundary(birdsEyeWithOverlays, rightEgoBoundary, birdsEyeConfig, xVehiclePoints, 'Color', 'Green');
% Visualize ego-lane boundaries in camera view
frameWithOverlays = insertLaneBoundary(frame, leftEgoBoundary, sensor, xVehiclePoints, 'Color', 'Red');
frameWithOverlays = insertLaneBoundary(frameWithOverlays, rightEgoBoundary, sensor, xVehiclePoints, 'Color', 'Green');
frameWithOverlays = insertVehicleDetections(frameWithOverlays, locations, bboxes);
imageROI = vehicleToImageROI(birdsEyeConfig, vehicleROI);
ROI = [imageROI(1) imageROI(3) imageROI(2)-imageROI(1) imageROI(4)-imageROI(3)];
% Highlight candidate lane points that include outliers
birdsEyeViewImage = insertShape(birdsEyeViewImage, 'rectangle', ROI); % show detection ROI
birdsEyeViewImage = imoverlay(birdsEyeViewImage, birdsEyeViewBW, 'blue');
% Display the results
frames = {frameWithOverlays, birdsEyeViewImage, birdsEyeWithOverlays};
persistent players;
if isempty(players)
frameNames = {'Lane marker and vehicle detections', 'Raw segmentation', 'Lane marker detections'};
players = helperVideoPlayerSet(frames, frameNames);
end
update(players, frames);
% Terminate the loop when the first player is closed
isPlayerOpen = isOpen(players, 1);
if (~isPlayerOpen || closePlayers) % close down the other players
clear players;
end
end
computeVehicleLocations calculates the location of a vehicle in vehicle coordinates, given a bounding box returned by a detection algorithm in image coordinates. It returns the center location of the bottom of the bounding box in vehicle coordinates. Because a monocular camera sensor and a simple homography are used, only distances along the road surface can be computed. Computation of an arbitrary location in 3-D space requires a stereo camera or another sensor capable of triangulation.
function locations = computeVehicleLocations(bboxes, sensor)
locations = zeros(size(bboxes,1),2);
for i = 1:size(bboxes, 1)
bbox = bboxes(i, :);
% Get the [x,y] location of the center of the lower portion of the
% detection bounding box in meters. bbox is [x, y, width, height] in
% image coordinates, where [x,y] represents the upper-left corner.
yBottom = bbox(2) + bbox(4) - 1;
xCenter = bbox(1) + (bbox(3)-1)/2; % approximate center
locations(i,:) = imageToVehicle(sensor, [xCenter, yBottom]);
end
end
insertVehicleDetections inserts bounding boxes and displays the [x,y] locations corresponding to the returned vehicle detections.
function imgOut = insertVehicleDetections(imgIn, locations, bboxes)
imgOut = imgIn;
for i = 1:size(locations, 1)
location = locations(i, :);
bbox = bboxes(i, :);
label = sprintf('X=%0.2f, Y=%0.2f', location(1), location(2));
imgOut = insertObjectAnnotation(imgOut, ...
'rectangle', bbox, label, 'Color','g');
end
end
vehicleToImageROI converts the ROI from the vehicle coordinate system into the image coordinate system of the bird's-eye-view image.
function imageROI = vehicleToImageROI(birdsEyeConfig, vehicleROI)
vehicleROI = double(vehicleROI);
loc2 = abs(vehicleToImage(birdsEyeConfig, [vehicleROI(2) vehicleROI(4)]));
loc1 = abs(vehicleToImage(birdsEyeConfig, [vehicleROI(1) vehicleROI(4)]));
loc4 = vehicleToImage(birdsEyeConfig, [vehicleROI(1) vehicleROI(4)]);
loc3 = vehicleToImage(birdsEyeConfig, [vehicleROI(1) vehicleROI(3)]);
[minRoiX, maxRoiX, minRoiY, maxRoiY] = deal(loc4(1), loc3(1), loc2(2), loc1(2));
imageROI = round([minRoiX, maxRoiX, minRoiY, maxRoiY]);
end
validateBoundaryFcn rejects some of the lane boundary curves computed using the RANSAC algorithm.
function isGood = validateBoundaryFcn(params)
if ~isempty(params)
a = params(1);
% Reject any curve with a large 'a' coefficient, which makes it highly
% curved
isGood = abs(a) < 0.003; % a from ax^2+bx+c
else
isGood = false;
end
end
classifyLaneTypes determines lane marker types as solid, dashed, and so on.
function boundaries = classifyLaneTypes(boundaries, boundaryPoints)
for bInd = 1 : numel(boundaries)
vehiclePoints = boundaryPoints{bInd};
% Sort by x
vehiclePoints = sortrows(vehiclePoints, 1);
xVehicle = vehiclePoints(:,1);
xVehicleUnique = unique(xVehicle);
% Dashed vs solid
xdiff = diff(xVehicleUnique);
% Sufficiently large threshold to remove spaces between points of a
% solid line, but not large enough to remove spaces between dashes
xdifft = mean(xdiff) + 3*std(xdiff);
largeGaps = xdiff(xdiff > xdifft);
% Safe default
boundaries(bInd).BoundaryType= LaneBoundaryType.Solid;
if largeGaps>2
% Ideally, these gaps should be consistent, but you cannot rely
% on that unless you know that the ROI extent includes at least 3 dashes
boundaries(bInd).BoundaryType = LaneBoundaryType.Dashed;
end
end
end
takeSnapshot captures the output for the HTML publishing report.
function I = takeSnapshot(frame, sensor, sensorOut)
% Unpack the inputs
leftEgoBoundary = sensorOut.leftEgoBoundary;
rightEgoBoundary = sensorOut.rightEgoBoundary;
locations = sensorOut.vehicleLocations;
xVehiclePoints = sensorOut.xVehiclePoints;
bboxes = sensorOut.vehicleBoxes;
frameWithOverlays = insertLaneBoundary(frame, leftEgoBoundary, sensor, xVehiclePoints, 'Color', 'Red');
frameWithOverlays = insertLaneBoundary(frameWithOverlays, rightEgoBoundary, sensor, xVehiclePoints, 'Color', 'Green');
frameWithOverlays = insertVehicleDetections(frameWithOverlays, locations, bboxes);
I = frameWithOverlays;
end
