
Self-Driving Cars: A Guide and Summary


A joint course from Udacity and Baidu
Self-Driving Cars: A Guide and Summary [Part 1]

Sebastian: founder of Udacity, Stanford professor.
David Silver: head of Udacity's self-driving car program, formerly a network engineer in Silicon Valley.

The core modules of a self-driving car: localization, perception, prediction, planning, and control.

How they fit together:
Computer vision and sensor fusion implement [perception].
[Localization] can be implemented in several ways, e.g. with HD maps and LiDAR.
[Prediction] estimates the trajectories of moving objects so the car can plan its next path.
[Planning] covers both route navigation and planning the vehicle's driving trajectory.
Finally, [control] automates the steering, throttle, and brakes.

As of 2021, HD (high-definition) maps are a technology that safe autonomous driving still depends on.
HD map: a map built for machines, containing a wealth of driving-assistance information: intersection layouts, road-sign positions, speed-limited segments, and traffic lights.
Centimeter-level accuracy.
1. HD maps help with [localization]: LiDAR collects landmark features and compares them with known landmarks on the HD map (preprocessing - coordinate transformation - data fusion).
2. HD maps help with [perception]: they can shrink the perception area to a region of interest (ROI).
3. HD maps help with [planning]: they help identify the exact centerline of a lane, and at crosswalks, speed bumps, and low-speed-limit zones they let the car make judgments in advance.

Localization
This course will introduce you to the different sensors and how we can use them for state estimation and localization in a self-driving car. By the end of this course, you will be able to:
- Understand the key methods for parameter and state estimation used for autonomous driving, such as the method of least squares
- Develop a model for typical vehicle localization sensors, including GPS and IMUs
- Apply extended and unscented Kalman filters to a vehicle state estimation problem
- Understand LIDAR scan matching and the Iterative Closest Point algorithm
- Apply these tools to fuse multiple sensor streams into a single state estimate for a self-driving car

For the final project in this course, you will implement the Error-State Extended Kalman Filter (ES-EKF) to localize a vehicle using data from the CARLA simulator.

This is an advanced course, intended for learners with a background in mechanical engineering, computer and electrical engineering, or robotics. To succeed in this course, you should have programming experience in Python 3.0, familiarity with Linear Algebra (matrices, vectors, matrix multiplication, rank, eigenvalues, eigenvectors, and inverses), Statistics (Gaussian probability distributions), Calculus, and Physics (forces, moments, inertia, Newton's Laws).

Perception
This course will introduce you to the main perception tasks in autonomous driving, static and dynamic object detection, and will survey common computer vision methods for robotic perception. By the end of this course, you will be able to work with the pinhole camera model, perform intrinsic and extrinsic camera calibration, detect, describe and match image features, and design your own convolutional neural networks. You'll apply these methods to visual odometry, object detection and tracking, and semantic segmentation for drivable surface estimation. These techniques represent the main building blocks of the perception system for self-driving cars.

For the final project in this course, you will develop algorithms that identify bounding boxes for objects in the scene and define the boundaries of the drivable surface. You'll work with synthetic and real image data, and evaluate your performance on a realistic dataset.

This is an advanced course, intended for learners with a background in computer vision and deep learning. To succeed in this course, you should have programming experience in Python 3.0, and familiarity with Linear Algebra (matrices, vectors, matrix multiplication, rank, eigenvalues, eigenvectors, and inverses).

Planning
This course will introduce you to the main planning tasks in autonomous driving, including mission planning, behavior planning and local planning. By the end of this course, you will be able to find the shortest path over a graph or road network using Dijkstra's and the A* algorithm, use finite state machines to select safe behaviors to execute, and design optimal, smooth paths and velocity profiles to navigate safely around obstacles while obeying traffic laws. You'll also build occupancy grid maps of static elements in the environment and learn how to use them for efficient collision checking. This course will give you the ability to construct a full self-driving planning solution, to take you from home to work while behaving like a typical driver and keeping the vehicle safe at all times.

For the final project in this course, you will implement a hierarchical motion planner to navigate through a sequence of scenarios in the CARLA simulator, including avoiding a vehicle parked in your lane, following a lead vehicle and safely navigating an intersection. You'll face real-world randomness and need to work to ensure your solution is robust to changes in the environment.

This is an intermediate course, intended for learners with some background in robotics, and it builds on the models and controllers devised in Course 1 of this specialization. To succeed in this course, you should have programming experience in Python 3.0, and familiarity with Linear Algebra (matrices, vectors, matrix multiplication, rank, eigenvalues, eigenvectors, and inverses) and Calculus (ordinary differential equations, integration).

I. Localization:

1. GPS + RTK localization
GPS (Global Positioning System): a technique extending [triangulation] (strictly, trilateration by distance), developed by the U.S. government, with satellites orbiting about 20,000 km above the Earth. The generic name for this class of system is GNSS (Global Navigation Satellite System). Principle: by communicating with satellites 1, 2, and 3, the receiver computes point A's distance to each, which pins down point A's position on the GPS map (a least-squares sketch of this idea appears at the end of this subsection).
(Image source: https://www.gps.gov/systems/gps/space/, retrieved 2021-03-01)
GPS consists of [satellites], [control stations around the world], and [GPS receivers].
GPS satellites carry [high-precision atomic clocks] to keep the receiver-to-satellite distance, Distance = C x Time, accurate. To reduce the error further, we use real-time kinematic (RTK) positioning.
RTK: build several base stations on the ground; each base station knows its own position and also localizes itself via GPS, yielding a [positioning offset]. That offset is then broadcast to other GPS receivers, which use it to correct their own positions.
With RTK assistance, GPS positioning error can be kept within 10 cm.
The drawbacks: [tall buildings and other obstacles] can block GPS signals, and the [update rate is low, about 10 Hz], i.e. ten updates per second.
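
To make the distance-based principle above concrete, here is a minimal sketch (not the course's code) of solving a receiver position from ranges to known anchors with least squares; the 2-D anchor positions and noise-free ranges are invented for illustration, and real GNSS solvers must also estimate the receiver clock bias.

```python
# A minimal 2-D trilateration sketch: recover an unknown position from
# measured distances to anchors with known positions (satellites, or RTK
# base stations), via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical anchors and the true (unknown) receiver position.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
measured = np.linalg.norm(anchors - true_pos, axis=1)  # distance = C x Time

def residuals(p):
    # Difference between the predicted and measured anchor distances.
    return np.linalg.norm(anchors - p, axis=1) - measured

estimate = least_squares(residuals, x0=np.ones(2)).x
print(estimate)  # ~ [3. 4.]
```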

2. IMU localization
Given acceleration, initial velocity, and initial position, we can compute the car's velocity and position at any point in time (integrated numerically in the sketch after this subsection).
An inertial measurement unit = [three-axis accelerometer] + [gyroscope], measuring acceleration at a [frequency of 1000 Hz], so the IMU can provide near-real-time position information. But [motion error] accumulates over time, so we can rely on the IMU for localization only over short intervals.
Combining GPS and IMU gives more accurate localization: GPS corrects the IMU's accumulated motion error, while the IMU compensates for GPS's low update rate. This is still not enough: in [mountain tunnels and canyons] we lose the GPS signal.
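
A minimal 1-D dead-reckoning sketch of the idea above, with invented noise values: integrating a 1000 Hz accelerometer stream propagates velocity and position, and the sensor noise shows why an IMU-only estimate drifts over time.

```python
# Dead reckoning: v = v0 + a*dt, x = x0 + v*dt, repeated at the IMU rate.
import numpy as np

dt = 1.0 / 1000.0            # IMU sample period (1000 Hz)
pos, vel = 0.0, 0.0          # initial position and velocity (1-D example)
rng = np.random.default_rng(0)

for _ in range(1000):        # one second of data
    accel_true = 2.0                               # true constant acceleration
    accel_meas = accel_true + rng.normal(0, 0.05)  # noisy measurement
    vel += accel_meas * dt                         # integrate acceleration
    pos += vel * dt                                # integrate velocity

print(pos)  # close to the exact 0.5 * 2.0 * 1.0**2 = 1.0 m; noise adds drift
```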

3. LiDAR localization
Continuously match the point cloud measured by LiDAR against the HD map to determine position and heading. Many algorithms apply here: [Iterative Closest Point (ICP)], [filter methods (e.g. the sum-of-squared-differences, SSD)], and the Kalman filter; a toy ICP iteration is sketched below.
Strength of LiDAR localization: robustness.
Weakness: keeping the HD map updated in real time is difficult.
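
Below is a toy 2-D ICP iteration, an illustrative sketch rather than Apollo's implementation: match each scan point to its nearest map point, solve the best rigid rotation/translation with SVD (the Kabsch method), apply it, and repeat until the alignment converges.

```python
# A minimal 2-D Iterative Closest Point (ICP) sketch.
import numpy as np
from scipy.spatial import cKDTree

def icp(scan, map_pts, iters=20):
    scan = scan.copy()
    tree = cKDTree(map_pts)
    for _ in range(iters):
        # 1. Correspondences: nearest map point for every scan point.
        _, idx = tree.query(scan)
        matched = map_pts[idx]
        # 2. Best-fit rigid transform between the matched point sets.
        mu_s, mu_m = scan.mean(0), matched.mean(0)
        H = (scan - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        # 3. Apply the transform and iterate.
        scan = scan @ R.T + t
    return scan

map_pts = np.random.default_rng(0).random((200, 2))
scan = map_pts[:50] + np.array([0.05, -0.02])  # shifted observations
aligned = icp(scan, map_pts)                   # re-aligned to the map
```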

4. Visual localization: camera images matched against the HD map, using a [particle filter].
Strengths: simple, and source images are easy to obtain.
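
A minimal 1-D particle filter sketch of this idea (illustrative values throughout): particles are pose hypotheses; in a real system the weight would come from how well the camera image matches the HD map at each hypothesized pose, but here a direct noisy position measurement stands in.

```python
# Particle filter: predict with motion noise, weight by measurement
# likelihood, then resample in proportion to the weights.
import numpy as np

rng = np.random.default_rng(1)
particles = rng.uniform(0, 100, size=1000)   # initial pose hypotheses (m)
true_pos = 42.0

for _ in range(10):
    true_pos += 1.0                                        # vehicle moves 1 m
    particles += 1.0 + rng.normal(0, 0.2, particles.size)  # motion update
    z = true_pos + rng.normal(0, 1.0)                      # noisy observation
    # Weight each particle by how well it explains the measurement.
    w = np.exp(-0.5 * ((particles - z) / 1.0) ** 2)
    w /= w.sum()
    # Resample: keep particles in proportion to their weights.
    particles = rng.choice(particles, size=particles.size, p=w)

print(particles.mean())  # close to the true position (~52 m)
```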

II. Perception:

Perceiving the surroundings: humans use their eyes to understand the environment;
autonomous driving uses cameras, LiDAR, and radar.
Perception draws on a lot of [computer vision (CV)].

The core perception tasks:
[Detection]: locating objects in the environment.

[Classification]: using a CNN to find objects in an image and assign them a class.

[Tracking]: following objects across frames, [handling occlusion]. Algorithms include [local binary patterns (LBP)] and [histograms of oriented gradients (HOG)]; see the feature sketch after this list.

[Segmentation]: matching pixels to semantic classes, e.g. with a fully convolutional network (FCN).
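
As referenced in the tracking item above, here is a small sketch of the two hand-crafted features it names, computed with scikit-image (an assumed dependency, not the course's code); the random array stands in for a grayscale camera crop.

```python
# HOG and LBP features on a stand-in grayscale image patch.
import numpy as np
from skimage.feature import hog, local_binary_pattern

image = np.random.rand(64, 64)  # stand-in for a grayscale camera crop

# Histogram of Oriented Gradients: edge-direction statistics per cell.
hog_vec = hog(image, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))

# Local Binary Pattern: a texture code comparing each pixel to 8 neighbors.
lbp = local_binary_pattern(image, P=8, R=1.0)

print(hog_vec.shape, lbp.shape)  # feature vector and per-pixel codes
```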

Image classifiers
An algorithm that takes an image as input and outputs a "class" label identifying that image, i.e. an image-recognition algorithm.
The general pipeline of an image classifier:
Input Data → Pre-processing (resizing, rotating, color transformation) → Feature Extraction → Classifying Module
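
A minimal PyTorch sketch (an assumed framework, not the course's code) mirroring this pipeline: a preprocessed RGB input goes through a convolutional feature extractor and then a classifying module.

```python
# A tiny CNN image classifier: Feature Extraction + Classifying Module.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        # Feature extractor: two conv blocks play the "Feature" stage.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Classifying module: flatten and map features to class scores.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Pre-processing (resizing etc.) would happen before this call; a batch
# of one 64x64 RGB image stands in for a camera crop here.
logits = TinyClassifier()(torch.rand(1, 3, 64, 64))
print(logits.argmax(dim=1))  # predicted class index
```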

Camera image: a matrix of R, G, B pixel values.

LiDAR image: a point cloud obtained from laser pulses, capturing information a camera cannot easily get, such as distance and height.

Machine learning
Using special algorithms and round after round of training, the computer learns from data, and the result of that learning is stored in a model data structure.
Financial firms use it to predict exchange rates and securities trades; retailers use it to forecast demand; doctors use it to assist medical diagnosis.
Machine learning falls into three categories:
Supervised learning: the model uses ground-truth labels created by humans.
Unsupervised learning: the computer learns on its own, without labels.
Semi-supervised learning: a mix of the two, combining a small amount of labeled data with unlabeled data.

Neural networks
Inspired by biological nervous systems, neural networks are a tool for learning complex models from data. With enough training, a computer can recognize cars, pedestrians, traffic signals, and utility poles, by assigning different weights to different features.
[Backpropagation], [convolutional neural networks (CNNs)].

The perception fusion strategy uses the [Kalman filter algorithm]:
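
A minimal 1-D constant-velocity Kalman filter sketch (illustrative matrices, not Apollo's fusion code): this same predict/update cycle is what fuses, say, radar and camera estimates of a tracked object into one smoothed state.

```python
# Kalman filter: predict through a motion model, then update with a
# measurement, blending the two by the Kalman gain.
import numpy as np

x = np.array([0.0, 0.0])                 # state: [position, velocity]
P = np.eye(2)                            # state covariance
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # motion model, dt = 0.1 s
H = np.array([[1.0, 0.0]])               # we only measure position
Q = np.eye(2) * 0.01                     # process noise
R = np.array([[0.5]])                    # measurement noise

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:      # incoming sensor readings
    # Predict: propagate state and uncertainty through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend prediction with the measurement via the Kalman gain.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x)  # fused position/velocity estimate
```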

III. Prediction:

Prediction matters: it is at the core of generating the self-driving car's motion trajectory.
It must be real-time and accurate, with as little latency as possible, and the prediction module should also be able to learn new behaviors.

Prediction: convert complex vehicle motion into a sequence of lane transitions, use existing observations of lane sequences to [train a neural network] for prediction, and finally combine the lane-sequence prediction with vehicle physics to generate an estimated trajectory for each object.

[Model-based] prediction
Strengths: intuitive; it incorporates existing knowledge of physics, traffic law, and human behavior.

[Data-driven] prediction: use machine-learning algorithms and train the model on observed outcomes.

[Lane-sequence-based] prediction: divide the road into many segments.
Deep learning with a [recurrent neural network (RNN)].
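
As a hedged illustration only (the shapes, layer sizes, and the idea of embedding lane-segment IDs are all assumptions, not Apollo's actual model), a lane-sequence predictor could look like this in PyTorch: encode a short history of lane segments with a GRU and score candidate lane sequences.

```python
# A toy RNN over a history of lane-segment IDs.
import torch
import torch.nn as nn

class LanePredictor(nn.Module):
    def __init__(self, num_segments=50, num_sequences=4):
        super().__init__()
        self.embed = nn.Embedding(num_segments, 16)   # lane-segment IDs
        self.rnn = nn.GRU(16, 32, batch_first=True)   # temporal encoder
        self.head = nn.Linear(32, num_sequences)      # lane-sequence scores

    def forward(self, segment_ids):
        h = self.rnn(self.embed(segment_ids))[0][:, -1]  # last hidden state
        return self.head(h)

# A batch of one observed history: five lane-segment IDs.
history = torch.tensor([[3, 3, 7, 7, 12]])
print(LanePredictor()(history).softmax(dim=1))  # P(next lane sequence)
```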

IV. Planning:

Planning splits into two levels:
1. Route planning: focuses on getting from A to B on the map, e.g. with the [A* algorithm] (a sketch follows this list).
2. Trajectory generation: planning the driving trajectory and speed profile.
[Cost function]: used to score and compare candidate trajectories.
[Frenet coordinates]: describe position along and across the road rather than in plain x-y.
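
A minimal A* sketch for the route-planning step above (a toy grid instead of a real road network): g is the cost so far, h an admissible straight-line heuristic, and the priority queue always expands the node with the smallest f = g + h.

```python
# A* shortest-path search on a small occupancy grid.
import heapq

def astar(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = blocked; start/goal: (row, col)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the blocked row
```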

V. Control:

Control takes two inputs:
the [target trajectory] and the [vehicle state].

Its outputs are:
commands for steering, acceleration, and braking.
When the vehicle deviates from the target trajectory, we take control actions to correct the deviation.

Algorithms used (a PID sketch follows the list):
[PID control]
[LQR (Linear Quadratic Regulator)]
[MPC (Model Predictive Control)]
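
A minimal PID sketch (hypothetical, untuned gains): the control output is a weighted sum of the present error (P), its accumulation over time (I), and its rate of change (D), applied here to a cross-track steering error.

```python
# A textbook PID controller.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error, dt):
        self.integral += error * dt                      # I: accumulate error
        derivative = (error - self.prev_error) / dt      # D: rate of change
        self.prev_error = error
        return (self.kp * error                          # P: present error
                + self.ki * self.integral
                + self.kd * derivative)

# Steering toward the target trajectory: error = cross-track deviation (m).
pid = PID(kp=0.5, ki=0.05, kd=0.1)
print(pid.control(1.2, dt=0.05))  # steering correction for a 1.2 m offset
```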
