
MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction


Background assumptions:


Main idea:


Our method leans heavily on the idea of predefined anchors, a concept that originated in machine-learning systems for multi-modal problems, most notably the anchor boxes used in object detection to discretize a large space of possible outputs.


Formal description:


This approach uses a fixed, finite set of K anchor trajectories, denoted $A = \{a^k\}_{k=1}^{K}$, where each anchor is a sequence of states $a^k = [a_1^k, \ldots, a_T^k]$. Uncertainty over this finite set (i.e., over intent) is modeled with a softmax distribution:

$$\pi(a^k \mid x) = \frac{\exp f_k(x)}{\sum_{k'=1}^{K} \exp f_{k'}(x)}$$
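
Concretely, the intent distribution is just a softmax over K scalar logits $f_k(x)$ produced by a network head. A minimal, numerically stable sketch (the logit values below are made up for illustration):

```python
import numpy as np

def anchor_probabilities(logits):
    """Softmax over the K anchor logits f_k(x), stabilized by
    subtracting the max logit before exponentiating."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for K = 4 anchors; higher logit -> more likely intent.
pi = anchor_probabilities(np.array([2.0, 1.0, 0.5, -1.0]))
```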


Assuming that uncertainty follows a unimodal distribution when the intent is known, we model control uncertainty as a Gaussian distribution whose mean depends on the waypoint states along each anchor trajectory.

$$\phi(s_t \mid a^k, x) = \mathcal{N}\big(s_t \mid a_t^k + \mu_t^k(x),\, \Sigma_t^k(x)\big)$$
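
A sketch of evaluating this per-waypoint Gaussian, with the mean formed as the anchor state plus a predicted offset. All input values here are hypothetical; the covariance is a full 2×2 matrix as in the model:

```python
import numpy as np

def waypoint_log_density(s_t, anchor_t, offset_t, cov_t):
    """log N(s_t | a_t^k + mu_t^k(x), Sigma_t^k(x)) for one waypoint."""
    mean = anchor_t + offset_t          # anchor state refined by the offset
    diff = s_t - mean
    inv = np.linalg.inv(cov_t)
    _, logdet = np.linalg.slogdet(cov_t)
    d = diff.size
    return -0.5 * (diff @ inv @ diff + logdet + d * np.log(2 * np.pi))

# Hypothetical observed waypoint near the (anchor + offset) mean.
lp = waypoint_log_density(
    s_t=np.array([1.1, 0.9]),
    anchor_t=np.array([1.0, 1.0]),
    offset_t=np.array([0.05, -0.05]),
    cov_t=np.diag([0.1, 0.1]),
)
```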


To derive a distribution over the entire state space, we integrate out the agent's intent:

$$p(s \mid x) = \sum_{k=1}^{K} \pi(a^k \mid x) \prod_{t=1}^{T} \phi(s_t \mid a^k, x)$$
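
Putting the two pieces together, the trajectory distribution is a Gaussian mixture with one component per anchor. A sketch of evaluating its log-likelihood for a single trajectory (array shapes are assumptions; log-sum-exp is used for numerical stability):

```python
import numpy as np

def trajectory_log_likelihood(s, anchors, offsets, covs, pi):
    """log p(s|x) = log sum_k pi_k * prod_t N(s_t | a_t^k + mu_t^k, Sigma_t^k).

    s: (T, 2) trajectory; anchors/offsets: (K, T, 2);
    covs: (K, T, 2, 2); pi: (K,) anchor probabilities.
    """
    K, T, _ = anchors.shape
    log_terms = np.log(pi).copy()       # start from log pi_k per component
    for k in range(K):
        for t in range(T):
            diff = s[t] - (anchors[k, t] + offsets[k, t])
            inv = np.linalg.inv(covs[k, t])
            _, logdet = np.linalg.slogdet(covs[k, t])
            log_terms[k] += -0.5 * (diff @ inv @ diff + logdet
                                    + 2 * np.log(2 * np.pi))
    m = log_terms.max()                 # log-sum-exp over components
    return m + np.log(np.exp(log_terms - m).sum())
```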


Main pipeline:


Figure 1: MultiPath computes a distribution over future trajectories for each agent in a scene in three steps: 1) A scene CNN applied to a top-down scene representation encodes the state of individual agents and their interactions as mid-level features. 2) For each agent, an agent-centric view of the mid-level feature representation is cropped, and probabilities over a fixed set of K predefined anchor trajectories are predicted. 3) For each anchor, the model regresses offset states from the anchor states and estimates uncertainty distributions for every future time step.
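
The three steps can be sketched as a data-flow skeleton. The CNN and prediction heads below are random stand-ins, not the paper's actual architecture; only the shapes and the crop-then-predict structure are the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K anchors, T future steps, C mid-level feature channels.
K, T, C = 16, 12, 32

def scene_cnn(top_down_raster):
    """Step 1 (stand-in): encode the whole scene into a mid-level feature map.
    A real model would run a convolutional backbone here."""
    H, W, _ = top_down_raster.shape
    return rng.standard_normal((H, W, C))

def crop_agent_view(feature_map, agent_xy, size=8):
    """Step 2a: cut an agent-centred patch out of the shared feature map."""
    x, y = agent_xy
    return feature_map[y:y + size, x:x + size, :]

def prediction_heads(patch):
    """Steps 2b-3 (stand-in): anchor logits, per-step offsets and scales."""
    flat = patch.reshape(-1)
    logits = flat[:K]                                  # pretend linear head
    offsets = flat[:K * T * 2].reshape(K, T, 2)        # offsets from anchors
    scales = np.abs(flat[:K * T * 2].reshape(K, T, 2)) + 1e-3  # positive
    return logits, offsets, scales

feats = scene_cnn(rng.standard_normal((64, 64, 3)))
logits, offsets, scales = prediction_heads(crop_agent_view(feats, (10, 20)))
```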

  • Input representation


  • Obtaining anchor trajectories

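
The anchor set is built before training by clustering trajectories from the training data; a minimal sketch using plain k-means on flattened state sequences (the clustering details here are a simplification of the paper's procedure):

```python
import numpy as np

def kmeans_anchors(trajectories, K, iters=50, seed=0):
    """Cluster (N, T, 2) training trajectories into K anchor trajectories
    with plain k-means on the flattened state sequences."""
    rng = np.random.default_rng(seed)
    N, T, D = trajectories.shape
    X = trajectories.reshape(N, T * D)
    centers = X[rng.choice(N, K, replace=False)]   # random initial anchors
    for _ in range(iters):
        # Assign each trajectory to its nearest center (squared L2).
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Move each center to the mean of its assigned trajectories.
        for k in range(K):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(0)
    return centers.reshape(K, T, D), assign
```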

  • Learning

We train the model via imitation learning, fitting the parameters to maximize the log-likelihood of recorded driving trajectories. Let the data be pairs $\{(x^m, \hat{s}^m)\}_{m=1}^{M}$. A deep neural network with weights $\theta$ predicts the distribution parameters $\pi(a^k \mid x)$, $\mu_t^k(x)$ and $\Sigma_t^k(x)$ of Equation 2. The negative log-likelihood loss is:

$$\ell(\theta) = -\sum_{m=1}^{M} \sum_{k=1}^{K} \mathbb{1}\big(k = \hat{k}_m\big) \Big[ \log \pi(a^k \mid x^m; \theta) + \sum_{t=1}^{T} \log \mathcal{N}\big(\hat{s}_t^m \mid a_t^k + \mu_t^k(x^m),\, \Sigma_t^k(x^m)\big) \Big]$$

where $\hat{k}_m$ indexes the anchor closest to the ground-truth trajectory $\hat{s}^m$ (hard assignment).
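
A sketch of this loss for a single training example, assuming diagonal Gaussians parameterized by per-coordinate scales and hard assignment to the closest anchor (shapes and names are illustrative):

```python
import numpy as np

def multipath_nll(s_gt, anchors, offsets, scales, logits):
    """Hard-assignment negative log-likelihood for one example.

    s_gt: (T, 2) ground-truth trajectory; anchors/offsets/scales: (K, T, 2);
    logits: (K,) unnormalized anchor scores f_k(x).
    """
    # Hard assignment: pick the anchor closest (squared L2) to the ground truth.
    k_hat = ((anchors - s_gt) ** 2).sum((1, 2)).argmin()
    # Stable log-softmax over the anchor logits.
    m = logits.max()
    log_pi = logits - np.log(np.exp(logits - m).sum()) - m
    # Diagonal-Gaussian log-density around (anchor + offset) at each step.
    mean = anchors[k_hat] + offsets[k_hat]
    var = scales[k_hat] ** 2
    log_gauss = -0.5 * (((s_gt - mean) ** 2) / var
                        + np.log(2 * np.pi * var)).sum()
    return -(log_pi[k_hat] + log_gauss)
```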


  • Neural network details


Supplementary background:
Conditional variational autoencoders (CVAEs)
