
[Driver Assistance] Vehicle Detection with Python and OpenCV


I. Function

Detect the vehicles in front of the ego vehicle. The result is shown in the figure:

II. Algorithms

1. Traditional detection methods

A conventional machine-learning approach, consisting of two stages: training and application.

Training: build a training set containing both positive and negative samples, and compute feature descriptors with methods such as HOG or SIFT. A model such as an SVM (support vector machine) or a decision tree is then fitted to the extracted features and their labels; during fitting, the model parameters are optimized for the classification task. A sketch of this stage follows (with random stand-in feature vectors; the project's real extraction and training code appears in Section III).
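import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# stand-ins for HOG descriptors of positive (car) and negative (non-car) samples;
# 1764 = 7*7*2*2*9, the HOG length of a 64x64 patch with typical parameters
pos_features = rng.normal(1.0, 1.0, (100, 1764))
neg_features = rng.normal(-1.0, 1.0, (100, 1764))
X = np.vstack((pos_features, neg_features))
y = np.hstack((np.ones(100), np.zeros(100)))
clf = LinearSVC().fit(X, y)  # fitting optimizes the SVM parameters for classification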

Application: extract the same HOG or SIFT features from the image to be recognized, and classify the resulting feature vector with the trained SVM or decision-tree model. Continuing the sketch above, this stage is a single predict call:
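# classify the feature vector of a new, unseen sample
sample = rng.normal(1.0, 1.0, (1, 1764))
print(clf.predict(sample))  # 1.0 -> vehicle, 0.0 -> non-vehicle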

2. Neural networks

A neural network trained on positive and negative samples can perform recognition directly, with no hand-crafted feature step. For illustration, a minimal binary CNN classifier of this kind could look as follows (a Keras sketch, not part of the original project):
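from tensorflow.keras import layers, models

# 64x64 RGB patch in, vehicle probability out
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(patches, labels, epochs=10)  # positive/negative training patches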

For automated driving, neural networks have two main drawbacks. First, on hardware such as FPGAs or ARM processors they require a large amount of parallel computation and occupy considerable hardware resources, so they run slowly. Second, a neural network is a black box: its intermediate data cannot be inspected, which makes debugging and verification difficult. For these reasons, OpenCV is a reasonable choice of image-processing library for an automated-driving vision system.

III. Code

1) Extracting HOG features. The implementation is as follows:

# Define a function to return HOG features and visualization
from skimage.feature import hog

def get_hog_features(img, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=True):
    # img: single-channel image
    # orient: number of gradient-orientation bins
    # pix_per_cell: cell size in pixels
    # cell_per_block: cells per normalization block
    # vis: if True, also return a visualization image
    # (recent scikit-image releases spell the keyword visualize= instead of visualise=)
    if vis == True:
        features, hog_image = hog(img, orientations=orient,
                                  pixels_per_cell=(pix_per_cell, pix_per_cell),
                                  cells_per_block=(cell_per_block, cell_per_block),
                                  transform_sqrt=False,
                                  visualise=True, feature_vector=False)
        return features, hog_image
    else:
        features = hog(img, orientations=orient,
                       pixels_per_cell=(pix_per_cell, pix_per_cell),
                       cells_per_block=(cell_per_block, cell_per_block),
                       transform_sqrt=False,
                       visualise=False, feature_vector=feature_vec)
        return features
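A quick sanity check on a single 64x64 grayscale training patch (the file name below is a placeholder, not from the original project):

import cv2

# load one hypothetical 64x64 grayscale sample
patch = cv2.imread('car64.png', cv2.IMREAD_GRAYSCALE)
features, hog_image = get_hog_features(patch, orient=9, pix_per_cell=8,
                                       cell_per_block=2, vis=True)
# with vis=True the features keep their block layout: (7, 7, 2, 2, 9) for a 64x64 patch
print(features.shape)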

2) Training the classifier

An SVM classifier is used here; the code is as follows:

import time
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

t = time.time()
car_features = utils.extract_features(cars, cspace=colorspace, orient=orient,
                                      pix_per_cell=pix_per_cell, cell_per_block=cell_per_block,
                                      hog_channel=hog_channel)
notcar_features = utils.extract_features(notcars, cspace=colorspace, orient=orient,
                                         pix_per_cell=pix_per_cell, cell_per_block=cell_per_block,
                                         hog_channel=hog_channel)
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to extract features...')
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features))
X = X.astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
# Split the scaled data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
    scaled_X, y, test_size=0.2, random_state=rand_state)
print('Feature vector length:', len(X_train[0]))
# Use a linear SVC
svc = LinearSVC()
# Check the training time for the SVC
t = time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to train classifier...')
# Check the score of the SVC
print('Test Accuracy of classifier = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a single sample
t = time.time()
n_predict = 10
print('My classifier predicts: ', svc.predict(X_test[0:n_predict]))
print('For these', n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2 - t, 5), 'Seconds to predict', n_predict, 'labels with classifier')
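Since find_cars below reuses svc and X_scaler, it is convenient to persist them once training is done; a small sketch with pickle (the file name is arbitrary):

import pickle

with open('svc_model.p', 'wb') as f:
    pickle.dump({'svc': svc, 'X_scaler': X_scaler}, f)
# reload later with: data = pickle.load(open('svc_model.p', 'rb'))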

3) Vehicle detection with sliding windows

# Define a single function that extracts features with HOG sub-sampling and makes predictions
def find_cars(img, ystart, ystop, scale, cspace, hog_channel, svc, X_scaler, orient,
              pix_per_cell, cell_per_block, spatial_size, hist_bins, show_all_rectangles=False):
    # array of rectangles where cars were detected
    windows = []
    img = img.astype(np.float32) / 255
    img_tosearch = img[ystart:ystop, :, :]
    # apply color conversion if other than 'RGB'
    if cspace != 'RGB':
        if cspace == 'HSV':
            ctrans_tosearch = cv2.cvtColor(img_tosearch, cv2.COLOR_RGB2HSV)
        elif cspace == 'LUV':
            ctrans_tosearch = cv2.cvtColor(img_tosearch, cv2.COLOR_RGB2LUV)
        elif cspace == 'HLS':
            ctrans_tosearch = cv2.cvtColor(img_tosearch, cv2.COLOR_RGB2HLS)
        elif cspace == 'YUV':
            ctrans_tosearch = cv2.cvtColor(img_tosearch, cv2.COLOR_RGB2YUV)
        elif cspace == 'YCrCb':
            ctrans_tosearch = cv2.cvtColor(img_tosearch, cv2.COLOR_RGB2YCrCb)
    else:
        ctrans_tosearch = np.copy(img)
    # rescale image if other than 1.0 scale
    if scale != 1:
        imshape = ctrans_tosearch.shape
        ctrans_tosearch = cv2.resize(ctrans_tosearch,
                                     (int(imshape[1] / scale), int(imshape[0] / scale)))
    # select colorspace channel for HOG
    if hog_channel == 'ALL':
        ch1 = ctrans_tosearch[:, :, 0]
        ch2 = ctrans_tosearch[:, :, 1]
        ch3 = ctrans_tosearch[:, :, 2]
    else:
        ch1 = ctrans_tosearch[:, :, hog_channel]
    # Define blocks and steps as above
    nxblocks = (ch1.shape[1] // pix_per_cell) + 1  # -1
    nyblocks = (ch1.shape[0] // pix_per_cell) + 1  # -1
    nfeat_per_block = orient * cell_per_block ** 2
    # 64 was the original sampling rate, with 8 cells and 8 pix per cell
    window = 64
    nblocks_per_window = (window // pix_per_cell) - 1
    cells_per_step = 2  # Instead of overlap, define how many cells to step
    nxsteps = (nxblocks - nblocks_per_window) // cells_per_step
    nysteps = (nyblocks - nblocks_per_window) // cells_per_step
    # Compute individual channel HOG features for the entire image
    hog1 = utils.get_hog_features(ch1, orient, pix_per_cell, cell_per_block, feature_vec=False)
    if hog_channel == 'ALL':
        hog2 = utils.get_hog_features(ch2, orient, pix_per_cell, cell_per_block, feature_vec=False)
        hog3 = utils.get_hog_features(ch3, orient, pix_per_cell, cell_per_block, feature_vec=False)
    for xb in range(nxsteps):
        for yb in range(nysteps):
            ypos = yb * cells_per_step
            xpos = xb * cells_per_step
            # Extract HOG for this patch
            hog_feat1 = hog1[ypos:ypos + nblocks_per_window, xpos:xpos + nblocks_per_window].ravel()
            if hog_channel == 'ALL':
                hog_feat2 = hog2[ypos:ypos + nblocks_per_window, xpos:xpos + nblocks_per_window].ravel()
                hog_feat3 = hog3[ypos:ypos + nblocks_per_window, xpos:xpos + nblocks_per_window].ravel()
                hog_features = np.hstack((hog_feat1, hog_feat2, hog_feat3))
            else:
                hog_features = hog_feat1
            xleft = xpos * pix_per_cell
            ytop = ypos * pix_per_cell
            # scale the feature vector exactly as during training, then classify
            test_features = X_scaler.transform(hog_features.reshape(1, -1))
            test_prediction = svc.predict(test_features)
            if test_prediction == 1 or show_all_rectangles:
                xbox_left = int(xleft * scale)
                ytop_draw = int(ytop * scale)
                win_draw = int(window * scale)
                windows.append(((xbox_left, ytop_draw + ystart),
                                (xbox_left + win_draw, ytop_draw + win_draw + ystart)))
    return windows
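A hypothetical invocation on one video frame; the parameter values below are typical for this pipeline but are assumptions, not taken from the original post:

import matplotlib.image as mpimg

frame = mpimg.imread('test_frame.jpg')  # placeholder RGB road image
windows = find_cars(frame, ystart=400, ystop=656, scale=1.5, cspace='YUV',
                    hog_channel='ALL', svc=svc, X_scaler=X_scaler, orient=9,
                    pix_per_cell=8, cell_per_block=2,
                    spatial_size=(32, 32), hist_bins=32)
print(len(windows), 'candidate windows')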

4) Filtering false positives with a heat map

Because sliding windows of several different sizes are used and neighboring windows overlap, the same vehicle is usually captured and detected by multiple windows. This redundancy can be exploited to suppress false positives: pixels covered by many detection windows are likely a real vehicle, while isolated single-window hits can be discarded.


def add_heat(heatmap, bbox_list):
    # Iterate through list of bboxes
    for box in bbox_list:
        # Add += 1 for all pixels inside each bbox
        # Assuming each "box" takes the form ((x1, y1), (x2, y2))
        heatmap[box[0][1]:box[1][1], box[0][0]:box[1][0]] += 1
    return heatmap
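The heat map is then thresholded and the surviving connected regions are merged into one box per vehicle. A minimal sketch of that step, assuming scipy is available (the threshold value is illustrative):

import numpy as np
from scipy.ndimage import label

def filter_detections(image_shape, windows, threshold=2):
    # accumulate window hits into the heat map
    heatmap = add_heat(np.zeros(image_shape[:2], dtype=np.float32), windows)
    # drop pixels covered by too few windows (likely false positives)
    heatmap[heatmap <= threshold] = 0
    # merge connected hot regions into one bounding box each
    labeled, n_cars = label(heatmap)
    boxes = []
    for car in range(1, n_cars + 1):
        ys, xs = np.nonzero(labeled == car)
        boxes.append(((xs.min(), ys.min()), (xs.max(), ys.max())))
    return boxes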
