
RandLA-Net: Training on Your Own Dataset


Summary: point cloud classification and segmentation with RandLA-Net

The code in this post covers the full pipeline from data preprocessing to model training:

Data preprocessing

  • Point clouds are saved in PLY format.
  • KNN search is used for feature extraction and for building re-projection indices.
  • Multi-class tasks are supported (three classes in this example).

Model training

  • A custom RandLA-Net network performs point cloud classification and segmentation.
  • Key hyperparameters (learning rate, batch size, etc.) are adjusted to the size of the dataset.
  • A multi-step sampling strategy is used to improve model performance.

Technical details

  • An efficient input pipeline is built with tf.data.
  • KNN captures spatial relationships, which are combined with semantic information to improve classification accuracy.
  • Multi-GPU acceleration and several optimization strategies can be used to speed up training.

Environment

  • Path-separator differences between Windows and Ubuntu are handled.

The pipeline is aimed at analysis and modelling of 3D scene data.

This series of posts covers everything from environment configuration to C++ inference deployment. The code was originally developed against TensorFlow 1.x. Note that most mainstream GPUs today are NVIDIA 30-series parts, which only run under CUDA 11.1 and above. The series explains in detail how to set up the environment and train a custom dataset on both Windows and Ubuntu, and also provides a complete guide to integrating RandLA-Net into your own software.

This particular post covers training on your own dataset.

My data follows the Semantic3D format: [x, y, z, r, g, b, label].
Without further ado, here is the code.
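
For reference, a few lines of such a .txt cloud look like this (the numbers are made up for illustration; seven whitespace-separated columns per point):

    12.345 -3.210 1.057 128 64 32 1
    12.351 -3.198 1.062 130 66 35 1
    12.360 -3.185 1.070 25 90 40 2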

Place your data files in the data/original_data folder. I trained on Windows, where paths use the backslash '\' as the separator; on Ubuntu the default separator is the forward slash '/', so replace the backslashes with forward slashes accordingly.
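
If you would rather not hand-edit separators at all, a small sketch like the following (not part of the original scripts) builds paths with pathlib so the same code runs on both Windows and Ubuntu:

    # Sketch: let Python choose the path separator instead of hard-coding '\' or '/'
    from pathlib import Path

    dataset_path = Path('data') / 'original_data'   # placeholder location
    for pc_path in dataset_path.glob('*.txt'):
        file_name = pc_path.stem                    # file name without the .txt extension
        print(file_name)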

Create file 1

data_prepare_Ds.py

    from sklearn.neighbors import KDTree
    from os.path import join, exists, dirname, abspath
    import numpy as np
    import os, glob, pickle
    import sys

    BASE_DIR = dirname(abspath(__file__))
    ROOT_DIR = dirname(BASE_DIR)
    sys.path.append(BASE_DIR)
    sys.path.append(ROOT_DIR)
    from helper_ply import write_ply
    from helper_tool import DataProcessing as DP

    grid_size = 0.06  # my point clouds are fairly dense, so use a larger sub-sampling grid

    dataset_path = r'E:\RandLA-Net\data\original_data'
    original_pc_folder = join(dirname(dataset_path), 'original_ply')
    sub_pc_folder = join(dirname(dataset_path), 'input_{:.3f}'.format(grid_size))
    os.mkdir(original_pc_folder) if not exists(original_pc_folder) else None
    os.mkdir(sub_pc_folder) if not exists(sub_pc_folder) else None

    for pc_path in glob.glob(join(dataset_path, '*.txt')):
        print(pc_path)
        # file_name = pc_path.split('/')[-1][:-4]
        file_name = pc_path.split('\\')[-1][:-4]
        # file_name = os.path.basename(pc_path)[:-4]

        # check if it has already been computed
        if exists(join(sub_pc_folder, file_name + '_KDTree.pkl')):
            continue

        pc = DP.load_pc_ds(pc_path)
        labels = pc[:, -1].astype(np.uint8)
        # labels = np.zeros(pc.shape[0], dtype=np.uint8)
        print('len(labels):', len(labels))
        full_ply_path = join(original_pc_folder, file_name + '.ply')

        #  Subsample to save space
        # sub_points, sub_colors, sub_labels = DP.grid_sub_sampling(pc[:, :3].astype(np.float32),
        #                                                           pc[:, 3:6].astype(np.uint8), labels, 0.01)

        sub_points = pc[:, :3].astype(np.float32)
        sub_colors = pc[:, 3:6].astype(np.uint8)
        sub_labels = labels

        # sub_labels = np.squeeze(sub_labels)
        print('sub_points:', len(sub_points))
        print('sub_colors:', len(sub_colors))
        print('sub_labels:', len(sub_labels))
        write_ply(full_ply_path, (sub_points, sub_colors, sub_labels), ['x', 'y', 'z', 'red', 'green', 'blue', 'class'])

        # save sub_cloud and KDTree file
        sub_xyz, sub_colors, sub_labels = DP.grid_sub_sampling(sub_points, sub_colors, sub_labels, grid_size)
        sub_colors = sub_colors / 255.0
        # sub_labels = np.squeeze(sub_labels)
        sub_ply_file = join(sub_pc_folder, file_name + '.ply')
        write_ply(sub_ply_file, [sub_xyz, sub_colors, sub_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class'])

        search_tree = KDTree(sub_xyz, leaf_size=50)
        kd_tree_file = join(sub_pc_folder, file_name + '_KDTree.pkl')
        with open(kd_tree_file, 'wb') as f:
            pickle.dump(search_tree, f)

        proj_idx = np.squeeze(search_tree.query(sub_points, return_distance=False))
        proj_idx = proj_idx.astype(np.int32)
        proj_save = join(sub_pc_folder, file_name + '_proj.pkl')
        with open(proj_save, 'wb') as f:
            pickle.dump([proj_idx, labels], f)
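
After data_prepare_Ds.py has run, the folder next to original_data should contain original_ply plus input_0.060, with a .ply, a _KDTree.pkl and a _proj.pkl per cloud. A quick sanity-check sketch (folder and cloud names below are placeholders for your own):

    # Sketch: inspect the preprocessing output of one cloud (placeholder paths/names)
    import pickle
    from os.path import join

    sub_pc_folder = 'data/input_0.060'   # adjust to your own location
    name = 'Cloud'                       # one of your cloud names

    with open(join(sub_pc_folder, name + '_KDTree.pkl'), 'rb') as f:
        search_tree = pickle.load(f)
    with open(join(sub_pc_folder, name + '_proj.pkl'), 'rb') as f:
        proj_idx, labels = pickle.load(f)

    print('sub-sampled points:', search_tree.data.shape)   # (N_sub, 3)
    print('projection indices:', proj_idx.shape)           # one index per original point
    print('original labels:', labels.shape)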

Create file 2

main_Ds.py

    from os.path import join, exists
    from RandLANet import Network
    from tester_Ds import ModelTester
    from helper_ply import read_ply
    from helper_tool import Plot
    from helper_tool import DataProcessing as DP
    from helper_tool import ConfigDs as cfg
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
    import numpy as np
    import pickle, argparse, os
    
    
    class Ds: 
        def __init__(self):
            self.name = 'Ds'
            self.path = r'D:\RandLA-Net-master\data\Semantic3D\test'
            self.label_to_names = {0: 'background',
                                   1: 'powerline',
                                   2: 'veg'
                                   }
            self.num_classes = len(self.label_to_names)
            self.label_values = np.sort([k for k, v in self.label_to_names.items()])
            self.label_to_idx = {l: i for i, l in enumerate(self.label_values)}
            self.ignored_labels = np.sort([])

            self.original_folder = join(self.path, 'original_data')
            self.full_pc_folder = join(self.path, 'original_ply')
            self.sub_pc_folder = join(self.path, 'input_{:.3f}'.format(cfg.sub_grid_size))

            self.val_split = ['Cloud']   # 'Cloud' is the file name of a point cloud; list the clouds you want for validation here
            self.test_split = ['Cloud_center']

            # Initial training-validation-testing files
            self.train_files = []
            self.val_files = []
            self.test_files = []
            cloud_names = [file_name[:-4] for file_name in os.listdir(self.original_folder) if file_name[-4:] == '.txt']

            for pc_name in cloud_names:
                pc_file = join(self.sub_pc_folder, pc_name + '.ply')
                if pc_name in self.val_split:
                    self.val_files.append(pc_file)
                elif pc_name in self.test_split:
                    self.test_files.append(pc_file)
                else:
                    self.train_files.append(pc_file)

            # Initiate containers
            self.val_proj = []
            self.val_labels = []
            self.test_proj = []
            self.test_labels = []

            self.possibility = {}
            self.min_possibility = {}
            self.class_weight = {}
            self.input_trees = {'training': [], 'validation': [], 'test': []}
            self.input_colors = {'training': [], 'validation': [], 'test': []}
            self.input_labels = {'training': [], 'validation': []}

            self.ascii_files = {'Cloud_center.ply': 'Cloud_center.labels'}

            self.load_sub_sampled_clouds(cfg.sub_grid_size)

        def load_sub_sampled_clouds(self, sub_grid_size):

            tree_path = join(self.path, 'input_{:.3f}'.format(sub_grid_size))
            files = np.hstack((self.train_files, self.val_files, self.test_files))

            for i, file_path in enumerate(files):
                cloud_name = file_path.split('\\')[-1][:-4]
                print('Load_pc_' + str(i) + ': ' + cloud_name)
                if file_path in self.val_files:
                    cloud_split = 'validation'
                elif file_path in self.train_files:
                    cloud_split = 'training'
                else:
                    cloud_split = 'test'

                # Name of the input files
                kd_tree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name))
                sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name))

                # read ply with data
                data = read_ply(sub_ply_file)
                sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T
                if cloud_split == 'test':
                    sub_labels = None
                else:
                    sub_labels = data['class']

                # Read pkl with search tree
                with open(kd_tree_file, 'rb') as f:
                    search_tree = pickle.load(f)

                self.input_trees[cloud_split] += [search_tree]
                self.input_colors[cloud_split] += [sub_colors]
                if cloud_split in ['training', 'validation']:
                    self.input_labels[cloud_split] += [sub_labels]

            # Get validation and test re_projection indices
            print('\nPreparing reprojection indices for validation and test')

            for i, file_path in enumerate(files):

                # get cloud name and split
                cloud_name = file_path.split('\\')[-1][:-4]

                # Validation projection and labels
                if file_path in self.val_files:
                    proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name))
                    with open(proj_file, 'rb') as f:
                        proj_idx, labels = pickle.load(f)
                    self.val_proj += [proj_idx]
                    self.val_labels += [labels]

                # Test projection
                if file_path in self.test_files:
                    proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name))
                    with open(proj_file, 'rb') as f:
                        proj_idx, labels = pickle.load(f)
                    self.test_proj += [proj_idx]
                    self.test_labels += [labels]
            print('finished')
            return

        # Generate the input data flow
        def get_batch_gen(self, split):
            if split == 'training':
                num_per_epoch = cfg.train_steps * cfg.batch_size
            elif split == 'validation':
                num_per_epoch = cfg.val_steps * cfg.val_batch_size
            elif split == 'test':
                num_per_epoch = cfg.val_steps * cfg.val_batch_size

            # Reset possibility
            self.possibility[split] = []
            self.min_possibility[split] = []
            self.class_weight[split] = []

            # Random initialize
            for i, tree in enumerate(self.input_trees[split]):
                self.possibility[split] += [np.random.rand(tree.data.shape[0]) * 1e-3]
                self.min_possibility[split] += [float(np.min(self.possibility[split][-1]))]

            if split != 'test':
                _, num_class_total = np.unique(np.hstack(self.input_labels[split]), return_counts=True)
                self.class_weight[split] += [np.squeeze([num_class_total / np.sum(num_class_total)], axis=0)]

            def spatially_regular_gen():

                # Generator loop
                for i in range(num_per_epoch):  # num_per_epoch

                    # Choose the cloud with the lowest probability
                    cloud_idx = int(np.argmin(self.min_possibility[split]))

                    # choose the point with the minimum of possibility in the cloud as query point
                    point_ind = np.argmin(self.possibility[split][cloud_idx])

                    # Get all points within the cloud from tree structure
                    points = np.array(self.input_trees[split][cloud_idx].data, copy=False)

                    # Center point of input region
                    center_point = points[point_ind, :].reshape(1, -1)

                    # Add noise to the center point
                    noise = np.random.normal(scale=cfg.noise_init / 10, size=center_point.shape)
                    pick_point = center_point + noise.astype(center_point.dtype)
                    query_idx = self.input_trees[split][cloud_idx].query(pick_point, k=cfg.num_points)[1][0]

                    # Shuffle index
                    query_idx = DP.shuffle_idx(query_idx)

                    # Get corresponding points and colors based on the index
                    queried_pc_xyz = points[query_idx]
                    queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2]
                    queried_pc_colors = self.input_colors[split][cloud_idx][query_idx]
                    if split == 'test':
                        queried_pc_labels = np.zeros(queried_pc_xyz.shape[0])
                        queried_pt_weight = 1
                    else:
                        queried_pc_labels = self.input_labels[split][cloud_idx][query_idx]
                        queried_pc_labels = np.array([self.label_to_idx[l] for l in queried_pc_labels])
                        queried_pt_weight = np.array([self.class_weight[split][0][n] for n in queried_pc_labels])

                    # Update the possibility of the selected points
                    dists = np.sum(np.square((points[query_idx] - pick_point).astype(np.float32)), axis=1)
                    delta = np.square(1 - dists / np.max(dists)) * queried_pt_weight
                    self.possibility[split][cloud_idx][query_idx] += delta
                    self.min_possibility[split][cloud_idx] = float(np.min(self.possibility[split][cloud_idx]))

                    if True:
                        yield (queried_pc_xyz,
                               queried_pc_colors.astype(np.float32),
                               queried_pc_labels,
                               query_idx.astype(np.int32),
                               np.array([cloud_idx], dtype=np.int32))

            gen_func = spatially_regular_gen
            gen_types = (tf.float32, tf.float32, tf.int32, tf.int32, tf.int32)
            gen_shapes = ([None, 3], [None, 3], [None], [None], [None])
            return gen_func, gen_types, gen_shapes

        def get_tf_mapping(self):
            # Collect flat inputs
            def tf_map(batch_xyz, batch_features, batch_labels, batch_pc_idx, batch_cloud_idx):
                batch_features = tf.map_fn(self.tf_augment_input, [batch_xyz, batch_features], dtype=tf.float32)
                input_points = []
                input_neighbors = []
                input_pools = []
                input_up_samples = []

                for i in range(cfg.num_layers):
                    neigh_idx = tf.py_func(DP.knn_search, [batch_xyz, batch_xyz, cfg.k_n], tf.int32)
                    sub_points = batch_xyz[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :]
                    pool_i = neigh_idx[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :]
                    up_i = tf.py_func(DP.knn_search, [sub_points, batch_xyz, 1], tf.int32)
                    input_points.append(batch_xyz)
                    input_neighbors.append(neigh_idx)
                    input_pools.append(pool_i)
                    input_up_samples.append(up_i)
                    batch_xyz = sub_points

                input_list = input_points + input_neighbors + input_pools + input_up_samples
                input_list += [batch_features, batch_labels, batch_pc_idx, batch_cloud_idx]

                return input_list

            return tf_map

        @staticmethod
        def tf_augment_input(inputs):
            xyz = inputs[0]
            features = inputs[1]
            theta = tf.random_uniform((1,), minval=0, maxval=2 * np.pi)
            # Rotation matrices
            c, s = tf.cos(theta), tf.sin(theta)
            cs0 = tf.zeros_like(c)
            cs1 = tf.ones_like(c)
            R = tf.stack([c, -s, cs0, s, c, cs0, cs0, cs0, cs1], axis=1)
            stacked_rots = tf.reshape(R, (3, 3))

            # Apply rotations
            transformed_xyz = tf.reshape(tf.matmul(xyz, stacked_rots), [-1, 3])
            # Choose random scales for each example
            min_s = cfg.augment_scale_min
            max_s = cfg.augment_scale_max
            if cfg.augment_scale_anisotropic:
                s = tf.random_uniform((1, 3), minval=min_s, maxval=max_s)
            else:
                s = tf.random_uniform((1, 1), minval=min_s, maxval=max_s)

            symmetries = []
            for i in range(3):
                if cfg.augment_symmetries[i]:
                    symmetries.append(tf.round(tf.random_uniform((1, 1))) * 2 - 1)
                else:
                    symmetries.append(tf.ones([1, 1], dtype=tf.float32))
            s *= tf.concat(symmetries, 1)

            # Create N x 3 vector of scales to multiply with stacked_points
            stacked_scales = tf.tile(s, [tf.shape(transformed_xyz)[0], 1])

            # Apply scales
            transformed_xyz = transformed_xyz * stacked_scales

            noise = tf.random_normal(tf.shape(transformed_xyz), stddev=cfg.augment_noise)
            transformed_xyz = transformed_xyz + noise
            rgb = features[:, :3]
            stacked_features = tf.concat([transformed_xyz, rgb], axis=-1)
            return stacked_features

        def init_input_pipeline(self):
            print('Initiating input pipelines')
            cfg.ignored_label_inds = [self.label_to_idx[ign_label] for ign_label in self.ignored_labels]
            gen_function, gen_types, gen_shapes = self.get_batch_gen('training')
            gen_function_val, _, _ = self.get_batch_gen('validation')
            gen_function_test, _, _ = self.get_batch_gen('test')
            self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes)
            self.val_data = tf.data.Dataset.from_generator(gen_function_val, gen_types, gen_shapes)
            self.test_data = tf.data.Dataset.from_generator(gen_function_test, gen_types, gen_shapes)

            self.batch_train_data = self.train_data.batch(cfg.batch_size)
            self.batch_val_data = self.val_data.batch(cfg.val_batch_size)
            self.batch_test_data = self.test_data.batch(cfg.val_batch_size)
            map_func = self.get_tf_mapping()

            self.batch_train_data = self.batch_train_data.map(map_func=map_func)
            self.batch_val_data = self.batch_val_data.map(map_func=map_func)
            self.batch_test_data = self.batch_test_data.map(map_func=map_func)

            self.batch_train_data = self.batch_train_data.prefetch(cfg.batch_size)
            self.batch_val_data = self.batch_val_data.prefetch(cfg.val_batch_size)
            self.batch_test_data = self.batch_test_data.prefetch(cfg.val_batch_size)

            iter = tf.data.Iterator.from_structure(self.batch_train_data.output_types, self.batch_train_data.output_shapes)
            self.flat_inputs = iter.get_next()
            self.train_init_op = iter.make_initializer(self.batch_train_data)
            self.val_init_op = iter.make_initializer(self.batch_val_data)
            self.test_init_op = iter.make_initializer(self.batch_test_data)
    
    
    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--gpu', type=int, default=0, help='the number of GPUs to use [default: 0]')
        parser.add_argument('--mode', type=str, default='train', help='options: train, test, vis')
        parser.add_argument('--model_path', type=str, default='None', help='pretrained model path')
        FLAGS = parser.parse_args()

        GPU_ID = FLAGS.gpu
        os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
        os.environ['CUDA_VISIBLE_DEVICES'] = str(GPU_ID)
        os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

        Mode = FLAGS.mode
        dataset = Ds()
        dataset.init_input_pipeline()

        if Mode == 'train':
            model = Network(dataset, cfg)
            model.train(dataset)
        elif Mode == 'test':
            cfg.saving = False
            model = Network(dataset, cfg)
            if FLAGS.model_path != 'None':
                chosen_snap = FLAGS.model_path
            else:
                chosen_snapshot = -1
                logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')])
                chosen_folder = logs[-1]
                snap_path = join(chosen_folder, 'snapshots')
                snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta']
                chosen_step = np.sort(snap_steps)[-1]
                chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step))
            tester = ModelTester(model, dataset, restore_snap=chosen_snap)
            tester.test(model, dataset)

        else:
            with tf.Session() as sess:
                sess.run(tf.global_variables_initializer())
                sess.run(dataset.train_init_op)
                # print(sess.run())
                while True:
                    flat_inputs = sess.run(dataset.flat_inputs)
                    # print('flat_inputs:', flat_inputs)
                    pc_xyz = flat_inputs[0]
                    print('pc_xyz:', pc_xyz)
                    sub_pc_xyz = flat_inputs[1]
                    print('sub_pc_xyz:', sub_pc_xyz)
                    labels = flat_inputs[21]
                    print('labels:', labels)
                    Plot.draw_pc_sem_ins(pc_xyz[0, :, :], labels[0, :])
                    Plot.draw_pc_sem_ins(sub_pc_xyz[0, :, :], labels[0, 0:np.shape(sub_pc_xyz)[1]])
Note this line:

    self.ignored_labels = np.sort([])

Compared with the original Semantic3D setup, I not only added my own classes but also included class 0 in training. If you would rather not train on class 0, change this to: self.ignored_labels = np.sort([0])
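
Side by side, the two options look like this:

    # Train on all three classes, including class 0 (what this post does)
    self.ignored_labels = np.sort([])

    # Keep class 0 in the data but exclude it from training and evaluation
    self.ignored_labels = np.sort([0])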

Create file 3

tester_Ds.py

    from os import makedirs
    from os.path import exists, join
    from helper_ply import read_ply, write_ply
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
    import numpy as np
    import time
    
    
    def log_string(out_str, log_out):
        log_out.write(out_str + '\n')
        log_out.flush()
        print(out_str)
    
    
    class ModelTester:
        def __init__(self, model, dataset, restore_snap=None):
            # Tensorflow Saver definition
            my_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
            self.saver = tf.train.Saver(my_vars, max_to_keep=100)

            # Create a session for running Ops on the Graph.
            on_cpu = False
            if on_cpu:
                c_proto = tf.ConfigProto(device_count={'GPU': 0})
            else:
                c_proto = tf.ConfigProto()
                c_proto.gpu_options.allow_growth = True
            self.sess = tf.Session(config=c_proto)
            self.sess.run(tf.global_variables_initializer())

            if restore_snap is not None:
                self.saver.restore(self.sess, restore_snap)
                print("Model restored from " + restore_snap)

            # Add a softmax operation for predictions
            self.prob_logits = tf.nn.softmax(model.logits)
            self.test_probs = [np.zeros((l.data.shape[0], model.config.num_classes), dtype=np.float16)
                               for l in dataset.input_trees['test']]

            self.log_out = open('log_test_' + dataset.name + '.txt', 'a')

        def test(self, model, dataset, num_votes=100):

            # Smoothing parameter for votes
            test_smooth = 0.98

            # Initialise iterator with train data
            self.sess.run(dataset.test_init_op)

            # Test saving path
            saving_path = time.strftime('results\\Log_%Y-%m-%d_%H-%M-%S', time.gmtime())
            test_path = join('test', saving_path.split('\\')[-1])
            makedirs(test_path) if not exists(test_path) else None
            makedirs(join(test_path, 'predictions')) if not exists(join(test_path, 'predictions')) else None
            makedirs(join(test_path, 'probs')) if not exists(join(test_path, 'probs')) else None

            #####################
            # Network predictions
            #####################

            step_id = 0
            epoch_id = 0
            last_min = -0.5

            while last_min < num_votes:

                try:
                    ops = (self.prob_logits,
                           model.labels,
                           model.inputs['input_inds'],
                           model.inputs['cloud_inds'],)

                    stacked_probs, stacked_labels, point_idx, cloud_idx = self.sess.run(ops, {model.is_training: False})
                    stacked_probs = np.reshape(stacked_probs, [model.config.val_batch_size, model.config.num_points,
                                                               model.config.num_classes])

                    for j in range(np.shape(stacked_probs)[0]):
                        probs = stacked_probs[j, :, :]
                        inds = point_idx[j, :]
                        c_i = cloud_idx[j][0]
                        self.test_probs[c_i][inds] = test_smooth * self.test_probs[c_i][inds] + (1 - test_smooth) * probs
                    step_id += 1
                    log_string('Epoch {:3d}, step {:3d}. min possibility = {:.1f}'.format(epoch_id, step_id, np.min(
                        dataset.min_possibility['test'])), self.log_out)

                except tf.errors.OutOfRangeError:

                    # Save predicted cloud
                    new_min = np.min(dataset.min_possibility['test'])
                    log_string('Epoch {:3d}, end. Min possibility = {:.1f}'.format(epoch_id, new_min), self.log_out)

                    if last_min + 4 < new_min:

                        print('Saving clouds')

                        # Update last_min
                        last_min = new_min

                        # Project predictions
                        print('\nReproject Vote #{:d}'.format(int(np.floor(new_min))))
                        t1 = time.time()
                        files = dataset.test_files
                        i_test = 0
                        for i, file_path in enumerate(files):
                            # Get file
                            points = self.load_evaluation_points(file_path)
                            points = points.astype(np.float16)

                            # Reproject probs
                            probs = np.zeros(shape=[np.shape(points)[0], 8], dtype=np.float16)
                            proj_index = dataset.test_proj[i_test]

                            probs = self.test_probs[i_test][proj_index, :]

                            # Insert false columns for ignored labels
                            probs2 = probs
                            for l_ind, label_value in enumerate(dataset.label_values):
                                if label_value in dataset.ignored_labels:
                                    probs2 = np.insert(probs2, l_ind, 0, axis=1)

                            # Get the predicted labels
                            preds = dataset.label_values[np.argmax(probs2, axis=1)].astype(np.uint8)

                            # Save plys
                            cloud_name = file_path.split('\\')[-1]

                            # Save ascii preds
                            ascii_name = join(test_path, 'predictions', dataset.ascii_files[cloud_name])
                            np.savetxt(ascii_name, preds, fmt='%d')
                            log_string(ascii_name + ' has been saved', self.log_out)
                            i_test += 1

                        t2 = time.time()
                        print('Done in {:.1f} s\n'.format(t2 - t1))
                        self.sess.close()
                        return

                    self.sess.run(dataset.test_init_op)
                    epoch_id += 1
                    step_id = 0
                    continue
            return

        @staticmethod
        def load_evaluation_points(file_path):
            data = read_ply(file_path)
            return np.vstack((data['x'], data['y'], data['z'])).T

Modify an existing file

Open helper_tool.py and add the following function to class DataProcessing:

    @staticmethod
    def load_pc_ds(filename):
        # plain-text cloud: one point per row, [x, y, z, r, g, b, label]
        # relies on pandas (pd) and numpy (np), both already used in helper_tool.py
        pc_pd = pd.read_csv(filename, header=None, delim_whitespace=True, dtype=np.float32)
        pc = pc_pd.values
        return pc
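
A quick way to check that the loader and your files agree (a sketch; the path is a placeholder): each cloud should come back with seven columns, with the label in the last one.

    # Sketch: verify a cloud loads with the expected [x, y, z, r, g, b, label] layout
    import numpy as np
    from helper_tool import DataProcessing as DP

    pc = DP.load_pc_ds('data/original_data/Cloud.txt')   # placeholder path
    print(pc.shape)                                      # expected: (num_points, 7)
    print(np.unique(pc[:, -1]))                          # label values, e.g. [0. 1. 2.]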

Still in helper_tool.py, add a branch for this dataset to DataProcessing.get_class_weights, giving the total point count of each class (class 0, class 1, class 2):

    elif dataset_name == 'Ds':
        num_per_class = np.array([737003434, 21019427, 1413965137])
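
These numbers are the total point counts of each class over my training clouds. If you need to recompute them for your own data, something along these lines works (paths are placeholders; labels are assumed to be 0, 1, 2):

    # Sketch: count points per class over all training .txt clouds
    import glob
    import numpy as np
    from helper_tool import DataProcessing as DP

    counts = np.zeros(3, dtype=np.int64)                 # three classes: 0, 1, 2
    for txt in glob.glob('data/original_data/*.txt'):    # placeholder path
        labels = DP.load_pc_ds(txt)[:, -1].astype(np.int64)
        counts += np.bincount(labels, minlength=3)       # assumes labels are only 0/1/2
    print(counts)                                        # paste these into num_per_class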

Then add a configuration class for this dataset:

    class ConfigDs:
        k_n = 16  # KNN
        num_layers = 5  # Number of layers
        # num_points = 100  # Number of input points
        num_points = 65536  # Number of input points
        num_classes = 3  # Number of valid classes
        sub_grid_size = 0.06  # preprocess_parameter

        batch_size = 2  # batch_size during training
        val_batch_size = 4  # batch_size during validation and test
        train_steps = 500  # Number of steps per epoch
        val_steps = 100  # Number of validation steps per epoch

        sub_sampling_ratio = [4, 4, 4, 4, 2]  # sampling ratio of random sampling at each layer
        d_out = [16, 64, 128, 256, 512]  # feature dimension

        noise_init = 3.5  # noise initial parameter
        max_epoch = 100  # maximum epoch during training
        learning_rate = 1e-2  # initial learning rate
        # learning_rate = 0.1  # initial learning rate
        lr_decays = {i: 0.95 for i in range(0, 500)}  # decay rate of learning rate

        train_sum_dir = 'train_log'
        saving = True
        saving_path = None

        augment_scale_anisotropic = True
        augment_symmetries = [True, False, False]
        augment_rotation = 'vertical'
        augment_scale_min = 0.8
        augment_scale_max = 1.2
        augment_noise = 0.001
        augment_occlusion = 'none'
        augment_color = 0.8
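
When you change num_points or sub_sampling_ratio, it is worth checking how many points survive each of the 5 layers, since each layer keeps only the first 1/ratio of the previous one (see the slicing in get_tf_mapping above). A tiny sketch of the arithmetic:

    # Sketch: points remaining after random sampling at each layer with the config above
    num_points = 65536
    sub_sampling_ratio = [4, 4, 4, 4, 2]

    n = num_points
    for i, ratio in enumerate(sub_sampling_ratio):
        n = n // ratio
        print('layer', i, '->', n, 'points')   # 16384, 4096, 1024, 256, 128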

Start training

Before running data_prepare_Ds.py, make sure the paths are set correctly. On Ubuntu, double-check that you are using the right separator (forward slashes, not backslashes). Then open a terminal and run:

    python main_Ds.py --mode train --gpu 0
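
Once training has written snapshots under results, testing the clouds listed in test_split works the same way; if --model_path is not given, the script picks the most recent snapshot automatically:

    python main_Ds.py --mode test --gpu 0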

For visualization, refer to the previous post.
