Understanding the Codes and Data
Code that helps Reproducible Research:
https://mloss.org/software/
Machine Learning Open Source Software
This link points to https://paperswithcode.com/task/variable-selection and its related resources
http://manaai.cn/index3.html
AI Algorithm Marketplace
Data that helps Reproducible Implementation:
http://www.svcl.ucsd.edu/projects/universal-detection/ This is the benchmark introduced in the CVPR 2019 paper Towards Universal Object Detection by Domain Attention [1]. The goal of this benchmark is to encourage the design of universal object detection systems capable of solving various detection tasks. To train and evaluate universal/multi-domain object detection systems, we established a new universal object detection benchmark (UODB) of 11 datasets:
1. Pascal VOC[2]
2. WiderFace[3]
3. KITTI[4]
4. LISA[5]
5. DOTA[6]
6. COCO[7]
7. Watercolor[8]
8. Clipart[8]
9. Comic[8]
10. Kitchen[9]
11. DeepLesions[10].
This set includes the popular VOC and COCO, composed of images of everyday objects, e.g. bikes, humans, animals, etc. The 20 VOC categories are replicated in CrossDomain with three subsets, Watercolor, Clipart and Comic, with objects depicted in watercolor, clipart and comic styles, respectively. Kitchen consists of common kitchen objects collected with a hand-held Kinect, while WiderFace contains human faces collected from the web. Both KITTI and LISA depict traffic scenes, collected with cameras mounted on moving vehicles: KITTI covers the categories of vehicle, pedestrian and cyclist, while LISA is composed of traffic signs. DOTA is a surveillance-style dataset containing objects such as vehicles, planes, ships and harbors imaged from aerial cameras. Finally, DeepLesion is a dataset of lesions in medical CT images. Altogether, UODB covers a wide range of variations in category, camera view, image style, etc., and thus establishes a good suite for the evaluation of universal/multi-domain object detection.
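As an illustration of how such a multi-domain suite can be handled in practice, the sketch below registers each domain with its own image root, annotation file, and category list, and loops over all domains at evaluation time. The registry layout, the placeholder paths, and the `detector.evaluate` call are assumptions made for illustration; they are not the actual UODB tooling.

```python
# Hypothetical sketch (not the official UODB code): one way to organize a
# multi-domain detection benchmark as a registry of datasets.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DetectionDomain:
    name: str             # dataset name, e.g. "Pascal VOC"
    image_root: str       # directory with images (placeholder paths below)
    annotation_file: str  # annotation file (assumed COCO-style JSON)
    categories: List[str]


# Illustrative registry; paths and category lists are placeholders.
UODB_REGISTRY: Dict[str, DetectionDomain] = {
    "voc": DetectionDomain("Pascal VOC", "data/voc/images",
                           "data/voc/annotations.json",
                           ["person", "car", "dog"]),   # truncated list
    "widerface": DetectionDomain("WiderFace", "data/widerface/images",
                                 "data/widerface/annotations.json", ["face"]),
    "kitti": DetectionDomain("KITTI", "data/kitti/images",
                             "data/kitti/annotations.json",
                             ["vehicle", "pedestrian", "cyclist"]),
    # ... the remaining domains (LISA, DOTA, COCO, Watercolor, Clipart,
    # Comic, Kitchen, DeepLesion) would be registered the same way.
}


def evaluate_all_domains(detector, registry: Dict[str, DetectionDomain]) -> Dict[str, float]:
    """Run one detector over every domain and collect a per-domain score.

    `detector.evaluate` is a stand-in for whatever evaluation routine
    (e.g. mAP computation) a real codebase provides.
    """
    results = {}
    for key, domain in registry.items():
        results[key] = detector.evaluate(domain.image_root,
                                         domain.annotation_file,
                                         domain.categories)
    return results
```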
FAN: Feature Adaptation Network for Surveillance Face Recognition and Normalisation
Disentangling Monocular 3D Object Detection
KITTI-nuScenes
DAWN: Vehicle Detection in Adverse Weather Nature Dataset
https://ccv.wordpress.fos.auckland.ac.nz/eisats/set-10/
Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions (2015): iROADS Dataset (Intercity Roads and Adverse Driving Conditions)
Virtual KITTI 2, by Yohann Cabon, Naila Murray, and Martin Humenberger, arXiv:2001.10773, 2020.
The real KITTI dataset contains images captured by cameras in real-world settings and is commonly used for depth evaluation. The synthetic Virtual KITTI dataset features scenes rendered in daylight, at night, and under diverse weather conditions such as fog and rain. The real-world benchmark provides sparse ground-truth depth alongside semantic segmentation labels, whereas the synthetic dataset offers dense ground truth for both depth and semantic segmentation. During network training, we utilized the dense depth labels together with the realistic images from both datasets to enhance performance.
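A minimal PyTorch-style sketch of this mixed supervision, assuming a masked L1 depth loss: sparse real-world ground truth contributes only at pixels with valid measurements, while dense synthetic ground truth supervises every pixel. The function names, batch keys, and loss weights are hypothetical and not taken from the paper's code.

```python
# Assumed sketch of mixing real (sparse-GT) and synthetic (dense-GT) depth
# supervision; not the authors' actual training code.
import torch


def depth_loss(pred: torch.Tensor, gt: torch.Tensor, valid_mask: torch.Tensor) -> torch.Tensor:
    """L1 depth loss restricted to pixels where ground truth exists.

    For dense (synthetic) ground truth the mask is all ones; for sparse
    (real) ground truth it marks only the measured pixels.
    """
    mask = valid_mask.float()
    diff = torch.abs(pred - gt) * mask
    return diff.sum() / mask.sum().clamp(min=1.0)


def mixed_batch_loss(model, real_batch, synth_batch, w_real=1.0, w_synth=1.0):
    """Combine the loss from one real batch and one synthetic batch."""
    real_pred = model(real_batch["image"])
    synth_pred = model(synth_batch["image"])
    loss_real = depth_loss(real_pred, real_batch["depth"],
                           real_batch["depth"] > 0)          # sparse GT: valid where depth > 0
    loss_synth = depth_loss(synth_pred, synth_batch["depth"],
                            torch.ones_like(synth_batch["depth"]))  # dense GT: all pixels valid
    return w_real * loss_real + w_synth * loss_synth
```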
