Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution
1. Download the project
git clone https://github.com/compphoto/BoostingMonocularDepth.git
2. Create the environment
conda create -n HighResDepth python=3.7
conda activate HighResDepth
3. Experiments
Download the mergenet model weights (the download link is given in the project README).
3.1 To use MiDaS-v2 or LeReS as base
conda install pytorch torchvision opencv cudatoolkit=10.2 -c pytorch
conda install matplotlib
conda install scipy
conda install scikit-image
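With the dependencies installed, a quick import check (my own sketch, not part of the repository) confirms that the environment was set up correctly:
# Sanity check for the HighResDepth environment (illustrative, not from the repo).
import torch, cv2, matplotlib, scipy, skimage
print("torch:", torch.__version__)
print("opencv:", cv2.__version__)
print("matplotlib:", matplotlib.__version__)
print("scipy:", scipy.__version__)
print("scikit-image:", skimage.__version__)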
[Related downloads]
- For MiDaS-v2, download the model weights from the MiDaS-v2 link; download model.pt directly.
- For LeReS, download the model weights from the LeReS (ResNeXt101) link; download the ResNeXt101 checkpoint directly.
3.2 To use SGRnet as base
[Related downloads]
- For SGRnet, download the model weights from the SGRnet link; download model.pth.tar directly.
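Once the weight files are downloaded, a short script like the one below can verify that each checkpoint is readable by torch. The file names and locations here are assumptions; place and rename the files according to the repository README.
# Illustrative check that the downloaded weight files are valid torch checkpoints.
# File paths are assumptions -- adjust to wherever the README tells you to put them.
import torch
for ckpt in ["midas/model.pt", "res101.pth", "model.pth.tar"]:
    try:
        torch.load(ckpt, map_location="cpu")
        print(ckpt, "loaded OK")
    except FileNotFoundError:
        print(ckpt, "not found -- check the download location")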
[Notes]
- Confirm the configuration requirements beforehand, especially the torch version; cross-check it against the requirements file.
cd BoostingMonocularDepth
pip uninstall torch
conda uninstall pytorch
conda uninstall libtorch
python
import torch
print(torch.__version__)  # note: double underscores on each side of "version"
Check the PyTorch previous-versions page and install the version that matches the requirements.
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
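After reinstalling, it is worth confirming that the torch build actually matches the cudatoolkit you requested; the following lines (a sketch, not from the repo) print the relevant versions:
# Check that the installed torch was built against the expected CUDA toolkit.
import torch
print("torch:", torch.__version__)            # e.g. 1.2.0 if the command above was used
print("built for CUDA:", torch.version.cuda)  # e.g. 10.0
print("GPU visible:", torch.cuda.is_available())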
[Gradio] Any package installation is done either with conda install or with pip install.
https://zhuanlan.zhihu.com/p/374238080
python run.py --Final --max_res 2000 --data_dir inputs/ --output_dir outputs_midas/ --depthNet 0
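The --depthNet flag selects the base network; according to the repository README, 0 corresponds to MiDaS, 1 to SGRnet, and 2 to LeReS (verify against your checkout). A small driver such as the sketch below can run the same inputs through each base and write results to separate folders; the output folder names are my own choice.
# Illustrative driver: boost the same inputs with each supported base network.
# depthNet values follow the repository README (0: MiDaS, 1: SGRnet, 2: LeReS); verify locally.
import subprocess
bases = {0: "outputs_midas/", 1: "outputs_sgrnet/", 2: "outputs_leres/"}
for depth_net, out_dir in bases.items():
    subprocess.run(
        ["python", "run.py", "--Final", "--max_res", "2000",
         "--data_dir", "inputs/", "--output_dir", out_dir,
         "--depthNet", str(depth_net)],
        check=True,
    )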
Results in practice
