Creating a Digital Human with Alibaba Cloud's AI Platform PAI
Try Alibaba Cloud PAI:
PAI-DSW free trial
https://free.aliyun.com/?spm=5176.14066474.J_5834642020.5.7b34754cmRbYhg&productCode=learn
Try PAI-DSW
https://help.aliyun.com/document_detail/2261126.html

An anime-style digital human built on Wav2Lip + Thin-Plate-Spline-Motion-Model + CodeFormer
Simply provide one anime portrait image and the text you want it to speak: the pipeline outputs a video in which the character speaks that text with human-like mouth and head motion.
For anime character modeling, see the official EasyPhoto repository and the stable_diffusion_easyphoto example, among others.
For speech generation, see the ai_singer_rvc and ai_singer_svc examples.
Environment setup
Clone the open-source repositories (cloning over an unstable network fails easily; retry a few times if necessary).
# Note: The Wav2Lip repository is not licensed for commercial use; it is used here for teaching only. Stay within the law and do not use it for illegal purposes.
!git clone https://github.com/Rudrabha/Wav2Lip.git
!git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git
!git clone https://github.com/sczhou/CodeFormer.git
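Since the tutorial warns that cloning can fail on an unstable network, a small retry loop saves re-running cells by hand. This is a hypothetical helper (`run_with_retry` is not part of any of the repositories above), sketched for convenience:

```python
import subprocess

def run_with_retry(cmd, attempts=3):
    """Run `cmd`; return True on the first zero exit code, False after `attempts` failures."""
    for i in range(attempts):
        if subprocess.run(cmd).returncode == 0:
            return True
        print(f'attempt {i + 1} of {attempts} failed, retrying...')
    return False

# e.g. run_with_retry(['git', 'clone', 'https://github.com/Rudrabha/Wav2Lip.git'])
```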
Comment out the import in CodeFormer that raises an error:
!sed -i 's/from .version/# from .version/' CodeFormer/basicsr/__init__.py
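If `sed` is unavailable (for example on a non-Linux host), the same one-line patch can be applied in Python. This is a sketch; `comment_out_version_import` is not part of CodeFormer:

```python
def comment_out_version_import(path):
    """Prefix the `from .version` import in `path` with `#`, mirroring the sed command above."""
    with open(path, encoding='utf-8') as f:
        src = f.read()
    with open(path, 'w', encoding='utf-8') as f:
        f.write(src.replace('from .version', '# from .version'))

# comment_out_version_import('CodeFormer/basicsr/__init__.py')
```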
Initialize the working directories:
import os.path as osp
WORKDIR = osp.abspath('.')
print(f'work directory: {WORKDIR}')
WAV2LIP_WORKDIR = osp.join(WORKDIR, 'Wav2Lip')
print(f'wav2lip directory: {WAV2LIP_WORKDIR}')
CODEFORMER_WORKDIR = osp.join(WORKDIR, 'CodeFormer')
print(f'codeformer directory: {CODEFORMER_WORKDIR}')
MOTION_MODEL_WORKDIR = osp.join(WORKDIR, 'Thin-Plate-Spline-Motion-Model')
print(f'motion model directory: {MOTION_MODEL_WORKDIR}')
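Before moving on, it can help to confirm that all three repositories were actually cloned. This sanity check is a hypothetical addition, not part of the original tutorial:

```python
import os.path as osp

def check_repos(workdir='.'):
    """Return {repo_name: exists} for the three repositories cloned above."""
    repos = ('Wav2Lip', 'CodeFormer', 'Thin-Plate-Spline-Motion-Model')
    return {r: osp.isdir(osp.join(workdir, r)) for r in repos}

for repo, ok in check_repos().items():
    print(f"{repo}: {'ok' if ok else 'MISSING -- re-run git clone'}")
```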
Install the dependency packages
!pip install --upgrade pip && \
pip install -r {MOTION_MODEL_WORKDIR}/requirements.txt --extra-index-url https://download.pytorch.org/whl/cu113/ && \
pip install -r {CODEFORMER_WORKDIR}/requirements.txt && \
pip install modelscope==1.10.0 pytorch_wavelets tensorboardX && \
pip install kantts -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html && \
pip install librosa==0.8.0 resampy --no-deps && \
pip install sentencepiece && pip install --upgrade transformers && \
sudo apt update -y && sudo apt install -y ffmpeg
If the installation fails with cannot import name 'kaiser' from 'scipy.signal', downgrade scipy to resolve it: run the following command in a terminal, then re-run the installation step.
pip install --upgrade scipy==1.7.3
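To check in advance whether the installed SciPy still exposes `kaiser` (newer releases dropped it from `scipy.signal`, which is what triggers the error above), a quick probe can be run before deciding to downgrade. The helper below is illustrative, not part of the tutorial:

```python
def scipy_has_kaiser():
    """Return True if `scipy.signal.kaiser` is importable in the current environment."""
    try:
        from scipy.signal import kaiser  # noqa: F401
        return True
    except ImportError:  # also covers scipy not being installed at all
        return False

if not scipy_has_kaiser():
    print('scipy.signal.kaiser is unavailable; run: pip install --upgrade scipy==1.7.3')
```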
Download the run scripts
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/codes/run.py -O run.py
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/codes/utils.py -O utils.py
Pretrained model preparation
Download the Wav2Lip models and save them to the expected directories.
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/wav2lip/wav2lip.pth -O {WAV2LIP_WORKDIR}/checkpoints/wav2lip.pth
!mkdir -p ~/.cache/torch/hub/checkpoints/
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/wav2lip/s3fd-619a316812.pth -O ~/.cache/torch/hub/checkpoints/s3fd-619a316812.pth
Download the Thin-Plate-Spline-Motion-Model checkpoint and save it to the expected directory.
!mkdir -p {MOTION_MODEL_WORKDIR}/checkpoints
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/Thin-Plate-Spline-Motion-Model/vox.pth.tar -O {MOTION_MODEL_WORKDIR}/checkpoints/vox.pth.tar
Download the CodeFormer models and save them to the expected directories.
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/codeformer/CodeFormer/codeformer.pth -O {CODEFORMER_WORKDIR}/weights/CodeFormer/codeformer.pth
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/codeformer/facelib/detection_Resnet50_Final.pth -O {CODEFORMER_WORKDIR}/weights/facelib/detection_Resnet50_Final.pth
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/codeformer/facelib/parsing_parsenet.pth -O {CODEFORMER_WORKDIR}/weights/facelib/parsing_parsenet.pth
!mkdir -p {CODEFORMER_WORKDIR}/weights/realesrgan
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/pretrained_models/codeformer/realesrgan/RealESRGAN_x2plus.pth -O {CODEFORMER_WORKDIR}/weights/realesrgan/RealESRGAN_x2plus.pth
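An interrupted download leaves a missing or zero-byte file that only surfaces as a confusing error later, so a pre-flight check is useful before running the pipeline. This is a hypothetical helper; the paths are taken from the wget commands above:

```python
import os

def missing_weights(paths):
    """Return the subset of `paths` that do not exist or are empty files."""
    return [p for p in paths if not (os.path.isfile(p) and os.path.getsize(p) > 0)]

EXPECTED = [
    'Wav2Lip/checkpoints/wav2lip.pth',
    'Thin-Plate-Spline-Motion-Model/checkpoints/vox.pth.tar',
    'CodeFormer/weights/CodeFormer/codeformer.pth',
    'CodeFormer/weights/facelib/detection_Resnet50_Final.pth',
    'CodeFormer/weights/facelib/parsing_parsenet.pth',
    'CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth',
]
for p in missing_weights(EXPECTED):
    print(f'missing or empty: {p}')
```

Re-running the corresponding wget cell (with `-nc` removed) refreshes any file this flags.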
Run
Download the test image and the driving video:
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/imgs/anime_portrait.png -O anime_portrait.png
!wget -nc http://pai-vision-data-hz.oss-cn-zhangjiakou.aliyuncs.com/projects/liveportrait/videos/drive_video.mp4 -O drive_video.mp4
Preview the test image and the driving video:
from IPython.display import Image, display, Video
image_path = './anime_portrait.png'
raw_photo = Image(image_path)
display(raw_photo)
video_path = 'drive_video.mp4'
Video(video_path)
Generate the video:
Speech can be generated automatically from the --text argument; alternatively, choose a speech-generation model from the ModelScope community and pass it via the --tts_model argument, or load an audio file directly with the --audio argument.
By default, this example uses the open-source Thin-Plate-Spline-Motion-Model repository in relative mode to generate motion. For best results, make sure the pose and mouth shape in the portrait image closely match those in the first frame of the driving video.
!python run.py --raw_photo anime_portrait.png --drive_video drive_video.mp4 --out outputs --text "I think It is possible for ordinary people to choose to be extraordinary"
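The three input modes above share one command line; a small helper makes the variants explicit. Only the `--text`, `--tts_model`, and `--audio` flags come from this tutorial; `build_command` itself is an illustrative sketch:

```python
import shlex

def build_command(raw_photo, drive_video, out, text=None, tts_model=None, audio=None):
    """Assemble a run.py invocation for one of the input modes described above."""
    cmd = ['python', 'run.py', '--raw_photo', raw_photo,
           '--drive_video', drive_video, '--out', out]
    if text is not None:
        cmd += ['--text', text]
    if tts_model is not None:
        cmd += ['--tts_model', tts_model]
    if audio is not None:
        cmd += ['--audio', audio]
    return cmd

# e.g. drive the avatar with an existing recording instead of generated speech:
print(shlex.join(build_command('anime_portrait.png', 'drive_video.mp4', 'outputs',
                               audio='my_voice.wav')))
```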
Visualize the result:
from IPython.display import Image, display, Video
video_path = 'outputs/final_result.mp4'
Video(video_path)
Result video
Link: https://pan.baidu.com/s/1eo_udVUg2M6SR-khbrWxKA?pwd=z76z
Extraction code: z76z
