Written by Peihuan Wu, Jinghong Lin, Yutao Liao, Wei Qing and Yan Xu, including the normalization and face-enhancement parts.

We train and evaluate on Ubuntu 16.04, so if you don't have a Linux environment, you can set `nThreads=0` in `EverybodyDanceNow_reproduce_pytorch/src/config/train_opt.py` (see the config sketch after the steps below).

See also: Lotayou/everybody_dance_now_pytorch.

### Pre-trained models and source video

* Download here and put it in `./src/pix2pixHD/models/`.
* Download `pose_model.pth` here and put it in `./src/PoseEstimation/network/weight/`.
* Download the pre-trained vgg_16 for face enhancement here and put it in the face enhancer's directory.

### Full process

1. **Make source pictures.** Put your source video `mv.mp4` in `./data/source/` and run `make_source.py`; the label images and head coordinates will be saved in `./data/source/pose_souce.npy` (used in step 6). If you want to capture the video by camera, you can run the capture script directly (a minimal stand-in is sketched below).
2. **Make target pictures.** Rename your own target video as `mv.mp4`, put it in `./data/target/`, and run `make_target.py`; `pose.npy` will be saved in `./data/target/`, which contains the face coordinates (used in step 6; a quick load check is sketched below).
3. **Train the pose2vid network.** Run `train_pose2vid.py` and check the loss and the full training process.
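
For the camera option in step 1, the repo ships its own capture utility; if you just need a quick stand-in, a minimal OpenCV sketch like the following records `mv.mp4` from the default webcam. The 30 FPS rate, the `mp4v` codec, and the output path are assumptions here, not values taken from the repo.

```python
import cv2

# Hypothetical stand-in for the repo's camera-capture script (step 1).
cap = cv2.VideoCapture(0)  # default webcam
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # assumed codec
out = cv2.VideoWriter("./data/source/mv.mp4", fourcc, 30.0, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)
    cv2.imshow("recording (press q to stop)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```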
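
Steps 1 and 2 each leave a coordinate file behind (`./data/source/pose_souce.npy` and `./data/target/pose.npy`) that step 6 reads back, so it is worth confirming they exist and load cleanly before training. A minimal check, assuming both are ordinary NumPy arrays; their exact shapes depend on what `make_source.py` / `make_target.py` write:

```python
import numpy as np

# Load the coordinate files produced in steps 1 and 2.
# allow_pickle=True covers the case where the scripts saved object arrays.
source_pose = np.load("./data/source/pose_souce.npy", allow_pickle=True)  # head coordinates
target_pose = np.load("./data/target/pose.npy", allow_pickle=True)        # face coordinates

print("source pose:", source_pose.shape, source_pose.dtype)
print("target pose:", target_pose.shape, target_pose.dtype)
```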
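
On the non-Linux note above: `nThreads` is the data-loading worker count, and setting it to 0 keeps loading in the main process, the usual fix for DataLoader multiprocessing problems on Windows/macOS. Below is a sketch of the edit in `src/config/train_opt.py`; whether the file exposes it as a bare variable or as an attribute on an options object is an assumption here, as only the name and value come from this README.

```python
# src/config/train_opt.py -- sketch of the relevant edit.
# Only `nThreads = 0` is prescribed by the README; everything else in the
# file stays as shipped. Zero workers means the dataset is read in the main
# process, avoiding fork/spawn issues outside Linux.
nThreads = 0
```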