Python Open-Source Project PGDiff — Face Restoration in Practice: Deblurring, Scratch Repair, and Black-and-White Colorization

For downloading and installing Python, Anaconda, etc., please refer to:

Python Open-Source Project CodeFormer — Face Restoration in Practice: Deblurring, Scratch Repair, and Black-and-White Colorization
https://blog.csdn.net/beijinghorn/article/details/134334021

Friendly reminders:
(1) This is a 2023 paper.

(2) CUDA is required! (A quick environment check is sketched after this list.)

(3) It runs slowly, and the results are only average.

(4) There are two bugs, which will be pointed out below.
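
Since CUDA is mandatory, it is worth confirming that PyTorch can see your GPU before going any further. A minimal check, assuming PyTorch is already installed:

import torch

print("CUDA available:", torch.cuda.is_available())   # must be True for PGDiff
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA runtime:", torch.version.cuda)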

1 PGDiff

https://github.com/pq-yang/PGDiff

1.1 Paper


《PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance》
Peiqing Yang¹, Shangchen Zhou¹, Qingyi Tao², Chen Change Loy¹
¹S-Lab, Nanyang Technological University  ²SenseTime Research, Singapore
Accepted to NeurIPS 2023


PGDiff builds a versatile framework that is applicable to a broad range of face restoration tasks.

If you find PGDiff helpful to your projects, please consider ⭐ this repo. Thanks! 

Supported Applications
Blind Restoration
Colorization
Inpainting
Reference-based Restoration
Old Photo Restoration (w/ scratches)
[TODO] Natural Image Restoration

1.2 Updates

2023.10.10: Release our codes and models. Have fun! 😋
2023.08.16: This repo is created.

1.3 Installation

Codes and Environment

# git clone this repository
git clone https://github.com/pq-yang/PGDiff.git
cd PGDiff

# create new anaconda env
conda create -n pgdiff python=3.8 -y
conda activate pgdiff

# install python dependencies
conda install mpi4py
pip3 install -r requirements.txt
pip install -e .

Pretrained Model
Download the pretrained face diffusion model from [Google Drive | BaiduPan (pw: pgdf)] to the models folder (credit to DifFace).
https://pan.baidu.com/share/init?surl=VHv48RDUXI8onMEodVFZkw
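
After downloading, a quick way to confirm the checkpoint file is intact is to load it with torch.load. The file name below is only an assumption; use whatever name you saved under the models folder.

import torch

ckpt_path = "models/face_diffusion.pth"   # hypothetical name -- adjust to your download
state = torch.load(ckpt_path, map_location="cpu")
# Diffusion checkpoints are typically a flat state_dict of tensors.
print(type(state), len(state), "entries")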

1.4 Applications

1.4.1 Blind Restoration


To extract smooth semantics from the input images, download the pretrained restorer from [Google Drive | BaiduPan (pw: pgdf)] to the models/restorer folder. The pretrained restorer provided here is modified from the ×1 generator of Real-ESRGAN. Note that the pretrained restorer can also be flexibly replaced with other restoration models by modifying the create_restorer function and specifying your own --restorer_path accordingly.
https://pan.baidu.com/share/init?surl=IkEnPGDJqFcg4dKHGCH9PQ
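
As a rough illustration of what a replacement restorer loader could look like — a hedged sketch, not PGDiff's actual create_restorer; it assumes the RRDBNet backbone from basicsr, and the constructor arguments are common Real-ESRGAN settings rather than values taken from PGDiff's code:

import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

def load_my_restorer(restorer_path, device="cuda"):
    # x1 generator, matching the README's description; adjust to your own model.
    net = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                  num_block=23, num_grow_ch=32, scale=1)
    state = torch.load(restorer_path, map_location="cpu")
    # Real-ESRGAN checkpoints often wrap the weights under 'params_ema' or 'params'.
    state = state.get("params_ema", state.get("params", state))
    net.load_state_dict(state, strict=True)
    return net.eval().to(device)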

Commands
Guidance scale $s$ for BFR is generally taken from [0.05, 0.1]. A smaller $s$ tends to produce a higher-quality result, while a larger $s$ yields a higher-fidelity result.
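
Conceptually, the scale controls how strongly each denoising step is pulled toward the guidance target. The snippet below is only a schematic of gradient guidance under that reading, not PGDiff's actual sampler:

import torch
import torch.nn.functional as F

def guided_update(x0_pred, target, s):
    # Pull the predicted clean image x0 toward the guidance target; a larger s means
    # a stronger pull (higher fidelity), a smaller s leans more on the diffusion prior.
    x0 = x0_pred.detach().requires_grad_(True)
    loss = F.mse_loss(x0, target)
    grad = torch.autograd.grad(loss, x0)[0]
    return (x0 - s * grad).detach()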

For cropped and aligned faces (512x512):

python inference_pgdiff.py --task restoration --in_dir [image folder] --out_dir [result folder] --restorer_path [restorer path] --guidance_scale [s]

Example:

python inference_pgdiff.py --task restoration --in_dir testdata/cropped_faces --out_dir results/blind_restoration --guidance_scale 0.05


1.4.2 Colorization


We provide a set of color statistics in the adaptive_instance_normalization function as the default colorization style. One may change the colorization style by running the script scripts/color_stat_calculation.py to obtain target statistics (avg mean & avg std) and replace those in the adaptive_instance_normalization function.
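
For reference, computing such target statistics boils down to averaging per-channel means and standard deviations over a folder of reference color images. A hedged sketch of that idea (not the provided script itself; the folder name is an assumption):

from pathlib import Path
import numpy as np
from PIL import Image

def avg_color_stats(ref_dir):
    means, stds = [], []
    for p in sorted(Path(ref_dir).glob("*.png")):
        img = np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) / 255.0
        means.append(img.reshape(-1, 3).mean(axis=0))   # per-channel mean
        stds.append(img.reshape(-1, 3).std(axis=0))     # per-channel std
    return np.mean(means, axis=0), np.mean(stds, axis=0)

avg_mean, avg_std = avg_color_stats("my_color_refs")    # hypothetical folder
print("avg mean:", avg_mean, "avg std:", avg_std)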

Commands
For cropped and aligned faces (512x512):

python inference_pgdiff.py --task colorization --in_dir [image folder] --out_dir [result folder] --lightness_weight [w_l] --color_weight [w_c] --guidance_scale [s]

Example:

Try different color styles for various outputs!

# style 0 (default)
python inference_pgdiff.py --task colorization --in_dir testdata/grayscale_faces --out_dir results/colorization --guidance_scale 0.01

# style 1 (uncomment line 272-273 in `guided_diffusion/script_util.py`)
python inference_pgdiff.py --task colorization --in_dir testdata/grayscale_faces --out_dir results/colorization_style1 --guidance_scale 0.01

# style 3 (uncomment line 278-279 in `guided_diffusion/script_util.py`)
python inference_pgdiff.py --task colorization --in_dir testdata/grayscale_faces --out_dir results/colorization_style3 --guidance_scale 0.01

1.4.3 Inpainting


A folder for mask(s) mask_dir must be specified, with each mask image name corresponding to each input (masked) image. Each input mask should be a binary map with white pixels representing masked regions (refer to testdata/append_masks). We also provide a script scripts/irregular_mask_gen.py to randomly generate irregular stroke masks on input images.

Note: If you don't specify mask_dir, we will automatically treat the input image as if there are no missing pixels.
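
If you prefer not to run the provided generator, a simple random stroke mask in the expected white-on-black format can be produced along these lines (a hedged sketch in the spirit of scripts/irregular_mask_gen.py, not that script itself):

import random
from PIL import Image, ImageDraw

def random_stroke_mask(size=512, strokes=4, width=25):
    mask = Image.new("L", (size, size), 0)              # black = keep
    draw = ImageDraw.Draw(mask)
    for _ in range(strokes):
        x, y = random.randint(0, size), random.randint(0, size)
        for _ in range(random.randint(3, 8)):           # random polyline per stroke
            nx, ny = random.randint(0, size), random.randint(0, size)
            draw.line((x, y, nx, ny), fill=255, width=width)   # white = masked
            x, y = nx, ny
    return mask

random_stroke_mask().save("my_mask.png")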

Commands
For cropped and aligned faces (512x512):

python inference_pgdiff.py --task inpainting --in_dir [image folder] --mask_dir [mask folder] --out_dir [result folder] --unmasked_weight [w_um] --guidance_scale [s]

Example:

Try different seeds for various outputs!

python inference_pgdiff.py --task inpainting --in_dir testdata/masked_faces --mask_dir testdata/append_masks --out_dir results/inpainting --guidance_scale 0.01 --seed 4321

1.4.4 Reference-based Restoration


To extract identity features from both the reference image and the intermediate results, download the pretrained ArcFace model from [Google Drive | BaiduPan (pw: pgdf)] to the models folder.

A folder for reference image(s) ref_dir must be specified, with each reference image name corresponding to each input image. A reference image should ideally be a high-quality image of the same identity as the input low-quality image. The test image pairs provided here are from the CelebRef-HQ dataset.
https://pan.baidu.com/share/init?surl=Ku-d57YYavAuScTFpvaP3Q
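
The identity guidance boils down to comparing ArcFace embeddings of the intermediate result and the reference. A hedged sketch of the two loss choices offered in the example commands further down (arcface here stands for any loaded embedding network; this is not PGDiff's actual code):

import torch.nn.functional as F

def identity_loss(arcface, pred_face, ref_face, use_cosine=False):
    feat_pred = F.normalize(arcface(pred_face), dim=-1)
    feat_ref = F.normalize(arcface(ref_face), dim=-1)
    if use_cosine:
        # Choice 2: cosine similarity loss
        return 1.0 - (feat_pred * feat_ref).sum(dim=-1).mean()
    # Choice 1: MSE loss (default)
    return F.mse_loss(feat_pred, feat_ref)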

Commands
Similar to blind face restoration, reference-based restoration requires tuning the guidance scale $s$ according to the input quality; $s$ is generally taken from [0.05, 0.1].

For cropped and aligned faces (512x512):

python inference_pgdiff.py --task ref_restoration --in_dir [image folder] --ref_dir [reference folder] --out_dir [result folder] --ss_weight [w_ss] --ref_weight [w_ref] --guidance_scale [s]

Example:

# Choice 1: MSE Loss (default)
python inference_pgdiff.py --task ref_restoration --in_dir testdata/ref_cropped_faces --ref_dir testdata/ref_faces --out_dir results/ref_restoration_mse --guidance_scale 0.05 --ref_weight 25

# Choice 2: Cosine Similarity Loss (uncomment line 71-72)
python inference_pgdiff.py --task ref_restoration --in_dir testdata/ref_cropped_faces --ref_dir testdata/ref_faces --out_dir results/ref_restoration_cos --guidance_scale 0.05 --ref_weight 1e4


1.4.5 Old Photo Restoration


If scratches exist, a folder for mask(s) mask_dir must be specified, with the name of each mask image corresponding to that of each input image. Each input mask should be a binary map with white pixels representing masked regions. To obtain a scratch map automatically, we recommend using the scratch detection model from Bringing Old Photos Back to Life. One may also generate or adjust the scratch map with an image editing app (e.g., Photoshop).

If scratches don't exist, set the mask_dir argument to None (default). As a result, if you don't specify mask_dir, we will automatically treat the input image as if there were no missing pixels.
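
If your scratch detector outputs a soft probability map, it can be binarized into the white-on-black mask format expected here. A hedged sketch (the threshold and file names are assumptions; the mask file name must match the corresponding input image name):

import numpy as np
from PIL import Image

def binarize_scratch_map(in_path, out_path, threshold=128):
    gray = np.asarray(Image.open(in_path).convert("L"))
    binary = np.where(gray >= threshold, 255, 0).astype(np.uint8)   # white = scratch
    Image.fromarray(binary).save(out_path)

binarize_scratch_map("scratch_prob.png", "testdata/op_mask/photo_001.png")   # hypothetical names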

Commands

For cropped and aligned faces (512x512):

python inference_pgdiff.py --task old_photo_restoration --in_dir [image folder] --mask_dir [mask folder] --out_dir [result folder] --op_lightness_weight [w_op_l] --op_color_weight [w_op_c] --guidance_scale [s]

Demos
Similar to blind face restoration, old photo restoration is a more complex task (restoration + colorization + inpainting) that requires tuning the guidance scale $s$ according to the input quality. Generally, $s$ is taken from [0.0015, 0.005]. A smaller $s$ tends to produce a higher-quality result, while a larger $s$ yields a higher-fidelity result.

Degradation: Light
# no scratches (don't specify mask_dir)
python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/lg --out_dir results/op_restoration/lg --guidance_scale 0.004 --seed 4321

Degradation: Medium
# no scratches (don't specify mask_dir)
python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/med --out_dir results/op_restoration/med --guidance_scale 0.002 --seed 1234

# with scratches
python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/med_scratch --mask_dir testdata/op_mask --out_dir results/op_restoration/med_scratch --guidance_scale 0.002 --seed 1111

Degradation: Heavy
python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/hv --mask_dir testdata/op_mask --out_dir results/op_restoration/hv --guidance_scale 0.0015 --seed 4321

Customize your results with different color styles!

# style 1 (uncomment line 272-273 in `guided_diffusion/script_util.py`)
python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/hv --mask_dir testdata/op_mask --out_dir results/op_restoration/hv_style1 --guidance_scale 0.0015 --seed 4321

# style 2 (uncomment line 275-276 in `guided_diffusion/script_util.py`)
python inference_pgdiff.py --task old_photo_restoration --in_dir testdata/op_cropped_faces/hv --mask_dir testdata/op_mask --out_dir results/op_restoration/hv_style2 --guidance_scale 0.0015 --seed 4321

1.5 Citation


If you find our work useful for your research, please consider citing:

@inproceedings{yang2023pgdiff,
  title={{PGDiff}: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance},
  author={Yang, Peiqing and Zhou, Shangchen and Tao, Qingyi and Loy, Chen Change},
  booktitle={NeurIPS},
  year={2023}
}

1.6 License


This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.

1.7 Acknowledgement


This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

This implementation is based on guided-diffusion. We also adopt the pretrained face diffusion model from DifFace, the pretrained identity feature extraction model from ArcFace, and the restorer backbone from Real-ESRGAN. Thanks for their awesome works!

1.8 Contact


If you have any questions, please feel free to reach out at peiqingyang99@outlook.com.
