[Deep Learning] AIGC / ControlNet: Paper, Principles, Training, Deployment, Hands-On Tutorial (Part 3)

Table of Contents

  • Source code and model downloads
  • Python environment
  • Trying out ControlNet
  • Training
    • Data preparation
    • Choosing a Stable Diffusion model
    • Starting training

Part 1: https://qq742971636.blog.csdn.net/article/details/131531168

Source code and model downloads

ControlNet 1.1 is still under construction, so this article uses the source code at https://github.com/lllyasviel/ControlNet/tree/main.

You also need to download the model files: https://huggingface.co/lllyasviel/ControlNet

They are published on Hugging Face. To download the model files from Hugging Face, use:

$ git lfs install
$ git clone https://huggingface.co/lllyasviel/ControlNet

Full log:

$ git lfs install
Git LFS initialized.

kevin@DESKTOP-J33EKGT MINGW64 /f
$ git clone https://huggingface.co/lllyasviel/ControlNet
Cloning into 'ControlNet'...
remote: Enumerating objects: 52, done.
remote: Counting objects: 100% (52/52), done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 52 (delta 16), reused 52 (delta 16), pack-reused 0
Unpacking objects: 100% (52/52), 7.06 KiB | 141.00 KiB/s, done.
Filtering content: 100% (16/16), 11.80 GiB | 6.47 MiB/s, done.
Encountered 8 file(s) that may not have been copied correctly on Windows:
models/control_sd15_seg.pth
models/control_sd15_hed.pth
models/control_sd15_normal.pth
models/control_sd15_canny.pth
models/control_sd15_scribble.pth
models/control_sd15_mlsd.pth
models/control_sd15_depth.pth
models/control_sd15_openpose.pth
See: `git lfs help smudge` for more details.

Git on Windows cannot handle files larger than 4 GB (a known bug), so either download these eight files directly from the web page, or clone with Git on Linux.
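As an alternative (my addition, not from the original post), the eight checkpoints can also be fetched one by one with the huggingface_hub Python API, which sidesteps the git-lfs size issue on Windows. The file names below come from the clone log above.

# Sketch: download the eight ControlNet 1.0 checkpoints without git-lfs.
from huggingface_hub import hf_hub_download

CKPTS = [
    "control_sd15_canny.pth", "control_sd15_depth.pth", "control_sd15_hed.pth",
    "control_sd15_mlsd.pth", "control_sd15_normal.pth", "control_sd15_openpose.pth",
    "control_sd15_scribble.pth", "control_sd15_seg.pth",
]

for name in CKPTS:
    # Each file is cached locally; the returned value is the resolved local path.
    path = hf_hub_download(repo_id="lllyasviel/ControlNet", filename=f"models/{name}")
    print(name, "->", path)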

The final project layout looks like this:
[screenshot: final project directory]

Python environment

Some packages only install successfully through the Aliyun mirror. diffusers is installed first here.

conda create -n py38_diffusers python=3.8 -y
conda activate py38_diffusers
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge -y
cd diffusers-main/
pip install -e .
cd examples/
cd controlnet/
pip install -r requirements.txt
accelerate config default
cd /ssd/xiedong/workplace/ControlNet
pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple  # only installs successfully via the Aliyun mirror
pip install -r req.txt  # the Tsinghua mirror is faster for this one

req.txt is as follows:

gradio==3.16.2
albumentations==1.3.0
opencv-python
opencv-contrib-python==4.3.0.36
imageio==2.9.0
imageio-ffmpeg==0.4.2
pytorch-lightning==1.5.0
omegaconf==2.1.1
test-tube>=0.7.5
streamlit==1.12.1
einops==0.3.0
transformers==4.19.2
webdataset==0.2.5
kornia==0.6
open_clip_torch==2.0.2
invisible-watermark>=0.1.5
streamlit-drawable-canvas==0.8.0
torchmetrics==0.6.0
timm==0.6.12
addict==2.4.0
yapf==0.32.0
prettytable==3.6.0
safetensors==0.2.7
basicsr==1.4.2

Optionally, you can also install:

pip install xformers
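A quick sanity check (my addition, not part of the original setup) that the PyTorch/CUDA install is usable before moving on:

import torch

# Versions pinned above: torch 1.12.1 with cudatoolkit 11.6.
print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))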

Trying out ControlNet

Run:

python gradio_scribble2image_interactive.py

Because of network issues, some of the demo scripts may fail to start; this one ran without problems for me:

[screenshot: startup output]

Open http://127.0.0.1:7860/ and you get:

[screenshot: Gradio scribble2image UI]

During generation:

[screenshot: generation in progress]

GPU memory usage: 8847 MiB

Training

Data preparation

My data is prepared for training a scribble model:

fake_image2scribble.py

from share import *
import config
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0'

import cv2
import einops
import gradio as gr
import numpy as np
import torch
import random

from pytorch_lightning import seed_everything
from annotator.util import resize_image, HWC3
from annotator.hed import HEDdetector, nms
from cldm.model import create_model, load_state_dict
from cldm.ddim_hacked import DDIMSampler

apply_hed = HEDdetector()


def image2hed(input_image):
    # Run HED edge detection at 512 px, then binarize the result into a fake scribble map.
    input_image = HWC3(input_image)
    detected_map = apply_hed(resize_image(input_image, 512))
    detected_map = HWC3(detected_map)
    img = resize_image(input_image, 512)
    H, W, C = img.shape
    detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
    detected_map = nms(detected_map, 127, 3.0)
    detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0)
    detected_map[detected_map > 4] = 255
    detected_map[detected_map < 255] = 0
    hed_result = 255 - detected_map
    return hed_result


if __name__ == "__main__":
    target = r'/ssd/xiedong/datasets/back_img_nohaveback'
    save_img = r'/ssd/xiedong/datasets/back_img_nohaveback_scribble'
    if not os.path.exists(save_img):
        os.makedirs(save_img)
    for i in os.listdir(target):
        img = cv2.imread(os.path.join(target, i))
        hed_result = image2hed(img)
        cv2.imwrite(os.path.join(save_img, i), hed_result)
    print("done")

The diagram from the official tutorial is shown below. Here I use the scribble maps as the source images, and I prepared the prompts myself as well.

[diagram from the official training tutorial]

Official training tutorial:
https://github.com/lllyasviel/ControlNet/blob/main/docs/train.md
Official fill50k dataset:
https://huggingface.co/datasets/fusing/fill50k

Either way, the end result is a prompt.json file listing the source image, target image, and prompt for every sample (a small builder sketch is shown after the screenshot):

[screenshot: prompt.json contents]
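The exact layout of my prompt.json is inferred from the loader below: it reads the whole file with json.load, so it is a single JSON array rather than the one-object-per-line format of the official fill50k file. The directory names and the placeholder prompt in this sketch are illustrative.

import json
import os

# Hypothetical paths, matching the scribble-generation script above.
scribble_dir = '/ssd/xiedong/datasets/back_img_nohaveback_scribble'  # hint images
target_dir = '/ssd/xiedong/datasets/back_img_nohaveback'             # target images
prompt_text = 'a photo on a clean background'                        # placeholder prompt

entries = []
for name in sorted(os.listdir(scribble_dir)):
    entries.append({
        'source': os.path.join(scribble_dir, name),
        'target': os.path.join(target_dir, name),
        'prompt': prompt_text,
    })

with open('./prompt.json', 'w') as f:
    json.dump(entries, f, ensure_ascii=False, indent=2)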
Then write a dataset loader:

import json

import cv2
import numpy as np
from torch.utils.data import Dataset


class MyDataset(Dataset):
    def __init__(self):
        self.data = []
        with open('./prompt.json', 'r') as f:
            self.data = json.load(f)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]

        source_filename = item['source']
        target_filename = item['target']
        prompt = item['prompt']

        source = cv2.imread(source_filename)
        target = cv2.imread(target_filename)

        # Do not forget that OpenCV reads images in BGR order.
        source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
        target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)

        # Normalize source images to [0, 1].
        source = source.astype(np.float32) / 255.0

        # Normalize target images to [-1, 1].
        target = (target.astype(np.float32) / 127.5) - 1.0

        return dict(jpg=target, txt=prompt, hint=source)


if __name__ == '__main__':
    # Print the first sample as a sanity check.
    dataset = MyDataset()
    print(dataset[0])
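A quick check (my addition) that the loader produces what the trainer expects; the module name scribble_datasets_en is taken from the training script further below.

from scribble_datasets_en import MyDataset

sample = MyDataset()[0]
# target ('jpg') should be float32 in [-1, 1]; hint ('hint') should be float32 in [0, 1]
print(sample['jpg'].shape, sample['jpg'].dtype, sample['jpg'].min(), sample['jpg'].max())
print(sample['hint'].shape, sample['hint'].dtype, sample['hint'].min(), sample['hint'].max())
print(sample['txt'])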

Choosing a Stable Diffusion model

Then you need to decide which Stable Diffusion Model you want to control. In this example, we will just use standard SD1.5. You can download it from the official page of Stability. You want the file “v1-5-pruned.ckpt”.

(Or “v2-1_512-ema-pruned.ckpt” if you are using SD2.)

Then you need to attach a ControlNet to the SD model. The architecture is:

[ControlNet architecture diagram]
Note that all the weights inside the ControlNet are also copied from SD, so no layer is trained from scratch and you are still fine-tuning the whole model.

A simple script is provided to make this easy. If your SD checkpoint is "./models/v1-5-pruned.ckpt" and you want the script to save the merged model (SD + ControlNet) to "./models/control_sd15_ini.ckpt", just run:

[Because of network restrictions in mainland China, you may have to run this several times; I think you know the fastest workaround.]
[The file being fetched is ./.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/pytorch_model.bin]

python tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt

Or if you are using SD2:

python tool_add_control_sd21.py ./models/v2-1_512-ema-pruned.ckpt ./models/control_sd21_ini.ckpt

This is the correct output from my machine:

(py38_diffusers) gpu16: /ssd/xiedong/workplace/ControlNet $ python tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt
logging improved.
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loaded model config from [./models/cldm_v15.yaml]
These weights are newly added: logvar
These weights are newly added: control_model.zero_convs.0.0.weight
These weights are newly added: control_model.zero_convs.0.0.bias
These weights are newly added: control_model.zero_convs.1.0.weight
These weights are newly added: control_model.zero_convs.1.0.bias
These weights are newly added: control_model.zero_convs.2.0.weight
These weights are newly added: control_model.zero_convs.2.0.bias
These weights are newly added: control_model.zero_convs.3.0.weight
These weights are newly added: control_model.zero_convs.3.0.bias
These weights are newly added: control_model.zero_convs.4.0.weight
These weights are newly added: control_model.zero_convs.4.0.bias
These weights are newly added: control_model.zero_convs.5.0.weight
These weights are newly added: control_model.zero_convs.5.0.bias
These weights are newly added: control_model.zero_convs.6.0.weight
These weights are newly added: control_model.zero_convs.6.0.bias
These weights are newly added: control_model.zero_convs.7.0.weight
These weights are newly added: control_model.zero_convs.7.0.bias
These weights are newly added: control_model.zero_convs.8.0.weight
These weights are newly added: control_model.zero_convs.8.0.bias
These weights are newly added: control_model.zero_convs.9.0.weight
These weights are newly added: control_model.zero_convs.9.0.bias
These weights are newly added: control_model.zero_convs.10.0.weight
These weights are newly added: control_model.zero_convs.10.0.bias
These weights are newly added: control_model.zero_convs.11.0.weight
These weights are newly added: control_model.zero_convs.11.0.bias
These weights are newly added: control_model.input_hint_block.0.weight
These weights are newly added: control_model.input_hint_block.0.bias
These weights are newly added: control_model.input_hint_block.2.weight
These weights are newly added: control_model.input_hint_block.2.bias
These weights are newly added: control_model.input_hint_block.4.weight
These weights are newly added: control_model.input_hint_block.4.bias
These weights are newly added: control_model.input_hint_block.6.weight
These weights are newly added: control_model.input_hint_block.6.bias
These weights are newly added: control_model.input_hint_block.8.weight
These weights are newly added: control_model.input_hint_block.8.bias
These weights are newly added: control_model.input_hint_block.10.weight
These weights are newly added: control_model.input_hint_block.10.bias
These weights are newly added: control_model.input_hint_block.12.weight
These weights are newly added: control_model.input_hint_block.12.bias
These weights are newly added: control_model.input_hint_block.14.weight
These weights are newly added: control_model.input_hint_block.14.bias
These weights are newly added: control_model.middle_block_out.0.weight
These weights are newly added: control_model.middle_block_out.0.bias
Done.
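For reference, the idea behind tool_add_control.py is roughly the following: build the ControlLDM defined by cldm_v15.yaml, copy every parameter that has a counterpart in the SD checkpoint (the control_model.* branch reuses the U-Net encoder weights), and leave only the zero convolutions and hint blocks newly initialized, which matches the "newly added" lines in the log above. The sketch below is my paraphrase, not the actual script; use the real tool_add_control.py for actual runs.

import torch
from cldm.model import create_model

# Load the plain SD 1.5 weights and build an uninitialized ControlLDM.
sd = torch.load('./models/v1-5-pruned.ckpt', map_location='cpu')['state_dict']
model = create_model('./models/cldm_v15.yaml')

state = model.state_dict()
for name in state:
    # control_model.* layers mirror model.diffusion_model.*, so look them up under that prefix.
    src = name.replace('control_model.', 'model.diffusion_model.', 1) if name.startswith('control_model.') else name
    if src in sd:
        state[name] = sd[src].clone()
    else:
        print('These weights are newly added:', name)

model.load_state_dict(state)
torch.save(model.state_dict(), './models/control_sd15_ini.ckpt')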

Starting training

The training code is simple; almost all of the hyperparameters live in ./models/cldm_v15.yaml.

import pytorch_lightning as pl
from torch.utils.data import DataLoader
from scribble_datasets_en import MyDataset
from cldm.logger import ImageLogger
from cldm.model import create_model, load_state_dict


# Configs
resume_path = './models/control_sd15_ini.ckpt'
batch_size = 4
logger_freq = 300
learning_rate = 1e-5
sd_locked = True
only_mid_control = False


# First use cpu to load models. Pytorch Lightning will automatically move it to GPUs.
model = create_model('./models/cldm_v15.yaml').cpu()
model.load_state_dict(load_state_dict(resume_path, location='cpu'))
model.learning_rate = learning_rate
model.sd_locked = sd_locked
model.only_mid_control = only_mid_control


# Misc
dataset = MyDataset()
dataloader = DataLoader(dataset, num_workers=1, batch_size=batch_size, shuffle=True)
logger = ImageLogger(batch_frequency=logger_freq)
trainer = pl.Trainer(gpus=1, precision=32, callbacks=[logger])


# Train!
trainer.fit(model, dataloader)

In addition, two flags matter here:
sd_locked = True
only_mid_control = False
With sd_locked = True the original SD U-Net decoder stays frozen and only the ControlNet branch is optimized; only_mid_control = False means the control signal is injected into all decoder blocks rather than only the middle block. The official train.md illustrates both options, and a paraphrased sketch of how they are used follows.

[diagram from the official docs: sd_locked]
[diagram from the official docs: only_mid_control]
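Roughly, this is how the two flags are consumed inside ControlLDM.configure_optimizers in cldm/cldm.py. The snippet is paraphrased from memory rather than copied verbatim, so treat it as a sketch:

import torch

def configure_optimizers(self):
    lr = self.learning_rate
    # The ControlNet branch is always trained.
    params = list(self.control_model.parameters())
    if not self.sd_locked:
        # Unlocking also fine-tunes the SD U-Net decoder half.
        params += list(self.model.diffusion_model.output_blocks.parameters())
        params += list(self.model.diffusion_model.out.parameters())
    return torch.optim.AdamW(params, lr=lr)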
Training starts:

ControlNet$ python train_scribble_en.py
No module 'xformers'. Proceeding without it.
ControlLDM: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', ... (several hundred vision_model.*, visual_projection, text_projection and logit_scale weights omitted) ...]
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loaded model config from [./models/cldm_v15.yaml]
Loaded state_dict from [./models/control_sd15_ini.ckpt]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:118: UserWarning: You defined a `validation_step` but have no `val_dataloader`. Skipping val loop.
  rank_zero_warn("You defined a `validation_step` but have no `val_dataloader`. Skipping val loop.")
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:280: LightningDeprecationWarning: The `LightningModule.on_train_batch_start` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
  rank_zero_deprecation(
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:287: LightningDeprecationWarning: The `Callback.on_train_batch_end` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
  rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [1]

  | Name              | Type               | Params
---------------------------------------------------------
0 | model             | DiffusionWrapper   | 859 M
1 | first_stage_model | AutoencoderKL      | 83.7 M
2 | cond_stage_model  | FrozenCLIPEmbedder | 123 M
3 | control_model     | ControlNet         | 361 M
---------------------------------------------------------
1.2 B     Trainable params
206 M     Non-trainable params
1.4 B     Total params
5,710.058 Total estimated model params size (MB)
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:110: UserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 56 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
Epoch 0:   0%|                                                               | 0/1056 [00:00<?, ?it/s]
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 4. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████|
Epoch 0:  21%|| 224/1056 [04:53<18:11,  1.31s/it, loss=0.0611, v_num=2, train/loss_simple_step=0.0623, train/loss_vlb_step=0.000236, train/loss_step=0.0623, ...
Epoch 0:  28%|| 300/1056 [06:23<16:07,  1.28s/it, loss=0.0611, v_num=2, train/loss_simple_step=0.020, train/loss_vlb_step=7.54e-5, train/loss_step=0.020, ...
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:28<00:00
Epoch 0:  57%|██████████▊        | 600/1056 [12:40<09:38,  1.27s/it, loss=0.0745, v_num=2, train/loss_simple_step=0.0428, train/loss_vlb_step=0.000152, ...
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████
Epoch 0:  58%|███▍  | 609/1056 [13:20<09:47,  1.31s/it, loss=0.0665, v_num=2, train/loss_simple_step=0.0532, train/loss_vlb_step=0.000262, train/loss_step=0.053...
Epoch 0:  67%|| 712/1056 [15:16<07:22,  1.29s/it, loss=0.0603, v_num=2, train/loss_simple_step=0.0232, train/loss_vlb_step=8.41e-5, train/loss_step=0.0232, ...
Epoch 0:  85%|| 900/1056 [18:48<03:15,  1.25s/it, loss=0.0569, v_num=2, train/loss_simple_step=0.00959, train/loss_vlb_step=3.75e-5, train/loss_step=0.00959, ...
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:28<00:00
Epoch 0:  90%|| 953/1056 [20:17<02:11,  1.28s/it, loss=0.067, v_num=2, train/loss_simple_step=0.0193, train/loss_vlb_step=7.23e-5, train/loss_step=0.0193, ...
Epoch 0: 100%|| 1055/1056 [22:11<00:01,  1.26s/it, loss=0.0567, v_num=2, train/loss_simple_step=0.0556, train/loss_vlb_step=0.000276, train/loss_step=0.0556, ...
/ssd/xiedong/miniconda3/envs/py38_diffusers_1/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py:56: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Epoch 1:   0%| | 0/1056 [00:00<?, ?it/s, loss=0.0593, v_num=2, train/loss_simple_step=0.126, train/loss_vlb_step=0.000617, train/loss_step=0.126, global_step=10...
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:27<00:00,  1.80it/s]
Epoch 1:  28%|| 300/1056 [06:06<15:23,  1.22s/it, loss=0.0465, v_num=2, train/loss_simple_step=0.0711, train/loss_vlb_step=0.000249, train/loss_step=0.0711, ...
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:28<00:00,  1.78it/s]
Epoch 1:  57%|| 600/1056 [12:12<09:16,  1.22s/it, loss=0.0632, v_num=2, train/loss_simple_step=0.0698, train/loss_vlb_step=0.000314, train/loss_step=0.0698, ...
Data shape for DDIM sampling is (4, 4, 64, 64), eta 0.0
Running DDIM Sampling with 50 timesteps
DDIM Sampler: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:28<00:00,  1.78it/s]
Epoch 1:  61%|| 649/1056 [13:37<08:32,  1.26s/it, loss=0.0602, v_num=2, train/loss_simple_step=0.0739, train/loss_vlb_step=0.00045

The training files are saved in the lightning_logs directory.
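To actually use the fine-tuned weights, they can be loaded back into a ControlLDM the same way the gradio_*.py demos load the released control_sd15_*.pth files. A minimal sketch follows; the checkpoint path under lightning_logs is illustrative and depends on your run.

from cldm.model import create_model, load_state_dict

# Hypothetical checkpoint path; check lightning_logs/version_*/checkpoints/ for the real name.
ckpt = './lightning_logs/version_2/checkpoints/epoch=1-step=2111.ckpt'

model = create_model('./models/cldm_v15.yaml').cpu()
model.load_state_dict(load_state_dict(ckpt, location='cpu'))
model = model.cuda()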

