Chatbot project introduction

Project overview

  • The overall directory layout is shown below:
    (screenshot: project directory layout)
  • In the structure above, only model is strictly required; everything else can be relocated through the command-line arguments of the training code, and some items need not exist at all.
  • data
    • train.pkl: the tokenized version of the raw training corpus. It stores a single list; each element of the list is one multi-turn dialogue, i.e. one training example (see the inspection sketch after this list).
  • model: holds the dialogue-generation model
    • config.json: the model configuration file
    • pytorch_model.bin: the model weights
  • vocab
    • vocab.txt: the vocabulary file. The default vocabulary size is 13317; to use a custom vocabulary, set the vocab_size field in config.json to the matching size.
  • sample: stores transcripts of past human-machine chat sessions
  • train.py: training code
  • interact.py: human-machine interaction code
  • preprocess.py: data preprocessing code
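
To make the train.pkl format concrete, here is a minimal inspection sketch. It assumes data/train.pkl and vocab/vocab.txt already exist; the tokenizer is constructed exactly the way train.py constructs it.

import pickle

from transformers import BertTokenizerFast

# load the tokenized training data: a list of dialogues,
# each dialogue a list of token ids ([CLS] u1 [SEP] u2 [SEP] ...)
with open("data/train.pkl", "rb") as f:
    dialogues = pickle.load(f)

print("number of dialogues:", len(dialogues))
print("first dialogue (token ids):", dialogues[0])

# decode back to text with the same tokenizer setup train.py uses
tokenizer = BertTokenizerFast(vocab_file="vocab/vocab.txt",
                              sep_token="[SEP]", pad_token="[PAD]", cls_token="[CLS]")
print("first dialogue (text):", tokenizer.decode(dialogues[0]))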

Overall workflow of the project

  • Step 1: data. Download the datasets following the dataset links section below, which covers the sources of the individual corpora and the code for merging them.
  • Step 2: run the downloaded data through preprocess.py to produce train.pkl, then create a data folder under the project root and move train.pkl into it.
  • Step 3: from the Hugging Face website, download the files of the GPT-2 pretrained model; the required files are shown below (see the loading sketch after this list):
    (screenshot: pretrained-model files to download)
  • Step 4: run train.py to fine-tune the model on the collected dataset.
  • Step 5: use interact.py to chat with the fine-tuned model and run inference.
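
As a quick sanity check after steps 3 and 4, the sketch below loads the model directory and prints its vocabulary size. It assumes the downloaded (or fine-tuned) files sit in ./pretrained_model, which is only the default value of --pretrained_model in train.py; train.py later asserts that this vocab_size matches the tokenizer's.

from transformers import GPT2LMHeadModel

# loading fails early if config.json / pytorch_model.bin are missing or inconsistent
model = GPT2LMHeadModel.from_pretrained("./pretrained_model")
print("vocab_size:", model.config.vocab_size)  # should match vocab.txt (13317 by default)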

Dataset links

  • https://github.com/codemayq/chinese-chatbot-corpus — follow the README there to download and build the corpora.
  • Then run the preprocess.py described above to obtain the training set train.pkl (a quick corpus check is sketched below).
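
Before preprocessing, you can confirm that the merged corpus is readable and count its dialogues with a small check that mirrors the split logic inside preprocess.py. The path 50w_qa_data below is just that script's default --train_path and may differ in your setup.

# mirror preprocess.py: dialogues are separated by a blank line on Windows
# ("\r\n\r\n") or by a newline on Linux, with tab-separated utterances
with open("50w_qa_data", "rb") as f:
    data = f.read().decode("utf-8")

if "\r\n" in data:
    dialogues = data.split("\r\n\r\n")
else:
    dialogues = data.split("\n")
print("dialogues in corpus:", len(dialogues))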

Training code: train.py


import argparse
import math
import time
import torch
import torch.nn.functional as F
import torch.optim as optim
import logging
from datetime import datetime
import os
from torch.utils.data import Dataset, DataLoader
from os.path import join, exists
from torch.nn import CrossEntropyLoss
from tqdm import tqdm
from torch.nn import DataParallel
import transformers
import pickle
import sys
from pytorchtools import EarlyStopping
from sklearn.model_selection import train_test_split
from data_parallel import BalancedDataParallel
from transformers import GPT2TokenizerFast, GPT2LMHeadModel, GPT2Config
from transformers import BertTokenizerFast
import pandas as pd
import torch.nn.utils.rnn as rnn_utils
import numpy as np
from dataset import MyDataset


def set_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--device', default='3', type=str, required=False, help='which GPUs to use')
    parser.add_argument('--no_cuda', action='store_true', help='train without GPU')
    parser.add_argument('--vocab_path', default='vocab/vocab.txt', type=str, required=False, help='path to the vocabulary file')
    parser.add_argument('--model_config', default='config/config.json', type=str, required=False, help='model config file')
    parser.add_argument('--train_path', default='data/train.pkl', type=str, required=False, help='path to the training set')
    parser.add_argument('--max_len', default=150, type=int, required=False, help='maximum input length during training')
    parser.add_argument('--log_path', default='data/train.log', type=str, required=False, help='where to store the training log')
    parser.add_argument('--log', default=True, help='whether to write a log')
    parser.add_argument('--ignore_index', default=-100, type=int, required=False, help='label tokens equal to ignore_index contribute no gradient')
    # parser.add_argument('--input_len', default=200, type=int, required=False, help='input length')
    parser.add_argument('--epochs', default=20, type=int, required=False, help='maximum number of training epochs')
    parser.add_argument('--batch_size', default=64, type=int, required=False, help='training batch size')
    parser.add_argument('--gpu0_bsz', default=10, type=int, required=False, help='batch size on GPU 0')
    parser.add_argument('--lr', default=2.6e-5, type=float, required=False, help='learning rate')
    parser.add_argument('--eps', default=1.0e-09, type=float, required=False, help='AdamW epsilon')
    parser.add_argument('--log_step', default=1, type=int, required=False, help='report loss every this many steps')
    parser.add_argument('--gradient_accumulation_steps', default=4, type=int, required=False, help='gradient accumulation steps')
    parser.add_argument('--max_grad_norm', default=2.0, type=float, required=False)
    parser.add_argument('--save_model_path', default='model_new', type=str, required=False, help='model output directory')
    parser.add_argument('--pretrained_model', default='./pretrained_model', type=str, required=False, help='path to the pretrained model')
    # parser.add_argument('--seed', type=int, default=None, help='random seed, for reproducible training')
    parser.add_argument('--num_workers', type=int, default=0, help='number of workers the dataloader uses')
    parser.add_argument('--patience', type=int, default=0, help='for early stopping; 0 disables it. An early-stopped model does not necessarily generate better replies.')
    parser.add_argument('--warmup_steps', type=int, default=4000, help='number of warm-up steps')
    # parser.add_argument('--label_smoothing', default=True, action='store_true', help='whether to apply label smoothing')
    parser.add_argument('--val_num', type=int, default=8000, help='size of the validation set')
    args = parser.parse_args()
    return args


def create_logger(args):
    """Log to both the log file and the console."""
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    # handler that writes to the log file
    file_handler = logging.FileHandler(filename=args.log_path)
    file_handler.setFormatter(formatter)
    file_handler.setLevel(logging.INFO)
    logger.addHandler(file_handler)
    # handler that writes to the console
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(formatter)
    logger.addHandler(console)
    return logger


def collate_fn(batch):
    input_ids = rnn_utils.pad_sequence(batch, batch_first=True, padding_value=0)
    labels = rnn_utils.pad_sequence(batch, batch_first=True, padding_value=-100)
    return input_ids, labels


# def padding_batch(data_list, pad_id):
#     """
#     Pad every item of data_list with pad_id up to the longest length in data_list.
#     :param data_list:
#     :param pad_id:
#     :return:
#     """
#     # find the maximum length in data_list
#     max_len = 0
#     for data in data_list:
#         max_len = max_len if max_len > len(data) else len(data)
#
#     # pad the data
#     new_data_list = []
#     for data in data_list:
#         new_data = data + [pad_id] * (max_len - len(data))
#         new_data_list.append(new_data)
#     return new_data_list


def load_dataset(logger, args):
    """Load the training and validation sets."""
    logger.info("loading training dataset and validating dataset")
    train_path = args.train_path
    with open(train_path, "rb") as f:
        input_list = pickle.load(f)

    # split into training and validation sets
    val_num = args.val_num
    input_list_train = input_list[val_num:]
    input_list_val = input_list[:val_num]
    # test
    # input_list_train = input_list_train[:24]
    # input_list_val = input_list_val[:24]

    train_dataset = MyDataset(input_list_train, args.max_len)
    val_dataset = MyDataset(input_list_val, args.max_len)
    return train_dataset, val_dataset


def train_epoch(model, train_dataloader, optimizer, scheduler, logger, epoch, args):
    model.train()
    device = args.device
    # pad_id = args.pad_id
    # sep_id = args.sep_id
    ignore_index = args.ignore_index
    epoch_start_time = datetime.now()
    total_loss = 0  # running sum of the loss over the whole epoch
    # epoch_correct_num: number of correctly predicted tokens in this epoch
    # epoch_total_num: total number of predicted tokens in this epoch
    epoch_correct_num, epoch_total_num = 0, 0

    for batch_idx, (input_ids, labels) in enumerate(train_dataloader):
        # print(f"the input_ids is: {input_ids}, and the labels is : {labels} !!!")
        # catch cuda out of memory exception
        try:
            input_ids = input_ids.to(device)
            labels = labels.to(device)
            outputs = model(input_ids, labels=labels)
            logits = outputs.logits
            loss = outputs.loss
            loss = loss.mean()

            # count the correct and total predicted tokens in this batch
            batch_correct_num, batch_total_num = calculate_acc(logits, labels, ignore_index=ignore_index)
            # accumulate the correct and total predicted tokens for the epoch
            epoch_correct_num += batch_correct_num
            epoch_total_num += batch_total_num
            # accuracy of this batch
            batch_acc = batch_correct_num / batch_total_num

            total_loss += loss.item()
            if args.gradient_accumulation_steps > 1:
                loss = loss / args.gradient_accumulation_steps

            loss.backward()
            # gradient clipping
            torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)

            # update the parameters after accumulating gradients for the configured number of steps
            if (batch_idx + 1) % args.gradient_accumulation_steps == 0:
                # update parameters
                optimizer.step()
                # update the learning rate
                scheduler.step()
                # clear gradients
                optimizer.zero_grad()

            if (batch_idx + 1) % args.log_step == 0:
                logger.info(
                    "batch {} of epoch {}, loss {}, batch_acc {}, lr {}".format(
                        batch_idx + 1, epoch + 1, loss.item() * args.gradient_accumulation_steps,
                        batch_acc, scheduler.get_lr()))

            del input_ids, outputs

        except RuntimeError as exception:
            if "out of memory" in str(exception):
                logger.info("WARNING: ran out of memory")
                if hasattr(torch.cuda, 'empty_cache'):
                    torch.cuda.empty_cache()
            else:
                logger.info(str(exception))
                raise exception

    # average loss and accuracy of this epoch
    epoch_mean_loss = total_loss / len(train_dataloader)
    epoch_mean_acc = epoch_correct_num / epoch_total_num
    logger.info("epoch {}: loss {}, predict_acc {}".format(epoch + 1, epoch_mean_loss, epoch_mean_acc))

    # save model
    logger.info('saving model for epoch {}'.format(epoch + 1))
    model_path = join(args.save_model_path, 'epoch{}'.format(epoch + 1))
    if not os.path.exists(model_path):
        os.mkdir(model_path)
    model_to_save = model.module if hasattr(model, 'module') else model
    model_to_save.save_pretrained(model_path)
    logger.info('epoch {} finished'.format(epoch + 1))
    epoch_finish_time = datetime.now()
    logger.info('time for one epoch: {}'.format(epoch_finish_time - epoch_start_time))

    return epoch_mean_loss


def validate_epoch(model, validate_dataloader, logger, epoch, args):
    logger.info("start validating")
    model.eval()
    device = args.device
    # pad_id = args.pad_id
    # sep_id = args.sep_id
    ignore_index = args.ignore_index
    epoch_start_time = datetime.now()
    total_loss = 0
    # catch cuda out of memory exception
    try:
        with torch.no_grad():
            for batch_idx, (input_ids, labels) in enumerate(validate_dataloader):
                input_ids = input_ids.to(device)
                labels = labels.to(device)
                outputs = model(input_ids, labels=labels)
                logits = outputs.logits
                loss = outputs.loss
                loss = loss.mean()
                total_loss += loss.item()
                del input_ids, outputs
            # average loss of this epoch
            epoch_mean_loss = total_loss / len(validate_dataloader)
            logger.info("validate epoch {}: loss {}".format(epoch + 1, epoch_mean_loss))
            epoch_finish_time = datetime.now()
            logger.info('time for validating one epoch: {}'.format(epoch_finish_time - epoch_start_time))
            return epoch_mean_loss
    except RuntimeError as exception:
        if "out of memory" in str(exception):
            logger.info("WARNING: ran out of memory")
            if hasattr(torch.cuda, 'empty_cache'):
                torch.cuda.empty_cache()
        else:
            logger.info(str(exception))
            raise exception


def train(model, logger, train_dataset, validate_dataset, args):
    train_dataloader = DataLoader(
        train_dataset, batch_size=args.batch_size, shuffle=True,
        num_workers=args.num_workers, collate_fn=collate_fn, drop_last=True)
    validate_dataloader = DataLoader(
        validate_dataset, batch_size=args.batch_size, shuffle=True,
        num_workers=args.num_workers, collate_fn=collate_fn, drop_last=True)
    early_stopping = EarlyStopping(args.patience, verbose=True, save_path=args.save_model_path)
    t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.epochs
    optimizer = transformers.AdamW(model.parameters(), lr=args.lr, eps=args.eps)
    # scheduler = transformers.WarmupLinearSchedule(optimizer, warmup_steps=args.warmup_steps, t_total=t_total)
    scheduler = transformers.get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total)

    logger.info('starting training')

    # train and validation loss of each epoch
    train_losses, validate_losses = [], []
    # lowest validation loss seen so far
    best_val_loss = 10000
    # start training
    for epoch in range(args.epochs):
        # ========== train ========== #
        train_loss = train_epoch(
            model=model, train_dataloader=train_dataloader, optimizer=optimizer,
            scheduler=scheduler, logger=logger, epoch=epoch, args=args)
        train_losses.append(train_loss)

        # ========== validate ========== #
        validate_loss = validate_epoch(
            model=model, validate_dataloader=validate_dataloader,
            logger=logger, epoch=epoch, args=args)
        validate_losses.append(validate_loss)

        # save the model with the lowest perplexity so far; lower perplexity
        # does not necessarily mean better generated replies
        if validate_loss < best_val_loss:
            best_val_loss = validate_loss
            logger.info('saving current best model for epoch {}'.format(epoch + 1))
            model_path = join(args.save_model_path, 'min_ppl_model')
            if not os.path.exists(model_path):
                os.mkdir(model_path)
            model_to_save = model.module if hasattr(model, 'module') else model
            model_to_save.save_pretrained(model_path)

        # if patience == 0, skip early stopping
        if args.patience == 0:
            continue
        early_stopping(validate_loss, model)
        if early_stopping.early_stop:
            logger.info("Early stopping")
            break
    logger.info('training finished')
    logger.info("train_losses:{}".format(train_losses))
    logger.info("validate_losses:{}".format(validate_losses))


def calculate_loss(logit, target, pad_idx, smoothing=True):
    if smoothing:
        logit = logit[..., :-1, :].contiguous().view(-1, logit.size(2))
        target = target[..., 1:].contiguous().view(-1)

        eps = 0.1
        n_class = logit.size(-1)

        one_hot = torch.zeros_like(logit).scatter(1, target.view(-1, 1), 1)
        one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)
        log_prb = F.log_softmax(logit, dim=1)

        non_pad_mask = target.ne(pad_idx)
        loss = -(one_hot * log_prb).sum(dim=1)
        loss = loss.masked_select(non_pad_mask).mean()  # average later
    else:
        # loss = F.cross_entropy(predict_logit, target, ignore_index=pad_idx)
        logit = logit[..., :-1, :].contiguous().view(-1, logit.size(-1))
        labels = target[..., 1:].contiguous().view(-1)
        loss = F.cross_entropy(logit, labels, ignore_index=pad_idx)
    return loss


def calculate_acc(logit, labels, ignore_index=-100):
    logit = logit[..., :-1, :].contiguous().view(-1, logit.size(-1))
    labels = labels[..., 1:].contiguous().view(-1)

    _, logit = logit.max(dim=-1)  # for each position, take the index with the highest score
    # boolean mask: 0 where the label equals ignore_index (padding), 1 elsewhere
    non_pad_mask = labels.ne(ignore_index)
    n_correct = logit.eq(labels).masked_select(non_pad_mask).sum().item()
    n_word = non_pad_mask.sum().item()
    return n_correct, n_word


def main():
    # initialize arguments
    args = set_args()

    # select which GPUs to use for training
    os.environ["CUDA_VISIBLE_DEVICES"] = args.device
    args.cuda = not args.no_cuda

    if args.batch_size < 2048 and args.warmup_steps <= 4000:
        print('[Warning] The warmup steps may be not enough.\n'
              '(sz_b, warmup) = (2048, 4000) is the official setting.\n'
              'Using smaller batch w/o longer warmup may cause '
              'the warmup stage ends with only little data trained.')

    # create the logger
    logger = create_logger(args)

    # use the GPU when requested and available
    args.cuda = torch.cuda.is_available() and not args.no_cuda
    device = 'cuda:0' if args.cuda else 'cpu'
    args.device = device
    logger.info('using device:{}'.format(device))

    # initialize the tokenizer
    tokenizer = BertTokenizerFast(vocab_file=args.vocab_path, sep_token="[SEP]", pad_token="[PAD]", cls_token="[CLS]")
    args.sep_id = tokenizer.sep_token_id
    args.pad_id = tokenizer.pad_token_id
    args.cls_id = tokenizer.cls_token_id

    # create the model output directory
    if not os.path.exists(args.save_model_path):
        os.mkdir(args.save_model_path)

    # create the model
    if args.pretrained_model:  # load the pretrained model
        model = GPT2LMHeadModel.from_pretrained(args.pretrained_model)
    else:  # initialize the model from the config file
        model_config = GPT2Config.from_json_file(args.model_config)
        model = GPT2LMHeadModel(config=model_config)
    model = model.to(device)
    logger.info('model config:\n{}'.format(model.config.to_json_string()))
    assert model.config.vocab_size == tokenizer.vocab_size

    # multi-GPU training
    if args.cuda and torch.cuda.device_count() > 1:
        model = DataParallel(model).cuda()
        # model = BalancedDataParallel(args.gpu0_bsz, model, dim=0).cuda()
        logger.info("use GPU {} to train".format(args.device))

    # count the model parameters
    num_parameters = 0
    parameters = model.parameters()
    for parameter in parameters:
        num_parameters += parameter.numel()
    logger.info('number of model parameters: {}'.format(num_parameters))

    # log the argument settings
    logger.info("args:{}".format(args))

    # load the training and validation sets
    # ========= Loading Dataset ========= #
    train_dataset, validate_dataset = load_dataset(logger, args)

    train(model, logger, train_dataset, validate_dataset, args)


if __name__ == '__main__':
    main()
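
calculate_acc applies the standard language-model shift: the logits at position i predict the label at position i+1, and positions whose label is -100 (the padding value from collate_fn) are masked out. A self-contained sketch, with the function copied verbatim from train.py and the logits hand-made rather than produced by a model:

import torch

def calculate_acc(logit, labels, ignore_index=-100):
    # drop the last logit and the first label: logits at position i predict token i+1
    logit = logit[..., :-1, :].contiguous().view(-1, logit.size(-1))
    labels = labels[..., 1:].contiguous().view(-1)
    _, logit = logit.max(dim=-1)            # predicted token id at each position
    non_pad_mask = labels.ne(ignore_index)  # ignore padded label positions
    n_correct = logit.eq(labels).masked_select(non_pad_mask).sum().item()
    n_word = non_pad_mask.sum().item()
    return n_correct, n_word

# batch of 1 sequence, 4 positions, vocabulary of 3 tokens
logits = torch.tensor([[[0.1, 0.9, 0.0],    # predicts token 1
                        [0.8, 0.1, 0.1],    # predicts token 0
                        [0.1, 0.1, 0.8],    # predicts token 2
                        [0.3, 0.3, 0.4]]])  # last position: dropped by the shift
labels = torch.tensor([[2, 1, 0, -100]])    # shifted labels: [1, 0, -100]

print(calculate_acc(logits, labels))  # (2, 2): both non-padded predictions are correct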

The dataset.py file


from torch.utils.data import Dataset
import torch


class MyDataset(Dataset):
    """Dataset of tokenized dialogues: each item is a list of token ids, truncated to max_len."""

    def __init__(self, input_list, max_len):
        self.input_list = input_list
        self.max_len = max_len

    def __getitem__(self, index):
        input_ids = self.input_list[index]
        input_ids = input_ids[:self.max_len]
        input_ids = torch.tensor(input_ids, dtype=torch.long)
        return input_ids

    def __len__(self):
        return len(self.input_list)
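
A minimal usage sketch tying MyDataset to the DataLoader: the toy token-id lists below are made up, and collate_fn is re-declared locally (it is the same padding function defined in train.py) so that the snippet runs on its own.

import torch
import torch.nn.utils.rnn as rnn_utils
from torch.utils.data import DataLoader

from dataset import MyDataset

def collate_fn(batch):
    # same padding as train.py: 0 for inputs, -100 for labels
    input_ids = rnn_utils.pad_sequence(batch, batch_first=True, padding_value=0)
    labels = rnn_utils.pad_sequence(batch, batch_first=True, padding_value=-100)
    return input_ids, labels

# three toy dialogues; MyDataset truncates each to max_len=5
toy_dialogues = [[101, 7, 8, 102], [101, 3, 102], [101, 4, 5, 6, 7, 8, 102]]
dataset = MyDataset(toy_dialogues, max_len=5)
loader = DataLoader(dataset, batch_size=3, shuffle=False, collate_fn=collate_fn)

for input_ids, labels in loader:
    print(input_ids.shape)  # torch.Size([3, 5]): the longest (truncated) item sets the width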

Data preprocessing code: preprocess.py


from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer
from transformers import BertTokenizerFast
import argparse
import pandas as pd
import pickle
from tqdm import tqdm
from transformers import GPT2TokenizerFast, GPT2LMHeadModel
import logging
import numpy as np


def create_logger(log_path):
    """Log to both the log file and the console."""
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
    # handler that writes to the log file
    file_handler = logging.FileHandler(filename=log_path)
    file_handler.setFormatter(formatter)
    file_handler.setLevel(logging.INFO)
    logger.addHandler(file_handler)
    # handler that writes to the console
    console = logging.StreamHandler()
    console.setLevel(logging.DEBUG)
    console.setFormatter(formatter)
    logger.addHandler(console)
    return logger


def preprocess():
    """
    Tokenize the raw corpus, turning each dialogue into:
    "[CLS]utterance1[SEP]utterance2[SEP]utterance3[SEP]"
    """
    # arguments
    parser = argparse.ArgumentParser()
    parser.add_argument('--vocab_path', default='vocab/vocab.txt', type=str, required=False, help='path to the vocabulary file')
    parser.add_argument('--log_path', default='data/preprocess.log', type=str, required=False, help='where to store the preprocessing log')
    parser.add_argument('--train_path', default='50w_qa_data', type=str, required=False, help='path to the raw training corpus')
    parser.add_argument('--save_path', default='data/train.pkl', type=str, required=False, help='where to store the tokenized training set')
    args = parser.parse_args()

    # initialize the logger
    logger = create_logger(args.log_path)

    # initialize the tokenizer
    tokenizer = BertTokenizerFast(vocab_file=args.vocab_path, sep_token="[SEP]", pad_token="[PAD]", cls_token="[CLS]")
    sep_id = tokenizer.sep_token_id
    cls_id = tokenizer.cls_token_id

    logger.info("preprocessing data,data path:{}, save path:{}".format(args.train_path, args.save_path))

    # read the raw training corpus
    with open(args.train_path, 'rb') as f:
        data = f.read().decode("utf-8")

    # the line separator differs between linux and windows
    if "\r\n" in data:
        train_data = data.split("\r\n\r\n")
    else:
        train_data = data.split("\n")
    logger.info("there are {} dialogue in dataset".format(len(train_data)))

    # tokenize every dialogue
    # each item ends up in the format "[CLS]utterance1[SEP]utterance2[SEP]utterance3[SEP]"
    dialogue_len = []  # lengths of all tokenized dialogues, for computing the median and mean
    dialogue_list = []
    for index, dialogue in enumerate(tqdm(train_data)):
        if "\r\n" in data:
            utterances = dialogue.split("\r\n")
        else:
            utterances = dialogue.split("\t")

        input_ids = [cls_id]  # every dialogue starts with [CLS]
        for utterance in utterances:
            input_ids += tokenizer.encode(utterance, add_special_tokens=False)
            input_ids.append(sep_id)  # append [SEP] after every utterance to mark its end
        dialogue_len.append(len(input_ids))
        dialogue_list.append(input_ids)
    # len_mean = np.mean(dialogue_len)
    # len_median = np.median(dialogue_len)
    # len_max = np.max(dialogue_len)
    with open(args.save_path, "wb") as f:
        pickle.dump(dialogue_list, f)
    # logger.info("finish preprocessing data,the result is stored in {}".format(args.save_path))
    # logger.info("mean of dialogue len:{},median of dialogue len:{},max len:{}".format(len_mean, len_median, len_max))


if __name__ == '__main__':
    preprocess()
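
To verify the "[CLS]utterance1[SEP]utterance2[SEP]" layout that preprocess() produces, the sketch below encodes one toy two-turn dialogue using the same tokenizer construction as preprocess.py; it assumes vocab/vocab.txt is in place.

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast(vocab_file="vocab/vocab.txt",
                              sep_token="[SEP]", pad_token="[PAD]", cls_token="[CLS]")
sep_id = tokenizer.sep_token_id
cls_id = tokenizer.cls_token_id

# a toy two-turn dialogue, encoded the same way preprocess() does
utterances = ["你好", "你好,很高兴认识你"]
input_ids = [cls_id]
for utterance in utterances:
    input_ids += tokenizer.encode(utterance, add_special_tokens=False)
    input_ids.append(sep_id)

print(input_ids)
print(tokenizer.decode(input_ids))  # roughly: "[CLS] 你 好 [SEP] 你 好 , 很 高 兴 认 识 你 [SEP]"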
