Scientific Plotting Series: A Collection of SCI Figures Drawn in Python

image

Introduction

Scientific Plotting Series: A Collection of SCI Figures Drawn in Python

Load Python packages

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
from statsmodels.stats.multitest import multipletests

# Setup for local running - please delete this block
import sys
sys.path.append('C:\\Users\\ncaptier\\Documents\\GitHub\\multipit\\')

from multipit.result_analysis.plot import plot_metrics
from utils import plot_average_perf, plot_benchmark, change_width, annotate, reshape_clustermap

Data download

Python plotting collection:

  • Baidu Netdisk link: download the data from Baidu Netdisk
  • Extraction code: 科研绘图系列:python语言绘制SCI图合集

image

Code

Load Python packages

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
from statsmodels.stats.multitest import multipletests

# Setup for local running - please delete this block
import sys
sys.path.append('C:\\Users\\ncaptier\\Documents\\GitHub\\multipit\\')

from multipit.result_analysis.plot import plot_metrics
from utils import plot_average_perf, plot_benchmark, change_width, annotate, reshape_clustermap
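Note: plot_metrics comes from the multipit repository, while plot_average_perf, plot_benchmark, change_width, annotate, and reshape_clustermap come from a utils script that sits next to the original notebook; the hard-coded sys.path entry above only works on the original author's machine. Below is a minimal sketch of a more portable setup, assuming you cloned multipit somewhere under your home directory (the repo_root path is a placeholder to adapt):

# Portable variant of the path setup above (assumption: multipit is cloned under ~/GitHub
# and this script runs from the repository folder that also contains utils.py).
import sys
from pathlib import Path

repo_root = Path.home() / "GitHub" / "multipit"   # placeholder - adapt to your clone location
if repo_root.is_dir() and str(repo_root) not in sys.path:
    sys.path.append(str(repo_root))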

Figures 2, S7-S10

rna_os = {"XGboost": 0.663, "LR": 0.627, "RF": 0.624, "Cox": 0.542}
rna_pfs = {"XGboost": 0.637, "LR": 0.663, "RF": 0.591, "Cox": 0.569}

clinical_os = {"XGboost": 0.579, "LR": 0.667, "RF": 0.652, "Cox": 0.631}
clinical_pfs = {"XGboost": 0.552, "LR": 0.575, "RF": 0.563, "Cox": 0.526}

radiomics_os = {"XGboost": 0.574, "LR": None, "RF": 0.556, "Cox": 0.563}
radiomics_pfs = {"XGboost": 0.634, "LR": 0.568, "RF": 0.565, "Cox": 0.58}

pathomics_os = {"XGboost": 0.547, "LR": 0.54, "RF": 0.561, "Cox": 0.538}
pathomics_pfs = {"XGboost": 0.588, "LR": 0.573, "RF": 0.534, "Cox": 0.524}

# SHAP values of the unimodal RNA models
shap_rna_xgboost = pd.read_csv("..\\source_data\\unimodal_shapvalues\\1year_death\\XGboost_RNA.csv", index_col=0)
shap_rna_LR = pd.read_csv("..\\source_data\\unimodal_shapvalues\\1year_death\\LR_RNA.csv", index_col=0)
shap_rna_Cox = pd.read_csv("..\\source_data\\unimodal_shapvalues\\OS\\Cox_RNA.csv", index_col=0)
shap_rna_RF = pd.read_csv("..\\source_data\\unimodal_shapvalues\\OS\\RF_RNA.csv", index_col=0)

# Sign of the SHAP-feature correlation for each RNA feature, per algorithm
correlation_signs = pd.read_csv("..\\source_data\\unimodal_shapvalues\\shap_features_correlations\\RNA_signs_os.csv", index_col=0)
correlation_signs.head()

# Keep only features whose correlation sign agrees across all 4 algorithms
consensus_features = (correlation_signs.sum(axis=1) == -4) | (correlation_signs.sum(axis=1) == 4)
# Note: For radiomics OS, replace -4 and 4 by -3 and 3 (only 3 algorithms out of 4 taken into account)

# Rank the consensus features by mean |SHAP| value within each algorithm
df_rank = pd.concat(
    [np.abs((shap_rna_xgboost.iloc[:, :-1].T)[consensus_features]).mean(axis=1).rank(ascending=True).rename("XGBoost"),
     np.abs((shap_rna_LR.iloc[:, :-1].T)[consensus_features]).mean(axis=1).rank(ascending=True).rename("LR"),
     np.abs((shap_rna_RF.iloc[:, :-1].T)[consensus_features]).mean(axis=1).rank(ascending=True).rename("RF"),
     np.abs((shap_rna_Cox.iloc[:, :-1].T)[consensus_features]).mean(axis=1).rank(ascending=True).rename("Cox")],
    axis=1,
)
df_rank.head()

# Weight each algorithm's ranking by its performance above chance (score - 0.5)
weights = [val - 0.5 for val in rna_os.values()]
final_importance = (
    df_rank.apply(lambda row: (1/sum(weights)) * (weights[0]*row["XGBoost"] + weights[1]*row["LR"]
                                                  + weights[2]*row["RF"] + weights[3]*row["Cox"]),
                  axis=1)
    .sort_values(ascending=False)
)
# Normalise and re-sign the importance with the correlation sign
final_importance = (final_importance/df_rank.shape[0]) * (correlation_signs[consensus_features]["XGboost"].loc[final_importance.index])
final_importance = final_importance.to_frame().rename(columns={0: "Consensus importance"})
final_importance["Impact"] = 1*(final_importance["Consensus importance"] > 0)
final_importance = final_importance.replace(to_replace={"Impact": {0: "Lower risk", 1: "Increase risk"}})
final_importance.head(5)

fig, ax = plt.subplots(figsize=(10, 8))
sns.barplot(data=final_importance.reset_index(),
            orient="h",
            x="Consensus importance",
            y="index",
            hue="Impact",
            palette=["blue", "red"],
            dodge=False,
            ax=ax)
ax.set(xlabel=None, ylabel=None)
ax.set_axisbelow(True)
ax.yaxis.grid(color="gray", linestyle="dashed")
ax.legend(fontsize=12)
ax.axvline(x=0, color="k")
ax.xaxis.set_tick_params(labelsize=12)
ax.yaxis.set_tick_params(labelsize=12)
ax.set_title("Consensus importance", fontsize=14)
ax.set_xlim(-1.05, 1.05)
plt.tight_layout()
sns.despine()

image
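The block above aggregates per-model feature ranks (mean |SHAP| value) with weights derived from each model's performance above chance (score - 0.5), then re-signs the result with the SHAP-feature correlation sign. Here is a minimal toy sketch of that weighted rank-aggregation step, using made-up feature names and the rna_os scores as weights (not the study data):

# Toy illustration of the consensus-importance aggregation (made-up ranks, not the study data).
import numpy as np
import pandas as pd

# Mean |SHAP| rank of three hypothetical features under the four algorithms (higher = more important).
toy_rank = pd.DataFrame({"XGBoost": [3, 2, 1], "LR": [3, 1, 2], "RF": [2, 3, 1], "Cox": [3, 2, 1]},
                        index=["gene_A", "gene_B", "gene_C"])
# Weight each algorithm by how much it beats a random predictor (same idea as rna_os values - 0.5).
toy_weights = np.array([0.663, 0.627, 0.624, 0.542]) - 0.5

# Weighted mean of the ranks, normalised by the number of features (as in the block above).
consensus = toy_rank.mul(toy_weights, axis=1).sum(axis=1) / toy_weights.sum() / len(toy_rank)
print(consensus.sort_values(ascending=False))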

Figures 3, S14, S16-S23

results_cl = pd.read_csv("..\\source_data\\multimodal_performance\\1year_death\\late_XGboost.csv", index_col=0)
results_cl.head(5)
fig = plot_metrics(results_cl,
                   metrics="roc_auc",
                   models=list(results_cl.columns[1:]),
                   annotations={"1 modality": (0, 3), "2 modalities": (4, 9), "3 modalities": (10, 13), "4 modalities": (14, 15)},
                   title=None,
                   ylim=(0.5, 0.86),
                   y_text=0.85,
                   ax=None)

image
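plot_metrics is a helper from multipit.result_analysis.plot. If you only want a quick look at the per-model score distributions without that helper, a plain seaborn boxplot over the same columns is a rough stand-in; this is a sketch under the assumption that results_cl keeps one score per repeat in each model column after the leading metric column, not a replica of the original figure:

# Hedged stand-in for plot_metrics: box plots of the per-model ROC AUC columns.
import matplotlib.pyplot as plt
import seaborn as sns

fig, ax = plt.subplots(figsize=(12, 5))
sns.boxplot(data=results_cl.iloc[:, 1:], ax=ax)   # skip the first ("metric") column
ax.set_ylabel("ROC AUC")
ax.set_ylim(0.5, 0.86)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right")
plt.tight_layout()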

results_surv = pd.read_csv("..\\source_data\\multimodal_performance\\OS\\late_RF.csv", index_col=0)
results_surv.head(5)

fig = plot_metrics(results_surv,
                   metrics='c_index',
                   models=list(results_surv.columns[1:]),
                   annotations={"1 modality": (0, 3), "2 modalities": (4, 9), "3 modalities": (10, 13), "4 modalities": (14, 15)},
                   title=None,
                   ylim=(0.5, 0.77),
                   y_text=0.77,
                   ax=None)

Figure 4

marginal_contributions = pd.read_csv("..\\source_data\\marginal_contributions_latefus.csv", index_col=0)
marginal_contributions.head()
cmap_TN = sns.clustermap(marginal_contributions[marginal_contributions["results"] == "TN"][["C", "R", "RNA"]],
                         cmap="bwr",
                         center=0,
                         yticklabels=False,
                         xticklabels=False,
                         vmin=-0.08,
                         vmax=0.14)
reshape_clustermap(cmap_TN, cell_width=0.25, cell_height=0.015)

px = 1/plt.rcParams['figure.dpi']
fig, ax = plt.subplots(figsize=(100*px, 675*px))
ax.set_xlim(-0.15, 0)
(marginal_contributions.loc[cmap_TN.data2d.index, "multi_pred"].iloc[::-1] - 0.5).plot.barh(width=0.85, ax=ax, color="k")
sns.despine()
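reshape_clustermap is a utils helper that only resizes the heatmap cells; the clustermap call itself is standard seaborn. Below is a self-contained toy example of the same diverging-colormap setup (random numbers and a made-up shape, not the marginal-contribution data):

# Toy clustermap with the same diverging-colormap settings used above (random data).
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
toy = pd.DataFrame(rng.normal(0.0, 0.05, size=(40, 3)), columns=["C", "R", "RNA"])
g = sns.clustermap(toy, cmap="bwr", center=0, yticklabels=False, vmin=-0.08, vmax=0.14)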

Figure 5

models = ['late_XGboost', 'late_LR', 'early_XGboost', 'early_select_XGboost',
          'early_LR', 'early_select_LR', 'dyam', 'dyam_optim', 'dyam_select', 'dyam_optim_select']
algorithms = ["XGboost", "LR", "XGboost", "XGboost", "LR", "LR",
              "Perceptron", "Perceptron", "Perceptron", "Perceptron"]

# To save best multimodal models
list_best_1y, list_names_best_1y = [], []
list_best_6m, list_names_best_6m = [], []

# To save unimodal models
list_clinical_1y, list_clinical_6m = [], []
list_radiomics_1y, list_radiomics_6m = [], []
list_pathomics_1y, list_pathomics_6m = [], []
list_RNA_1y, list_RNA_6m = [], []

for mod in models:
    # Collect unimodal models and best multimodal models for 1-year death prediction
    results_1y = pd.read_csv("..\\source_data\\multimodal_performance\\1year_death\\" + mod + ".csv", index_col=0)
    list_clinical_1y.append(results_1y["C"])
    list_radiomics_1y.append(results_1y["R"])
    list_pathomics_1y.append(results_1y["P"])
    list_RNA_1y.append(results_1y["RNA"])
    best_1y = results_1y.iloc[:, 1:].mean().idxmax()
    list_best_1y.append(results_1y[best_1y].rename(mod))
    list_names_best_1y.append(best_1y)

    # Collect unimodal models and best multimodal models for 6-months progression prediction
    results_6m = pd.read_csv("..\\source_data\\multimodal_performance\\6months_progression\\" + mod + ".csv", index_col=0)
    list_clinical_6m.append(results_6m["C"])
    list_radiomics_6m.append(results_6m["R"])
    list_pathomics_6m.append(results_6m["P"])
    list_RNA_6m.append(results_6m["RNA"])
    best_6m = results_6m.iloc[:, 1:].mean().idxmax()
    list_best_6m.append(results_6m[best_6m].rename(mod))
    list_names_best_6m.append(best_6m)

# Concatenate best multimodal models across integration strategies
results_multimodal_1y = pd.concat(list_best_1y, axis=1)
results_multimodal_1y["metric"] = "1y death AUC"

results_multimodal_6m = pd.concat(list_best_6m, axis=1)
results_multimodal_6m["metric"] = "6m progression AUC"

results_multimodal = pd.concat([results_multimodal_1y, results_multimodal_6m], axis=0)

# Select best unimodal models across predictive algorithms
best_clinical_1y = np.argmax([mod.mean() for mod in list_clinical_1y])
best_radiomics_1y = np.argmax([mod.mean() for mod in list_radiomics_1y])
best_pathomics_1y = np.argmax([mod.mean() for mod in list_pathomics_1y])
best_RNA_1y = np.argmax([mod.mean() for mod in list_RNA_1y])
results_unimodal_1y = pd.concat([list_clinical_1y[best_clinical_1y],
                                 list_radiomics_1y[best_radiomics_1y],
                                 list_pathomics_1y[best_pathomics_1y],
                                 list_RNA_1y[best_RNA_1y]],
                                axis=1)
results_unimodal_1y["metric"] = "1y death AUC"

best_clinical_6m = np.argmax([mod.mean() for mod in list_clinical_6m])
best_radiomics_6m = np.argmax([mod.mean() for mod in list_radiomics_6m])
best_pathomics_6m = np.argmax([mod.mean() for mod in list_pathomics_6m])
best_RNA_6m = np.argmax([mod.mean() for mod in list_RNA_6m])
results_unimodal_6m = pd.concat([list_clinical_6m[best_clinical_6m],
                                 list_radiomics_6m[best_radiomics_6m],
                                 list_pathomics_6m[best_pathomics_6m],
                                 list_RNA_6m[best_RNA_6m]],
                                axis=1)
results_unimodal_6m["metric"] = "6m progression AUC"

results_unimodal = pd.concat([results_unimodal_1y, results_unimodal_6m], axis=0)

_, ax = plt.subplots(figsize=(25, 10))
annotations = {0: algorithms[best_clinical_1y], 1: algorithms[best_clinical_6m],
               2: algorithms[best_radiomics_1y], 3: algorithms[best_radiomics_6m],
               4: algorithms[best_pathomics_1y], 5: algorithms[best_pathomics_6m],
               6: algorithms[best_RNA_1y], 7: algorithms[best_RNA_6m]}
fig = plot_benchmark(results_unimodal,
                     metrics=["1y death AUC", "6m progression AUC"],
                     new_width=0.15,
                     annotations=annotations,
                     ylim=(0.5, 0.86),
                     title="Best unimodal models",
                     ax=ax)
plt.tight_layout()

_, ax = plt.subplots(figsize=(25, 10))
annotations = {i: list_names_best_1y[i//2].split('+') if i % 2 == 0 else list_names_best_6m[i//2].split('+')
               for i in range(20)}
fig = plot_benchmark(results_multimodal,
                     metrics=["1y death AUC", "6m progression AUC"],
                     new_width=0.07,
                     annotations=annotations,
                     ylim=(0.5, 0.86),
                     title="Best multimodal combination for different integration strategies",
                     ax=ax)
plt.tight_layout()
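Inside the loop, the best modality combination for each integration strategy is picked with results.iloc[:, 1:].mean().idxmax(), i.e. the column with the highest mean score across repeats. A toy sketch of just that selection step, with made-up scores:

# Toy illustration of the best-combination selection used in the loop above (made-up scores).
import pandas as pd

toy = pd.DataFrame({"metric": ["roc_auc"] * 3,
                    "C": [0.60, 0.62, 0.58],
                    "C+RNA": [0.68, 0.70, 0.66]})
best = toy.iloc[:, 1:].mean().idxmax()   # name of the best-scoring column ("C+RNA" here)
print(best, toy[best].mean())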

Figure 6, S15

models = ['late_XGboost', 'late_LR', 'early_XGboost', 'early_select_XGboost',
          'early_LR', 'early_select_LR', 'dyam', 'dyam_select']

list_average = []
list_std = []
for mod in models:
    results = pd.read_csv("..\\source_data\\multimodal_performance\\1year_death\\" + mod + ".csv", index_col=0)
    results_agg = pd.DataFrame(index=results.index)
    results_agg["1 modality"] = results[["C", "R", "P", "RNA"]].mean(axis=1)
    results_agg["2 modalities"] = results[["C+R", "C+P", "C+RNA", "R+P", "P+RNA", "R+RNA"]].mean(axis=1)
    results_agg["3 modalities"] = results[["C+R+P", "C+R+RNA", "C+P+RNA", "R+P+RNA"]].mean(axis=1)
    results_agg["4 modalities"] = results["C+R+P+RNA"].copy()
    list_average.append(results_agg.mean().rename(mod))
    list_std.append(results_agg.std().rename(mod))

data = pd.concat(list_average, axis=1).T
data_std = pd.concat(list_std, axis=1).T.values.flatten(order="F")
data

fig = plot_average_perf(data,
                        data_std,
                        markers=["o", "*", "^", "s", "X", "8", "D", "h", "P", ">"],
                        sizes=[10, 16, 12, 10, 10, 10, 10, 10, 10, 12],
                        ylim=(0.5, 0.81))
plt.tight_layout()
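plot_average_perf is another utils helper. As a rough stand-in without the repository, the aggregated means in data can be drawn with plain matplotlib; this sketch omits the error bars carried by data_std and assumes data as built above, one row per integration strategy and one column per number of modalities:

# Hedged stand-in for plot_average_perf: one line per strategy, one point per number of modalities.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 6))
for model_name, row in data.iterrows():
    ax.plot(row.index, row.values, marker="o", label=model_name)
ax.set_ylim(0.5, 0.81)
ax.set_ylabel("Mean ROC AUC")
ax.legend(fontsize=10)
plt.tight_layout()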

Figure 7

list_sig = ['CRMA', 'CTLA4', 'CX3CL1', 'CXCL9', 'CYT', 'EIGS', 'ESCS', 'FTBRS', 'HLADRA', 'HRH1',
            'IFNgamma', 'IMPRES', 'IRG', 'Immunopheno', 'MPS', 'PD1', 'PDL1', 'PDL2', 'Renal101',
            'TIG', 'TLS', 'TME']
list_gsea = ['APM', 'CECMup', 'CECMdown', 'IIS', 'IMS', 'IPRES', 'MFP', 'MIAS', 'PASSPRE', 'TIS']
list_deconv = ['CD8T_CIBERSORT', 'CD8T_MCPcounter', 'CD8T_Xcell', 'Immuno_CIBERSORT']

results_1yeardeath = pd.read_csv("..\\source_data\\signatureRNA_performance\\1year_death\\signatures_RNA.csv", index_col=0)
temp = pd.read_csv("..\\source_data\\multimodal_performance\\1year_death\\late_XGboost.csv", index_col=0)
results_1yeardeath["best_RNA"] = temp["RNA"]
results_1yeardeath["best_multimodal"] = temp["C+R+RNA"]
results_1yeardeath.head(5)
list_sig_sorted = list(results_1yeardeath[list_sig].mean().apply(lambda x: max(x, 1-x)).sort_values().index)
list_gsea_sorted = list(results_1yeardeath[list_gsea].mean().apply(lambda x: max(x, 1-x)).sort_values().index)
list_deconv_sorted = list(results_1yeardeath[list_deconv].mean().apply(lambda x: max(x, 1-x)).sort_values().index)

results_1yeardeath = results_1yeardeath[["metric"] + list_sig_sorted + list_gsea_sorted + list_deconv_sorted + ["best_RNA", "best_multimodal"]]
color_dic = {}
for col in results_1yeardeath.columns[1:-2]:
    if results_1yeardeath[col].mean() < 0.5:
        results_1yeardeath[col] = 1 - results_1yeardeath[col]
        color_dic[col] = "blue"
    else:
        color_dic[col] = "red"

color_dic["best_RNA"] = "red"
color_dic["best_multimodal"] = "red"

_, ax = plt.subplots(figsize=(20, 10))
fig = plot_metrics(results_1yeardeath,
                   metrics="roc_auc",
                   models=list(results_1yeardeath.columns[1:]),
                   colors=color_dic,
                   ylim=(0.5, 0.86),
                   annotations={"Marker genes \n methods": (0, 21), "GSEA \n methods": (22, 31),
                                "Deconvolution \n methods": (32, 35), "Our ML\n methods": (37, 39)},
                   y_text=0.82,
                   ax=ax)
t = plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")

red_patch = mpatches.Patch(color='red', label='Poor prognosis')
blue_patch = mpatches.Patch(color='blue', label='Good prognosis')
ax.legend(handles=[red_patch, blue_patch], fontsize=16)
plt.tight_layout()
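In the block above, signatures whose mean AUC falls below 0.5 are flipped (AUC becomes 1 - AUC) and colored blue, so every bar reads as discriminative power regardless of the direction of association. A toy version of that flip with made-up values:

# Toy illustration of the AUC flip used above (made-up values, not the study signatures).
import pandas as pd

toy_auc = pd.DataFrame({"sig_good": [0.40, 0.42, 0.38], "sig_poor": [0.66, 0.70, 0.68]})
colors = {}
for col in toy_auc.columns:
    if toy_auc[col].mean() < 0.5:          # signature associated with the opposite outcome
        toy_auc[col] = 1 - toy_auc[col]    # flip so that higher = more discriminative
        colors[col] = "blue"               # "good prognosis" color in the figure
    else:
        colors[col] = "red"                # "poor prognosis" color in the figure
print(toy_auc.mean())
print(colors)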

Figures 8, S24

pval_OS = pd.read_csv("..\\source_data\\multimodal_risk_stratification\\OS\\pvalues.csv", index_col=0).T
pval_OS["Predictive task"] = "OS"pval_PFS = pd.read_csv("..\\source_data\\multimodal_risk_stratification\\PFS\\pvalues.csv", index_col=0).T
pval_PFS["Predictive task"] = "PFS"pval_1y = pd.read_csv("..\\source_data\\multimodal_risk_stratification\\1year_death\\pvalues.csv", index_col=0).T
pval_1y["Predictive task"] = "1y-death"pval_1y_thr = pd.read_csv("..\\source_data\\multimodal_risk_stratification\\1year_death\\pvalues_threshold.csv", index_col=0).T
pval_1y_thr["Predictive task"] = "1y-death (threshold)"pval_6m = pd.read_csv("..\\source_data\\multimodal_risk_stratification\\6months_progression\\pvalues.csv", index_col=0).T
pval_6m["Predictive task"] = "6m-progression"pval_6m_thr = pd.read_csv("..\\source_data\\multimodal_risk_stratification\\6months_progression\\pvalues_threshold.csv", index_col=0).T
pval_6m_thr["Predictive task"] = "6m-progression (threshold)"
d = {col: "multimodal" for col in pval_OS.columns if len(col.split('+')) > 1}
d.update({"C": "clinical", "R": "Radiomic", "P": "Pathomic"})

final_1 = (pd.concat([pval_OS, pval_PFS], axis=0)
           .melt(id_vars=["Predictive task"])
           .replace({"variable": d}))
final_2 = (pd.concat([pval_1y, pval_1y_thr, pval_6m, pval_6m_thr], axis=0)
           .melt(id_vars=["Predictive task"])
           .replace({"variable": d}))
results = pd.concat([final_1, final_2]).reset_index(drop=True)

corrected = multipletests(list(results["value"].fillna(1).values), alpha=0.05, method='fdr_bh')
results["value"] = -np.log10(corrected[1])
results.head()

fig, ax = plt.subplots(figsize=(10, 8))
data = results[results["variable"] == "multimodal"]
sns.boxplot(data=data, x="variable", y="value", hue="Predictive task", ax=ax)
sns.stripplot(data=data, x="variable", y="value", hue="Predictive task", ax=ax, dodge=True, palette='dark:k', size=4)
sns.despine()

handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[:6], labels[:6], fontsize=16, bbox_to_anchor=[0.62, 0.785])
ax.set_ylim(-0.1, 7)
ax.set(xlabel=None)
ax.set_ylabel("-log10 pvalue", fontsize=16)
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed')
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(axis='x', labelsize=16)
fig, ax = plt.subplots(figsize=(20, 10))
sns.barplot(data = results, x="variable", y="value", hue="Predictive task", ax=ax, estimator=max, err_kws={'linewidth': 0})
sns.despine()
change_width(ax, 0.12)

annotations = {0: "RF",          1: "early_select_RF",
               2: "RF",          3: "early_select_Cox",
               4: "Perceptron",  5: "early_select_XGBoost",
               6: "Perceptron",  7: "DyAM_select",
               8: "Perceptron",  9: "early_XGBoost",
               10: "Perceptron", 11: "DyAM_select"}
annotations_bis = {0: "", 1: ["C", "R", "P"],
                   2: "", 3: ["C", "P", "RNA"],
                   4: "", 5: ["C", "P", "RNA"],
                   6: "", 7: ["C", "RNA"],
                   8: "", 9: ["C", "R", "P", "RNA"],
                   10: "", 11: ["C", "P", "RNA"]}
annotate(ax, annotations, rotation="vertical")
annotate(ax, annotations_bis, position=lambda x: x/6, gap=0.35)

ax.legend(fontsize=16)
ax.set_ylim(0, 7)
ax.set(xlabel=None)
ax.set_ylabel("-log10 pvalue", fontsize=16)
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed')
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(axis='x', labelsize=16)
plt.tight_layout()

image
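The log-rank p-values above are corrected for multiple testing with the Benjamini-Hochberg procedure (multipletests with method='fdr_bh') before being turned into -log10 values. A minimal standalone example of that call with made-up p-values:

# Toy example of the Benjamini-Hochberg correction used above (made-up p-values).
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.02, 0.04, 0.30, 0.80]
reject, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(np.round(-np.log10(p_corrected), 3))   # the same -log10 transform as in the figure
print(reject)                                # which tests survive FDR control at 5%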
