Adding MobileOne to YOLO

Code: GitHub - apple/ml-mobileone — the official implementation of the paper "An Improved One millisecond Mobile Backbone".

Paper: https://arxiv.org/abs/2206.04040

MobileOne comes from Apple. Its authors report an inference time of roughly 1 millisecond on an iPhone 12, which is where the "One" in the name comes from. MobileOne's rapid adoption shows the potential of re-parameterization on mobile devices: simple, efficient, and plug-and-play.

The left side of Figure 3 shows one complete MobileOne building block. It consists of two stacked parts: the upper part is based on depthwise convolution and the lower part on pointwise convolution, terms that come from MobileNet. A depthwise convolution is essentially a grouped convolution whose number of groups g equals the number of input channels, while a pointwise convolution is a 1×1 convolution.
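As a quick standalone illustration (a PyTorch sketch, not part of the YOLOv5 changes below): a depthwise convolution is just `nn.Conv2d` with `groups` set to the input channel count, so each channel gets its own filter, and a pointwise convolution is a 1×1 `nn.Conv2d` that mixes channels:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 56, 56)  # (N, C, H, W)

# Depthwise 3x3 conv: groups == in_channels, one filter per channel.
dw = nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32, bias=False)

# Pointwise 1x1 conv: mixes information across channels.
pw = nn.Conv2d(32, 64, kernel_size=1, bias=False)

y = pw(dw(x))
print(y.shape)          # torch.Size([1, 64, 56, 56])
print(dw.weight.shape)  # torch.Size([32, 1, 3, 3]) -- one 3x3 filter per channel
```

Note the depthwise weight has a singleton second dimension (in_channels / groups = 1), which is what makes it so cheap compared with a dense 3×3 convolution.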

The depthwise module in Figure 3 has three branches. The leftmost branch is a 1×1 convolution; the middle branch is an over-parameterized 3×3 convolution, i.e., k parallel 3×3 convolutions; the right branch is a shortcut containing only a BN layer. Both the 1×1 and the 3×3 convolutions here are depthwise (grouped convolutions with g equal to the number of input channels).
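The reason the k parallel branches can later be collapsed is that convolution is linear in its weights. A minimal sketch (plain depthwise convs only, omitting the per-branch BN layers that the real block carries; the argument still holds after BN is folded into each branch) shows that summing k branch outputs equals a single conv with the summed kernels:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 8, 16, 16)

k = 4  # number of over-parameterized 3x3 branches
branches = [nn.Conv2d(8, 8, 3, padding=1, groups=8, bias=False)
            for _ in range(k)]

# Train-time output: sum of the k parallel depthwise convolutions.
y_multi = sum(b(x) for b in branches)

# Inference-time equivalent: one conv whose kernel is the sum of
# the branch kernels, by linearity of convolution in its weights.
merged = nn.Conv2d(8, 8, 3, padding=1, groups=8, bias=False)
merged.weight.data = sum(b.weight.data for b in branches)
y_single = merged(x)

print(torch.allclose(y_multi, y_single, atol=1e-5))  # True
```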

The pointwise module in Figure 3 has two branches. The left branch is an over-parameterized 1×1 convolution, made of k parallel 1×1 convolutions; the right branch is a skip connection containing only a BN layer. During training, MobileOne is a stack of such building blocks. Once training is done, the re-parameterization method folds the multi-branch block on the left of Figure 3 into the plain structure shown on the right.
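The core step of that folding is the standard RepVGG-style conv+BN fusion, applied branch by branch: W' = W · γ/σ and b' = β − μ · γ/σ, where μ, σ, γ, β are the BN running mean, running std, scale, and shift. A minimal standalone sketch for a single conv+BN branch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

conv = nn.Conv2d(8, 8, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)
# Give BN non-trivial statistics so the check is meaningful.
bn.weight.data = torch.rand(8) + 0.5
bn.bias.data = torch.randn(8)
bn.running_mean = torch.randn(8)
bn.running_var = torch.rand(8) + 0.5
bn.eval()  # use running statistics, as at inference time

# Fold BN into the conv: W' = W * gamma/std, b' = beta - mean * gamma/std
std = (bn.running_var + bn.eps).sqrt()
t = (bn.weight / std).reshape(-1, 1, 1, 1)
fused = nn.Conv2d(8, 8, 3, padding=1, bias=True)
with torch.no_grad():
    fused.weight.copy_(conv.weight * t)
    fused.bias.copy_(bn.bias - bn.running_mean * bn.weight / std)

x = torch.randn(1, 8, 16, 16)
with torch.no_grad():
    y_ref = bn(conv(x))
    y_fused = fused(x)
print(torch.allclose(y_ref, y_fused, atol=1e-5))  # True
```

The BN-only skip branch is handled the same way, by first expressing the identity as a convolution with a one-hot kernel; `_fuse_bn_tensor` in the implementation below does exactly that.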

1. YOLOv5

Create yolov5s-mobileone.yaml:

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, MobileOne, [128, True, 2]],  # 1-P2/4
   [-1, 1, MobileOne, [256, True, 8]],  # 2-P3/8
   [-1, 1, MobileOne, [512, True, 10]],  # 3-P4/16
   [-1, 1, MobileOne, [1024, True, 1]],  # 4-P5/32
   [-1, 1, SPPF, [1024, 5]],  # 5
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],  # 6
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],  # 7
   [[-1, 3], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, MobileOne, [512, False, 3]],  # 9
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 2], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, MobileOne, [256, False, 3]],  # 13 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P4
   [-1, 1, MobileOne, [512, False, 3]],  # 16 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 6], 1, Concat, [1]],  # cat head P5
   [-1, 1, MobileOne, [1024, False, 3]],  # 19 (P5/32-large)
   [[13, 16, 19], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```

Add the following to common.py:

```python
from typing import Optional, List, Tuple

import torch.nn.functional as F


class SEBlock(nn.Module):
    """ Squeeze and Excite module.

    Pytorch implementation of `Squeeze-and-Excitation Networks` -
    https://arxiv.org/pdf/1709.01507.pdf
    """
    def __init__(self,
                 in_channels: int,
                 rd_ratio: float = 0.0625) -> None:
        """ Construct a Squeeze and Excite Module.

        :param in_channels: Number of input channels.
        :param rd_ratio: Input channel reduction ratio.
        """
        super(SEBlock, self).__init__()
        self.reduce = nn.Conv2d(in_channels=in_channels,
                                out_channels=int(in_channels * rd_ratio),
                                kernel_size=1,
                                stride=1,
                                bias=True)
        self.expand = nn.Conv2d(in_channels=int(in_channels * rd_ratio),
                                out_channels=in_channels,
                                kernel_size=1,
                                stride=1,
                                bias=True)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        """ Apply forward pass. """
        b, c, h, w = inputs.size()
        x = F.avg_pool2d(inputs, kernel_size=[h, w])
        x = self.reduce(x)
        x = F.relu(x)
        x = self.expand(x)
        x = torch.sigmoid(x)
        x = x.view(-1, c, 1, 1)
        return inputs * x


class MobileOneBlock(nn.Module):
    """ MobileOne building block.

    This block has a multi-branched architecture at train-time
    and plain-CNN style architecture at inference time
    For more details, please refer to our paper:
    `An Improved One millisecond Mobile Backbone` -
    https://arxiv.org/pdf/2206.04040.pdf
    """
    def __init__(self,
                 in_channels: int,
                 out_channels: int,
                 kernel_size: int,
                 stride: int = 1,
                 padding: int = 0,
                 dilation: int = 1,
                 groups: int = 1,
                 inference_mode: bool = False,
                 use_se: bool = False,
                 num_conv_branches: int = 1) -> None:
        """ Construct a MobileOneBlock module.

        :param in_channels: Number of channels in the input.
        :param out_channels: Number of channels produced by the block.
        :param kernel_size: Size of the convolution kernel.
        :param stride: Stride size.
        :param padding: Zero-padding size.
        :param dilation: Kernel dilation factor.
        :param groups: Group number.
        :param inference_mode: If True, instantiates model in inference mode.
        :param use_se: Whether to use SE-ReLU activations.
        :param num_conv_branches: Number of linear conv branches.
        """
        super(MobileOneBlock, self).__init__()
        self.inference_mode = inference_mode
        self.groups = groups
        self.stride = stride
        self.kernel_size = kernel_size
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.num_conv_branches = num_conv_branches

        # Check if SE-ReLU is requested
        if use_se:
            self.se = SEBlock(out_channels)
        else:
            self.se = nn.Identity()
        self.activation = nn.ReLU()

        if inference_mode:
            self.reparam_conv = nn.Conv2d(in_channels=in_channels,
                                          out_channels=out_channels,
                                          kernel_size=kernel_size,
                                          stride=stride,
                                          padding=padding,
                                          dilation=dilation,
                                          groups=groups,
                                          bias=True)
        else:
            # Re-parameterizable skip connection
            self.rbr_skip = nn.BatchNorm2d(num_features=in_channels) \
                if out_channels == in_channels and stride == 1 else None

            # Re-parameterizable conv branches
            rbr_conv = list()
            for _ in range(self.num_conv_branches):
                rbr_conv.append(self._conv_bn(kernel_size=kernel_size,
                                              padding=padding))
            self.rbr_conv = nn.ModuleList(rbr_conv)

            # Re-parameterizable scale branch
            self.rbr_scale = None
            if kernel_size > 1:
                self.rbr_scale = self._conv_bn(kernel_size=1,
                                               padding=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """ Apply forward pass. """
        # Inference mode forward pass.
        if self.inference_mode:
            return self.activation(self.se(self.reparam_conv(x)))

        # Multi-branched train-time forward pass.
        # Skip branch output
        identity_out = 0
        if self.rbr_skip is not None:
            identity_out = self.rbr_skip(x)

        # Scale branch output
        scale_out = 0
        if self.rbr_scale is not None:
            scale_out = self.rbr_scale(x)

        # Other branches
        out = scale_out + identity_out
        for ix in range(self.num_conv_branches):
            out += self.rbr_conv[ix](x)

        return self.activation(self.se(out))

    def reparameterize(self):
        """ Following works like `RepVGG: Making VGG-style ConvNets Great Again` -
        https://arxiv.org/pdf/2101.03697.pdf. We re-parameterize multi-branched
        architecture used at training time to obtain a plain CNN-like structure
        for inference.
        """
        if self.inference_mode:
            return
        kernel, bias = self._get_kernel_bias()
        self.reparam_conv = nn.Conv2d(in_channels=self.rbr_conv[0].conv.in_channels,
                                      out_channels=self.rbr_conv[0].conv.out_channels,
                                      kernel_size=self.rbr_conv[0].conv.kernel_size,
                                      stride=self.rbr_conv[0].conv.stride,
                                      padding=self.rbr_conv[0].conv.padding,
                                      dilation=self.rbr_conv[0].conv.dilation,
                                      groups=self.rbr_conv[0].conv.groups,
                                      bias=True)
        self.reparam_conv.weight.data = kernel
        self.reparam_conv.bias.data = bias

        # Delete un-used branches
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_conv')
        self.__delattr__('rbr_scale')
        if hasattr(self, 'rbr_skip'):
            self.__delattr__('rbr_skip')

        self.inference_mode = True

    def _get_kernel_bias(self) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Method to obtain re-parameterized kernel and bias.
        Reference: https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py#L83

        :return: Tuple of (kernel, bias) after fusing branches.
        """
        # get weights and bias of scale branch
        kernel_scale = 0
        bias_scale = 0
        if self.rbr_scale is not None:
            kernel_scale, bias_scale = self._fuse_bn_tensor(self.rbr_scale)
            # Pad scale branch kernel to match conv branch kernel size.
            pad = self.kernel_size // 2
            kernel_scale = torch.nn.functional.pad(kernel_scale,
                                                   [pad, pad, pad, pad])

        # get weights and bias of skip branch
        kernel_identity = 0
        bias_identity = 0
        if self.rbr_skip is not None:
            kernel_identity, bias_identity = self._fuse_bn_tensor(self.rbr_skip)

        # get weights and bias of conv branches
        kernel_conv = 0
        bias_conv = 0
        for ix in range(self.num_conv_branches):
            _kernel, _bias = self._fuse_bn_tensor(self.rbr_conv[ix])
            kernel_conv += _kernel
            bias_conv += _bias

        kernel_final = kernel_conv + kernel_scale + kernel_identity
        bias_final = bias_conv + bias_scale + bias_identity
        return kernel_final, bias_final

    def _fuse_bn_tensor(self, branch) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Method to fuse batchnorm layer with preceding conv layer.
        Reference: https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py#L95

        :param branch: Branch (conv-bn sequence or plain batchnorm) to fuse.
        :return: Tuple of (kernel, bias) after fusing batchnorm.
        """
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = torch.zeros((self.in_channels,
                                            input_dim,
                                            self.kernel_size,
                                            self.kernel_size),
                                           dtype=branch.weight.dtype,
                                           device=branch.weight.device)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim,
                                 self.kernel_size // 2,
                                 self.kernel_size // 2] = 1
                self.id_tensor = kernel_value
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def _conv_bn(self,
                 kernel_size: int,
                 padding: int) -> nn.Sequential:
        """ Helper method to construct conv-batchnorm layers.

        :param kernel_size: Size of the convolution kernel.
        :param padding: Zero-padding size.
        :return: Conv-BN module.
        """
        mod_list = nn.Sequential()
        mod_list.add_module('conv',
                            nn.Conv2d(in_channels=self.in_channels,
                                      out_channels=self.out_channels,
                                      kernel_size=kernel_size,
                                      stride=self.stride,
                                      padding=padding,
                                      groups=self.groups,
                                      bias=False))
        mod_list.add_module('bn',
                            nn.BatchNorm2d(num_features=self.out_channels))
        return mod_list
```

In yolo.py, add MobileOne to the module list in parse_model:

```python
        if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, C2f_Add,
                 MobileOne):
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in [BottleneckCSP, C3, C3TR, C3Ghost, C3x, C2f_Add]:
                args.insert(2, n)  # number of repeats
                n = 1
```

Also, in the BaseModel class in yolo.py, extend fuse() so that re-parameterizable modules are folded as well:

```python
    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
        LOGGER.info('Fusing layers... ')
        for m in self.model.modules():
            if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.forward_fuse  # update forward
            if hasattr(m, 'reparameterize'):
                m.reparameterize()
        self.info()
        return self
```

Finally, run yolo.py to check that the model builds.
