
Classic Network Models --- The Three Generations of MobileNet: V3

Posted: 2024-01-05 09:37:01

MobileNet V3

1) Introduces the Squeeze-and-Excitation (SE) structure

2) Changes the non-linearity: h-swish replaces swish (see the sketch below)
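To make the second change concrete, here is a minimal sketch (not from the original post) comparing swish, x * sigmoid(x), with its hard approximation h-swish, x * ReLU6(x + 3) / 6, which avoids the exponential in the sigmoid and is cheaper on mobile hardware:

import torch
import torch.nn.functional as F

x = torch.linspace(-4, 4, 9)
swish = x * torch.sigmoid(x)                  # original swish: x * sigmoid(x)
h_swish = x * F.relu6(x + 3) / 6              # hard approximation used in MobileNet V3
print(torch.max(torch.abs(swish - h_swish)))  # the two curves stay close over this range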

SE-net

The SE-Net structure can be integrated into any network model

S: Squeeze operation

Apply global average pooling to the feature map to obtain a 1*1*C result

Each channel of the feature map describes some feature, so after the squeeze operation each of the C values summarizes the global information of its channel
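As a small illustration (the tensor below is an arbitrary example, not from the post), the squeeze step is just a global average pool over the spatial dimensions:

import torch
import torch.nn as nn

fmap = torch.randn(2, 64, 32, 32)         # N x C x H x W feature map
squeeze = nn.AdaptiveAvgPool2d(1)(fmap)   # global average pooling over H and W
print(squeeze.shape)                      # torch.Size([2, 64, 1, 1]), i.e. a 1*1*C summary per sample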

E: Excitation operation

To obtain an importance score for each channel, two more fully connected layers follow; the output is again 1*1*C and acts like an attention weight over the channels

import torch.nn as nn
import torch.nn.functional as F


class hsigmoid(nn.Module):
    # hard sigmoid: ReLU6(x + 3) / 6, a piecewise-linear approximation of sigmoid
    def forward(self, x):
        out = F.relu6(x + 3, inplace=True) / 6
        return out


class SeModule(nn.Module):
    def __init__(self, in_size, reduction=4):
        super(SeModule, self).__init__()
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # Squeeze: global average pooling -> 1x1xC
            nn.Conv2d(in_size, in_size // reduction, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(in_size // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_size // reduction, in_size, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(in_size),
            hsigmoid()                 # Excitation: per-channel weights in [0, 1]
        )

    def forward(self, x):
        return x * self.se(x)          # reweight each channel of the input
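A quick shape check of the SeModule above (the input tensor here is an arbitrary example): the module returns a tensor of the same shape as its input, with every channel rescaled by its learned weight.

import torch

se = SeModule(in_size=40)
x = torch.randn(2, 40, 28, 28)   # batch of 2 so BatchNorm statistics are well defined in training mode
y = se(x)
print(y.shape)                   # torch.Size([2, 40, 28, 28]) -- same shape, channels reweighted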

Comparison of MobileNet V2 and V3

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init
from base import BaseModel


class hswish(nn.Module):
    # hard swish: x * ReLU6(x + 3) / 6
    def forward(self, x):
        out = x * F.relu6(x + 3, inplace=True) / 6
        return out


class Block(nn.Module):
    '''expand + depthwise + pointwise'''
    def __init__(self, kernel_size, in_size, expand_size, out_size, nolinear, semodule, stride):
        super(Block, self).__init__()
        self.stride = stride
        self.se = semodule

        # 1x1 pointwise convolution: expand the channel dimension
        self.conv1 = nn.Conv2d(in_size, expand_size, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(expand_size)
        self.nolinear1 = nolinear
        # depthwise convolution (groups=expand_size)
        self.conv2 = nn.Conv2d(expand_size, expand_size, kernel_size=kernel_size, stride=stride, padding=kernel_size//2, groups=expand_size, bias=False)
        self.bn2 = nn.BatchNorm2d(expand_size)
        self.nolinear2 = nolinear
        # 1x1 pointwise convolution: project back to out_size channels (no non-linearity)
        self.conv3 = nn.Conv2d(expand_size, out_size, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn3 = nn.BatchNorm2d(out_size)

        self.shortcut = nn.Sequential()
        if stride == 1 and in_size != out_size:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_size, out_size, kernel_size=1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(out_size),
            )

    def forward(self, x):
        out = self.nolinear1(self.bn1(self.conv1(x)))
        out = self.nolinear2(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        if self.se != None:
            out = self.se(out)
        out = out + self.shortcut(x) if self.stride == 1 else out
        return out


class MobileNetV3_Large(BaseModel):
    def __init__(self, num_classes=1000):
        super(MobileNetV3_Large, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.hs1 = hswish()

        self.bneck = nn.Sequential(
            Block(3, 16, 16, 16, nn.ReLU(inplace=True), None, 1),
            Block(3, 16, 64, 24, nn.ReLU(inplace=True), None, 2),
            Block(3, 24, 72, 24, nn.ReLU(inplace=True), None, 1),
            Block(5, 24, 72, 40, nn.ReLU(inplace=True), SeModule(40), 2),
            Block(5, 40, 120, 40, nn.ReLU(inplace=True), SeModule(40), 1),
            Block(5, 40, 120, 40, nn.ReLU(inplace=True), SeModule(40), 1),
            Block(3, 40, 240, 80, hswish(), None, 2),
            Block(3, 80, 200, 80, hswish(), None, 1),
            Block(3, 80, 184, 80, hswish(), None, 1),
            Block(3, 80, 184, 80, hswish(), None, 1),
            Block(3, 80, 480, 112, hswish(), SeModule(112), 1),
            Block(3, 112, 672, 112, hswish(), SeModule(112), 1),
            Block(5, 112, 672, 160, hswish(), SeModule(160), 1),
            Block(5, 160, 672, 160, hswish(), SeModule(160), 2),
            Block(5, 160, 960, 160, hswish(), SeModule(160), 1),
        )

        self.conv2 = nn.Conv2d(160, 960, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn2 = nn.BatchNorm2d(960)
        self.hs2 = hswish()
        self.linear3 = nn.Linear(960, 1280)
        self.bn3 = nn.BatchNorm1d(1280)
        self.hs3 = hswish()
        self.linear4 = nn.Linear(1280, num_classes)
        self.init_params()

    def init_params(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x):
        out = self.hs1(self.bn1(self.conv1(x)))
        out = self.bneck(out)
        out = self.hs2(self.bn2(self.conv2(out)))
        out = F.avg_pool2d(out, 7)
        out = out.view(out.size(0), -1)
        out = self.hs3(self.bn3(self.linear3(out)))
        out = self.linear4(out)
        return out
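A quick sanity check of the model above, assuming BaseModel from the project's base module behaves like a standard nn.Module, and using a dummy grayscale 224x224 batch to match the single-input-channel conv1 and the final 7x7 average pooling:

import torch

model = MobileNetV3_Large(num_classes=1000)
model.eval()                      # use running BatchNorm statistics (also avoids batch-size-1 issues)
x = torch.randn(1, 1, 224, 224)   # N x 1 x 224 x 224 dummy input
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # torch.Size([1, 1000])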

Performance comparison

