I find nn.LayerNorm a bit funny. It has a parameter, normalized_shape, which you set to the last few dimensions of the input tensor that you want to normalize; all the elements in those dimensions are then treated as one group and normalized together, as shown below:
import torch
import torch.nn as nn

# Define an input tensor of shape (batch_size, sequence_length, feature_dim) = (2, 3, 4)
x = torch.tensor([[[1.0, 2.0, 3.0, 4.0],
                   [5.0, 6.0, 7.0, 8.0],
                   [9.0, 10.0, 11.0, 12.0]],
                  [[13.0, 14.0, 15.0, 16.0],
                   [17.0, 18.0, 19.0, 20.0],
                   [21.0, 22.0, 23.0, 24.0]]])

# normalized_shape specifies the feature dimension (feature_dim)
layer_norm = nn.LayerNorm(normalized_shape=4)

# Apply LayerNorm
normalized_x = layer_norm(x)
normalized_x
The output is
tensor([[[-1.3416, -0.4472,  0.4472,  1.3416],
         [-1.3416, -0.4472,  0.4472,  1.3416],
         [-1.3416, -0.4472,  0.4472,  1.3416]],

        [[-1.3416, -0.4472,  0.4472,  1.3416],
         [-1.3416, -0.4472,  0.4472,  1.3416],
         [-1.3416, -0.4472,  0.4472,  1.3416]]],
       grad_fn=<NativeLayerNormBackward0>)
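To make "treated as one group" concrete, here is a minimal sketch (continuing the session above) that reproduces this output by hand. It assumes the module's defaults: eps=1e-5 and the affine weight/bias at their initial values of 1 and 0; the names mean, var, and manual are just illustrative.

# LayerNorm(normalized_shape=4) normalizes each length-4 row independently:
# subtract that row's mean and divide by its (biased) standard deviation
mean = x.mean(dim=-1, keepdim=True)
var = x.var(dim=-1, unbiased=False, keepdim=True)  # biased variance, as LayerNorm uses
manual = (x - mean) / torch.sqrt(var + 1e-5)       # 1e-5 is nn.LayerNorm's default eps

print(torch.allclose(manual, normalized_x))  # True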
# With normalized_shape=[3, 4], the last two dimensions
# (sequence_length, feature_dim) are normalized as one group
layer_norm = nn.LayerNorm(normalized_shape=[3, 4])

# Apply LayerNorm
normalized_x = layer_norm(x)
normalized_x
The output is
tensor([[[-1.5933, -1.3036, -1.0139, -0.7242],
         [-0.4345, -0.1448,  0.1448,  0.4345],
         [ 0.7242,  1.0139,  1.3036,  1.5933]],

        [[-1.5933, -1.3036, -1.0139, -0.7242],
         [-0.4345, -0.1448,  0.1448,  0.4345],
         [ 0.7242,  1.0139,  1.3036,  1.5933]]],
       grad_fn=<NativeLayerNormBackward0>)
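The same hand check works here; the only change is that the mean and variance are now taken jointly over the last two dimensions (again just a sketch, under the same default-parameter assumptions):

# All 12 elements of each (3, 4) slice now share one mean and one variance
mean = x.mean(dim=(-2, -1), keepdim=True)
var = x.var(dim=(-2, -1), unbiased=False, keepdim=True)
manual = (x - mean) / torch.sqrt(var + 1e-5)

print(torch.allclose(manual, normalized_x))  # True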
But if the normalized_shape argument is anything other than 4, [3, 4], or [2, 3, 4], i.e. anything that is not a trailing suffix of the input's shape, it raises an error (equivalent forms also work, such as (4,) or (3, 4)).
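For example, with a mismatched shape the failure happens at call time, not when the module is constructed. A quick sketch; the exact error text may vary across PyTorch versions:

layer_norm = nn.LayerNorm(normalized_shape=3)  # 3 is not a trailing suffix of (2, 3, 4)
try:
    layer_norm(x)
except RuntimeError as e:
    print(e)
    # Something like: Given normalized_shape=[3], expected input with shape [*, 3],
    # but got input of size [2, 3, 4]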