Brief Notes
- Learning resources
- [Bilingual subtitles] Andrew Ng's Deep Learning course (deeplearning.ai)
- Papers With Code
- Datasets
- The networks in these notes are built with PyTorch; all code has been tested and runs.
- These notes exist purely to deepen my own understanding of the material. If anything is wrong, corrections are welcome.
References
- PyTorch Tutorials [https://pytorch.org/tutorials/]
- PyTorch Docs [https://pytorch.org/docs/stable/index.html]
Overview
MLP (Multilayer Perceptron)
| Dataset | MNIST |
|---|---|
| Input (feature maps) | 28×28 |
| CONV Layers | 0 |
| FC Layers | 3 |
| Activation | ReLU |
| Output | 10 |
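Given the three fully connected layers in the table (widths 250 and 100 are taken from the model definition later in this note), the total parameter count can be worked out directly, since each `nn.Linear(in, out)` holds `in*out` weights plus `out` biases:

```python
# Parameter count for a 784 -> 250 -> 100 -> 10 MLP.
# Each nn.Linear(in, out) has in*out weights plus out biases.
layers = [(784, 250), (250, 100), (100, 10)]
total = sum(i * o + o for i, o in layers)
print(total)  # 222360
```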
Code Analysis
Library imports
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
Data Processing
Downloading the data
# Download training data from an open dataset
train_data = datasets.MNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from an open dataset
test_data = datasets.MNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)

print(f'Number of training examples: {len(train_data)}')
print(f'Number of testing examples: {len(test_data)}')
Number of training examples: 60000
Number of testing examples: 10000
Data loaders (optional)
batch_size = 64

# Create data loaders
train_dataloader = DataLoader(train_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
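In practice the training loader is usually created with `shuffle=True` so each epoch sees the samples in a different order (the code above omits it). A minimal sketch using a synthetic stand-in dataset, so it runs without downloading MNIST:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for MNIST: 256 fake single-channel 28x28 images
X = torch.randn(256, 1, 28, 28)
y = torch.randint(0, 10, (256,))

# shuffle=True reorders the samples every epoch
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([64, 1, 28, 28]) torch.Size([64])
```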
Building the model
# Select the training device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Using {device} device")
Using cuda device
class MLP(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.input_layer = nn.Sequential(
            nn.Linear(input_dim, 250),
            nn.ReLU()
        )
        self.hidden_layer = nn.Sequential(
            nn.Linear(250, 100),
            nn.ReLU()
        )
        self.output_layer = nn.Sequential(
            nn.Linear(100, output_dim)
        )

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten [N, 1, 28, 28] to [N, 784]
        x = self.input_layer(x)
        x = self.hidden_layer(x)
        x = self.output_layer(x)
        return x

model = MLP(28*28, 10).to(device)
print(model)
MLP(
(input_layer): Sequential(
(0): Linear(in_features=784, out_features=250, bias=True)
(1): ReLU()
)
(hidden_layer): Sequential(
(0): Linear(in_features=250, out_features=100, bias=True)
(1): ReLU()
)
(output_layer): Sequential(
(0): Linear(in_features=100, out_features=10, bias=True)
)
)
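The forward pass begins by flattening each [1, 28, 28] image into a 784-dimensional vector with `x.view(x.size(0), -1)`, keeping only the batch dimension. A quick check of that reshaping in isolation:

```python
import torch

x = torch.randn(4, 1, 28, 28)   # a batch of 4 single-channel images
flat = x.view(x.size(0), -1)    # keep the batch dim, flatten the rest
print(flat.shape)               # torch.Size([4, 784])
```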
Training the model
Loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
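Note that `nn.CrossEntropyLoss` applies log-softmax internally, which is why `output_layer` above ends without an activation: the model should emit raw logits. A small illustration with hand-picked scores:

```python
import torch
from torch import nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw scores for 3 classes
target = torch.tensor([0])                 # the correct class index
loss = loss_fn(logits, target)
print(loss.item())                         # ~0.241: low, since class 0 dominates
```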
Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
Evaluation loop
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
Running the training
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")
Epoch 1
loss: 2.304649 [ 64/60000]
loss: 0.350683 [ 6464/60000]
loss: 0.267444 [12864/60000]
loss: 0.305221 [19264/60000]
loss: 0.200744 [25664/60000]
loss: 0.316856 [32064/60000]
loss: 0.156469 [38464/60000]
loss: 0.280946 [44864/60000]
loss: 0.291244 [51264/60000]
loss: 0.199387 [57664/60000]
Test Error:
Accuracy: 94.7%, Avg loss: 0.169173
Model handling
Saving the model
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
Saved PyTorch Model State to model.pth
Loading the model
model = MLP(28*28, 10).to(device)
# map_location lets a checkpoint saved on GPU load on a CPU-only machine
model.load_state_dict(torch.load("model.pth", map_location=device))
Key functions
| torch.cuda.is_available() | Returns a bool indicating whether CUDA is currently available |
|---|---|
| nn.Sequential | A sequential container of Modules |
| nn.Linear | Linear transformation |
| nn.ReLU | Rectified linear unit activation |
| nn.CrossEntropyLoss | Cross-entropy loss |
| torch.optim.Adam | The Adam optimization algorithm |
| torch.save | Save a model |
| torch.load | Load a model |