13 PyTorch Tricks Every Algorithm Engineer Should Master
Source: Zhihu | Author: z.defying
Contents
1. Specifying the GPU device
2. Inspecting per-layer output shapes
3. Gradient clipping
4. Adding a batch dimension to a single image
5. One-hot encoding
6. Avoiding out-of-memory during validation
7. Learning rate decay
8. Freezing the parameters of certain layers
9. Using different learning rates for different layers
10. Model-related operations
11. PyTorch's built-in one_hot function
12. Network parameter initialization
13. Loading built-in pretrained models
1. Specifying the GPU device
Make only GPU 0 visible to the process (it is then addressed as device 0):
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
Make GPUs 0 and 1 both visible (addressed as devices 0 and 1, in that order):
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
The order determines precedence: device 0 is used first, then device 1.
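Note that the environment variable must be set before CUDA is initialized, so in practice set it before importing torch (or at least before the first CUDA call). A minimal check:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # must come before CUDA is initialized

import torch
print(torch.cuda.device_count())  # prints 2: only the listed GPUs are visible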
2. Inspecting per-layer output shapes
from torchsummary import summary
summary(your_model, input_size=(channels, H, W))
input_size should be set to your model's input dimensions (channels, height, width); torchsummary adds the batch dimension itself.
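For example, a hypothetical run on a torchvision ResNet (torchsummary must be installed first, e.g. pip install torchsummary):
import torch
from torchvision import models
from torchsummary import summary

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18().to(device)
# prints each layer's output shape and parameter count
summary(model, input_size=(3, 224, 224), device=device)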
3. Gradient clipping
import torch.nn as nn
outputs = model(data)
loss = loss_fn(outputs, target)
optimizer.zero_grad()
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=20, norm_type=2)
optimizer.step()
Arguments to nn.utils.clip_grad_norm_:
parameters – an iterable of Tensors whose gradients will be normalized
max_norm – the maximum norm of the gradients
norm_type – the type of norm to use (default: 2, i.e. the L2 norm)
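clip_grad_norm_ also returns the total norm of the gradients before clipping, which is handy for logging; a small sketch:
total_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=20, norm_type=2)
if total_norm > 20:
    print(f'gradient norm {total_norm:.2f} was clipped to 20')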
4. Adding a batch dimension to a single image
Method 1: tensor.view
import cv2
import torch
image = cv2.imread(img_path)
image = torch.tensor(image)
print(image.size())
img = image.view(1, *image.size())
print(img.size())
# output:
# torch.Size([h, w, c])
# torch.Size([1, h, w, c])
Method 2: np.newaxis
import cv2
import numpy as np
image = cv2.imread(img_path)
print(image.shape)
img = image[np.newaxis, :, :, :]
print(img.shape)
# output:
# (h, w, c)
# (1, h, w, c)
Method 3: unsqueeze / squeeze
import cv2
import torch
image = cv2.imread(img_path)
image = torch.tensor(image)
print(image.size())
img = image.unsqueeze(dim=0)
print(img.size())
img = img.squeeze(dim=0)
print(img.size())
# output:
# torch.Size([h, w, c])
# torch.Size([1, h, w, c])
# torch.Size([h, w, c])
tensor.unsqueeze(dim): inserts a new dimension of size 1 at position dim.
tensor.squeeze(dim): removes the dimension at dim only if its size is 1; if that dimension has size greater than 1, squeeze() is a no-op. Without a dim argument, all size-1 dimensions are removed.
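A quick illustration of the no-op behavior:
x = torch.zeros(2, 1, 3)
print(x.squeeze(0).size())  # torch.Size([2, 1, 3]) -- dim 0 has size 2, so nothing happens
print(x.squeeze(1).size())  # torch.Size([2, 3])
print(x.squeeze().size())   # torch.Size([2, 3]) -- all size-1 dims removed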
5. One-hot encoding
import torch
class_num = 8
batch_size = 4
def one_hot(label):
    """Convert a 1-D label tensor to a one-hot encoding."""
    label = label.resize_(batch_size, 1)
    m_zeros = torch.zeros(batch_size, class_num)
    # scatter_(dim, index, value): write `value` at the positions given by `index` along `dim`
    onehot = m_zeros.scatter_(1, label, 1)
    return onehot.numpy()  # Tensor -> NumPy

label = torch.LongTensor(batch_size).random_() % class_num  # random labels in [0, class_num)
print(one_hot(label))
# output:
[[0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0.]]
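A slightly more general variant of the same scatter_ idea, inferring the batch size from the label tensor instead of relying on globals (a sketch; one_hot_v2 is a hypothetical name):
def one_hot_v2(label, num_classes):
    # label: 1-D LongTensor of shape (N,)
    out = torch.zeros(label.size(0), num_classes)
    return out.scatter_(1, label.unsqueeze(1), 1)

print(one_hot_v2(torch.randint(0, class_num, (batch_size,)), class_num))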
6. Avoiding out-of-memory during validation
Wrap inference code in torch.no_grad() so that no computation graph (and no intermediate activations) is stored:
with torch.no_grad():
    # run model inference here
    pass
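In a typical evaluation loop, no_grad() is combined with model.eval(), which switches layers such as dropout and batch norm to inference behavior (val_loader is a placeholder for your validation DataLoader):
model.eval()
with torch.no_grad():
    for data, target in val_loader:
        output = model(data)
        ...  # compute metrics here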
During training, stale temporary variables may accumulate on the GPU and eventually cause an out-of-memory error. The following call cleans them up:
torch.cuda.empty_cache()
Its documentation reads: "Releases all unoccupied cached memory currently held by the caching allocator so that those can be used in other GPU application and visible in nvidia-smi."
That is, it releases cached GPU memory that the caching allocator holds but is not currently using, making it available to other GPU applications and visible in nvidia-smi. Note that this call does not free memory occupied by live tensors.
7. Learning rate decay
import torch.optim as optim
from torch.optim import lr_scheduler

# initialization before training
optimizer = optim.Adam(net.parameters(), lr=0.001)
scheduler = lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)  # multiply the LR by 0.1 every 10 epochs

# during training
for epoch in range(n_epoch):
    ...               # train for one epoch (optimizer.step() is called inside)
    scheduler.step()  # step the scheduler once per epoch, after the optimizer updates
The current learning rate can be inspected at any time via optimizer.param_groups[0]['lr']. A custom schedule can be defined with LambdaLR, e.g. decaying as 1/(epoch+1):
scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda epoch: 1 / (epoch + 1))
lr_scheduler.ReduceLROnPlateau() lowers the learning rate dynamically based on some quantity measured during training; its parameters are well documented. One reminder: whether to use mode='min' or mode='max' depends on whether you are optimizing a loss or an accuracy, i.e. whether you call scheduler.step(loss) or scheduler.step(acc).
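A minimal ReduceLROnPlateau sketch, assuming a validation loss is computed each epoch (validate() is a placeholder):
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=5)
for epoch in range(n_epoch):
    ...  # train for one epoch
    val_loss = validate()     # placeholder for your validation routine
    scheduler.step(val_loss)  # mode='min' because we track a loss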
8. Freezing the parameters of certain layers
First print each parameter's requires_grad flag:
net = Network()  # your custom network
for name, value in net.named_parameters():
    print('name: {0},\t grad: {1}'.format(name, value.requires_grad))
name: cnn.VGG_16.convolution1_1.weight, grad: True
name: cnn.VGG_16.convolution1_1.bias, grad: True
name: cnn.VGG_16.convolution1_2.weight, grad: True
name: cnn.VGG_16.convolution1_2.bias, grad: True
name: cnn.VGG_16.convolution2_1.weight, grad: True
name: cnn.VGG_16.convolution2_1.bias, grad: True
name: cnn.VGG_16.convolution2_2.weight, grad: True
name: cnn.VGG_16.convolution2_2.bias, grad: True
Then list the parameters to freeze and set their requires_grad to False:
no_grad = [
    'cnn.VGG_16.convolution1_1.weight',
    'cnn.VGG_16.convolution1_1.bias',
    'cnn.VGG_16.convolution1_2.weight',
    'cnn.VGG_16.convolution1_2.bias',
]
net = Net.CTPN()  # build the network
for name, value in net.named_parameters():
    if name in no_grad:
        value.requires_grad = False
    else:
        value.requires_grad = True
Printing requires_grad again now shows the frozen layers:
name: cnn.VGG_16.convolution1_1.weight, grad: False
name: cnn.VGG_16.convolution1_1.bias, grad: False
name: cnn.VGG_16.convolution1_2.weight, grad: False
name: cnn.VGG_16.convolution1_2.bias, grad: False
name: cnn.VGG_16.convolution2_1.weight, grad: True
name: cnn.VGG_16.convolution2_1.bias, grad: True
name: cnn.VGG_16.convolution2_2.weight, grad: True
name: cnn.VGG_16.convolution2_2.bias, grad: True
Finally, pass only the parameters with requires_grad=True to the optimizer:
optimizer = optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.01)
9. Using different learning rates for different layers
net = Network()  # your custom network
for name, value in net.named_parameters():
    print('name: {}'.format(name))
# output:
# name: cnn.VGG_16.convolution1_1.weight
# name: cnn.VGG_16.convolution1_1.bias
# name: cnn.VGG_16.convolution1_2.weight
# name: cnn.VGG_16.convolution1_2.bias
# name: cnn.VGG_16.convolution2_1.weight
# name: cnn.VGG_16.convolution2_1.bias
# name: cnn.VGG_16.convolution2_2.weight
# name: cnn.VGG_16.convolution2_2.bias
To give the convolution1 layers and the remaining layers different learning rates, split the parameters into two groups:
conv1_params = []
conv2_params = []
for name, parms in net.named_parameters():
    if "convolution1" in name:
        conv1_params += [parms]
    else:
        conv2_params += [parms]
# then build the optimizer with per-group learning rates:
optimizer = optim.Adam(
    [
        {"params": conv1_params, 'lr': 0.01},
        {"params": conv2_params, 'lr': 0.001},
    ],
    weight_decay=1e-3,
)
Options set outside the groups, such as weight_decay here, act as defaults and apply to both parameter groups.
10. Model-related operations
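A minimal sketch of the most common model operations, saving and loading weights through the state_dict (the file name checkpoint.pth is arbitrary):
# save only the parameters (the recommended way)
torch.save(net.state_dict(), 'checkpoint.pth')

# load them back into a model with the same architecture
net = Network()
net.load_state_dict(torch.load('checkpoint.pth'))
net.eval()  # remember to switch to inference mode before evaluating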
11. PyTorch's built-in one_hot function
Since PyTorch 1.1, one-hot encoding is built in as torch.nn.functional.one_hot:
import torch.nn.functional as F
import torch
tensor = torch.arange(0, 5) % 3 # tensor([0, 1, 2, 0, 1])
one_hot = F.one_hot(tensor)
# output:
# tensor([[1, 0, 0],
# [0, 1, 0],
# [0, 0, 1],
# [1, 0, 0],
# [0, 1, 0]])
F.one_hot infers the number of classes automatically and builds the corresponding encoding. You can also specify the number of classes yourself:
tensor = torch.arange(0, 5) % 3  # tensor([0, 1, 2, 0, 1])
one_hot = F.one_hot(tensor, num_classes=5)
# output:
# tensor([[1, 0, 0, 0, 0],
# [0, 1, 0, 0, 0],
# [0, 0, 1, 0, 0],
# [1, 0, 0, 0, 0],
# [0, 1, 0, 0, 0]])
F.one_hot requires PyTorch 1.1 or later; upgrade with:
conda install pytorch torchvision -c pytorch
12. Network parameter initialization
Initialization is a fundamental part of the training pipeline and has a significant effect on a model's performance, convergence, and convergence speed.
To initialize a single layer with torch.nn.init (net1 is assumed to be an nn.Sequential whose first module is nn.Linear):
import numpy as np
import torch
from torch import nn
from torch.nn import init

init.xavier_uniform_(net1[0].weight)  # note: init.xavier_uniform without the trailing underscore is deprecated
To initialize every linear layer by writing to .data directly:
for layer in net1.modules():
    if isinstance(layer, nn.Linear):  # check whether this is a linear layer
        param_shape = layer.weight.shape
        # sample from a normal distribution with mean 0 and standard deviation 0.5;
        # .float() casts the float64 NumPy array to the float32 the layer expects
        layer.weight.data = torch.from_numpy(
            np.random.normal(0, 0.5, size=param_shape)
        ).float()
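An equivalent and more idiomatic sketch using torch.nn.init directly, which avoids the NumPy round-trip:
def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.normal_(m.weight, mean=0.0, std=0.5)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

net1.apply(init_weights)  # apply() visits every submodule recursively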
13. Loading built-in pretrained models
The torchvision.models module includes, among others, the following architectures: AlexNet, VGG, ResNet, SqueezeNet, DenseNet.
Models with randomly initialized weights can be constructed directly:
import torchvision.models as models
resnet18 = models.resnet18()
alexnet = models.alexnet()
vgg16 = models.vgg16()
Each constructor accepts a pretrained argument, which defaults to False: only the architecture is built, and its weights are randomly initialized. Passing pretrained=True instead loads weights pretrained on ImageNet:
import torchvision.models as models
resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
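Note that recent torchvision releases (0.13+) deprecate the pretrained flag in favor of an explicit weights argument; the equivalent call is, for example:
from torchvision.models import resnet18, ResNet18_Weights
model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)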
More models are listed at: https://pytorch-cn.readthedocs.io/zh/latest/torchvision/torchvision-models/