Introduction to the PyTorch Deep Learning Library
The code below can be run in Google Colab: https://colab.research.google.com/
Tensors
import torch
Creating Tensors
# Tensor holding a single number
t1 = torch.tensor(5.)
print(t1)
Output: tensor(5.)
print(t1.dtype)
Output: torch.float32
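PyTorch infers the dtype from the values: integer inputs produce torch.int64, while the decimal point above yields torch.float32. A dtype can also be requested explicitly; a minimal sketch (variable names here are illustrative):
# Integer input infers int64
t_int = torch.tensor(5)
print(t_int.dtype)
Output: torch.int64
# The dtype can also be set explicitly
t_double = torch.tensor(5, dtype=torch.float64)
print(t_double.dtype)
Output: torch.float64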
# Tensor from a 1D list
t2 = torch.tensor([1, 2, 3., 4])
print(t2)
Output: tensor([1., 2., 3., 4.])
Note that a single floating-point element (3.) promotes the whole tensor to torch.float32.
# Matrix
t3 = torch.tensor([[1., 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
print(t3)
Output: tensor([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
t4 = torch.tensor([
    [[10., 11, 12],
     [13, 14, 15]],
    [[16, 17, 18],
     [19, 20, 21]]
])
print(t4)
Output: tensor([[[10., 11., 12.],
                 [13., 14., 15.]],
                [[16., 17., 18.],
                 [19., 20., 21.]]])
The .shape attribute reports the size of each dimension; a scalar tensor has an empty shape.
print(t1.shape)
Output: torch.Size([])
print(t2.shape)
Output: torch.Size([4])
print(t3.shape)
Output: torch.Size([3, 3])
print(t4.shape)
Output: torch.Size([2, 2, 3])
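Beyond torch.tensor, PyTorch also ships constructors that build a tensor of a given shape directly; a brief sketch of a few common ones:
# Constructors that take a shape rather than explicit values
zeros = torch.zeros(2, 3)   # 2x3 tensor filled with 0.
ones = torch.ones(3)        # 1D tensor filled with 1.
rand = torch.randn(2, 2)    # 2x2 tensor of standard-normal samples
print(zeros.shape, ones.shape, rand.shape)
Output: torch.Size([2, 3]) torch.Size([3]) torch.Size([2, 2])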
Tensor Operations and Gradient Computation
x = torch.tensor(3.)
w = torch.tensor(4., requires_grad=True)
z = torch.tensor(5., requires_grad=True)
x, w, z
Output: (tensor(3.), tensor(4., requires_grad=True), tensor(5., requires_grad=True))
y = x*w + z
print(y)
Output: tensor(17., grad_fn=<AddBackward0>)
Computing Derivatives Automatically
# Compute derivatives
y.backward()
print("dy/dx =", x.grad)
print("dy/dw =", w.grad)
print("dy/dz =", z.grad)
Output: dy/dx = None
dy/dw = tensor(3.)
dy/dz = tensor(1.)
The derivative of y with respect to x is None because x was created with requires_grad left at its default of False, so PyTorch does not track gradients for it. The derivative of y with respect to w is 3 because dy/dw = x = 3. The derivative of y with respect to z is 1 because dy/dz = 1.
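To confirm this, recreating x with requires_grad=True makes x.grad available as well; a minimal sketch repeating the computation above:
# Recreate x with gradient tracking enabled
x = torch.tensor(3., requires_grad=True)
w = torch.tensor(4., requires_grad=True)
z = torch.tensor(5., requires_grad=True)
y = x*w + z
y.backward()
print("dy/dx =", x.grad)
Output: dy/dx = tensor(4.)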
PyTorch with NumPy
# First create a NumPy array
import numpy as np
x = np.array([1, 2., 3])
print(x)
Output: [1. 2. 3.]
#Create a tensor from numpy array
y = torch.from_numpy(x)
print(y)
Output: tensor([1., 2., 3.], dtype=torch.float64)
print(x.dtype)
print(y.dtype)
Output: float64
torch.float64
The tensor comes out as float64 because NumPy defaults to 64-bit floats and torch.from_numpy preserves the array's dtype.
# Convert the tensor back to a NumPy array
z = y.numpy()
print(z)
Output: [1. 2. 3.]
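One caveat worth knowing: torch.from_numpy and .numpy() share the underlying memory rather than copying it, so changing one side changes the other; a minimal sketch:
# The tensor is a view of the array's memory, not a copy
a = np.array([1., 2., 3.])
b = torch.from_numpy(a)
a[0] = 100.
print(b)
Output: tensor([100., 2., 3.], dtype=torch.float64)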
AutoGrad: the ability to compute gradients of tensor operations automatically is a powerful feature, and it is essential for training neural networks with backpropagation.
GPU support: when working with massive datasets and large models, PyTorch tensor operations can be executed on a graphics processing unit (GPU), which can reduce the time required on an ordinary CPU by a factor of 40 to 50.
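A minimal sketch of running a tensor operation on a GPU, falling back to the CPU when no GPU is available:
# Select a device and move the computation onto it
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = torch.randn(1000, 1000).to(device)
result = t @ t  # matrix multiplication runs on the selected device
print(result.device)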