What is PyTorch?

PyTorch is a Python-based scientific computing package that provides a deep learning research platform with maximum flexibility and speed.

Tensors

Tensors are similar to NumPy's n-dimensional arrays, but tensors can also be placed on a GPU to accelerate computation. Let's build a simple tensor and check the output. First, let's see how to construct a 5×3 uninitialized matrix:

```python
import torch

x = torch.empty(5, 3)
print(x)
```

Output:

```
tensor([[2.7298e+32, 4.5650e-41, 2.7298e+32],
        [4.5650e-41, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00]])
```

Now let's construct a randomly initialized matrix:

```python
x = torch.rand(5, 3)
print(x)
```

Output:

```
tensor([[1.1608e-01, 9.8966e-01, 1.2705e-01],
        [2.8599e-01, 5.4429e-01, 3.7764e-01],
        [5.8646e-01, 1.0449e-02, 4.2655e-01],
        [2.2087e-01, 6.6702e-01, 5.1910e-01],
        [1.8414e-01, 2.0611e-01, 9.4652e-04]])
```

Construct a tensor directly from data:

```python
x = torch.tensor([5.5, 3])
print(x)
```

Output:

```
tensor([5.5000, 3.0000])
```

Create an uninitialized long (64-bit integer) tensor:

```python
x = torch.LongTensor(3, 4)
x
```

```
tensor([[ 94006673833344,    210453397554,    206158430253,    193273528374],
        [   214748364849,    210453397588,    249108103216,    223338299441],
        [   210453397562,    197568495665,    206158430257,    240518168626]])
```

A float tensor:

```python
x = torch.FloatTensor(3, 4)
x
```

```
tensor([[-3.1152e-18,  3.0670e-41,  3.5032e-44,  0.0000e+00],
        [        nan,  3.0670e-41,  1.7753e+28,  1.0795e+27],
        [ 1.0899e+27,  2.6223e+20,  1.7465e+19,  1.8888e+31]])
```

Create a tensor over a range:

```python
torch.arange(10, dtype=torch.float)
```

```
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
```

Reshape a tensor:

```python
x = torch.arange(10, dtype=torch.float)
x
```

```
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
```

Use .view:

```python
x.view(2, 5)
```

```
tensor([[0., 1., 2., 3., 4.],
        [5., 6., 7., 8., 9.]])
```

Passing -1 lets PyTorch infer that dimension automatically from the tensor's size:

```python
x.view(5, -1)
```

```
tensor([[0., 1.],
        [2., 3.],
        [4., 5.],
        [6., 7.],
        [8., 9.]])
```

Changing Tensor Axes

There are two ways to change tensor axes: view and permute. view reshapes the tensor while keeping the elements in their original storage order, whereas permute swaps the axes themselves:

```python
x1 = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
print("x1:\n", x1)
print("\nx1.shape:\n", x1.shape)
print("\nx1.view(3, -1):\n", x1.view(3, -1))
print("\nx1.permute(1, 0):\n", x1.permute(1, 0))
```

```
x1:
 tensor([[1., 2., 3.],
        [4., 5., 6.]])

x1.shape:
 torch.Size([2, 3])

x1.view(3, -1):
 tensor([[1., 2.],
        [3., 4.],
        [5., 6.]])

x1.permute(1, 0):
 tensor([[1., 4.],
        [2., 5.],
        [3., 6.]])
```

Tensor Operations

In the example below we will look at the addition operation:

```python
y = torch.rand(5, 3)
print(x + y)
```

Output:

```
tensor([[0.5429, 1.7372, 1.0293],
        [0.5418, 0.6088, 1.0718],
        [1.3894, 0.5148, 1.2892],
        [0.9626, 0.7522, 0.9633],
        [0.7547, 0.9931, 0.2709]])
```

Resizing: if you want to reshape a tensor, you can use torch.view:

```python
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from the other dimensions
print(x.size(), y.size(), z.size())
```

Output:

```
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
```

PyTorch and NumPy Conversion

NumPy is a library for the Python programming language that adds support for large, multi-dimensional arrays and matrices, along with a collection of high-level mathematical functions. You can convert a Torch tensor to a NumPy array and vice versa. The Torch tensor and the NumPy array share their underlying memory location, so changing one changes the other.

Converting a Torch tensor to a NumPy array:

```python
a = torch.ones(5)
print(a)
```

Output:

```
tensor([1., 1., 1., 1., 1.])
```

```python
b = a.numpy()
print(b)
```

Output:

```
[1. 1. 1. 1. 1.]
```

Let's perform an in-place addition and check how the values change:

```python
a.add_(1)
print(a)
print(b)
```

Output:

```
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
```

Converting a NumPy array to a Torch tensor:

```python
import numpy as np

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
```

Output:

```
[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
```

So, as you can see, it's that simple! Next in this PyTorch tutorial blog, let's take a look at PyTorch's AutoGrad module.

AutoGrad

The autograd package provides automatic differentiation for all operations on tensors. It is a define-by-run framework, which means your backpropagation is defined by how your code runs, and every iteration can be different. Its main components are:

- torch.autograd.function (backward propagation for functions)
- torch.autograd.functional (backward propagation for computational graphs)
- torch.autograd.gradcheck (numerical gradient checking)
- torch.autograd.anomaly_mode (detects the path that produced an error during automatic differentiation)
- torch.autograd.grad_mode (sets whether gradients are required, e.g. model.eval() and torch.no_grad())
- torch.autograd.profiler (provides function-level statistics)

Use autograd for backpropagation below. If requires_grad=True, the tensor object tracks how it was created:

```python
x = torch.tensor([1., 2., 3.], requires_grad=True)
print('x:', x)
y = torch.tensor([10., 20., 30.], requires_grad=True)
print('y:', y)
z = x + y
print('\nz = x + y')
print('z:', z)
```

```
x: tensor([1., 2., 3.], requires_grad=True)
y: tensor([10., 20., 30.], requires_grad=True)

z = x + y
z: tensor([11., 22., 33.], grad_fn=<AddBackward0>)
```
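As a next step, here is a minimal sketch (not from the original post) of how .backward() completes the example above: reducing z to a scalar and calling .backward() populates the .grad attribute of the leaf tensors x and y.

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([10., 20., 30.], requires_grad=True)

z = x + y     # z carries grad_fn because its inputs require gradients
s = z.sum()   # reduce to a scalar so backward() needs no explicit gradient argument
s.backward()  # autograd walks the graph from s back to the leaves

print(x.grad)  # d(s)/dx = tensor([1., 1., 1.])
print(y.grad)  # d(s)/dy = tensor([1., 1., 1.])
```

Since s is a plain sum, every element contributes with weight 1, which is why both gradients are all ones.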

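The grad_mode utilities listed above (torch.no_grad() and model.eval()) control whether operations are tracked at all. A short sketch, not from the original post, of torch.no_grad() suspending graph construction:

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)

# inside the context, operations are not recorded in the autograd graph
with torch.no_grad():
    y = x * 2

print(y.requires_grad)  # False: y is detached from the graph
print(x.requires_grad)  # True: the flag on x itself is unchanged
```

This is the standard way to run inference without paying the memory cost of storing the graph for a backward pass.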