A Detailed Look at torch.nn.Linear in PyTorch
Preface
While studying the Transformer I kept running into nn.Linear(), so this post takes a close look at what nn.Linear does and how to use it.
Reference: https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html
1. How nn.Linear works
As the name suggests, nn.Linear applies a linear transformation; its prototype is the linear function from elementary math: y = kx + b.
In deep learning, however, the variables are multi-dimensional tensors, the multiplication is matrix multiplication and the addition is matrix (broadcast) addition, so the computation that nn.Linear() actually performs is (a short sketch verifying this follows the parameter list below):
output = input @ weight.T + bias
@: the matrix-multiplication operator in Python
input: the input Tensor, which may have any number of dimensions
weight: the learnable weights, shape = (out_features, in_features)
bias: the learnable bias, shape = (out_features,)
in_features: the first argument to nn.Linear, i.e. the number of channels in the last dimension of the input Tensor
out_features: the second argument to nn.Linear, i.e. the number of channels in the last dimension of the returned Tensor
output: the output Tensor, which may have any number of dimensions
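The formula above is easy to verify by hand. A minimal sketch (the shapes 20, 40 and 128 are only illustrative and match the 2-D example later in this post):
import torch
import torch.nn as nn

m = nn.Linear(20, 40)                    # weight: (40, 20), bias: (40,)
x = torch.randn(128, 20)

out_module = m(x)                        # what the module computes
out_manual = x @ m.weight.T + m.bias     # the explicit matrix form: input @ weight.T + bias

print(torch.allclose(out_module, out_manual, atol=1e-6))   # True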
2. How to use nn.Linear
Common import: import torch.nn as nn
Constructing nn.Linear():
nn.Linear(in_features, out_features, bias=True)
in_features: int, the number of channels in the last dimension of the Tensor passed to forward
out_features: int, the number of channels in the last dimension of the Tensor returned by forward
bias: bool, whether the linear transformation adds a bias term
Calling nn.Linear() (i.e. running its forward function): first instantiate the layer, then call it on the input:
m = nn.Linear(in_features, out_features)
output = m(input)
input: the input Tensor, which may have any number of dimensions
output: the output Tensor; all dimensions match the input except the last, which becomes out_features
Examples:
2-D Tensor:
m = nn.Linear(20, 40)
input = torch.randn(128, 20)
output = m(input)
print(output.size())   # torch.Size([128, 40])
4-D Tensor:
m = nn.Linear(128, 64)
input = torch.randn(512, 3, 128, 128)
output = m(input)
print(output.size())   # torch.Size([512, 3, 128, 64])
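Since the preface mentions the Transformer, here is a sketch of the typical 3-D case seen there; the names batch, seq_len, d_model, d_ff and the concrete sizes are my own illustrative assumptions, not something fixed by nn.Linear:
import torch
import torch.nn as nn

batch, seq_len, d_model, d_ff = 8, 50, 512, 2048   # assumed example sizes
proj = nn.Linear(d_model, d_ff)                    # acts only on the last dimension

x = torch.randn(batch, seq_len, d_model)
y = proj(x)
print(y.shape)   # torch.Size([8, 50, 2048])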
3. The source-code definition of nn.Linear (copied below as myLinear, with a few debug prints added to show when each method is called):
import math
import torch
import torch.nn as nn
from torch import Tensor
from torch.nn.parameter import Parameter, UninitializedParameter
from torch.nn import functional as F
from torch.nn import init
# from .lazy import LazyModuleMixin
class myLinear(nn.Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(*, H_{in})` where :math:`*` means any number of
          dimensions including none and :math:`H_{in} = \text{in\_features}`.
        - Output: :math:`(*, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
            :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(myLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        # Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
        # uniform(-1/sqrt(in_features), 1/sqrt(in_features)). For details, see
        # https://github.com/pytorch/pytorch/issues/57109
        print("333")  # debug marker: shows that reset_parameters() runs during __init__
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        print("111")  # debug marker: shows that forward() runs on every call
        print("self.weight.shape =", self.weight.shape)
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        print("www")  # debug marker: shows that extra_repr() runs when the module is printed
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )


# m = myLinear(20, 40)
# input = torch.randn(128, 40, 20)
# output = m(input)
# print(output.size())

m = myLinear(128, 64)
input = torch.randn(512, 3, 128, 128)
output = m(input)
print(output.size())   # torch.Size([512, 3, 128, 64])
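Running the snippet above, "333" is printed once while myLinear is being instantiated (reset_parameters() is called from __init__), and "111" plus the weight shape are printed on every forward call. The "www" marker in extra_repr() only fires when the module itself is printed, for example:
print(m)   # triggers extra_repr(): myLinear(in_features=128, out_features=64, bias=True)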
4. The official nn.Linear source code:
import math
import torch
from torch import Tensor
from torch.nn.parameter import Parameter, UninitializedParameter
from .. import functional as F
from .. import init
from .module import Module
from .lazy import LazyModuleMixin
class Identity(Module):
    r"""A placeholder identity operator that is argument-insensitive.

    Args:
        args: any argument (unused)
        kwargs: any keyword argument (unused)

    Shape:
        - Input: :math:`(*)`, where :math:`*` means any number of dimensions.
        - Output: :math:`(*)`, same shape as the input.

    Examples::

        >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 20])
    """
    def __init__(self, *args, **kwargs):
        super(Identity, self).__init__()

    def forward(self, input: Tensor) -> Tensor:
        return input
class Linear(Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(*, H_{in})` where :math:`*` means any number of
          dimensions including none and :math:`H_{in} = \text{in\_features}`.
        - Output: :math:`(*, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
            :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        # Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
        # uniform(-1/sqrt(in_features), 1/sqrt(in_features)). For details, see
        # https://github.com/pytorch/pytorch/issues/57109
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )
# This class exists solely to avoid triggering an obscure error when scripting
# an improperly quantized attention layer. See this issue for details:
# https://github.com/pytorch/pytorch/issues/58969
# TODO: fail fast on quantization API usage error, then remove this class
# and replace uses of it with plain Linear
class NonDynamicallyQuantizableLinear(Linear):
    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        super().__init__(in_features, out_features, bias=bias,
                         device=device, dtype=dtype)
class Bilinear(Module):
    r"""Applies a bilinear transformation to the incoming data:
    :math:`y = x_1^T A x_2 + b`

    Args:
        in1_features: size of each first input sample
        in2_features: size of each second input sample
        out_features: size of each output sample
        bias: If set to False, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input1: :math:`(*, H_{in1})` where :math:`H_{in1}=\text{in1\_features}` and
          :math:`*` means any number of additional dimensions including none. All but the last dimension
          of the inputs should be the same.
        - Input2: :math:`(*, H_{in2})` where :math:`H_{in2}=\text{in2\_features}`.
        - Output: :math:`(*, H_{out})` where :math:`H_{out}=\text{out\_features}`
          and all but the last dimension are the same shape as the input.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in1\_features}, \text{in2\_features})`.
            The values are initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in1\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in1\_features}}`

    Examples::

        >>> m = nn.Bilinear(20, 30, 40)
        >>> input1 = torch.randn(128, 20)
        >>> input2 = torch.randn(128, 30)
        >>> output = m(input1, input2)
        >>> print(output.size())
        torch.Size([128, 40])
    """
    __constants__ = ['in1_features', 'in2_features', 'out_features']
    in1_features: int
    in2_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in1_features: int, in2_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(Bilinear, self).__init__()
        self.in1_features = in1_features
        self.in2_features = in2_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in1_features, in2_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        bound = 1 / math.sqrt(self.weight.size(1))
        init.uniform_(self.weight, -bound, bound)
        if self.bias is not None:
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input1: Tensor, input2: Tensor) -> Tensor:
        return F.bilinear(input1, input2, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in1_features={}, in2_features={}, out_features={}, bias={}'.format(
            self.in1_features, self.in2_features, self.out_features, self.bias is not None
        )
class LazyLinear(LazyModuleMixin, Linear):
    r"""A :class:`torch.nn.Linear` module where `in_features` is inferred.

    In this module, the `weight` and `bias` are of :class:`torch.nn.UninitializedParameter`
    class. They will be initialized after the first call to ``forward`` is done and the
    module will become a regular :class:`torch.nn.Linear` module. The ``in_features`` argument
    of the :class:`Linear` is inferred from the ``input.shape[-1]``.

    Check the :class:`torch.nn.modules.lazy.LazyModuleMixin` for further documentation
    on lazy modules and their limitations.

    Args:
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
            If :attr:`bias` is ``True``, the values are initialized from
            :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
            :math:`k = \frac{1}{\text{in\_features}}`
    """

    cls_to_become = Linear  # type: ignore[assignment]
    weight: UninitializedParameter
    bias: UninitializedParameter  # type: ignore[assignment]

    def __init__(self, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        # bias is hardcoded to False to avoid creating tensor
        # that will soon be overwritten.
        super().__init__(0, 0, False)
        self.weight = UninitializedParameter(**factory_kwargs)
        self.out_features = out_features
        if bias:
            self.bias = UninitializedParameter(**factory_kwargs)

    def reset_parameters(self) -> None:
        if not self.has_uninitialized_params() and self.in_features != 0:
            super().reset_parameters()

    def initialize_parameters(self, input) -> None:  # type: ignore[override]
        if self.has_uninitialized_params():
            with torch.no_grad():
                self.in_features = input.shape[-1]
                self.weight.materialize((self.out_features, self.in_features))
                if self.bias is not None:
                    self.bias.materialize((self.out_features,))
                self.reset_parameters()


# TODO: PartialLinear - maybe in sparse?
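The LazyLinear class above deserves a quick demonstration: you give it only out_features, and in_features is inferred from the first input. A minimal usage sketch (my own example, not part of the original listing):
import torch
import torch.nn as nn

lazy = nn.LazyLinear(30)   # only out_features is specified
x = torch.randn(128, 20)
y = lazy(x)                # in_features is inferred as 20 on the first forward pass
print(y.shape)             # torch.Size([128, 30])
print(lazy)                # after the first forward the module has become a regular nn.Linear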
Additional notes: a few details worth spelling out
1) nn.Linear is a class; to use it you instantiate it first.
2) When instantiating, nn.Linear takes two arguments: in_features, the number of neurons in the previous layer, and out_features, the number of neurons in this layer.
3) You do not define w and b yourself. All subclasses of nn.Module, i.e. all nn.XXX layers, generate random initial values for w and b the moment they are instantiated, so after instantiation you can read them through the weight and bias attributes. w is always created; b is optional: nn.Linear has a bias parameter that defaults to bias=True, and if you do not want to fit the constant term b, simply pass bias=False when instantiating (see the sketch after this list).
4) Because w and b are generated randomly, running the same code several times gives different results. To control the randomness, set a manual seed, e.g. torch.random.manual_seed(420).
5) Since the constant term b does not have to be defined by hand, the feature tensor does not need an extra column of ones for the constant; you only pass in the feature tensor itself.
6) There is only one input layer, and its structure (the number of neurons) is determined by the input feature tensor X, so when building a network in PyTorch you never define an input layer explicitly.
7) After instantiation, you feed the feature tensor into the instantiated module.
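A minimal sketch covering notes 1)–4) (the seed value 420 comes from note 4; the layer sizes 3 and 2 are arbitrary):
import torch
import torch.nn as nn

torch.random.manual_seed(420)           # note 4): fix the seed so w and b are reproducible

linear = nn.Linear(3, 2)                # notes 1)-2): in_features=3, out_features=2
print(linear.weight.shape)              # note 3): torch.Size([2, 3]), created automatically
print(linear.bias.shape)                # torch.Size([2])

no_bias = nn.Linear(3, 2, bias=False)   # note 3): do not fit the constant term b
print(no_bias.bias)                     # None

x = torch.randn(5, 3)                   # notes 5) and 7): just the feature tensor, no column of ones
print(linear(x).shape)                  # torch.Size([5, 2])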
Summary
This concludes this detailed look at torch.nn.Linear in PyTorch. For more on torch.nn.Linear, search 腳本之家's earlier articles, and thank you for your continued support of 腳本之家!