PyTorch dynamic networks and weight sharing: a worked example
PyTorch dynamic networks + weight sharing
PyTorch is known for its dynamic computation graphs. The example below implements a dynamic network together with weight sharing:
# -*- coding: utf-8 -*-
import random
import torch
class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        Construct the three Linear modules that will be used in the forward pass.
        """
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
        and reuse the middle_linear Module that many times to compute hidden layer
        representations.

        Since each forward pass builds a dynamic computation graph, we can use normal
        Python control-flow operators like loops or conditional statements when
        defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same Module many
        times when defining a computational graph. This is a big improvement from Lua
        Torch, where each Module could be used only once.

        In other words: on each forward pass the middle layer is applied a random
        0-3 times, and every application reuses the same Linear module, so every
        application also shares the same weights.
        """
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)
# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
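Because the repeated middle layer is one and the same module, the model's parameter set does not grow with the random depth. A quick sanity check (not part of the original script; it reuses the model instance and dimensions defined above) makes this concrete:

# The model owns exactly three Linear layers, i.e. 3 weight tensors and
# 3 bias tensors -- the random number of middle-layer applications per
# forward pass never adds parameters.
params = list(model.parameters())
print(len(params))                     # 6
print(sum(p.numel() for p in params))  # 111210 = (1000*100+100) + (100*100+100) + (100*10+10)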
This program is effectively a kind of RNN: the computation graph is built dynamically while the program executes, and the repeated middle layer plays the role of a recurrent step.
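To see the RNN analogy in isolation, here is a minimal sketch (toy dimensions and a fixed depth of 3, chosen only for illustration): applying one Linear module several times unrolls into several graph nodes whose gradients all accumulate into the same weight tensor, exactly as in an unrolled recurrent cell.

import torch

# Sketch: one shared layer applied 3 times, like 3 unrolled RNN steps.
step = torch.nn.Linear(4, 4)    # stands in for middle_linear
h = torch.randn(1, 4)
for _ in range(3):              # fixed depth, for demonstration only
    h = step(h).clamp(min=0)
h.sum().backward()
# A single gradient tensor collects contributions from all three applications.
print(step.weight.grad.shape)   # torch.Size([4, 4])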
References: the PyTorch documentation.
That is everything this example of PyTorch dynamic networks and weight sharing has to share. I hope it serves as a useful reference, and I hope you will continue to support 腳本之家.