Implementing the AlexNet Convolutional Neural Network in TensorFlow and Benchmarking Its Runtime
This article shares a complete TensorFlow implementation of the AlexNet convolutional neural network for reference. The details are as follows.
The construction of the AlexNet network has been covered before. This time the goal is not to train on real data, but to measure the average time each batch takes for the forward pass and the backward pass. When designing a network, classification accuracy matters, but so does computational speed; in tasks such as tracking, a network that is too deep can make real-time performance hard to achieve.
from datetime import datetime
import math
import time
import tensorflow as tf

batch_size = 32
num_batches = 100


def print_activations(t):
    # Print the name and output shape of a layer's tensor.
    print(t.op.name, '', t.get_shape().as_list())


def inference(images):
    parameters = []

    # conv1: 11x11 kernel, stride 4, 64 output channels, followed by LRN and max pooling
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        print_activations(conv1)
        parameters += [kernel, biases]

    lrn1 = tf.nn.lrn(conv1, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn1')
    pool1 = tf.nn.max_pool(lrn1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool1')
    print_activations(pool1)

    # conv2: 5x5 kernel, 192 output channels, followed by LRN and max pooling
    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv2)

    lrn2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn2')
    pool2 = tf.nn.max_pool(lrn2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool2')
    print_activations(pool2)

    # conv3 - conv5: three 3x3 convolutions; pooling only after conv5
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv3)

    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv4)

    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv5)

    pool5 = tf.nn.max_pool(conv5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool5')
    print_activations(pool5)
    return pool5, parameters


def time_tensorflow_run(session, target, info_string):
    # Run `target` for num_batches iterations after a warm-up period and
    # report the mean and standard deviation of the per-batch time.
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' % (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' % (datetime.now(), info_string, num_batches, mn, sd))


def run_benchmark():
    with tf.Graph().as_default():
        # Random images stand in for real data; only the computation time matters here.
        image_size = 224
        images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3], dtype=tf.float32, stddev=1e-1))
        pool5, parameters = inference(images)
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
        # Forward pass: evaluate pool5 only.
        time_tensorflow_run(sess, pool5, "Forward")
        # Forward-backward pass: gradients of an L2 loss on pool5 w.r.t. all parameters.
        objective = tf.nn.l2_loss(pool5)
        grad = tf.gradients(objective, parameters)
        time_tensorflow_run(sess, grad, "Forward-backward")


run_benchmark()
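The listing above targets the TensorFlow 1.x API (tf.Session, tf.truncated_normal, and friends). If only TensorFlow 2.x is installed, one way to run it without further changes is the v1 compatibility module; this is a sketch, assuming your TF 2.x build ships the compat.v1 API, and it simply replaces the plain import at the top of the script:

import tensorflow.compat.v1 as tf  # expose the 1.x-style API under TF 2.x
tf.disable_v2_behavior()           # restore graph mode, tf.Session, etc.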
All of this code has been covered before; the only additions are the timing function and the function that prints each layer's output shape, so it should be easy to follow and needs no further explanation. On a GTX TITAN X, the forward pass takes roughly 0.024 s per batch and the forward-backward pass roughly 0.079 s. Give it a try yourself.
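To put the per-batch figures in more familiar terms, they can be converted to throughput in images per second. A minimal sketch, assuming the batch size of 32 and the rough timings quoted above (actual numbers will vary with the GPU, driver, and TensorFlow version):

batch_size = 32
forward_time = 0.024   # measured sec / batch, forward only
train_time = 0.079     # measured sec / batch, forward + backward

print('forward:          %.0f images/sec' % (batch_size / forward_time))  # ~1333 images/sec
print('forward-backward: %.0f images/sec' % (batch_size / train_time))    # ~405 images/sec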
That is all for this article. I hope it helps with your study, and thank you for supporting 腳本之家.