Implementing the AlexNet Convolutional Neural Network in TensorFlow, with Run-Time Benchmarks

2021-02-24 00:23 · Felaim · Python

This article gives a detailed walkthrough of an AlexNet convolutional neural network implemented in TensorFlow, together with a benchmark of its computation time. It should be a useful reference for interested readers.

The complete implementation code is shared below for your reference; the details are as follows.

The construction of the AlexNet network was covered in an earlier post. This time the goal is not to train the model, but to measure the average time each batch spends in the forward pass and in the backward pass. When designing a network, classification accuracy matters, but so does computation speed; in tracking tasks especially, a network that is too deep can ruin real-time performance.

from datetime import datetime
import math
import time
import tensorflow as tf

batch_size = 32
num_batches = 100

def print_activations(t):
    # Show each layer's op name and output shape as the graph is built.
    print(t.op.name, '', t.get_shape().as_list())

def inference(images):
    # Build the five convolutional layers of AlexNet; returns the final
    # pooled feature map and the list of trainable parameters.
    parameters = []

    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        print_activations(conv1)
        parameters += [kernel, biases]

        # Local response normalization followed by overlapping max pooling.
        lrn1 = tf.nn.lrn(conv1, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn1')
        pool1 = tf.nn.max_pool(lrn1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool1')
        print_activations(pool1)

    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv2)

        lrn2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9, beta=0.75, name='lrn2')
        pool2 = tf.nn.max_pool(lrn2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool2')
        print_activations(pool2)

    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv3)

    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv4)

    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv5)

    pool5 = tf.nn.max_pool(conv5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool5')
    print_activations(pool5)

    return pool5, parameters

def time_tensorflow_run(session, target, info_string):
    # Run `target` for num_batches iterations after a short burn-in:
    # the first batches are skipped so that graph setup, memory allocation
    # and cache warm-up do not distort the statistics.
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0

    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' % (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration

    # Mean and standard deviation via Var[x] = E[x^2] - (E[x])^2.
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' % (datetime.now(), info_string, num_batches, mn, sd))

def run_benchmark():
    with tf.Graph().as_default():
        # Random images stand in for real data; only speed is measured here.
        image_size = 224
        images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3], dtype=tf.float32, stddev=1e-1))
        pool5, parameters = inference(images)

        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)

        # Time the forward pass alone.
        time_tensorflow_run(sess, pool5, "Forward")

        # Time forward + backward: differentiate a dummy L2 loss on pool5
        # with respect to all trainable parameters.
        objective = tf.nn.l2_loss(pool5)
        grad = tf.gradients(objective, parameters)
        time_tensorflow_run(sess, grad, "Forward-backward")

run_benchmark()
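
As a sanity check, the shapes that print_activations reports are fully determined by the strides and padding chosen above. For the 224x224 input and batch size of 32, they should work out as follows (derived by hand from the layer settings, not captured console output):

conv1: [32, 56, 56, 64]
pool1: [32, 27, 27, 64]
conv2: [32, 27, 27, 192]
pool2: [32, 13, 13, 192]
conv3: [32, 13, 13, 384]
conv4: [32, 13, 13, 256]
conv5: [32, 13, 13, 256]
pool5: [32, 6, 6, 256]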

All of this code was covered in the earlier post; the only additions are the timing function and the helper that prints each layer's output shape, so it should be easy to follow and I won't go through it line by line. On a GTX TITAN X, the forward pass takes about 0.024 s per batch and the forward-backward pass about 0.079 s. Give it a try yourself!
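
At a batch size of 32, those figures come out to roughly 0.75 ms per image for the forward pass and about 2.5 ms per image for forward plus backward.

Note that the script above targets the TF 1.x graph API, so it will not run as-is on TensorFlow 2.x. A minimal fix, assuming a standard TF 2.x install with its built-in compatibility layer (no other changes to the code should be needed, since symbols like tf.truncated_normal, tf.Session and tf.global_variables_initializer all still exist under tf.compat.v1):

# Replace the plain `import tensorflow as tf` at the top of the script
# with the TF 1.x compatibility import, then switch off TF 2.x behavior.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores graph mode and the TF 1.x Session workflow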

That's all for this article. I hope it helps with your studies, and I hope you'll continue to support 服务器之家.

Original article: https://blog.csdn.net/Felaim/article/details/68923725
