TensorFlow - Recurrent Neural Networks


A recurrent neural network is a kind of deep-learning algorithm that follows a sequential approach. In an ordinary neural network, every input and output is assumed to be independent of all the others. These networks are called recurrent because they perform their mathematical computations in sequential order, with each step depending on the result of the previous one.

A recurrent neural network can be represented schematically as shown below -

[Figure: schematic of a recurrent neural network]
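To make "sequential computation" concrete, here is a minimal NumPy sketch of a vanilla recurrent step (the function name, weight names, and sizes are illustrative, not part of this tutorial): at every step, the hidden state is updated from both the current input and the previous state, which is exactly the dependency that a feedforward network lacks.

import numpy as np

def rnn_step_demo(inputs, W_xh, W_hh, b_h):
   # inputs: a sequence of T vectors, each of shape (n_input,)
   h = np.zeros(W_hh.shape[0])   # initial hidden state
   for x_t in inputs:
      # each new state depends on the current input AND the previous state
      h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
   return h   # the final state summarizes the whole sequence

rng = np.random.default_rng(0)
W_xh = rng.normal(size = (4, 3))   # input-to-hidden weights (illustrative sizes)
W_hh = rng.normal(size = (4, 4))   # hidden-to-hidden (recurrent) weights
b_h = np.zeros(4)
sequence = [rng.normal(size = 3) for _ in range(5)]   # 5 steps, 3 features each
print(rnn_step_demo(sequence, W_xh, W_hh, b_h))       # a 4-dimensional state vector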

Implementing a Recurrent Neural Network

In this section, we will learn how to implement a recurrent neural network with TensorFlow.

Step 1 - Import the necessary modules. TensorFlow 1.x includes dedicated libraries for recurrent neural networks, such as the contrib rnn module.

# Import the necessary modules (TensorFlow 1.x API)
from __future__ import print_function

import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data

# Download (if needed) and load the MNIST dataset with one-hot labels
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

As shown above, these libraries help define the input data, which forms the main part of the recurrent neural network implementation.


Step 2 - The main motivation is to classify images with a recurrent neural network by treating every image row as a sequence of pixels. The MNIST image shape is specifically defined as 28 * 28 pixels, so each image is read as 28 steps of 28 pixels each. Define the input parameters to set up this sequential pattern.

# Training parameters (typical values for this example; tune as needed)
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10

# Network parameters
n_input = 28    # MNIST data input (img shape: 28*28, one row per step)
n_steps = 28    # number of time steps (one per image row)
n_hidden = 128  # number of hidden units in the LSTM cell
n_classes = 10  # MNIST total classes (digits 0-9)

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Weights and biases of the final linear (output) layer
weights = {
   'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
   'out': tf.Variable(tf.random_normal([n_classes]))
}
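The choice n_steps = 28 and n_input = 28 follows directly from treating each image row as one step of the sequence: a flattened 784-pixel MNIST image reshapes into 28 rows of 28 pixels. A small NumPy illustration (with a random array standing in for a real image) -

import numpy as np

flat_image = np.random.rand(784)        # a flattened 28*28 MNIST image
sequence = flat_image.reshape(28, 28)   # row i of the image becomes step i
print(sequence.shape)                   # (28, 28): 28 steps of 28 pixels each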

Step 3 - Compute the output with a function defined as RNN. The input batch is first unstacked into one tensor per time step, so that each step's shape matches what the LSTM cell expects; the output of the last step is then passed through a linear layer to produce the class scores.

def RNN(x, weights, biases):
   # Unstack to get a list of n_steps tensors of shape (batch_size, n_input)
   x = tf.unstack(x, n_steps, 1)

   # Define an LSTM cell with TensorFlow
   lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias = 1.0)

   # Get the LSTM cell output for every time step
   outputs, states = rnn.static_rnn(lstm_cell, x, dtype = tf.float32)

   # Linear activation, using the rnn inner loop's last output
   return tf.matmul(outputs[-1], weights['out']) + biases['out']

pred = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = pred, labels = y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()
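It is easy to lose track of the tensor shapes inside RNN(). The following NumPy sketch (with zero arrays standing in for the real tensors; the sizes come from the parameters above) mirrors the shape transformations that tf.unstack and the final linear layer perform -

import numpy as np

# Dummy batch of 5 images, shaped like the x placeholder: (batch, n_steps, n_input)
batch = np.zeros((5, 28, 28))

# What tf.unstack(x, n_steps, 1) produces: a list of n_steps per-step tensors
steps = [batch[:, t, :] for t in range(28)]
print(len(steps), steps[0].shape)    # 28 (5, 28)

# outputs[-1] from static_rnn has shape (batch, n_hidden); the linear layer
# maps it to (batch, n_classes), i.e. one score per digit class
last_output = np.zeros((5, 128))
logits = last_output @ np.zeros((128, 10)) + np.zeros(10)
print(logits.shape)                  # (5, 10)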

Step 4 - In this step, we will launch the graph in a session and run the training loop. This also helps compute the accuracy on the test data.

with tf.Session() as sess:
   sess.run(init)
   step = 1
   
   # Keep training until the maximum number of iterations is reached
   while step * batch_size < training_iters:
      batch_x, batch_y = mnist.train.next_batch(batch_size)
      # Reshape each 784-pixel image into a sequence of 28 rows of 28 pixels
      batch_x = batch_x.reshape((batch_size, n_steps, n_input))
      sess.run(optimizer, feed_dict = {x: batch_x, y: batch_y})
      
      if step % display_step == 0:
         # Calculate batch accuracy
         acc = sess.run(accuracy, feed_dict = {x: batch_x, y: batch_y})
         
         # Calculate batch loss
         loss = sess.run(cost, feed_dict = {x: batch_x, y: batch_y})
         
         print("Iter " + str(step*batch_size) + ", Minibatch Loss= " +\
            "{:.6f}".format(loss) + ", Training Accuracy= " +\
            "{:.5f}".format(acc))
      step += 1
   print("Optimization Finished!")
   
   # Evaluate the trained model on 128 test images
   test_len = 128
   test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
   test_label = mnist.test.labels[:test_len]
   print("Testing Accuracy:",\
      sess.run(accuracy, feed_dict = {x: test_data, y: test_label}))

The screenshot below shows the output that is generated -

[Screenshot: training log of the recurrent neural network, ending with the testing accuracy]
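Note that the code above uses the TensorFlow 1.x APIs (placeholders, sessions, tf.contrib), which were removed in TensorFlow 2.x. For readers on a modern installation, a rough tf.keras equivalent of the same row-by-row classifier might look like the sketch below (our adaptation under the assumption of TensorFlow 2.x, not part of the original example) -

import tensorflow as tf

# Load MNIST as (num_images, 28, 28) arrays; each row is one sequence step
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
   tf.keras.layers.LSTM(128, input_shape = (28, 28)),   # 128 hidden units, as above
   tf.keras.layers.Dense(10)                            # linear layer to 10 class logits
])
model.compile(
   optimizer = tf.keras.optimizers.Adam(learning_rate = 0.001),
   loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True),
   metrics = ['accuracy'])
model.fit(x_train, y_train, batch_size = 128, epochs = 1)
print("Testing Accuracy:", model.evaluate(x_test, y_test)[1])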

Happy learning!
