You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for-loop could be rewritten as follows:
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define the op once, outside the loop

for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetching cross_entropy alongside train_step returns the loss
    # computed in the same step.
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
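Note that fetching both tensors in a single sess.run(...) call means the graph is evaluated only once per iteration; calling sess.run(...) a second time just to read the loss would run the forward pass again (and would see the weights as already updated by the training step).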
The same approach can be used to print the current value of a variable. Say that, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W; you could do the following:
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # define the op once, outside the loop

for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # W is fetched like any other tensor; the result is a NumPy array.
    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                  feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
    print 'W = %s' % W_val
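If you only need to inspect W occasionally (say, after training finishes), you can also fetch it on its own, without running the training step. A minimal sketch, reusing the session and variable names from the snippets above:

# Fetch the current value of W without running train_step;
# no feed_dict is needed since a variable read has no placeholders.
W_val = sess.run(W)
print 'W after training = %s' % W_val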