TensorFlow object detection config files documentation

As mentioned in the configuration documentation, configuration files are just Protocol Buffers objects described in the .proto files under research/object_detection/protos. The top-level object is a TrainEvalPipelineConfig, defined in pipeline.proto, and different files describe each of its elements. For example, data_augmentation_options are PreprocessingStep objects, defined in preprocessor.proto (which in turn can include a range of … Read more
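As an illustration, a pipeline config can be parsed into the generated TrainEvalPipelineConfig message with the standard protobuf text-format tools. This is a minimal sketch, assuming the Object Detection API's research directory is on the Python path; the "pipeline.config" path is hypothetical:

    from google.protobuf import text_format
    from object_detection.protos import pipeline_pb2

    # Parse a text-format pipeline config into the TrainEvalPipelineConfig message.
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    with open("pipeline.config", "r") as f:          # hypothetical path
        text_format.Merge(f.read(), pipeline_config)

    # data_augmentation_options is a repeated PreprocessingStep field on train_config.
    for step in pipeline_config.train_config.data_augmentation_options:
        print(step)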

Why do we use tf.name_scope()

They are not the same thing.

    import tensorflow as tf

    c1 = tf.constant(42)
    with tf.name_scope('s1'):
        c2 = tf.constant(42)

    print(c1.name)
    print(c2.name)

prints

    Const:0
    s1/Const:0

So as the name suggests, the scope functions create a scope for the names of the ops you create inside. This has an effect on how you refer to tensors, on reuse, … Read more
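Note that the names print like that when the ops are created in a graph; with TF2's eager execution enabled, accessing Tensor.name on an eager tensor is not supported. A rough TF2-compatible sketch of the same experiment builds the constants inside an explicit graph:

    import tensorflow as tf

    g = tf.Graph()
    with g.as_default():              # ops below are added to g instead of running eagerly
        c1 = tf.constant(42)
        with tf.name_scope('s1'):
            c2 = tf.constant(42)
        print(c1.name)                # Const:0
        print(c2.name)                # s1/Const:0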

TensorFlow warning – Found untraced functions such as lstm_cell_6_layer_call_and_return_conditional_losses

I think this warning can be safely ignored: you can find the same warning even in official TensorFlow tutorials. I often see it when saving custom models such as graph NNs. You should be good to go as long as you don't need to access those non-callable functions. However, if you're … Read more
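For reference, a minimal sketch that typically reproduces the warning, assuming a TF 2.x install where model.save() to a directory path writes the SavedModel format (the model and directory name are made up):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(8, input_shape=(10, 4)),
        tf.keras.layers.Dense(1),
    ])

    # Saving may print "Found untraced functions such as lstm_cell_..." warnings.
    model.save("saved_lstm_model")

    # Loading and running inference still works despite the warning.
    restored = tf.keras.models.load_model("saved_lstm_model")
    restored.predict(tf.zeros([1, 10, 4]))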

What’s the difference between TensorFlow dynamic_rnn and rnn?

From RNNs in Tensorflow, a Practical Guide and Undocumented Features by Denny Britz, published on August 21, 2016. tf.nn.rnn creates an unrolled graph for a fixed RNN length. That means that if you call tf.nn.rnn with inputs having 200 time steps, you are creating a static graph with 200 RNN steps. First, graph creation is slow. … Read more
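For context, a rough TF1-era sketch of the dynamic variant (API names as they existed around TensorFlow 1.x; the layer sizes are arbitrary):

    import tensorflow as tf

    # [batch, time, features]; the time dimension can be left unknown because
    # dynamic_rnn builds the recurrence with tf.while_loop instead of unrolling it.
    inputs = tf.placeholder(tf.float32, shape=[None, None, 16])
    cell = tf.nn.rnn_cell.BasicLSTMCell(32)
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)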

Should I use @tf.function for all functions?

TL;DR: It depends on your function and whether you are in production or development. Don’t use tf.function if you want to be able to debug your function easily, or if it falls under the limitations of AutoGraph or tf.v1 code compatibility. I would highly recommend watching the Inside TensorFlow talks about AutoGraph and Functions, not … Read more
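As a small illustration of the debugging trade-off, here is a hedged sketch: wrapping a function in tf.function compiles it into a graph, and tf.config.run_functions_eagerly() can temporarily switch it back to eager execution while you debug (the function itself is made up for the example):

    import tensorflow as tf

    @tf.function
    def squared_error(y_true, y_pred):
        # Runs as a compiled graph; Python-side prints/breakpoints only fire during tracing.
        return tf.reduce_mean(tf.square(y_true - y_pred))

    # During development, fall back to eager execution so debugging behaves normally:
    tf.config.run_functions_eagerly(True)
    print(squared_error(tf.constant([1.0, 2.0]), tf.constant([1.5, 2.5])))
    tf.config.run_functions_eagerly(False)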

Confused by the behavior of `tf.cond`

TL;DR: If you want tf.cond() to perform a side effect (like an assignment) in one of the branches, you must create the op that performs the side effect inside the function that you pass to tf.cond(). The behavior of tf.cond() is a little unintuitive. Because execution in a TensorFlow graph flows forward through the graph, … Read more
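To make that concrete, here is a minimal TF1-style graph-mode sketch of the pattern the answer describes: the assignment op is created inside the branch function passed to tf.cond(), so it only runs when that branch is selected (variable names and values are illustrative):

    import tensorflow as tf

    pred = tf.placeholder(tf.bool, shape=[])
    x = tf.Variable([1])

    def update_x_2():
        # The assign op is created here, inside the branch function, so it is
        # only executed when tf.cond() takes the true branch.
        with tf.control_dependencies([tf.assign(x, [2])]):
            return tf.identity(x)

    y = tf.cond(pred, update_x_2, lambda: tf.identity(x))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={pred: False}))   # [1]; the assignment never ran
        print(sess.run(y, feed_dict={pred: True}))    # [2]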

Confusion about keras Model: __call__ vs. call vs. predict methods

Adding to @Dmitry Kabanov, they are similar, yet they aren’t exactly the same thing. If you care about performance, you need to look into the critical differences between them.

model.predict(x): loops over the data in batches, which means that predict() calls can scale to very large arrays; not differentiable.
model(x): happens in-memory and doesn’t scale; … Read more
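To make the practical difference visible, a small hedged sketch (the model and data are made up): model.predict() returns a NumPy array computed in batches, while calling the model directly returns a tf.Tensor that can participate in gradient computation:

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    x = np.random.rand(8, 4).astype("float32")

    y_np = model.predict(x)              # batched loop, returns a NumPy array
    y_tensor = model(x, training=False)  # single in-memory call, returns a tf.Tensor

    # Only the direct call is differentiable, e.g. inside a GradientTape:
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(model(tf.constant(x)))
    grads = tape.gradient(loss, model.trainable_variables)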