Where should pre-processing and post-processing steps be executed when a TF model is served using TensorFlow Serving?

I’m running into the same issue here. Even though I’m not 100% sure yet how to use the wordDict variable (I guess you use one too, to map words to their ids), the main pre-process and post-process functions are defined here:

https://www.tensorflow.org/programmers_guide/saved_model

as export_outputs and serving_input_receiver_fn.

  • export_outputs

Needs to be defined in the EstimatorSpec if you are using estimators. Here is an example for a classification algorithm:

  # Inside the model_fn; CATEGORIES is the list of class label strings.
  predicted_classes = tf.argmax(logits, 1)
  categories_tensor = tf.convert_to_tensor(CATEGORIES, tf.string)
  # The export_outputs dict defines the SavedModel's serving signatures.
  export_outputs = {
      "categories": tf.estimator.export.ClassificationOutput(classes=categories_tensor)
  }
  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions={
            'class': predicted_classes,
            'prob': tf.nn.softmax(logits)
        },
        export_outputs=export_outputs)
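
Since post-processing has to happen inside the exported graph, one option is to map the predicted ids back to their label strings before the response leaves the server. Below is a minimal sketch of that idea; it reuses CATEGORIES and logits from the snippet above, and the PredictOutput signature is an alternative I’m substituting for ClassificationOutput, not necessarily what you need:

  # Hedged sketch: in-graph post-processing that returns label strings.
  # Assumes CATEGORIES and logits exist as in the snippet above.
  predicted_classes = tf.argmax(logits, 1)
  categories_tensor = tf.convert_to_tensor(CATEGORIES, tf.string)
  # tf.gather does the id -> label lookup inside the graph, so the
  # serving response carries strings instead of integer ids.
  predicted_labels = tf.gather(categories_tensor, predicted_classes)
  export_outputs = {
      "predict": tf.estimator.export.PredictOutput(
          {"label": predicted_labels, "prob": tf.nn.softmax(logits)})
  }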
  • serving_input_receiver_fn

It needs to be defined before exporting the trained estimator model. It assumes the input is a serialized tf.Example string and parses your features from there; you can write your own function, but I’m unsure whether you can use external variables. Here is a simple example for a classification algorithm, with a raw-tensor alternative sketched after it:

def serving_input_receiver_fn():
    # The model expects 4 word ids per example under the "words" key.
    feature_spec = {"words": tf.FixedLenFeature(dtype=tf.int64, shape=[4])}
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()

export_dir = classifier.export_savedmodel(export_dir_base=args.job_dir,
                                          serving_input_receiver_fn=serving_input_receiver_fn)
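
If you’d rather send plain text to the server and do the wordDict lookup in-graph, a raw-tensor receiver is an option. This is a hedged sketch, assuming a vocabulary file vocab.txt (standing in for your wordDict) and glossing over padding/truncating to the fixed length of 4 the model expects:

def raw_serving_input_receiver_fn():
    # Accept a batch of raw sentences as the serving input.
    sentences = tf.placeholder(tf.string, shape=[None], name="sentences")
    # Reproduce the word -> id mapping inside the graph; the lookup
    # table is initialized automatically when the SavedModel is loaded.
    table = tf.contrib.lookup.index_table_from_file("vocab.txt", default_value=0)
    words = tf.string_split(sentences)
    dense_words = tf.sparse_tensor_to_dense(words, default_value="")
    features = {"words": table.lookup(dense_words)}
    return tf.estimator.export.ServingInputReceiver(
        features, {"sentences": sentences})

The receiver’s placeholders become the signature inputs, so clients can send raw sentences and the pre-processing runs server-side.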

Hope it helps.
