tf.estimator.experimental.InMemoryEvaluatorHook


Hook to run evaluation in training without a checkpoint.

Inherits From: SessionRunHook

tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, input_fn, steps=None, hooks=None, name=None, every_n_iter=100
)

Example:

def train_input_fn():
  ...
  return train_dataset

def eval_input_fn():
  ...
  return eval_dataset

estimator = tf.estimator.DNNClassifier(...)

evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, eval_input_fn)
estimator.train(train_input_fn, hooks=[evaluator])
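For a more concrete picture, the input functions above could be written with tf.data. The feature name "x", the toy data, and the model sizes below are illustrative assumptions, not part of the API:

import numpy as np
import tensorflow as tf

def train_input_fn():
  # Toy in-memory training data; replace with a real input pipeline.
  features = {"x": np.random.rand(1000, 4).astype(np.float32)}
  labels = np.random.randint(0, 2, size=1000).astype(np.int32)
  dataset = tf.data.Dataset.from_tensor_slices((features, labels))
  return dataset.shuffle(1000).batch(32).repeat()

def eval_input_fn():
  # Toy in-memory eval data.
  features = {"x": np.random.rand(200, 4).astype(np.float32)}
  labels = np.random.randint(0, 2, size=200).astype(np.int32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("x", shape=[4])],
    hidden_units=[16, 8],
    n_classes=2)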

Current limitations of this approach are:

- It doesn't support multi-node distributed mode.
- It doesn't support saveable objects other than variables (such as boosted tree support).
- It doesn't support custom saver logic (such as ExponentialMovingAverage).

Args:

estimator: A tf.estimator.Estimator instance to call evaluate.
input_fn: Equivalent to the input_fn arg to estimator.evaluate. A function that constructs the input data for evaluation.
steps: Equivalent to the steps arg to estimator.evaluate. Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks: Equivalent to the hooks arg to estimator.evaluate. List of SessionRunHook subclass instances used as callbacks inside the evaluation call.
name: Equivalent to the name arg to estimator.evaluate. Name of the evaluation, useful when running multiple evaluations on different data sets.
every_n_iter: int, runs the evaluator once every N training iterations.

Raises:

ValueError: if every_n_iter is non-positive or the training is not single-machine.
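As a sketch of how these arguments combine (train_input_fn, eval_input_fn, and estimator as defined above; the step counts and the name "validation" are arbitrary choices for illustration, not defaults):

# Evaluate on at most 100 eval batches, once every 500 training steps,
# reporting the metrics under the name "validation".
evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator,
    eval_input_fn,
    steps=100,
    name="validation",
    every_n_iter=500)
estimator.train(train_input_fn, steps=5000, hooks=[evaluator])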

Methods

after_create_session


after_create_session(
    session, coord
)

Runs an initial evaluation, reporting the eval metrics before training begins.

after_run


after_run(
    run_context, run_values
)

Runs evaluator.

before_run


before_run(
    run_context
)

Called before each call to run().

You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the run() call. The run args you return can also contain feeds to be added to the run() call.

The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested ops/tensors and the TensorFlow Session.

At this point the graph is finalized and you cannot add ops.

Args:

run_context: A SessionRunContext object.

Returns:

None or a SessionRunArgs object.
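As an illustration of this contract, here is a minimal sketch of a custom hook (not part of InMemoryEvaluatorHook); fetching the global step and printing it are assumptions made for the example:

class GlobalStepLoggingHook(tf.estimator.SessionRunHook):
  """Illustrative hook that fetches the global step with each run() call."""

  def begin(self):
    # The graph is still mutable here, so capture the tensor to fetch later.
    self._global_step = tf.compat.v1.train.get_global_step()

  def before_run(self, run_context):
    # Ask the upcoming run() call to also evaluate the global step.
    return tf.estimator.SessionRunArgs(fetches=self._global_step)

  def after_run(self, run_context, run_values):
    # run_values.results holds whatever before_run requested.
    print("global step:", run_values.results)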

begin


begin()

Builds the eval graph and the restore op.

end


end(
    session
)

Runs evaluator for final model.