Class InMemoryEvaluatorHook
Inherits From: SessionRunHook
Hook to run evaluation in training without a checkpoint.
Example:

  def train_input_fn():
    ...
    return train_dataset

  def eval_input_fn():
    ...
    return eval_dataset

  estimator = tf.estimator.DNNClassifier(...)
  evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
      estimator, eval_input_fn)
  estimator.train(train_input_fn, hooks=[evaluator])
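The mechanics behind the snippet above can be sketched without TensorFlow: a minimal, framework-free illustration of how a training loop drives hook callbacks so that evaluation runs in-process between training steps. All names here (`Hook`, `EvaluatorHook`, `train`) are hypothetical stand-ins, not the actual TF implementation.

```python
class Hook:
    """Hypothetical stand-in for the SessionRunHook callback contract."""
    def begin(self): pass            # called once before training starts
    def after_run(self, step): pass  # called after every training step
    def end(self): pass              # called once when training finishes


class EvaluatorHook(Hook):
    """Runs an eval function in-process, with no checkpoint involved."""
    def __init__(self, eval_fn, every_n_iter=100):
        self.eval_fn = eval_fn
        self.every_n_iter = every_n_iter
        self.results = []

    def after_run(self, step):
        # Evaluate once every N training iterations.
        if step % self.every_n_iter == 0:
            self.results.append(self.eval_fn())

    def end(self):
        # Final evaluation on the last model state.
        self.results.append(self.eval_fn())


def train(num_steps, hooks):
    """Toy training loop that dispatches the hook callbacks."""
    for h in hooks:
        h.begin()
    for step in range(1, num_steps + 1):
        # ... one actual training step would happen here ...
        for h in hooks:
            h.after_run(step)
    for h in hooks:
        h.end()


evaluator = EvaluatorHook(eval_fn=lambda: {"accuracy": 1.0}, every_n_iter=2)
train(num_steps=5, hooks=[evaluator])
# Evaluations fire at steps 2 and 4, plus once at the end: 3 results.
```

The real hook additionally evaluates once right after session creation (see `after_create_session` below), so metrics are visible before the first training step.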
Current limitations of this approach are:
- It doesn't support multi-node distributed mode.
- It doesn't support saveable objects other than variables (such as boosted tree support).
- It doesn't support custom saver logic (such as ExponentialMovingAverage support).
__init__
__init__(
estimator,
input_fn,
steps=None,
hooks=None,
name=None,
every_n_iter=100
)
Initializes an `InMemoryEvaluatorHook`.

Args:
  estimator: A `tf.estimator.Estimator` instance to call evaluate.
  input_fn: Equivalent to the `input_fn` arg to `estimator.evaluate`. A
    function that constructs the input data for evaluation. See "Creating
    input functions" for more information. The function should construct
    and return one of the following:
    - A `tf.data.Dataset` object: Outputs of the `Dataset` object must be
      a tuple `(features, labels)` with the same constraints as below.
    - A tuple `(features, labels)`: Where `features` is a `Tensor` or a
      dictionary of string feature name to `Tensor`, and `labels` is a
      `Tensor` or a dictionary of string label name to `Tensor`. Both
      `features` and `labels` are consumed by `model_fn`. They should
      satisfy the expectation of `model_fn` from inputs.
  steps: Equivalent to the `steps` arg to `estimator.evaluate`. Number of
    steps for which to evaluate the model. If `None`, evaluates until
    `input_fn` raises an end-of-input exception.
  hooks: Equivalent to the `hooks` arg to `estimator.evaluate`. List of
    `SessionRunHook` subclass instances. Used for callbacks inside the
    evaluation call.
  name: Equivalent to the `name` arg to `estimator.evaluate`. Name of the
    evaluation if the user needs to run multiple evaluations on different
    data sets, such as on training data vs. test data. Metrics for
    different evaluations are saved in separate folders, and appear
    separately in TensorBoard.
  every_n_iter: `int`, runs the evaluator once every N training iterations.
Raises:
  ValueError: if `every_n_iter` is non-positive, or if this is not
    single-machine training.
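The `steps=None` behavior described above can be sketched without TensorFlow: a hypothetical `evaluate` loop that either runs a fixed number of batches or drains the input until it raises an end-of-input exception (plain `StopIteration` stands in here for TF's end-of-input error). The function names and metric are illustrative only.

```python
def evaluate(next_batch, metric_fn, steps=None):
    """Accumulate a metric over eval batches.

    If steps is None, run until next_batch raises StopIteration
    (standing in for an end-of-input exception); otherwise run
    exactly `steps` batches.
    """
    total, count = 0.0, 0
    while steps is None or count < steps:
        try:
            batch = next_batch()
        except StopIteration:
            break
        total += metric_fn(batch)
        count += 1
    return {"mean_metric": total / max(count, 1), "steps": count}


# Drain a 5-batch input completely (steps=None) ...
it = iter([1, 2, 3, 4, 5])
full = evaluate(lambda: next(it), metric_fn=float, steps=None)
# ... or stop after exactly 2 batches.
it = iter([1, 2, 3, 4, 5])
partial = evaluate(lambda: next(it), metric_fn=float, steps=2)
```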
Methods
tf.estimator.experimental.InMemoryEvaluatorHook.after_create_session
after_create_session(
session,
coord
)
Does a first run, which shows the eval metrics before training.
tf.estimator.experimental.InMemoryEvaluatorHook.after_run
after_run(
run_context,
run_values
)
Runs evaluator.
tf.estimator.experimental.InMemoryEvaluatorHook.before_run
before_run(run_context)
Called before each call to run().

You can return from this call a `SessionRunArgs` object indicating ops or
tensors to add to the upcoming run() call. These ops/tensors will be run
together with the ops/tensors originally passed to that run() call. The
run args you return can also contain feeds to be added to the run() call.

The `run_context` argument is a `SessionRunContext` that provides
information about the upcoming run() call: the originally requested
ops/tensors and the TensorFlow session.

At this point the graph is finalized and you cannot add ops.
Args:
  run_context: A `SessionRunContext` object.
Returns:
  None or a `SessionRunArgs` object.
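The before_run/after_run contract can be sketched without TensorFlow. `SessionRunArgs` below is a hypothetical stand-in modeled on the description above: the fetches a hook returns from before_run are run together with the caller's fetches, and the resulting values are handed back to the same hook in after_run. The runner and "session" are simplified fakes, not TF APIs.

```python
from dataclasses import dataclass, field


@dataclass
class SessionRunArgs:
    """Stand-in for the run-args object a hook may return."""
    fetches: dict = field(default_factory=dict)
    feeds: dict = field(default_factory=dict)


class LoggingHook:
    """Asks the runner to also fetch 'loss' and records its value."""
    def __init__(self):
        self.losses = []

    def before_run(self, run_context):
        # Request an extra tensor to be fetched with this run() call.
        return SessionRunArgs(fetches={"loss": "loss_tensor"})

    def after_run(self, run_context, run_values):
        # run_values here is the full results dict (a simplification).
        self.losses.append(run_values["loss"])


def run_with_hooks(session_values, fetches, hooks):
    """Merge hook fetches into the run, then dispatch results back."""
    extra = [h.before_run(run_context=None) or SessionRunArgs()
             for h in hooks]
    all_fetches = dict(fetches)
    for args in extra:
        all_fetches.update(args.fetches)
    # "Run the session": look each fetch up in a fake value table.
    results = {k: session_values[v] for k, v in all_fetches.items()}
    for h in hooks:
        h.after_run(run_context=None, run_values=results)
    # The caller only sees the fetches it originally asked for.
    return {k: results[k] for k in fetches}


hook = LoggingHook()
out = run_with_hooks({"train_op_tensor": None, "loss_tensor": 0.25},
                     fetches={"train": "train_op_tensor"}, hooks=[hook])
```

After the call, the hook has observed the loss value even though the caller only requested the train op.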
tf.estimator.experimental.InMemoryEvaluatorHook.begin
begin()
Builds the eval graph and the restore op.
tf.estimator.experimental.InMemoryEvaluatorHook.end
end(session)
Runs evaluator for final model.