Class EvalSpec
Configuration for the "eval" part of the `train_and_evaluate` call.

`EvalSpec` combines details of evaluation of the trained model as well as its export. Evaluation consists of computing metrics to judge the performance of the trained model. Export writes out the trained model onto external storage.
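For orientation, here is a minimal sketch of where `EvalSpec` fits in a `train_and_evaluate` call. The `LinearClassifier`, feature column, data, and input functions are illustrative assumptions for the example, not part of `EvalSpec` itself:

```python
import tensorflow as tf

# Hypothetical input functions; any function returning a tf.data.Dataset
# of (features, labels) satisfies the input_fn contract described below.
def train_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices(
        ({"x": [[1.0], [2.0], [3.0]]}, [[0], [1], [1]]))
    return dataset.repeat().batch(2)

def eval_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices(
        ({"x": [[1.5], [2.5]]}, [[0], [1]]))
    return dataset.batch(1)

estimator = tf.estimator.LinearClassifier(
    feature_columns=[tf.feature_column.numeric_column("x")])

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,            # evaluate until eval_input_fn is exhausted
    start_delay_secs=60,   # wait a minute before the first evaluation
    throttle_secs=120)     # then evaluate at most once every two minutes

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```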
__new__

```python
@staticmethod
__new__(
    cls,
    input_fn,
    steps=100,
    name=None,
    hooks=None,
    exporters=None,
    start_delay_secs=120,
    throttle_secs=600
)
```
Creates a validated `EvalSpec` instance.
Args:

* `input_fn`: A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following (a minimal `input_fn` sketch is shown after this list):
  * A `tf.data.Dataset` object: outputs of the `Dataset` object must be a tuple `(features, labels)` with the same constraints as below.
  * A tuple `(features, labels)`: where `features` is a `Tensor` or a dictionary of string feature name to `Tensor`, and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`.
* `steps`: Int. Positive number of steps for which to evaluate the model. If `None`, evaluates until `input_fn` raises an end-of-input exception. See `Estimator.evaluate` for details.
* `name`: String. Name of the evaluation if the user needs to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
* `hooks`: Iterable of `tf.train.SessionRunHook` objects to run during evaluation.
* `exporters`: Iterable of `Exporter`s, or a single one, or `None`. `exporters` will be invoked after each evaluation.
* `start_delay_secs`: Int. Start evaluating after waiting for this many seconds.
* `throttle_secs`: Int. Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Of course, evaluation does not occur if no new checkpoints are available; hence, this is the minimum.
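As referenced in the `input_fn` entry above, here is a minimal sketch of an evaluation `input_fn` that satisfies the documented contract by returning a `tf.data.Dataset` whose elements are `(features, labels)` tuples. The feature name `"x"` and the data are made up for illustration:

```python
import tensorflow as tf

def eval_input_fn():
    # features: dict of string feature name to Tensor;
    # labels: a Tensor, here of shape [batch_size, 1].
    features = {"x": [[1.0], [2.0], [3.0]]}
    labels = [[0], [1], [1]]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    # Each element of the dataset is a (features, labels) tuple.
    return dataset.batch(1)

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,         # run until the dataset raises end-of-input
    name="validation")  # metrics appear under this name in TensorBoard
```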
Returns:

A validated `EvalSpec` object.
Raises:

* `ValueError`: If any of the input arguments is invalid.
* `TypeError`: If any of the arguments is not of the expected type.
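Finally, to illustrate the `exporters` argument, here is a sketch (assuming a single numeric feature `"x"`, as in the examples above) that attaches a `tf.estimator.LatestExporter` so that a SavedModel is written out after each evaluation:

```python
import tensorflow as tf

# Serving-time inputs; the raw receiver fn maps placeholders to features.
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {"x": tf.placeholder(tf.float32, shape=[None, 1])})

exporter = tf.estimator.LatestExporter(
    name="saved_model",
    serving_input_receiver_fn=serving_input_fn)

# The exporter is invoked after each evaluation run.
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,  # as defined in the sketch above
    exporters=exporter)
```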