Class StatsAggregator
Defined in tensorflow/python/data/experimental/ops/stats_aggregator.py.
A stateful resource that aggregates statistics from one or more iterators.
To record statistics, use one of the custom transformation functions defined
in this module when defining your tf.data.Dataset. All statistics will be
aggregated by the StatsAggregator that is associated with a particular
iterator (see below). For example, to record the latency of producing each
element by iterating over a dataset:
dataset = ...
dataset = dataset.apply(tf.data.experimental.latency_stats("element_latency"))
To associate a StatsAggregator with a tf.data.Dataset object, use
the following pattern:
aggregator = tf.data.experimental.StatsAggregator()
dataset = ...
# Apply `StatsOptions` to associate `dataset` with `aggregator`.
options = tf.data.Options()
options.experimental_stats = tf.data.experimental.StatsOptions(aggregator)
dataset = dataset.with_options(options)
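Putting the two snippets together, the following is a minimal end-to-end
sketch; the concrete dataset and the tag name "element_latency" are
illustrative assumptions, not part of the API:

import tensorflow as tf

aggregator = tf.data.experimental.StatsAggregator()

# An illustrative dataset; any tf.data.Dataset works here.
dataset = tf.data.Dataset.range(100)
# Record per-element latency under the (arbitrary) tag "element_latency".
dataset = dataset.apply(
    tf.data.experimental.latency_stats("element_latency"))

# Apply `StatsOptions` to associate `dataset` with `aggregator`.
options = tf.data.Options()
options.experimental_stats = tf.data.experimental.StatsOptions(aggregator)
dataset = dataset.with_options(options)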
To get a protocol buffer summary of the currently aggregated statistics,
call StatsAggregator.get_summary(). The easiest way to consume the returned
tensor is to add it to the tf.GraphKeys.SUMMARIES collection, so that the
statistics are written out alongside any existing summaries.
aggregator = tf.data.experimental.StatsAggregator()
# ...
stats_summary = aggregator.get_summary()
tf.add_to_collection(tf.GraphKeys.SUMMARIES, stats_summary)
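With the summary registered in the collection, a typical TF 1.x graph-mode
training loop can write it out for TensorBoard. The log directory, iterator
style, and step count below are illustrative assumptions:

merged = tf.summary.merge_all()
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
  writer = tf.summary.FileWriter("/tmp/stats_demo")  # hypothetical log dir
  for step in range(100):
    sess.run(next_element)  # consume an element; statistics accumulate
    writer.add_summary(sess.run(merged), step)
  writer.close()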
__init__
__init__()
Creates a StatsAggregator.
Methods
tf.data.experimental.StatsAggregator.get_summary
get_summary()
Returns a string tf.Tensor that summarizes the aggregated statistics.
The returned tensor will contain a serialized tf.summary.Summary protocol
buffer, which can be used with the standard TensorBoard logging facilities.
Returns:
A scalar string tf.Tensor that summarizes the aggregated statistics.
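Because the value is a serialized tf.summary.Summary protocol buffer, it can
also be inspected directly. A minimal sketch, assuming graph mode and an
existing `aggregator` as set up above:

from tensorflow.core.framework import summary_pb2

with tf.Session() as sess:
  serialized = sess.run(aggregator.get_summary())
  summary = summary_pb2.Summary()
  summary.ParseFromString(serialized)
  for value in summary.value:
    print(value.tag)  # e.g. tags registered via latency_stats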