tf.train.generate_checkpoint_state_proto(
save_dir,
model_checkpoint_path,
all_model_checkpoint_paths=None,
all_model_checkpoint_timestamps=None,
last_preserved_timestamp=None
)
Defined in tensorflow/python/training/checkpoint_management.py.
Generates a checkpoint state proto.
Args:
  save_dir: Directory where the model was saved.
  model_checkpoint_path: The checkpoint file.
  all_model_checkpoint_paths: List of strings. Paths to all not-yet-deleted
    checkpoints, sorted from oldest to newest. If this is a non-empty list,
    the last element must be equal to model_checkpoint_path. These paths
    are also saved in the CheckpointState proto.
  all_model_checkpoint_timestamps: A list of floats, indicating the number
    of seconds since the Epoch when each checkpoint was generated.
  last_preserved_timestamp: A float, indicating the number of seconds since
    the Epoch when the last preserved checkpoint was written, e.g. due to a
    keep_checkpoint_every_n_hours parameter (see
    tf.contrib.checkpoint.CheckpointManager for an implementation).
Returns:
CheckpointState proto with model_checkpoint_path and all_model_checkpoint_paths updated to either absolute paths or relative paths to the current save_dir.
Raises:
  ValueError: If all_model_checkpoint_timestamps was provided but its length
    does not match all_model_checkpoint_paths.
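The invariants documented above (the last element of all_model_checkpoint_paths must equal model_checkpoint_path, the timestamp list must match the path list in length, and paths may be stored relative to save_dir) can be sketched in plain Python. This is an illustrative sketch of the documented behavior only, not the real implementation: the actual function returns a CheckpointState protobuf, and its path-resolution logic is more involved than the simplified relpath handling shown here.

```python
import os

def generate_checkpoint_state_sketch(save_dir,
                                     model_checkpoint_path,
                                     all_model_checkpoint_paths=None,
                                     all_model_checkpoint_timestamps=None,
                                     last_preserved_timestamp=None):
    """Sketch of generate_checkpoint_state_proto; returns a dict, not a proto."""
    # Copy so the caller's list is not mutated.
    paths = list(all_model_checkpoint_paths or [])

    # The last element must be the current checkpoint; append it if missing.
    if not paths or paths[-1] != model_checkpoint_path:
        paths.append(model_checkpoint_path)

    # Documented ValueError: timestamp list must match the path list in length.
    if (all_model_checkpoint_timestamps is not None
            and len(all_model_checkpoint_timestamps) != len(paths)):
        raise ValueError(
            "all_model_checkpoint_timestamps was provided but its length "
            "does not match all_model_checkpoint_paths")

    # Simplified path handling: store absolute paths relative to save_dir.
    def resolve(path):
        return os.path.relpath(path, save_dir) if os.path.isabs(path) else path

    return {
        "model_checkpoint_path": resolve(model_checkpoint_path),
        "all_model_checkpoint_paths": [resolve(p) for p in paths],
        "all_model_checkpoint_timestamps": all_model_checkpoint_timestamps,
        "last_preserved_timestamp": last_preserved_timestamp,
    }
```

For example, passing absolute paths under save_dir yields paths relative to it, and a mismatched timestamp list raises ValueError as documented.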