tf.compat.v1.layers.BatchNormalization

Batch Normalization layer from http://arxiv.org/abs/1502.03167.

Inherits From: BatchNormalization, Layer

tf.compat.v1.layers.BatchNormalization(
    axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
    beta_initializer=tf.zeros_initializer(),
    gamma_initializer=tf.ones_initializer(),
    moving_mean_initializer=tf.zeros_initializer(),
    moving_variance_initializer=tf.ones_initializer(), beta_regularizer=None,
    gamma_regularizer=None, beta_constraint=None, gamma_constraint=None,
    renorm=False, renorm_clipping=None, renorm_momentum=0.99, fused=None,
    trainable=True, virtual_batch_size=None, adjustment=None, name=None, **kwargs
)

"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"

Sergey Ioffe, Christian Szegedy
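
A minimal usage sketch (the names x and is_training here are illustrative, not part of this API): construct the layer once, then apply it to a tensor, with the training argument selecting between batch statistics (training) and the moving averages (inference).

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Illustrative placeholders for the input and the train/inference switch.
x = tf.placeholder(tf.float32, shape=[None, 32])
is_training = tf.placeholder(tf.bool, shape=[])

# Normalizes as gamma * (x - mean) / sqrt(variance + epsilon) + beta, using
# batch statistics when training=True and the moving averages otherwise.
bn = tf.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)
y = bn(x, training=is_training)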

Keras APIs handle BatchNormalization updates to the moving_mean and moving_variance as part of their fit() and evaluate() loops. However, if a custom training loop is used with an instance of Model, these updates need to be explicitly included. Here's a simple example of how it can be done:

# model is an instance of Model that contains a BatchNormalization layer.
update_ops = model.get_updates_for(None) + model.get_updates_for(features)
train_op = optimizer.minimize(loss)
train_op = tf.group([train_op, update_ops])
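
When building a plain v1 graph (no Model instance), these layers also register their moving-average updates in the tf.compat.v1.GraphKeys.UPDATE_OPS collection. An equivalent pattern, sketched here under that assumption and reusing the optimizer and loss from the snippet above, is to run the updates as control dependencies of the train op:

update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
with tf.compat.v1.control_dependencies(update_ops):
    # minimize() is created inside the dependency scope so the moving_mean
    # and moving_variance updates run on every training step.
    train_op = optimizer.minimize(loss)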

Arguments:

axis: An int or list of int, the axis or axes that should be normalized, typically the features axis. For instance, after a Conv2D layer with data_format="channels_first", set axis=1.
momentum: Momentum for the moving average.
epsilon: Small float added to the variance to avoid dividing by zero.
center: If True, add offset of beta to the normalized tensor. If False, beta is ignored.
scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
beta_initializer: Initializer for the beta weight.
gamma_initializer: Initializer for the gamma weight.
moving_mean_initializer: Initializer for the moving mean.
moving_variance_initializer: Initializer for the moving variance.
beta_regularizer: Optional regularizer for the beta weight.
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: Optional constraint for the beta weight.
gamma_constraint: Optional constraint for the gamma weight.
renorm: Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training; inference is the same for either value of this parameter.
renorm_clipping: A dictionary that may map the keys rmax, rmin, dmax to scalar Tensors used to clip the renorm correction. Missing keys imply no clipping.
renorm_momentum: Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training; it should be neither too small (which would add noise) nor too large (which would give stale estimates).
fused: If None or True, use a faster, fused implementation if possible. If False, use the system-recommended implementation.
trainable: Boolean, if True the variables will be marked as trainable.
virtual_batch_size: An int. By default None, which means batch normalization is performed across the whole batch. When not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches that are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.
adjustment: A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), e.g. for data-dependent adjustments. If None, no adjustment is applied.
name: A string, the name of the layer.

Attributes:

graph: The graph in which the layer was created.
scope_name: The name of the layer's variable scope.