tf.keras.layers.LayerNormalization

Layer normalization layer (Ba et al., 2016).

Inherits From: Layer

tf.keras.layers.LayerNormalization(
    axis=-1, epsilon=0.001, center=True, scale=True, beta_initializer='zeros',
    gamma_initializer='ones', beta_regularizer=None, gamma_regularizer=None,
    beta_constraint=None, gamma_constraint=None, trainable=True, name=None, **kwargs
)

Normalizes the activations of the previous layer for each example in a batch independently, rather than across the batch as in Batch Normalization. That is, it applies a transformation that keeps the mean activation within each example close to 0 and the activation standard deviation close to 1.
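For example, here is a small sketch with an illustrative two-example batch: each row is normalized independently, so the per-example mean is approximately 0 and the per-example standard deviation approximately 1.

```python
import tensorflow as tf

# Illustrative batch of 2 examples with 5 features each.
data = tf.constant([[1.0, 2.0, 3.0, 4.0, 5.0],
                    [10.0, 20.0, 30.0, 40.0, 50.0]])

layer = tf.keras.layers.LayerNormalization(axis=-1)
output = layer(data)

# Each example is normalized on its own, not across the batch.
print(tf.reduce_mean(output, axis=-1))      # ~[0., 0.]
print(tf.math.reduce_std(output, axis=-1))  # ~[1., 1.]
```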

Arguments:

axis: Integer or list of integers, the axis or axes that should be normalized (typically the features axis). Defaults to -1, the last dimension of the input.
epsilon: Small float added to the variance to avoid dividing by zero. Defaults to 1e-3.
center: If True, add an offset of beta to the normalized tensor. If False, beta is ignored. Defaults to True.
scale: If True, multiply by gamma. If False, gamma is not used. Defaults to True.
beta_initializer: Initializer for the beta weight. Defaults to 'zeros'.
gamma_initializer: Initializer for the gamma weight. Defaults to 'ones'.
beta_regularizer: Optional regularizer for the beta weight.
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: Optional constraint for the beta weight.
gamma_constraint: Optional constraint for the gamma weight.
trainable: Boolean, if True the layer's variables are marked as trainable.
name: String, the name of the layer.
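As a sketch of the axis argument, assuming a rank-3 input of shape (batch, height, width), normalization can be computed jointly over the last two axes rather than only the last one:

```python
import tensorflow as tf

# Normalize each example over axes 1 and 2 together; gamma and beta
# then have shape (8, 8).
x = tf.random.normal((4, 8, 8))
layer = tf.keras.layers.LayerNormalization(axis=[1, 2])
y = layer(x)

# Mean over the normalized axes is ~0 for every example in the batch.
print(tf.reduce_mean(y, axis=[1, 2]))
```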

Input shape:

Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.
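A minimal sketch, assuming 10-dimensional feature vectors, of using the layer first in a Sequential model via input_shape:

```python
import tensorflow as tf

# input_shape omits the samples (batch) axis.
model = tf.keras.Sequential([
    tf.keras.layers.LayerNormalization(input_shape=(10,)),
    tf.keras.layers.Dense(4, activation='relu'),
])
model.summary()
```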

Output shape:

Same shape as input.
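For instance, assuming an arbitrary rank-3 input, the output tensor has exactly the input's shape:

```python
import tensorflow as tf

x = tf.random.normal((2, 3, 4))
layer = tf.keras.layers.LayerNormalization()
print(layer(x).shape)  # (2, 3, 4)
```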

References:

Layer Normalization (Ba et al., 2016): https://arxiv.org/abs/1607.06450