tf.nn.selu

Computes the scaled exponential linear unit: scale * alpha * (exp(features) - 1) if features < 0, scale * features otherwise.

tf.nn.selu(
    features, name=None
)

Here alpha and scale are pre-defined constants (alpha = 1.67326324 and scale = 1.05070098).
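For example, assuming TensorFlow 2.x with eager execution (printed values rounded):

import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
print(tf.nn.selu(x))
# Negative inputs follow scale * alpha * (exp(x) - 1); non-negative
# inputs are multiplied by scale: approximately [-1.1113, 0.0, 1.0507].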

To be used together with the initializer tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN'). For correct dropout, use tf.contrib.nn.alpha_dropout.
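In TF 2.x, a minimal equivalent sketch uses tf.keras: the 'lecun_normal' initializer corresponds to variance scaling with factor=1.0 and mode='FAN_IN', and tf.keras.layers.AlphaDropout stands in for tf.contrib.nn.alpha_dropout (the layer sizes here are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([
    # 'lecun_normal' draws weights with variance 1 / fan_in, which is
    # what SELU's self-normalizing property assumes.
    tf.keras.layers.Dense(64, activation='selu',
                          kernel_initializer='lecun_normal'),
    # AlphaDropout preserves the activations' mean and variance,
    # unlike standard Dropout.
    tf.keras.layers.AlphaDropout(0.1),
    tf.keras.layers.Dense(10),
])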

See Self-Normalizing Neural Networks (Klambauer et al., 2017), https://arxiv.org/abs/1706.02515.

Args:

features: A Tensor. Must be one of the following types: half, bfloat16, float32, float64.
name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as features.