tf.compat.v1.nn.static_rnn

Creates a recurrent neural network specified by RNNCell cell. (deprecated)

tf.compat.v1.nn.static_rnn(
    cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None
)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use keras.layers.RNN(cell, unroll=True), which is equivalent to this API.
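As a rough illustration of that migration path, the sketch below wraps a tf.keras.layers.LSTMCell in keras.layers.RNN with unroll=True; the cell choice and tensor shapes are made up for this example:

import tensorflow as tf

num_units = 16
cell = tf.keras.layers.LSTMCell(num_units)

# unroll=True statically unrolls the time loop, mirroring static_rnn;
# return_sequences/return_state recover the per-step outputs and final state.
layer = tf.keras.layers.RNN(cell, unroll=True,
                            return_sequences=True, return_state=True)

x = tf.random.normal([4, 5, 8])   # [batch, time, features]
outputs, h, c = layer(x)          # outputs: [4, 5, 16]; (h, c): final LSTM state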

The simplest form of RNN generated is:

state = cell.zero_state(...)
outputs = []
for input_ in inputs:
  output, state = cell(input_, state)
  outputs.append(output)
return (outputs, state)
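For reference, a minimal sketch of calling static_rnn itself (the BasicLSTMCell, placeholder setup, and shapes below are illustrative choices, not part of the API):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # static_rnn builds a TF1-style graph

batch_size, time_steps, input_size, num_units = 4, 5, 8, 16
cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(num_units)

# inputs is a length-T Python list of [batch_size, input_size] tensors.
inputs = [tf.compat.v1.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(time_steps)]

outputs, state = tf.compat.v1.nn.static_rnn(cell, inputs, dtype=tf.float32)
# outputs: list of time_steps tensors, each [batch_size, num_units]
# state:   the cell's final state (an LSTMStateTuple for BasicLSTMCell)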

However, a few other options are available:

An initial state can be provided.

If the sequence_length vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and it properly propagates the state at an example's sequence length through to the final state output.

The dynamic calculation performed is, at time t for batch row b,

(output, state)(b, t) =
    (t >= sequence_length(b))
      ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
      : cell(input(b, t), state(b, t - 1))
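A sketch of the same call with an explicit initial_state and a sequence_length vector (again with made-up shapes); rows past their length emit zero outputs and carry their last valid state through to the returned final state:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

batch_size, time_steps, input_size, num_units = 4, 5, 8, 16
cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(num_units)
inputs = [tf.compat.v1.placeholder(tf.float32, [batch_size, input_size])
          for _ in range(time_steps)]

# Per-example lengths, e.g. [5, 3, 4, 2]: steps at or beyond an example's
# length output zeros and keep its state from the last valid step.
sequence_length = tf.compat.v1.placeholder(tf.int32, [batch_size])
initial_state = cell.zero_state(batch_size, tf.float32)

outputs, state = tf.compat.v1.nn.static_rnn(
    cell, inputs,
    initial_state=initial_state,
    sequence_length=sequence_length)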

Args:

  cell: An instance of RNNCell.
  inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements.
  initial_state: (optional) An initial state for the RNN. If cell.state_size is an integer, this must be a Tensor of appropriate type and shape [batch_size, cell.state_size]. If cell.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell.state_size.
  dtype: (optional) The data type for the initial state and expected output. Required if initial_state is not provided or the RNN state has a heterogeneous dtype.
  sequence_length: Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) of size [batch_size].
  scope: VariableScope for the created subgraph; defaults to "rnn".

Returns:

A pair (outputs, state) where:

  outputs: a length T list of outputs (one for each input), or a nested tuple of such elements.
  state: the final state.

Raises:

  TypeError: If cell is not an instance of RNNCell.
  ValueError: If inputs is None or an empty list, or if the input depth (column size) cannot be inferred from inputs via shape inference.