Class AttentionWrapper
Inherits From: RNNCell
Defined in tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py.
Wraps another RNNCell with attention.
__init__
__init__(
cell,
attention_mechanism,
attention_layer_size=None,
alignment_history=False,
cell_input_fn=None,
output_attention=True,
initial_cell_state=None,
name=None,
attention_layer=None
)
Construct the AttentionWrapper.
NOTE If you are using the BeamSearchDecoder with a cell wrapped in AttentionWrapper, then you must ensure that:
- The encoder output has been tiled to beam_width via tf.contrib.seq2seq.tile_batch (NOT tf.tile).
- The batch_size argument passed to the zero_state method of this wrapper is equal to true_batch_size * beam_width.
- The initial state created with zero_state above contains a cell_state value containing the properly tiled final state from the encoder.
An example:
tiled_encoder_outputs = tf.contrib.seq2seq.tile_batch(
    encoder_outputs, multiplier=beam_width)
tiled_encoder_final_state = tf.contrib.seq2seq.tile_batch(
    encoder_final_state, multiplier=beam_width)
tiled_sequence_length = tf.contrib.seq2seq.tile_batch(
    sequence_length, multiplier=beam_width)
attention_mechanism = MyFavoriteAttentionMechanism(
    num_units=attention_depth,
    memory=tiled_encoder_outputs,
    memory_sequence_length=tiled_sequence_length)
attention_cell = AttentionWrapper(cell, attention_mechanism, ...)
decoder_initial_state = attention_cell.zero_state(
    dtype, batch_size=true_batch_size * beam_width)
decoder_initial_state = decoder_initial_state.clone(
    cell_state=tiled_encoder_final_state)
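As a hedged continuation of the example above (not part of the original snippet), the tiled initial state is what you would typically hand to a BeamSearchDecoder; embedding, start_tokens, and end_token are assumed to exist in your model:
decoder = tf.contrib.seq2seq.BeamSearchDecoder(
    cell=attention_cell,
    embedding=embedding,        # assumption: your embedding matrix or callable
    start_tokens=start_tokens,  # assumption: [true_batch_size] int32 tensor
    end_token=end_token,        # assumption: scalar int32 id
    initial_state=decoder_initial_state,
    beam_width=beam_width)
outputs, final_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder)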
Args:
cell: An instance of RNNCell.
attention_mechanism: A list of AttentionMechanism instances or a single instance.
attention_layer_size: A list of Python integers or a single Python integer, the depth of the attention (output) layer(s). If None (default), use the context as attention at each time step. Otherwise, feed the context and cell output into the attention layer to generate attention at each time step. If attention_mechanism is a list, attention_layer_size must be a list of the same length. If attention_layer is set, this must be None.
alignment_history: Python boolean, whether to store alignment history from all time steps in the final output state (currently stored as a time major TensorArray on which you must call stack()).
cell_input_fn: (optional) A callable. The default is: lambda inputs, attention: array_ops.concat([inputs, attention], -1).
output_attention: Python bool. If True (default), the output at each time step is the attention value. This is the behavior of Luong-style attention mechanisms. If False, the output at each time step is the output of cell. This is the behavior of Bahdanau-style attention mechanisms. In both cases, the attention tensor is propagated to the next time step via the state and is used there. This flag only controls whether the attention mechanism is propagated up to the next cell in an RNN stack or to the top RNN output.
initial_cell_state: The initial state value to use for the cell when the user calls zero_state(). Note that if this value is provided now, and the user uses a batch_size argument of zero_state which does not match the batch size of initial_cell_state, proper behavior is not guaranteed.
name: Name to use when creating ops.
attention_layer: A list of tf.layers.Layer instances or a single tf.layers.Layer instance taking the context and cell output as inputs to generate attention at each time step. If None (default), use the context as attention at each time step. If attention_mechanism is a list, attention_layer must be a list of the same length. If attention_layer_size is set, this must be None.
Raises:
TypeError: attention_layer_size is not None and (attention_mechanism is a list but attention_layer_size is not; or vice versa).
ValueError: if attention_layer_size is not None, attention_mechanism is a list, and its length does not match that of attention_layer_size; if attention_layer_size and attention_layer are set simultaneously.
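Putting the arguments above together, here is a minimal construction sketch for the common single-mechanism, non-beam-search case; num_units, encoder_outputs, and source_sequence_length are assumed to come from your own encoder:
import tensorflow as tf

# Wrap a plain LSTM cell with Bahdanau attention over the encoder memory.
cell = tf.nn.rnn_cell.LSTMCell(num_units)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=num_units,
    memory=encoder_outputs,
    memory_sequence_length=source_sequence_length)
attention_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell,
    attention_mechanism,
    attention_layer_size=num_units,
    alignment_history=True)  # store per-step alignments in the final state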
Properties
activity_regularizer
Optional regularizer function for the output of this layer.
dtype
graph
input
Retrieves the input tensor(s) of a layer.
Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.
Returns:
Input tensor or list of input tensors.
Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.
input_mask
Retrieves the input mask tensor(s) of a layer.
Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.
Returns:
Input mask tensor (potentially None) or list of input mask tensors.
Raises:
AttributeError: if the layer is connected to more than one incoming layer.
input_shape
Retrieves the input shape(s) of a layer.
Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.
Returns:
Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).
Raises:
AttributeError: if the layer has no defined input_shape.
RuntimeError: If called in Eager mode.
losses
Losses which are associated with this Layer.
Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.
Returns:
A list of tensors.
name
non_trainable_variables
non_trainable_weights
output
Retrieves the output tensor(s) of a layer.
Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.
Returns:
Output tensor or list of output tensors.
Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: If called in Eager mode.
output_mask
Retrieves the output mask tensor(s) of a layer.
Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.
Returns:
Output mask tensor (potentially None) or list of output mask tensors.
Raises:
AttributeError: if the layer is connected to more than one incoming layer.
output_shape
Retrieves the output shape(s) of a layer.
Only applicable if the layer has one output, or if all outputs have the same shape.
Returns:
Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).
Raises:
AttributeError: if the layer has no defined output shape.
RuntimeError: If called in Eager mode.
output_size
Integer or TensorShape: size of outputs produced by this cell.
scope_name
state_size
The state_size property of AttentionWrapper.
Returns:
An AttentionWrapperState tuple containing shapes used by this object.
trainable_variables
trainable_weights
updates
variables
Returns the list of all layer variables/weights.
Alias of self.weights.
Returns:
A list of variables.
weights
Returns the list of all layer variables/weights.
Returns:
A list of variables.
Methods
tf.contrib.seq2seq.AttentionWrapper.__call__
__call__(
inputs,
state,
scope=None
)
Run this RNN cell on inputs, starting from the given state.
Args:
inputs: 2-D tensor with shape [batch_size, input_size].
state: if self.state_size is an integer, this should be a 2-D Tensor with shape [batch_size, self.state_size]. Otherwise, if self.state_size is a tuple of integers, this should be a tuple with shapes [batch_size, s] for s in self.state_size.
scope: VariableScope for the created subgraph; defaults to class name.
Returns:
A pair containing:
- Output: A 2-D tensor with shape [batch_size, self.output_size].
- New state: Either a single 2-D tensor, or a tuple of tensors matching the arity and shapes of state.
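As a hedged illustration of this contract, one manual decoding step with the wrapper sketched earlier might look like this; step_inputs and batch_size are assumptions, not defined by this class:
# Run a single time step through the attention-wrapped cell.
state = attention_cell.zero_state(batch_size, tf.float32)
output, state = attention_cell(step_inputs, state)
# output: [batch_size, attention_cell.output_size]
# state: an AttentionWrapperState carrying the attention to the next step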
tf.contrib.seq2seq.AttentionWrapper.__deepcopy__
__deepcopy__(memo)
tf.contrib.seq2seq.AttentionWrapper.__setattr__
__setattr__(
name,
value
)
Implement setattr(self, name, value).
tf.contrib.seq2seq.AttentionWrapper.apply
apply(
inputs,
*args,
**kwargs
)
Apply the layer on an input.
This is an alias of self.__call__.
Arguments:
inputs: Input tensor(s).
*args: additional positional arguments to be passed to self.call.
**kwargs: additional keyword arguments to be passed to self.call.
Returns:
Output tensor(s).
tf.contrib.seq2seq.AttentionWrapper.build
build(_)
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
Arguments:
input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
tf.contrib.seq2seq.AttentionWrapper.compute_mask
compute_mask(
inputs,
mask=None
)
Computes an output mask tensor.
Arguments:
inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.
Returns:
None or a tensor (or list of tensors, one per output tensor of the layer).
tf.contrib.seq2seq.AttentionWrapper.compute_output_shape
compute_output_shape(input_shape)
Computes the output shape of the layer.
Assumes that the layer will be built to match the input shape provided.
Arguments:
input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.
Returns:
An output shape tuple.
tf.contrib.seq2seq.AttentionWrapper.count_params
count_params()
Count the total number of scalars composing the weights.
Returns:
An integer count.
Raises:
ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).
tf.contrib.seq2seq.AttentionWrapper.from_config
from_config(
cls,
config
)
Creates a layer from its config.
This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).
Arguments:
config: A Python dictionary, typically the output of get_config.
Returns:
A layer instance.
tf.contrib.seq2seq.AttentionWrapper.get_config
get_config()
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
Returns:
Python dictionary.
tf.contrib.seq2seq.AttentionWrapper.get_initial_state
get_initial_state(
inputs=None,
batch_size=None,
dtype=None
)
tf.contrib.seq2seq.AttentionWrapper.get_input_at
get_input_at(node_index)
Retrieves the input tensor(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.
Returns:
A tensor (or list of tensors if the layer has multiple inputs).
Raises:
RuntimeError: If called in Eager mode.
tf.contrib.seq2seq.AttentionWrapper.get_input_mask_at
get_input_mask_at(node_index)
Retrieves the input mask tensor(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.
Returns:
A mask tensor (or list of tensors if the layer has multiple inputs).
tf.contrib.seq2seq.AttentionWrapper.get_input_shape_at
get_input_shape_at(node_index)
Retrieves the input shape(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.
Returns:
A shape tuple (or list of shape tuples if the layer has multiple inputs).
Raises:
RuntimeError: If called in Eager mode.
tf.contrib.seq2seq.AttentionWrapper.get_losses_for
get_losses_for(inputs)
Retrieves losses relevant to a specific set of inputs.
Arguments:
inputs: Input tensor or list/tuple of input tensors.
Returns:
List of loss tensors of the layer that depend on inputs.
Raises:
RuntimeError: If called in Eager mode.
tf.contrib.seq2seq.AttentionWrapper.get_output_at
get_output_at(node_index)
Retrieves the output tensor(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.
Returns:
A tensor (or list of tensors if the layer has multiple outputs).
Raises:
RuntimeError: If called in Eager mode.
tf.contrib.seq2seq.AttentionWrapper.get_output_mask_at
get_output_mask_at(node_index)
Retrieves the output mask tensor(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.
Returns:
A mask tensor (or list of tensors if the layer has multiple outputs).
tf.contrib.seq2seq.AttentionWrapper.get_output_shape_at
get_output_shape_at(node_index)
Retrieves the output shape(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.
Returns:
A shape tuple (or list of shape tuples if the layer has multiple outputs).
Raises:
RuntimeError: If called in Eager mode.
tf.contrib.seq2seq.AttentionWrapper.get_updates_for
get_updates_for(inputs)
Retrieves updates relevant to a specific set of inputs.
Arguments:
inputs: Input tensor or list/tuple of input tensors.
Returns:
List of update ops of the layer that depend on inputs.
Raises:
RuntimeError: If called in Eager mode.
tf.contrib.seq2seq.AttentionWrapper.get_weights
get_weights()
Returns the current weights of the layer.
Returns:
Weights values as a list of numpy arrays.
tf.contrib.seq2seq.AttentionWrapper.set_weights
set_weights(weights)
Sets the weights of the layer, from Numpy arrays.
Arguments:
weights: a list of Numpy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).
Raises:
ValueError: If the provided weights list does not match the layer's specifications.
tf.contrib.seq2seq.AttentionWrapper.zero_state
zero_state(
batch_size,
dtype
)
Return an initial (zero) state tuple for this AttentionWrapper.
NOTE Please see the initializer documentation for details of how to call zero_state if using an AttentionWrapper with a BeamSearchDecoder.
Args:
batch_size: 0D integer tensor: the batch size.
dtype: The internal state data type.
Returns:
An AttentionWrapperState tuple containing zeroed out tensors and, possibly, empty TensorArray objects.
Raises:
ValueError: (or, possibly at runtime, InvalidArgument), if batch_size does not match the output size of the encoder passed to the wrapper object at initialization time.
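If alignment_history=True was passed to the constructor, the final AttentionWrapperState produced by decoding carries the per-step alignments as a time-major TensorArray; a minimal sketch of recovering them, assuming final_state is the second result of tf.contrib.seq2seq.dynamic_decode:
# Stack the stored alignments into a dense tensor (illustrative sketch).
alignments = final_state.alignment_history.stack()
# alignments: [max_time, batch_size, memory_max_time], time major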