tf.contrib.layers.embed_sequence(
ids,
vocab_size=None,
embed_dim=None,
unique=False,
initializer=None,
regularizer=None,
trainable=True,
scope=None,
reuse=None
)
Defined in tensorflow/contrib/layers/python/layers/encoders.py.
Maps a sequence of symbols to a sequence of embeddings.
A typical use case is reusing embeddings between an encoder and decoder.
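For instance, a minimal sketch of a single call (the ids, vocab_size, and embed_dim values below are made up for illustration):

import tensorflow as tf

# Toy batch: 2 documents, each 3 symbols long (ids are hypothetical).
ids = tf.constant([[1, 2, 3], [4, 5, 0]], dtype=tf.int64)

# Embeds every symbol id; the result has shape [2, 3, 4].
embedded = tf.contrib.layers.embed_sequence(ids, vocab_size=10, embed_dim=4)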
Args:
ids: [batch_size, doc_length] Tensor of type int32 or int64 with symbol ids.
vocab_size: Integer number of symbols in vocabulary.
embed_dim: Integer number of dimensions for embedding matrix.
unique: If True, will first compute the unique set of indices, and then look up each embedding once, repeating them in the output as needed.
initializer: An initializer for the embeddings; if None, the default for the current scope is used.
regularizer: Optional regularizer for the embeddings.
trainable: If True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
scope: Optional string specifying the variable scope for the op; required if reuse=True.
reuse: If True, variables inside the op will be reused.
Returns:
Tensor of shape [batch_size, doc_length, embed_dim] with embedded sequences.
Raises:
ValueError: if embed_dim or vocab_size are not specified when reuse is None or False.
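As a rough sketch of the encoder/decoder reuse pattern (the scope name, vocabulary size, and embedding dimension below are assumptions, not part of the API): the second call reuses the embedding matrix created by the first, so vocab_size and embed_dim may be omitted without triggering the ValueError above.

import tensorflow as tf

# Hypothetical placeholders for encoder and decoder symbol ids.
encoder_ids = tf.placeholder(tf.int64, shape=[None, None])
decoder_ids = tf.placeholder(tf.int64, shape=[None, None])

# First call creates the embedding matrix under the named scope.
encoder_emb = tf.contrib.layers.embed_sequence(
    encoder_ids, vocab_size=10, embed_dim=4, scope='shared_embed')

# Second call reuses that matrix: scope is required with reuse=True,
# and vocab_size/embed_dim can be left out.
decoder_emb = tf.contrib.layers.embed_sequence(
    decoder_ids, scope='shared_embed', reuse=True)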