Library Documentation
This documentation covers Theano module by module. It is suited to finding the Types and Ops that you can use to build and compile expression graphs.
- tensor – Types and Ops for Symbolic numpy
- gradient – Symbolic Differentiation
- config – Theano Configuration
- printing – Graph Printing and Symbolic Print Statement
- d3viz – d3viz: Interactive visualization of Theano compute graphs
- compile – Transforming Expression Graphs to Functions
- sparse – Symbolic Sparse Matrices
- sparse – Sparse Op
- sparse.sandbox – Sparse Op Sandbox
- scalar – Symbolic Scalar Types, Ops [doc TODO]
- gof – Theano Internals [doc TODO]
- misc.pkl_utils – Tools for serialization.
- scan – Looping in Theano
- sandbox – Experimental Code
- typed_list – Typed List
There are also some top-level imports that you might find more convenient:
- theano.function(...)
  Alias for function.function()
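  For example, a minimal sketch of compiling a small graph with this top-level alias (variable names here are illustrative):

  >>> import theano
  >>> import theano.tensor as T
  >>> x, y = T.dscalars('x', 'y')
  >>> f = theano.function([x, y], x + y)   # compile the symbolic graph into a callable
  >>> f(2, 3)                              # evaluates to 5.0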
- theano.function_dump(...)
  Alias for theano.compile.function.function_dump()
- theano.shared(...)
  Alias for theano.compile.sharedvalue.shared()
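  A minimal sketch of creating a shared variable and updating it through a compiled function (names are illustrative):

  >>> import theano
  >>> import theano.tensor as T
  >>> state = theano.shared(0.0, name='state')   # wrap a Python value in a shared variable
  >>> inc = T.dscalar('inc')
  >>> accumulate = theano.function([inc], state, updates=[(state, state + inc)])
  >>> accumulate(1.0)                            # returns the previous value of state
  >>> state.get_value()                          # the shared storage now holds 1.0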
- class theano.Param
  Alias for function.Param
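  A sketch of using Param to give an input a default value; the default keyword is assumed from the Param interface (later Theano versions use theano.In for this):

  >>> import theano
  >>> import theano.tensor as T
  >>> x, y = T.dscalars('x', 'y')
  >>> f = theano.function([x, theano.Param(y, default=1)], x + y)
  >>> f(2)      # y falls back to its default value of 1
  >>> f(2, 4)   # y can still be passed explicitly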
- theano.dot(x, y)
  Works like tensor.dot() for both sparse and dense matrix products
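  For instance, a minimal sketch of a sparse-by-dense product (theano.sparse requires scipy; names are illustrative):

  >>> import theano
  >>> import theano.tensor as T
  >>> import theano.sparse as sparse
  >>> a = sparse.csr_matrix('a')     # symbolic sparse matrix
  >>> b = T.dmatrix('b')             # symbolic dense matrix
  >>> c = theano.dot(a, b)           # dispatches to the appropriate (sparse) dot
  >>> f = theano.function([a, b], c)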
- theano.clone(output, replace=None, strict=True, share_inputs=True, copy_inputs=<object object>)
  Function that allows replacing subgraphs of a computational graph. It returns a copy of the initial subgraph with the corresponding substitutions.

  Parameters:
  - output (Theano Variables (or Theano expressions)) – Theano expression that represents the computational graph.
  - replace (dict) – Dictionary describing which subgraphs should be replaced by what.
  - share_inputs (bool) – If True, use the same inputs (and shared variables) as the original graph. If False, clone them. Note that cloned shared variables still use the same underlying storage, so they will always have the same value.
  - copy_inputs – Deprecated, use share_inputs.
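  A minimal sketch of substituting one input variable for another in an existing graph (names are illustrative):

  >>> import theano
  >>> import theano.tensor as T
  >>> x = T.vector('x')
  >>> y = x ** 2 + 1
  >>> z = T.vector('z')
  >>> y_of_z = theano.clone(y, replace={x: z})   # copy of the graph with x replaced by z
  >>> f = theano.function([z], y_of_z)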
- theano.sparse_grad(var)
  This function returns a new variable whose gradient will be stored in a sparse format instead of a dense one.
  Currently only variables created by AdvancedSubtensor1 are supported, i.e. a_tensor_var[an_int_vector].
  New in version 0.6rc4.
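  A minimal sketch of marking an advanced subtensor so that its gradient uses sparse storage (the cost here is illustrative):

  >>> import theano
  >>> import theano.tensor as T
  >>> W = T.matrix('W')
  >>> idx = T.ivector('idx')
  >>> rows = theano.sparse_grad(W[idx])   # W[idx] is an AdvancedSubtensor1 result
  >>> cost = rows.sum()
  >>> g = theano.grad(cost, W)            # the gradient w.r.t. W is stored in a sparse format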