OpenCV 4.1.0
Open Source Computer Vision

cv::dnn::RNNLayer Class Reference

Classical recurrent layer.
#include <opencv2/dnn/all_layers.hpp>

Public Member Functions

virtual void | setProduceHiddenOutput (bool produce=false)=0
  If this flag is set to true then the layer will produce \( h_t \) as its second output.
virtual void | setWeights (const Mat &Wxh, const Mat &bh, const Mat &Whh, const Mat &Who, const Mat &bo)=0
  Sets up learned weights.

Public Member Functions inherited from cv::dnn::Layer

Layer ()
Layer (const LayerParams &params)
  Initializes only name, type and blobs fields.
virtual | ~Layer ()
virtual void | applyHalideScheduler (Ptr< BackendNode > &node, const std::vector< Mat * > &inputs, const std::vector< Mat > &outputs, int targetId) const
  Automatic Halide scheduling based on layer hyper-parameters.
virtual void | finalize (const std::vector< Mat * > &input, std::vector< Mat > &output)
  Computes and sets internal parameters according to inputs, outputs and blobs.
virtual void | finalize (InputArrayOfArrays inputs, OutputArrayOfArrays outputs)
  Computes and sets internal parameters according to inputs, outputs and blobs.
void | finalize (const std::vector< Mat > &inputs, std::vector< Mat > &outputs)
  This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
std::vector< Mat > | finalize (const std::vector< Mat > &inputs)
  This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
virtual void | forward (std::vector< Mat * > &input, std::vector< Mat > &output, std::vector< Mat > &internals)
  Given the input blobs, computes the output blobs.
virtual void | forward (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
  Given the input blobs, computes the output blobs.
void | forward_fallback (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
  Given the input blobs, computes the output blobs.
virtual int64 | getFLOPS (const std::vector< MatShape > &inputs, const std::vector< MatShape > &outputs) const
virtual bool | getMemoryShapes (const std::vector< MatShape > &inputs, const int requiredOutputs, std::vector< MatShape > &outputs, std::vector< MatShape > &internals) const
virtual void | getScaleShift (Mat &scale, Mat &shift) const
  Returns parameters of layers with channel-wise multiplication and addition.
virtual Ptr< BackendNode > | initHalide (const std::vector< Ptr< BackendWrapper > > &inputs)
  Returns Halide backend node.
virtual Ptr< BackendNode > | initInfEngine (const std::vector< Ptr< BackendWrapper > > &inputs)
virtual Ptr< BackendNode > | initVkCom (const std::vector< Ptr< BackendWrapper > > &inputs)
virtual int | inputNameToIndex (String inputName)
  Returns index of input blob in the input array.
virtual int | outputNameToIndex (const String &outputName)
  Returns index of output blob in the output array.
void | run (const std::vector< Mat > &inputs, std::vector< Mat > &outputs, std::vector< Mat > &internals)
  Allocates the layer and computes its output.
virtual bool | setActivation (const Ptr< ActivationLayer > &layer)
  Tries to attach the subsequent activation layer to this layer, i.e. performs layer fusion in this partial case.
void | setParamsFrom (const LayerParams &params)
  Initializes only name, type and blobs fields.
virtual bool | supportBackend (int backendId)
  Asks the layer whether it supports a specific backend for its computations.
virtual Ptr< BackendNode > | tryAttach (const Ptr< BackendNode > &node)
  Implements layer fusion.
virtual bool | tryFuse (Ptr< Layer > &top)
  Tries to fuse the current layer with the next one.
virtual void | unsetAttached ()
  "Detaches" all the layers attached to this particular layer.

Public Member Functions inherited from cv::Algorithm

Algorithm ()
virtual | ~Algorithm ()
virtual void | clear ()
  Clears the algorithm state.
virtual bool | empty () const
  Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).
virtual String | getDefaultName () const
virtual void | read (const FileNode &fn)
  Reads algorithm parameters from a file storage.
virtual void | save (const String &filename) const
virtual void | write (FileStorage &fs) const
  Stores algorithm parameters in a file storage.
void | write (const Ptr< FileStorage > &fs, const String &name=String()) const
  Simplified API for language bindings. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Static Public Member Functions

static Ptr< RNNLayer > | create (const LayerParams &params)
  Creates an instance of RNNLayer.

Additional Inherited Members

Public Attributes inherited from cv::dnn::Layer

std::vector< Mat > | blobs
  List of learned parameters; they must be stored here so that they can be read using Net::getParam().
String | name
  Name of the layer instance; can be used for logging or other internal purposes.
int | preferableTarget
  Preferred target for layer forwarding.
String | type
  Type name which was used for creating the layer by the layer factory.

Protected Member Functions inherited from cv::Algorithm

void | writeFormat (FileStorage &fs) const

Detailed Description

Classical recurrent layer.

Accepts two inputs \(x_t\) and \(h_{t-1}\) and computes two outputs \(o_t\) and \(h_t\).

input[0] should have shape [T, N, data_dims], where T and N are the number of timestamps and the number of independent samples of \(x_t\), respectively.

output[0] will have shape [T, N, \(N_o\)], where \(N_o\) is the number of rows in the \( W_{ho} \) matrix.

If setProduceHiddenOutput() is set to true then output[1] will contain a Mat with shape [T, N, \(N_h\)], where \(N_h\) is the number of rows in the \( W_{hh} \) matrix.
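
A minimal standalone usage sketch (not part of the original reference): the sizes numX, numH, numO, T, N and the use of an empty LayerParams are illustrative assumptions. It creates the layer, loads hand-made weights, asks getMemoryShapes() for the output and internal buffers that a Net would normally allocate, then calls finalize() and forward() on a random input of shape [T, N, numX]; this follows the same allocate/finalize/forward pattern that OpenCV's own layer tests use for running a single layer outside a Net.

// A minimal sketch, not from the official docs: running cv::dnn::RNNLayer standalone.
// The sizes below (numX, numH, numO, T, N) are illustrative assumptions.
#include <opencv2/core.hpp>
#include <opencv2/dnn/all_layers.hpp>
#include <opencv2/dnn/shape_utils.hpp>   // cv::dnn::shape()
#include <vector>

int main()
{
    using namespace cv;
    using namespace cv::dnn;

    const int numX = 16;          // size of one input vector x_t (product of data_dims)
    const int numH = 32;          // hidden-state size: rows of W_hh
    const int numO = 8;           // output size: rows of W_ho
    const int T = 10, N = 4;      // timestamps and independent samples

    // Hand-made "learned" weights with the shapes the recurrence implies.
    Mat Wxh(numH, numX, CV_32F), Whh(numH, numH, CV_32F), bh(numH, 1, CV_32F);
    Mat Who(numO, numH, CV_32F), bo(numO, 1, CV_32F);
    randu(Wxh, -0.1, 0.1); randu(Whh, -0.1, 0.1); randu(bh, -0.1, 0.1);
    randu(Who, -0.1, 0.1); randu(bo, -0.1, 0.1);

    LayerParams params;                            // assumption: no extra params are needed
    Ptr<RNNLayer> rnn = RNNLayer::create(params);
    rnn->setWeights(Wxh, bh, Whh, Who, bo);
    rnn->setProduceHiddenOutput(true);             // also emit h_t as output[1]

    // input[0] has shape [T, N, data_dims].
    int inSize[] = {T, N, numX};
    Mat x(3, inSize, CV_32F);
    randu(x, -1.0, 1.0);

    // Standalone use: query the layer for its output/internal shapes, allocate them,
    // then finalize and forward (a Net normally does this bookkeeping itself).
    std::vector<Mat> inputs = {x}, outputs, internals;
    std::vector<MatShape> inShapes = {shape(x)}, outShapes, internalShapes;
    rnn->getMemoryShapes(inShapes, 0, outShapes, internalShapes);
    for (const MatShape& s : outShapes)      outputs.push_back(Mat(s, CV_32F));
    for (const MatShape& s : internalShapes) internals.push_back(Mat(s, CV_32F));

    rnn->finalize(inputs, outputs);
    rnn->forward(inputs, outputs, internals);

    // outputs[0] has shape [T, N, numO]; outputs[1] has shape [T, N, numH].
    return 0;
}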

create()

static Ptr< RNNLayer > cv::dnn::RNNLayer::create (const LayerParams &params)   [static]

Creates an instance of RNNLayer.

setProduceHiddenOutput()

virtual void cv::dnn::RNNLayer::setProduceHiddenOutput (bool produce=false)   [pure virtual]

If this flag is set to true then the layer will produce \( h_t \) as its second output.
The second output has the same layout as the first one: shape [T, N, \(N_h\)] (see the class description above).

setWeights()

virtual void cv::dnn::RNNLayer::setWeights (const Mat &Wxh, const Mat &bh, const Mat &Whh, const Mat &Who, const Mat &bo)   [pure virtual]

Sets up the learned weights.

The recurrent layer's behavior at each step is defined by the current input \( x_t \), the previous state \( h_{t-1} \) and the learned weights as follows:

\begin{eqnarray*} h_t &=& \tanh(W_{hh} h_{t-1} + W_{xh} x_t + b_h), \\ o_t &=& \tanh(W_{ho} h_t + b_o). \end{eqnarray*}

Parameters
Wxh | the \( W_{xh} \) matrix
bh | the \( b_{h} \) vector
Whh | the \( W_{hh} \) matrix
Who | the \( W_{ho} \) matrix
bo | the \( b_{o} \) vector
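
Note (derived from the recurrence above and the shape remarks in the class description, not stated explicitly in the reference): if \(x_t\) has \(N_x\) elements, \(h_t\) has \(N_h\) elements and \(o_t\) has \(N_o\) elements, the expected shapes are \(W_{xh}\): \(N_h \times N_x\), \(b_h\): \(N_h\) elements, \(W_{hh}\): \(N_h \times N_h\), \(W_{ho}\): \(N_o \times N_h\), and \(b_o\): \(N_o\) elements. These are the shapes used for the hand-made weight matrices in the sketch above.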