OpenCV  4.1.0
Open Source Computer Vision
cv::dnn::Net Class Reference

This class allows creating and manipulating comprehensive artificial neural networks. More...

#include <opencv2/dnn/dnn.hpp>

Public Types

typedef DictValue LayerId
 Container for strings and integers.
 

Public Member Functions

 Net ()
 Default constructor.
 
 ~Net ()
 The destructor frees the net only when there are no more references to it.
 
int addLayer (const String &name, const String &type, LayerParams &params)
 Adds a new layer to the net.
 
int addLayerToPrev (const String &name, const String &type, LayerParams &params)
 Adds a new layer and connects its first input to the first output of the previously added layer.
 
void connect (String outPin, String inpPin)
 Connects an output of the first layer to an input of the second layer.
 
void connect (int outLayerId, int outNum, int inpLayerId, int inpNum)
 Connects the #outNum output of the first layer to the #inpNum input of the second layer.
 
bool empty () const
 
void enableFusion (bool fusion)
 Enables or disables layer fusion in the network.
 
Mat forward (const String &outputName=String())
 Runs a forward pass to compute the output of the layer with name outputName.
 
void forward (OutputArrayOfArrays outputBlobs, const String &outputName=String())
 Runs a forward pass to compute the output of the layer with name outputName.
 
void forward (OutputArrayOfArrays outputBlobs, const std::vector< String > &outBlobNames)
 Runs a forward pass to compute the outputs of the layers listed in outBlobNames.
 
void forward (std::vector< std::vector< Mat > > &outputBlobs, const std::vector< String > &outBlobNames)
 Runs a forward pass to compute the outputs of the layers listed in outBlobNames.
 
int64 getFLOPS (const std::vector< MatShape > &netInputShapes) const
 Computes FLOPs for the whole loaded model with the specified input shapes.
 
int64 getFLOPS (const MatShape &netInputShape) const
 
int64 getFLOPS (const int layerId, const std::vector< MatShape > &netInputShapes) const
 
int64 getFLOPS (const int layerId, const MatShape &netInputShape) const
 
Ptr< Layer > getLayer (LayerId layerId)
 Returns a pointer to the layer with the specified id or name that the network uses.
 
int getLayerId (const String &layer)
 Converts the string name of a layer to its integer identifier.
 
std::vector< Ptr< Layer > > getLayerInputs (LayerId layerId)
 Returns pointers to the input layers of a specific layer.
 
std::vector< String > getLayerNames () const
 
int getLayersCount (const String &layerType) const
 Returns the count of layers of the specified type.
 
void getLayerShapes (const MatShape &netInputShape, const int layerId, std::vector< MatShape > &inLayerShapes, std::vector< MatShape > &outLayerShapes) const
 Returns input and output shapes for the layer with the specified id in the loaded model; preliminary inferencing isn't necessary.
 
void getLayerShapes (const std::vector< MatShape > &netInputShapes, const int layerId, std::vector< MatShape > &inLayerShapes, std::vector< MatShape > &outLayerShapes) const
 
void getLayersShapes (const std::vector< MatShape > &netInputShapes, std::vector< int > &layersIds, std::vector< std::vector< MatShape > > &inLayersShapes, std::vector< std::vector< MatShape > > &outLayersShapes) const
 Returns input and output shapes for all layers in the loaded model; preliminary inferencing isn't necessary.
 
void getLayersShapes (const MatShape &netInputShape, std::vector< int > &layersIds, std::vector< std::vector< MatShape > > &inLayersShapes, std::vector< std::vector< MatShape > > &outLayersShapes) const
 
void getLayerTypes (std::vector< String > &layersTypes) const
 Returns the list of layer types used in the model.
 
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, size_t &weights, size_t &blobs) const
 Computes the number of bytes required to store all weights and intermediate blobs for the model.
 
void getMemoryConsumption (const MatShape &netInputShape, size_t &weights, size_t &blobs) const
 
void getMemoryConsumption (const int layerId, const std::vector< MatShape > &netInputShapes, size_t &weights, size_t &blobs) const
 
void getMemoryConsumption (const int layerId, const MatShape &netInputShape, size_t &weights, size_t &blobs) const
 
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, std::vector< int > &layerIds, std::vector< size_t > &weights, std::vector< size_t > &blobs) const
 Computes the number of bytes required to store all weights and intermediate blobs for each layer.
 
void getMemoryConsumption (const MatShape &netInputShape, std::vector< int > &layerIds, std::vector< size_t > &weights, std::vector< size_t > &blobs) const
 
Mat getParam (LayerId layer, int numParam=0)
 Returns parameter blob of the layer.
 
int64 getPerfProfile (std::vector< double > &timings)
 Returns overall time for inference and timings (in ticks) for layers. Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for the skipped layers.
 
std::vector< int > getUnconnectedOutLayers () const
 Returns indexes of layers with unconnected outputs.
 
std::vector< String > getUnconnectedOutLayersNames () const
 Returns names of layers with unconnected outputs.
 
void setHalideScheduler (const String &scheduler)
 Compiles Halide layers.
 
void setInput (InputArray blob, const String &name="", double scalefactor=1.0, const Scalar &mean=Scalar())
 Sets the new input value for the network.
 
void setInputsNames (const std::vector< String > &inputBlobNames)
 Sets the output names of the network input pseudo-layer.
 
void setParam (LayerId layer, int numParam, const Mat &blob)
 Sets the new value for the learned parameter of the layer.
 
void setPreferableBackend (int backendId)
 Asks the network to use a specific computation backend where it is supported.
 
void setPreferableTarget (int targetId)
 Asks the network to run computations on a specific target device.
 

Static Public Member Functions

static Net readFromModelOptimizer (const String &xml, const String &bin)
 Creates a network from Intel's Model Optimizer intermediate representation (IR).
 

Detailed Description

This class allows creating and manipulating comprehensive artificial neural networks.

A neural network is represented as a directed acyclic graph (DAG), whose vertices are Layer instances and whose edges specify the relationships between layer inputs and outputs.

Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.

This class supports reference counting of its instances, i.e. copies point to the same instance.
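A minimal end-to-end sketch of the class in use. The model and image file names, the 224x224 input size and the mean values are placeholders, not part of this reference:

    #include <opencv2/dnn.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        // Load a hypothetical Caffe model and run one inference.
        cv::dnn::Net net = cv::dnn::readNetFromCaffe("model.prototxt", "model.caffemodel");
        cv::Mat img  = cv::imread("image.jpg");
        cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224),
                                              cv::Scalar(104, 117, 123));
        net.setInput(blob);                 // feed the network input pseudo-layer
        cv::Mat prob = net.forward();       // output blob of the last layer
        return 0;
    }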

Examples:
samples/dnn/colorization.cpp, samples/dnn/openpose.cpp, and samples/dnn/text_detection.cpp.

Member Typedef Documentation

Container for strings and integers.

Constructor & Destructor Documentation

cv::dnn::Net::Net ( )
Python:
<dnn_Net object>=cv.dnn_Net()

Default constructor.

cv::dnn::Net::~Net ( )

The destructor frees the net only when there are no more references to it.

Member Function Documentation

int cv::dnn::Net::addLayer ( const String name,
const String type,
LayerParams params 
)

Adds a new layer to the net.

Parameters
name: unique name of the added layer.
type: typename of the added layer (the type must be registered in LayerRegister).
params: parameters which will be used to initialize the created layer.
Returns
unique identifier of the created layer, or -1 on failure.
int cv::dnn::Net::addLayerToPrev ( const String name,
const String type,
LayerParams params 
)

Adds a new layer and connects its first input to the first output of the previously added layer.

See Also
addLayer()
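A minimal sketch of building a net programmatically; a parameter-free ReLU layer is used so that no weight blobs are needed, and the layer name "relu1" is arbitrary:

    cv::dnn::Net net;
    cv::dnn::LayerParams lp;
    lp.name = "relu1";
    lp.type = "ReLU";                       // type must be registered in LayerRegister
    int id = net.addLayerToPrev("relu1", "ReLU", lp);
    CV_Assert(id != -1);                    // -1 signals failure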
void cv::dnn::Net::connect ( String  outPin,
String  inpPin 
)
Python:
None=cv.dnn_Net.connect(outPin, inpPin)

Connects an output of the first layer to an input of the second layer.

Parameters
outPin: descriptor of the first layer output.
inpPin: descriptor of the second layer input.

Descriptors have the following template <layer_name>[.input_number]:

  • the first part of the template, layer_name, is the string name of the added layer. If this part is empty then the network input pseudo-layer will be used;
  • the second optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted then the first layer input will be used.
See Also
setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()
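For example, assuming two layers named "conv1" and "relu1" have already been added (hypothetical names), either descriptor form wires them by name:

    net.connect("conv1", "relu1");          // first output of conv1 -> first input of relu1
    net.connect("conv1.0", "relu1.0");      // the same connection with explicit numbers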
void cv::dnn::Net::connect ( int  outLayerId,
int  outNum,
int  inpLayerId,
int  inpNum 
)
Python:
None=cv.dnn_Net.connect(outPin, inpPin)

Connects the #outNum output of the first layer to the #inpNum input of the second layer.

Parameters
outLayerId: identifier of the first layer
outNum: number of the first layer output
inpLayerId: identifier of the second layer
inpNum: number of the second layer input
bool cv::dnn::Net::empty ( ) const
Python:
retval=cv.dnn_Net.empty()

Returns true if there are no layers in the network.

void cv::dnn::Net::enableFusion ( bool  fusion)
Python:
None=cv.dnn_Net.enableFusion(fusion)

Enables or disables layer fusion in the network.

Parameters
fusion: true to enable the fusion, false to disable. The fusion is enabled by default.
Mat cv::dnn::Net::forward ( const String outputName = String())
Python:
retval=cv.dnn_Net.forward([, outputName])
outputBlobs=cv.dnn_Net.forward([, outputBlobs[, outputName]])
outputBlobs=cv.dnn_Net.forward(outBlobNames[, outputBlobs])
outputBlobs=cv.dnn_Net.forwardAndRetrieve(outBlobNames)

Runs a forward pass to compute the output of the layer with name outputName.

Parameters
outputName: name of the layer whose output is needed
Returns
blob for the first output of the specified layer.

By default, runs a forward pass for the whole network.

Examples:
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.
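A short sketch of the single-output form; "prob" is a placeholder layer name and the input blob is assumed to be set already:

    net.setInput(blob);
    cv::Mat probBlob = net.forward("prob"); // first output blob of the layer named "prob"
    cv::Mat lastBlob = net.forward();       // default: output of the whole network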
void cv::dnn::Net::forward ( OutputArrayOfArrays  outputBlobs,
const String outputName = String() 
)
Python:
retval=cv.dnn_Net.forward([, outputName])
outputBlobs=cv.dnn_Net.forward([, outputBlobs[, outputName]])
outputBlobs=cv.dnn_Net.forward(outBlobNames[, outputBlobs])
outputBlobs=cv.dnn_Net.forwardAndRetrieve(outBlobNames)

Runs a forward pass to compute the output of the layer with name outputName.

Parameters
outputBlobs: contains all output blobs for the specified layer.
outputName: name of the layer whose output is needed

If outputName is empty, runs a forward pass for the whole network.

void cv::dnn::Net::forward ( OutputArrayOfArrays  outputBlobs,
const std::vector< String > &  outBlobNames 
)
Python:
retval=cv.dnn_Net.forward([, outputName])
outputBlobs=cv.dnn_Net.forward([, outputBlobs[, outputName]])
outputBlobs=cv.dnn_Net.forward(outBlobNames[, outputBlobs])
outputBlobs=cv.dnn_Net.forwardAndRetrieve(outBlobNames)

Runs a forward pass to compute the outputs of the layers listed in outBlobNames.

Parameters
outputBlobs: contains blobs for the first outputs of the specified layers.
outBlobNames: names of the layers whose outputs are needed
void cv::dnn::Net::forward ( std::vector< std::vector< Mat > > &  outputBlobs,
const std::vector< String > &  outBlobNames 
)
Python:
retval=cv.dnn_Net.forward([, outputName])
outputBlobs=cv.dnn_Net.forward([, outputBlobs[, outputName]])
outputBlobs=cv.dnn_Net.forward(outBlobNames[, outputBlobs])
outputBlobs=cv.dnn_Net.forwardAndRetrieve(outBlobNames)

Runs a forward pass to compute the outputs of the layers listed in outBlobNames.

Parameters
outputBlobs: contains all output blobs for each layer specified in outBlobNames.
outBlobNames: names of the layers whose outputs are needed
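For multi-output models (e.g. detection networks) a common pattern, sketched here, is to request all unconnected output layers at once:

    std::vector<cv::String> outNames = net.getUnconnectedOutLayersNames();
    std::vector<cv::Mat> outs;
    net.forward(outs, outNames);            // one Mat per requested layer (its first output)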
int64 cv::dnn::Net::getFLOPS ( const std::vector< MatShape > &  netInputShapes) const
Python:
retval=cv.dnn_Net.getFLOPS(netInputShapes)
retval=cv.dnn_Net.getFLOPS(netInputShape)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShapes)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShape)

Computes FLOPs for the whole loaded model with the specified input shapes.

Parameters
netInputShapes: vector of shapes for all net inputs.
Returns
computed FLOPs.
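A sketch of estimating model complexity for a single hypothetical 1x3x224x224 input (fragment; assumes a loaded net and <iostream>):

    cv::dnn::MatShape inputShape = {1, 3, 224, 224};   // N x C x H x W
    int64 flops = net.getFLOPS(inputShape);
    std::cout << "GFLOPs: " << flops * 1e-9 << std::endl;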
int64 cv::dnn::Net::getFLOPS ( const MatShape netInputShape) const
Python:
retval=cv.dnn_Net.getFLOPS(netInputShapes)
retval=cv.dnn_Net.getFLOPS(netInputShape)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShapes)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

int64 cv::dnn::Net::getFLOPS ( const int  layerId,
const std::vector< MatShape > &  netInputShapes 
) const
Python:
retval=cv.dnn_Net.getFLOPS(netInputShapes)
retval=cv.dnn_Net.getFLOPS(netInputShape)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShapes)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

int64 cv::dnn::Net::getFLOPS ( const int  layerId,
const MatShape netInputShape 
) const
Python:
retval=cv.dnn_Net.getFLOPS(netInputShapes)
retval=cv.dnn_Net.getFLOPS(netInputShape)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShapes)
retval=cv.dnn_Net.getFLOPS(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Ptr<Layer> cv::dnn::Net::getLayer ( LayerId  layerId)
Python:
retval=cv.dnn_Net.getLayer(layerId)

Returns a pointer to the layer with the specified id or name that the network uses.

Examples:
samples/dnn/colorization.cpp.
int cv::dnn::Net::getLayerId ( const String layer)
Python:
retval=cv.dnn_Net.getLayerId(layer)

Converts the string name of a layer to its integer identifier.

Returns
id of the layer, or -1 if the layer wasn't found.
std::vector<Ptr<Layer> > cv::dnn::Net::getLayerInputs ( LayerId  layerId)

Returns pointers to the input layers of a specific layer.

std::vector<String> cv::dnn::Net::getLayerNames ( ) const
Python:
retval=cv.dnn_Net.getLayerNames()
int cv::dnn::Net::getLayersCount ( const String layerType) const
Python:
retval=cv.dnn_Net.getLayersCount(layerType)

Returns the count of layers of the specified type.

Parameters
layerType: type of layers to count.
Returns
count of layers of the given type.
void cv::dnn::Net::getLayerShapes ( const MatShape netInputShape,
const int  layerId,
std::vector< MatShape > &  inLayerShapes,
std::vector< MatShape > &  outLayerShapes 
) const

Returns input and output shapes for the layer with the specified id in the loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShape: shape of the input blob in the net input layer.
layerId: id of the layer.
inLayerShapes: output parameter for input layer shapes; order is the same as in layersIds
outLayerShapes: output parameter for output layer shapes; order is the same as in layersIds
void cv::dnn::Net::getLayerShapes ( const std::vector< MatShape > &  netInputShapes,
const int  layerId,
std::vector< MatShape > &  inLayerShapes,
std::vector< MatShape > &  outLayerShapes 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void cv::dnn::Net::getLayersShapes ( const std::vector< MatShape > &  netInputShapes,
std::vector< int > &  layersIds,
std::vector< std::vector< MatShape > > &  inLayersShapes,
std::vector< std::vector< MatShape > > &  outLayersShapes 
) const
Python:
layersIds, inLayersShapes, outLayersShapes=cv.dnn_Net.getLayersShapes(netInputShapes)
layersIds, inLayersShapes, outLayersShapes=cv.dnn_Net.getLayersShapes(netInputShape)

Returns input and output shapes for all layers in the loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShapes: shapes for all input blobs in the net input layer.
layersIds: output parameter for layer IDs.
inLayersShapes: output parameter for input layer shapes; order is the same as in layersIds
outLayersShapes: output parameter for output layer shapes; order is the same as in layersIds
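A sketch that prints the dimensionality of every layer's first output for one hypothetical input shape (assumes a loaded net and <iostream>):

    cv::dnn::MatShape inputShape = {1, 3, 224, 224};
    std::vector<int> layerIds;
    std::vector<std::vector<cv::dnn::MatShape> > inShapes, outShapes;
    net.getLayersShapes(inputShape, layerIds, inShapes, outShapes);
    for (size_t i = 0; i < layerIds.size(); ++i)
        std::cout << net.getLayer(layerIds[i])->name << ": "
                  << outShapes[i][0].size() << "-D output" << std::endl;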
void cv::dnn::Net::getLayersShapes ( const MatShape netInputShape,
std::vector< int > &  layersIds,
std::vector< std::vector< MatShape > > &  inLayersShapes,
std::vector< std::vector< MatShape > > &  outLayersShapes 
) const
Python:
layersIds, inLayersShapes, outLayersShapes=cv.dnn_Net.getLayersShapes(netInputShapes)
layersIds, inLayersShapes, outLayersShapes=cv.dnn_Net.getLayersShapes(netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void cv::dnn::Net::getLayerTypes ( std::vector< String > &  layersTypes) const
Python:
layersTypes=cv.dnn_Net.getLayerTypes()

Returns the list of layer types used in the model.

Parameters
layersTypes: output parameter for returning types.
void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > &  netInputShapes,
size_t &  weights,
size_t &  blobs 
) const
Python:
weights, blobs=cv.dnn_Net.getMemoryConsumption(netInputShape)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShapes)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShape)

Computes the number of bytes required to store all weights and intermediate blobs for the model.

Parameters
netInputShapes: vector of shapes for all net inputs.
weights: output parameter to store resulting bytes for weights.
blobs: output parameter to store resulting bytes for intermediate blobs.
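A sketch of querying memory usage for one hypothetical input shape (assumes a loaded net and <iostream>):

    cv::dnn::MatShape inputShape = {1, 3, 224, 224};
    size_t weightsBytes = 0, blobsBytes = 0;
    net.getMemoryConsumption(inputShape, weightsBytes, blobsBytes);
    std::cout << "weights: " << weightsBytes / (1024.0 * 1024.0) << " MiB, "
              << "blobs: "   << blobsBytes   / (1024.0 * 1024.0) << " MiB" << std::endl;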
void cv::dnn::Net::getMemoryConsumption ( const MatShape netInputShape,
size_t &  weights,
size_t &  blobs 
) const
Python:
weights, blobs=cv.dnn_Net.getMemoryConsumption(netInputShape)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShapes)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void cv::dnn::Net::getMemoryConsumption ( const int  layerId,
const std::vector< MatShape > &  netInputShapes,
size_t &  weights,
size_t &  blobs 
) const
Python:
weights, blobs=cv.dnn_Net.getMemoryConsumption(netInputShape)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShapes)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void cv::dnn::Net::getMemoryConsumption ( const int  layerId,
const MatShape netInputShape,
size_t &  weights,
size_t &  blobs 
) const
Python:
weights, blobs=cv.dnn_Net.getMemoryConsumption(netInputShape)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShapes)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > &  netInputShapes,
std::vector< int > &  layerIds,
std::vector< size_t > &  weights,
std::vector< size_t > &  blobs 
) const
Python:
weights, blobs=cv.dnn_Net.getMemoryConsumption(netInputShape)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShapes)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShape)

Computes the number of bytes required to store all weights and intermediate blobs for each layer.

Parameters
netInputShapes: vector of shapes for all net inputs.
layerIds: output vector to save layer IDs.
weights: output parameter to store resulting bytes for weights.
blobs: output parameter to store resulting bytes for intermediate blobs.
void cv::dnn::Net::getMemoryConsumption ( const MatShape netInputShape,
std::vector< int > &  layerIds,
std::vector< size_t > &  weights,
std::vector< size_t > &  blobs 
) const
Python:
weights, blobs=cv.dnn_Net.getMemoryConsumption(netInputShape)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShapes)
weights, blobs=cv.dnn_Net.getMemoryConsumption(layerId, netInputShape)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Mat cv::dnn::Net::getParam ( LayerId  layer,
int  numParam = 0 
)
Python:
retval=cv.dnn_Net.getParam(layer[, numParam])

Returns parameter blob of the layer.

Parameters
layer: name or id of the layer.
numParam: index of the layer parameter in the Layer::blobs array.
See Also
Layer::blobs
int64 cv::dnn::Net::getPerfProfile ( std::vector< double > &  timings)
Python:
retval, timings=cv.dnn_Net.getPerfProfile()

Returns overall time for inference and timings (in ticks) for layers. Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for the skipped layers.

Parameters
timings: vector of tick timings for all layers.
Returns
overall ticks for model inference.
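A sketch of converting the returned tick counts into milliseconds, in the spirit of the dnn samples (assumes a net that has already run forward()):

    std::vector<double> layerTimings;
    double ticksPerMs = cv::getTickFrequency() / 1000.0;
    double totalMs = net.getPerfProfile(layerTimings) / ticksPerMs;
    std::cout << "inference time: " << totalMs << " ms" << std::endl;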
std::vector<int> cv::dnn::Net::getUnconnectedOutLayers ( ) const
Python:
retval=cv.dnn_Net.getUnconnectedOutLayers()

Returns indexes of layers with unconnected outputs.

std::vector<String> cv::dnn::Net::getUnconnectedOutLayersNames ( ) const
Python:
retval=cv.dnn_Net.getUnconnectedOutLayersNames()

Returns names of layers with unconnected outputs.
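A sketch of how the two accessors relate; getLayerNames() does not include the input pseudo-layer, so a layer id maps to index id - 1 in that list:

    std::vector<int> outIds = net.getUnconnectedOutLayers();
    std::vector<cv::String> names = net.getLayerNames();
    std::vector<cv::String> outNames;
    for (size_t i = 0; i < outIds.size(); ++i)
        outNames.push_back(names[outIds[i] - 1]);   // ids start at 1; id 0 is the input pseudo-layer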

static Net cv::dnn::Net::readFromModelOptimizer ( const String xml,
const String bin 
)
static
Python:
retval=cv.dnn.Net_readFromModelOptimizer(xml, bin)

Creates a network from Intel's Model Optimizer intermediate representation (IR).

Parameters
[in] xml: XML configuration file with the network's topology.
[in] bin: binary file with trained weights. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.
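A sketch with placeholder IR file names:

    cv::dnn::Net ieNet = cv::dnn::Net::readFromModelOptimizer("model.xml", "model.bin");
    // The resulting network runs on the Inference Engine backend.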
void cv::dnn::Net::setHalideScheduler ( const String scheduler)
Python:
None=cv.dnn_Net.setHalideScheduler(scheduler)

Compiles Halide layers.

Parameters
[in] scheduler: path to a YAML file with scheduling directives.
See Also
setPreferableBackend

Schedules layers that support the Halide backend, then compiles them for the specific target. For layers that are not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling will be applied.

void cv::dnn::Net::setInput ( InputArray  blob,
const String name = "",
double  scalefactor = 1.0,
const Scalar mean = Scalar() 
)
Python:
None=cv.dnn_Net.setInput(blob[, name[, scalefactor[, mean]]])

Sets the new input value for the network.

Parameters
blob: a new blob. Should have CV_32F or CV_8U depth.
name: name of an input layer.
scalefactor: an optional normalization scale.
mean: an optional scalar of mean values to be subtracted.
See Also
connect(String, String) to learn the format of the descriptor.

If scale or mean values are specified, a final input blob is computed as:

\[input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\]

Examples:
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.
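A sketch that mirrors the formula above; the input layer name "data", the image file and the mean values are placeholders:

    cv::Mat img  = cv::imread("image.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(img);            // 4-D NCHW blob, CV_32F
    net.setInput(blob, "data", 1.0 / 255.0, cv::Scalar(104, 117, 123));
    // Effective input: input(n,c,h,w) = (1/255) * (blob(n,c,h,w) - mean_c)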
void cv::dnn::Net::setInputsNames ( const std::vector< String > &  inputBlobNames)
Python:
None=cv.dnn_Net.setInputsNames(inputBlobNames)

Sets the output names of the network input pseudo-layer.

Each net always has its own special network input pseudo-layer with id=0. This layer stores the user blobs only and doesn't perform any computations. In fact, this layer provides the only way to pass user data into the network. As with any other layer, this layer can label its outputs, and this function provides an easy way to do that.

void cv::dnn::Net::setParam ( LayerId  layer,
int  numParam,
const Mat blob 
)
Python:
None=cv.dnn_Net.setParam(layer, numParam, blob)

Sets the new value for the learned parameter of the layer.

Parameters
layer: name or id of the layer.
numParam: index of the layer parameter in the Layer::blobs array.
blob: the new value.
See Also
Layer::blobs
Note
If the shape of the new blob differs from the previous shape, then the following forward pass may fail.
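A sketch of reading and writing a parameter blob; the layer name "conv1" is a placeholder:

    cv::Mat w = net.getParam("conv1", 0).clone();   // first blob (weights) of layer "conv1"
    w *= 0.5;                                       // e.g. rescale the weights
    net.setParam("conv1", 0, w);                    // the new shape must match the old one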
void cv::dnn::Net::setPreferableBackend ( int  backendId)
Python:
None=cv.dnn_Net.setPreferableBackend(backendId)

Asks the network to use a specific computation backend where it is supported.

Parameters
[in] backendId: backend identifier.
See Also
Backend

If OpenCV is compiled with Intel's Inference Engine library, DNN_BACKEND_DEFAULT means DNN_BACKEND_INFERENCE_ENGINE. Otherwise it equals DNN_BACKEND_OPENCV.

void cv::dnn::Net::setPreferableTarget ( int  targetId)
Python:
None=cv.dnn_Net.setPreferableTarget(targetId)

Asks the network to run computations on a specific target device.

Parameters
[in] targetId: target identifier.
See Also
Target

List of supported backend/target combinations:

                          DNN_BACKEND_OPENCV   DNN_BACKEND_INFERENCE_ENGINE   DNN_BACKEND_HALIDE
DNN_TARGET_CPU                    +                         +                          +
DNN_TARGET_OPENCL                 +                         +                          +
DNN_TARGET_OPENCL_FP16            +                         +
DNN_TARGET_MYRIAD                                           +
DNN_TARGET_FPGA                                             +
Examples:
samples/dnn/colorization.cpp.
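A sketch of selecting one supported combination from the table above:

    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);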

The documentation for this class was generated from the following file: opencv2/dnn/dnn.hpp