This class encapsulates a queue of asynchronous calls.
#include <opencv2/core/cuda.hpp>
Public Member Functions

    Stream ()
        creates a new asynchronous stream
    Stream (const Ptr< GpuMat::Allocator > &allocator)
        creates a new asynchronous stream with custom allocator
    void enqueueHostCallback (StreamCallback callback, void *userData)
        Adds a callback to be called on the host after all currently enqueued items in the stream have completed.
    operator bool_type () const
        returns true if the stream object is not the default stream (!= 0)
    bool queryIfComplete () const
        Returns true if the current stream queue is finished. Otherwise, it returns false.
    void waitEvent (const Event &event)
        Makes a compute stream wait on an event.
    void waitForCompletion ()
        Blocks the current CPU thread until all operations in the stream are complete.
Detailed Description

This class encapsulates a queue of asynchronous calls.
- Note
- Currently, you may face problems if an operation is enqueued twice with different data. Some functions use constant GPU memory, and the next call may update that memory before the previous call has finished. However, calling different operations asynchronously is safe because each operation has its own constant buffer. Memory copy/upload/download/set operations on the buffers you hold are also safe.

The Stream class is not thread-safe. Please use different Stream objects for different CPU threads.
void thread1()
{
cv::cuda::func1(..., stream1);
}
void thread2()
{
cv::cuda::func2(..., stream2);
}
- Note
- By default, all CUDA routines are launched in the Stream::Null() object if no stream is specified by the user. In a multi-threaded environment, stream objects must be passed explicitly (see the previous note).
typedef void(* cv::cuda::Stream::StreamCallback)(int status, void *userData)
cv::cuda::Stream::Stream ()

Python:
    <cuda_Stream object> = cv.cuda_Stream()

creates a new asynchronous stream

cv::cuda::Stream::Stream (const Ptr< GpuMat::Allocator > &allocator)

Python:
    <cuda_Stream object> = cv.cuda_Stream(allocator)

creates a new asynchronous stream with custom allocator
void cv::cuda::Stream::enqueueHostCallback (StreamCallback callback, void *userData)

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.
- Note
- Callbacks must not make any CUDA API calls. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.
static Stream& cv::cuda::Stream::Null ()

Python:
    retval = cv.cuda.Stream_Null()

returns the Stream object for the default CUDA stream
cv::cuda::Stream::operator bool_type () const

returns true if the stream object is not the default stream (!= 0)
bool cv::cuda::Stream::queryIfComplete () const

Python:
    retval = cv.cuda_Stream.queryIfComplete()

Returns true if the current stream queue is finished. Otherwise, it returns false.
void cv::cuda::Stream::waitEvent (const Event &event)

Python:
    None = cv.cuda_Stream.waitEvent(event)

Makes a compute stream wait on an event.
void cv::cuda::Stream::waitForCompletion ()

Python:
    None = cv.cuda_Stream.waitForCompletion()

Blocks the current CPU thread until all operations in the stream are complete.
Friends

friend class DefaultDeviceInitializer
The documentation for this class was generated from the following file: opencv2/core/cuda.hpp