chainer.functions.deconvolution_nd

chainer.functions.deconvolution_nd(x, W, b=None, stride=1, pad=0, outsize=None, dilate=1, groups=1)[source]

N-dimensional deconvolution function.
This is an implementation of N-dimensional deconvolution, which generalizes the two-dimensional case. In most deep learning frameworks and papers this operation is called transposed convolution, but for historical reasons (e.g. the paper Deconvolutional Networks by Zeiler et al.) and for backward compatibility it is called deconvolution in Chainer.
It takes three variables: the input x, the filter weight W, and the bias vector b.

Notation: here is the notation for dimensionalities.
- N is the number of spatial dimensions.
- n is the batch size.
- cI and cO are the numbers of input and output channels, respectively.
- d1, d2, ..., dN are the sizes of each axis of the input's spatial dimensions, respectively.
- k1, k2, ..., kN are the sizes of each axis of the filters, respectively.
- p1, p2, ..., pN are the spatial padding widths of each axis, respectively.
- s1, s2, ..., sN are the strides of filter application along each axis, respectively.
If the outsize option is None, the output size (l1, l2, ..., lN) is determined by the following equations with the items in the above list:

ln = sn(dn − 1) + kn − 2pn   (n = 1, ..., N)

If the outsize option is given, the output size is determined by outsize. In this case, the outsize (l1, l2, ..., lN) must satisfy the following equations:

dn = ⌊(ln + 2pn − kn) / sn⌋ + 1   (n = 1, ..., N)

Deconvolution links can use a feature of cuDNN called autotuning, which selects the most efficient convolution algorithm for fixed-size images and can provide a significant performance boost for fixed neural nets. To enable it, use chainer.using_config('autotune', True).
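The two size relations above can be checked with a small pure-Python sketch. The helper names `deconv_outsize` and `check_outsize` are illustrative, not part of the Chainer API; they also show why the outsize option exists: for strides greater than one, several output sizes are consistent with the same input size.

```python
def deconv_outsize(d, k, s, p):
    """Output size along one axis when outsize is None: l = s(d - 1) + k - 2p."""
    return s * (d - 1) + k - 2 * p

def check_outsize(l, d, k, s, p):
    """True if an explicit outsize l is consistent: d = floor((l + 2p - k) / s) + 1."""
    return d == (l + 2 * p - k) // s + 1

# With d=5, k=10, s=2, p=5 the default output size is 8 ...
print(deconv_outsize(5, 10, 2, 5))  # -> 8
# ... but every l with floor((l + 2p - k) / s) == d - 1 is consistent,
# so both 8 and 9 are valid explicit outsize values here.
print([l for l in range(6, 12) if check_outsize(l, 5, 10, 2, 5)])  # -> [8, 9]
```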
- Parameters
  - x (Variable or N-dimensional array) – Input variable of shape (n, cI, d1, d2, ..., dN).
  - W (Variable or N-dimensional array) – Weight variable of shape (cI, cO, k1, k2, ..., kN).
  - b (None or Variable or N-dimensional array) – One-dimensional bias variable with length cO (optional).
  - stride (int or tuple of ints) – Stride of filter applications (s1, s2, ..., sN). stride=s is equivalent to (s, s, ..., s).
  - pad (int or tuple of ints) – Spatial padding width for input arrays (p1, p2, ..., pN). pad=p is equivalent to (p, p, ..., p).
  - outsize (None or tuple of ints) – Expected output size of the deconvolution operation. It should be a tuple of ints (l1, l2, ..., lN). The default value is None, in which case the output size is estimated from the input size, stride, and pad.
  - dilate (int or tuple of ints) – Dilation factor of filter applications. dilate=d and dilate=(d, d, ..., d) are equivalent.
  - groups (int) – The number of groups to use grouped convolution. The default is one, in which case grouped convolution is not used.
- Returns
  Output variable of shape (n, cO, l1, l2, ..., lN).
- Return type
  Variable
See also

links.DeconvolutionND, deconvolution_2d()
Example
Example 1: the case when outsize is not given.

>>> n = 10
>>> c_i, c_o = 3, 1
>>> d1, d2, d3 = 5, 10, 15
>>> k1, k2, k3 = 10, 10, 10
>>> p1, p2, p3 = 5, 5, 5
>>> x = np.random.uniform(0, 1, (n, c_i, d1, d2, d3)).astype(np.float32)
>>> x.shape
(10, 3, 5, 10, 15)
>>> W = np.random.uniform(0, 1, (c_i, c_o, k1, k2, k3)).astype(np.float32)
>>> W.shape
(3, 1, 10, 10, 10)
>>> b = np.random.uniform(0, 1, (c_o,)).astype(np.float32)
>>> b.shape
(1,)
>>> s1, s2, s3 = 2, 4, 6
>>> y = F.deconvolution_nd(x, W, b, stride=(s1, s2, s3), pad=(p1, p2, p3))
>>> y.shape
(10, 1, 8, 36, 84)
>>> l1 = s1 * (d1 - 1) + k1 - 2 * p1
>>> l2 = s2 * (d2 - 1) + k2 - 2 * p2
>>> l3 = s3 * (d3 - 1) + k3 - 2 * p3
>>> y.shape == (n, c_o, l1, l2, l3)
True
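To see what the operation itself computes, here is a minimal single-channel, one-dimensional transposed convolution in plain NumPy. This is an illustrative sketch only (the helper name `deconv_1d` is not a Chainer API, and the real function additionally handles batches, channels, groups, and dilation): each input element scatters a scaled copy of the kernel into the output at stride-spaced offsets, and padding trims the borders afterwards.

```python
import numpy as np

def deconv_1d(x, w, stride=1, pad=0):
    """Single-channel 1-D transposed convolution (illustrative sketch).

    Output length before trimming is stride * (d - 1) + k, matching the
    formula l = s(d - 1) + k - 2p after removing pad elements per side.
    """
    d, k = len(x), len(w)
    full = np.zeros(stride * (d - 1) + k, dtype=x.dtype)
    for i in range(d):
        # Each input element adds a scaled kernel at its strided position.
        full[i * stride:i * stride + k] += x[i] * w
    return full[pad:len(full) - pad]

x = np.array([1.0, 2.0])
w = np.array([1.0, 1.0, 1.0])
y = deconv_1d(x, w, stride=1, pad=0)
print(y)       # [1. 3. 3. 2.]
print(len(y))  # 4 == 1 * (2 - 1) + 3 - 2 * 0
```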
Example 2: the case when outsize is given.

>>> n = 10
>>> c_i, c_o = 3, 1
>>> d1, d2, d3 = 5, 10, 15
>>> k1, k2, k3 = 10, 10, 10
>>> p1, p2, p3 = 5, 5, 5
>>> x = np.random.uniform(0, 1, (n, c_i, d1, d2, d3)).astype(np.float32)
>>> x.shape
(10, 3, 5, 10, 15)
>>> W = np.random.uniform(0, 1, (c_i, c_o, k1, k2, k3)).astype(np.float32)
>>> W.shape
(3, 1, 10, 10, 10)
>>> b = np.random.uniform(0, 1, (c_o,)).astype(np.float32)
>>> b.shape
(1,)
>>> s1, s2, s3 = 2, 4, 6
>>> l1, l2, l3 = 9, 38, 87
>>> d1 == int((l1 + 2 * p1 - k1) / s1) + 1
True
>>> d2 == int((l2 + 2 * p2 - k2) / s2) + 1
True
>>> d3 == int((l3 + 2 * p3 - k3) / s3) + 1
True
>>> y = F.deconvolution_nd(x, W, b, stride=(s1, s2, s3), pad=(p1, p2, p3), outsize=(l1, l2, l3))
>>> y.shape
(10, 1, 9, 38, 87)
>>> y.shape == (n, c_o, l1, l2, l3)
True
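The examples above use dilate=1. Assuming the usual dilated-kernel convention (a dilated kernel effectively spans dilate * (k - 1) + 1 positions), the output-size formula generalizes as sketched below; `deconv_outsize_dilated` is an illustrative helper, not part of the Chainer API.

```python
def deconv_outsize_dilated(d, k, s, p, dilate=1):
    """Output size along one axis with dilation (illustrative sketch).

    The effective kernel extent is dilate * (k - 1) + 1, so
    l = s * (d - 1) + dilate * (k - 1) + 1 - 2 * p.
    """
    return s * (d - 1) + dilate * (k - 1) + 1 - 2 * p

# With dilate=1 this reduces to the formula used in Example 1:
print(deconv_outsize_dilated(5, 10, 2, 5))            # -> 8
# Doubling the dilation widens the effective kernel from 10 to 19:
print(deconv_outsize_dilated(5, 10, 2, 5, dilate=2))  # -> 17
```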