Turn off cuDNN for this layer.
Whether to pick the convolution algorithm by running a performance test. This increases startup time but may yield faster execution. Options are: 'off': no tuning. 'limited_workspace': run tests and pick the fastest algorithm that does not exceed the workspace limit. 'fastest': pick the fastest algorithm, ignoring the workspace limit. If set to None (default), the behavior is determined by the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT: 0 for off, 1 for limited workspace (default), 2 for fastest.
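When this parameter is left as None, the global autotune behavior can be controlled through the environment variable named above; a minimal sketch (the variable must be set before the framework is imported for it to take effect):

```python
import os

# Global default used when cudnn_tune is None (per the mapping above):
# "0" -> 'off', "1" -> 'limited_workspace' (default), "2" -> 'fastest'
os.environ["MXNET_CUDNN_AUTOTUNE_DEFAULT"] = "2"  # pick fastest, ignore workspace
```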
Convolution dilation: (h, w) or (d, h, w).
Set the layout for input, output, and weight. Leave empty for the default layout: NCHW for 2D and NCDHW for 3D.
Whether to disable the bias parameter.
Number of group partitions. Equivalent to slicing the input channels into num_group partitions, applying the convolution to each partition separately, then concatenating the results.
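The slicing-and-concatenation equivalence described above can be sketched with NumPy; the helper names here are hypothetical, and the naive conv2d is a plain cross-correlation as used in deep-learning frameworks:

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2-D cross-correlation.
    x: (C_in, H, W), w: (C_out, C_in, kH, kW) -> (C_out, H-kH+1, W-kW+1)."""
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

def grouped_conv2d(x, w, num_group):
    """Grouped convolution: slice input channels into num_group partitions,
    convolve each slice with its own filters, concatenate the outputs.
    For grouped weights, w has shape (C_out, C_in / num_group, kH, kW)."""
    xs = np.split(x, num_group, axis=0)  # split input channels
    ws = np.split(w, num_group, axis=0)  # split output-channel filters
    return np.concatenate([conv2d(xi, wi) for xi, wi in zip(xs, ws)], axis=0)
```

With num_group=1 this reduces to an ordinary convolution; with num_group > 1 each group sees only C_in / num_group input channels, which cuts the parameter count by the same factor.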
Padding for convolution: (h, w) or (d, h, w).
Convolution stride: (h, w) or (d, h, w).
Maximum temporary workspace allowed for convolution (MB). This parameter determines the effective batch size of the convolution kernel, which may be smaller than the given batch size. The workspace is automatically enlarged to ensure the kernel can run with batch_size=1.
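The stride, pad, and dilate parameters above jointly determine the output spatial size via the standard convolution shape formula; a minimal sketch with a hypothetical helper name:

```python
def conv_out_size(in_size, kernel, stride=1, pad=0, dilate=1):
    """Output size of a convolution along one spatial dimension.
    Dilation makes the effective kernel span dilate*(kernel-1)+1 positions."""
    effective_kernel = dilate * (kernel - 1) + 1
    return (in_size + 2 * pad - effective_kernel) // stride + 1

# e.g. a 3x3 kernel with stride (2, 2) and pad (1, 1) on a 224-wide input:
conv_out_size(224, kernel=3, stride=2, pad=1)  # -> 112
```

The same formula applies independently to each of the (h, w) or (d, h, w) dimensions.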