Similar to FusedResizeAndPadConv2D, this op allows for an optimized
implementation where the spatial padding transformation stage is fused with the
im2col lookup, but in this case without the bilinear filtering required for
resizing. Fusing the padding avoids writing out the intermediate results as
whole tensors, reducing memory pressure, and merging the transformation
calculations yields some latency gains.
This op does not support Conv2D's data_format attribute; the input is always
interpreted in 'NHWC' order.
Internally this op uses a single per-graph scratch buffer, which means it will
block if multiple instances are run in parallel. This is acceptable because the
operator is primarily an optimization to minimize memory usage.
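The sketch below compares the fused op against the equivalent unfused pad-then-convolve pipeline. It assumes this page documents tf.raw_ops.FusedPadConv2D; the shapes and values are purely illustrative. The unfused reference materializes the padded tensor before convolving, which is exactly the intermediate write the fused op avoids.

```python
import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])       # [batch, in_height, in_width, in_channels]
filt = tf.random.normal([3, 3, 3, 16])   # [filter_height, filter_width, in_channels, out_channels]
pads = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]], dtype=tf.int32)  # one row per input dimension

# Unfused reference: materializes the padded tensor, then convolves it.
padded = tf.pad(x, pads, mode="REFLECT")
ref = tf.nn.conv2d(padded, filt, strides=[1, 1, 1, 1], padding="VALID")

# Fused version: the padding is folded into the im2col lookup, so the
# intermediate padded tensor is never written out.
fused = tf.raw_ops.FusedPadConv2D(
    input=x, paddings=pads, filter=filt,
    mode="REFLECT", strides=[1, 1, 1, 1], padding="VALID")

print(tf.reduce_max(tf.abs(ref - fused)))  # expected to be ~0
```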
Args
input
A Tensor. Must be one of the following types: half, bfloat16, float32, float64.
4-D with shape [batch, in_height, in_width, in_channels].
paddings
A Tensor of type int32.
A two-column matrix specifying the padding sizes. The number of
rows must be the same as the rank of input.
filter
A Tensor. Must have the same type as input. 4-D with shape
[filter_height, filter_width, in_channels, out_channels].
mode
A string from: "REFLECT", "SYMMETRIC".
strides
A list of ints.
1-D of length 4. The stride of the sliding window for each dimension
of input, in the same 'NHWC' order as the input dimensions.
padding
A string from: "SAME", "VALID".
The type of padding algorithm to use.
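As a quick check on how paddings, strides, and padding interact, here is the output-shape arithmetic for the hypothetical sketch above: with padding "VALID", the convolution is applied to the already-padded input.

```python
# out = (in + pad_before + pad_after - filter_size) // stride + 1
out_h = (8 + 1 + 1 - 3) // 1 + 1   # = 8
out_w = (8 + 1 + 1 - 3) // 1 + 1   # = 8
print(out_h, out_w)                # fused.shape == [1, 8, 8, 16] in the sketch above
```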
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-01-23 UTC."],[],[]]