filenames
A tf.string tensor containing one or more filenames.
record_defaults
A list of default values for the CSV fields. Each item in
the list is either a valid CSV DType (float32, float64, int32, int64,
string), or a Tensor object with one of the above types. One per
column of CSV data, with either a scalar Tensor default value for the
column if it is optional, or DType or empty Tensor if required. If
both this and select_cols are specified, they must have the same
length, and record_defaults is assumed to be sorted in order of
increasing column index.
compression_type
(Optional.) A tf.string scalar evaluating to one of
"" (no compression), "ZLIB", or "GZIP". Defaults to no
compression.
buffer_size
(Optional.) A tf.int64 scalar denoting the number of bytes
to buffer while reading files. Defaults to 4MB.
header
(Optional.) A tf.bool scalar indicating whether the CSV file(s)
have header line(s) that should be skipped when parsing. Defaults to
False.
field_delim
(Optional.) A tf.string scalar containing the delimiter
character that separates fields in a record. Defaults to ",".
use_quote_delim
(Optional.) A tf.bool scalar. If False, treats
double quotation marks as regular characters inside of string fields
(ignoring RFC 4180, Section 2, Bullet 5). Defaults to True.
na_value
(Optional.) A tf.string scalar indicating a value that will
be treated as NA/NaN.
select_cols
(Optional.) A sorted list of column indices to select from
the input data. If specified, only this subset of columns will be
parsed. Defaults to parsing all columns.
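A minimal sketch of how these arguments fit together, assuming they describe the tf.data.experimental.CsvDataset constructor; the file name and column layout below are hypothetical:
import tensorflow as tf

# "data.csv" is assumed to contain a header row and two columns:
# a required int32 column and a float32 column with default 0.0.
dataset = tf.data.experimental.CsvDataset(
    "data.csv",
    record_defaults=[tf.int32, tf.constant(0.0, dtype=tf.float32)],
    header=True,
    field_delim=",")
for int_col, float_col in dataset:
  print(int_col.numpy(), float_col.numpy())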
Attributes
element_spec
The type specification of an element of this dataset.
output_classes
Returns the class of each component of an element of this dataset. (deprecated)
output_shapes
Returns the shape of each component of an element of this dataset. (deprecated)
output_types
Returns the type of each component of an element of this dataset. (deprecated)
Applies a transformation function to this dataset.
apply enables chaining of custom Dataset transformations, which are
represented as functions that take one Dataset argument and return a
transformed Dataset.
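For example, a custom transformation expressed as a function from one Dataset to another can be chained with apply (a brief sketch):
dataset = tf.data.Dataset.range(100)

def dataset_fn(ds):
  return ds.filter(lambda x: x < 5)

dataset = dataset.apply(dataset_fn)
# ==> [ 0, 1, 2, 3, 4 ]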
Combines consecutive elements of this dataset into batches.
The components of the resulting element will have an additional outer
dimension, which will be batch_size (or N % batch_size for the last
element if batch_size does not divide the number of input elements N
evenly and drop_remainder is False). If your program depends on the
batches having the same outer dimension, you should set the drop_remainder
argument to True to prevent the smaller batch from being produced.
Args
batch_size
A tf.int64 scalar tf.Tensor, representing the number of
consecutive elements of this dataset to combine in a single batch.
drop_remainder
(Optional.) A tf.bool scalar tf.Tensor, representing
whether the last batch should be dropped in the case it has fewer than
batch_size elements; the default behavior is not to drop the smaller
batch.
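For instance, a brief sketch of both behaviors:
dataset = tf.data.Dataset.range(8).batch(3)
# ==> [ [0, 1, 2], [3, 4, 5], [6, 7] ]
dataset = tf.data.Dataset.range(8).batch(3, drop_remainder=True)
# ==> [ [0, 1, 2], [3, 4, 5] ]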
Caches the elements in this dataset.
Args
filename
A tf.string scalar tf.Tensor, representing the name of a
directory on the filesystem to use for caching elements in this Dataset.
If a filename is not provided, the dataset will be cached in memory.
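A brief sketch of both modes (the cache path below is hypothetical):
dataset = tf.data.Dataset.range(5).cache()              # cache elements in memory
dataset = tf.data.Dataset.range(5).cache("/tmp/my_cache")  # cache on the filesystem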
Creates a Dataset by concatenating the given dataset with this dataset.
a = Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ]

# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
# c = Dataset.range(8, 14).batch(2)  # ==> [ [8, 9], [10, 11], [12, 13] ]
# d = Dataset.from_tensor_slices([14.0, 15.0, 16.0])
# a.concatenate(c) and a.concatenate(d) would result in error.

a.concatenate(b)  # ==> [ 1, 2, 3, 4, 5, 6, 7 ]
Enumerates the elements of this dataset.
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }

# The nested structure of the input dataset determines the
# structure of elements in the resulting dataset.
a.enumerate(start=5) == { (5, 1), (6, 2), (7, 3) }
b.enumerate() == { (0, (7, 8)), (1, (9, 10)) }
Args
start
A tf.int64 scalar tf.Tensor, representing the start value for
enumeration.
Filters this dataset according to predicate. (deprecated)
Args
predicate
A function mapping a nested structure of tensors (having shapes
and types defined by self.output_shapes and self.output_types) to a
scalar tf.bool tensor.
Returns
Dataset
The Dataset containing the elements of this dataset for which
predicate is True.
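For example, a short sketch:
d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
d = d.filter(lambda x: x < 3)   # ==> [ 1, 2 ]
# `tf.math.equal(x, y)` is required for equality comparison.
def filter_fn(x):
  return tf.math.equal(x, 1)
d = d.filter(filter_fn)         # ==> [ 1 ]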
Maps map_func across this dataset and flattens the result.
Use flat_map if you want to make sure that the order of your dataset
stays the same. For example, to flatten a dataset of batches into a
dataset of their elements:
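# A brief sketch of one way to do this.
a = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
a.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x))
# ==> [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]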
Creates a Dataset whose elements are generated by generator.
The generator argument must be a callable object that returns
an object that supports the iter() protocol (e.g. a generator function).
The elements generated by generator must be compatible with the given
output_types and (optional) output_shapes arguments.
Args
generator
A callable object that returns an object that supports the
iter() protocol. If args is not specified, generator must take no
arguments; otherwise it must take as many arguments as there are values
in args.
output_types
A nested structure of tf.DType objects corresponding to
each component of an element yielded by generator.
output_shapes
(Optional.) A nested structure of tf.TensorShape objects
corresponding to each component of an element yielded by generator.
args
(Optional.) A tuple of tf.Tensor objects that will be evaluated
and passed to generator as NumPy-array arguments.
Creates a Dataset whose elements are slices of the given tensors.
Note that if tensors contains a NumPy array, and eager execution is not
enabled, the values will be embedded in the graph as one or more
tf.constant operations. For large datasets (> 1 GB), this can waste
memory and run into byte limits of graph serialization. If tensors
contains one or more large NumPy arrays, consider the alternative described
in this guide.
Args
tensors
A dataset element, with each component having the same size in
the 0th dimension.
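For example, a brief sketch:
# Slicing a 1-D tensor produces scalar elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# ==> [ 1, 2, 3 ]
# Slicing a tuple of 1-D tensors produces tuple elements containing scalars.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
# ==> [ (1, 3, 5), (2, 4, 6) ]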
Creates a Dataset with a single element, comprising the given tensors.
Note that if tensors contains a NumPy array, and eager execution is not
enabled, the values will be embedded in the graph as one or more
tf.constant operations. For large datasets (> 1 GB), this can waste
memory and run into byte limits of graph serialization. If tensors
contains one or more large NumPy arrays, consider the alternative described
in this guide.
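For example, a brief sketch contrasting this with from_tensor_slices:
# The whole input is a single element.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
# ==> [ [1, 2, 3] ]
# from_tensor_slices([1, 2, 3]) would instead produce [ 1, 2, 3 ].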
Maps map_func across this dataset, and interleaves the results.
For example, you can use Dataset.interleave() to process many input files
concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records from
# each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt", ...]
dataset = (Dataset.from_tensor_slices(filenames)
           .interleave(lambda x:
               TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
               cycle_length=4, block_length=16))
The cycle_length and block_length arguments control the order in which
elements are produced. cycle_length controls the number of input elements
that are processed concurrently. If you set cycle_length to 1, this
transformation will handle one input element at a time, and will produce
identical results to tf.data.Dataset.flat_map. In general,
this transformation will apply map_func to cycle_length input elements,
open iterators on the returned Dataset objects, and cycle through them
producing block_length consecutive elements from each iterator, and
consuming the next input element each time it reaches the end of an
iterator.
Args
map_func
A function mapping a dataset element to a dataset.
cycle_length
(Optional.) The number of input elements that will be
processed concurrently. If not specified, the value will be derived from
the number of available CPU cores. If the num_parallel_calls argument
is set to tf.data.experimental.AUTOTUNE, the cycle_length argument
also identifies the maximum degree of parallelism.
block_length
(Optional.) The number of consecutive elements to produce
from each input element before cycling to another input element.
num_parallel_calls
(Optional.) If specified, the implementation creates a
threadpool, which is used to fetch inputs from cycle elements
asynchronously and in parallel. The default behavior is to fetch inputs
from cycle elements synchronously with no parallelism. If the value
tf.data.experimental.AUTOTUNE is used, then the number of parallel
calls is set dynamically based on available CPU.
A dataset of all files matching one or more glob patterns.
Example:
If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the file_pattern, the dataset
would produce:
/path/to/dir/b.py
/path/to/dir/c.py
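Expressed in code (a brief sketch; the paths are hypothetical):
dataset = tf.data.Dataset.list_files("/path/to/dir/*.py")
# ==> [ "/path/to/dir/b.py", "/path/to/dir/c.py" ]  (in shuffled order by default)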
Args
file_pattern
A string, a list of strings, or a tf.Tensor of string type
(scalar or vector), representing the filename glob (i.e. shell wildcard)
pattern(s) that will be matched.
shuffle
(Optional.) If True, the file names will be shuffled randomly.
Defaults to True.
shared_name
(Optional.) If non-empty, the returned iterator will be
shared under the given name across multiple sessions that share the same
devices (e.g. when using a remote server).
Maps map_func across the elements of this dataset.
This transformation applies map_func to each element of this dataset, and
returns a new dataset containing the transformed elements, in the same
order as they appeared in the input.
The input signature of map_func is determined by the structure of each
element in this dataset. For example:
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
# Each element is a `tf.Tensor` object.
a = { 1, 2, 3, 4, 5 }
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = a.map(lambda x: ...)

# Each element is a tuple containing two `tf.Tensor` objects.
b = { (1, "foo"), (2, "bar"), (3, "baz") }
# `map_func` takes two arguments of type `tf.Tensor`.
result = b.map(lambda x_int, y_str: ...)

# Each element is a dictionary mapping strings to `tf.Tensor` objects.
c = { {"a": 1, "b": "foo"},
      {"a": 2, "b": "bar"},
      {"a": 3, "b": "baz"} }
# `map_func` takes a single argument of type `dict` with the same keys as
# the elements.
result = c.map(lambda d: ...)
The value or values returned by map_func determine the structure of each
element in the returned dataset.
# `map_func` returns a scalar `tf.Tensor` of type `tf.float32`.
def f(...):
  return tf.constant(37.0)
result = dataset.map(f)
result.output_classes == tf.Tensor
result.output_types == tf.float32
result.output_shapes == []  # scalar

# `map_func` returns two `tf.Tensor` objects.
def g(...):
  return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.output_classes == (tf.Tensor, tf.Tensor)
result.output_types == (tf.float32, tf.string)
result.output_shapes == ([], [3])

# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(...):
  return 37.0, ["Foo", "Bar", "Baz"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.output_classes == (tf.Tensor, tf.Tensor, tf.Tensor)
result.output_types == (tf.float32, tf.string, tf.float64)
result.output_shapes == ([], [3], [2])

# `map_func` can return nested structures.
def i(...):
  return {"a": 37.0, "b": [42, 16]}, "foo"
result = dataset.map(i)
result.output_classes == ({"a": tf.Tensor, "b": tf.Tensor}, tf.Tensor)
result.output_types == ({"a": tf.float32, "b": tf.int32}, tf.string)
result.output_shapes == ({"a": [], "b": [2]}, [])
map_func can accept as arguments and return any type of dataset element.
Note that irrespective of the context in which map_func is defined (eager
vs. graph), tf.data traces the function and executes it as a graph. To use
Python code inside of the function you have two options:
1) Rely on AutoGraph to convert Python code into an equivalent graph
computation. The downside of this approach is that AutoGraph can convert
some but not all Python code.
2) Use tf.py_function, which allows you to write arbitrary Python code but
will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])

# Transform a string tensor to upper case string using a Python function.
def upper_case_fn(t: tf.Tensor) -> str:
  return t.numpy().decode('utf-8').upper()

d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string))
# ==> [ "HELLO", "WORLD" ]
Args
map_func
A function mapping a dataset element to another dataset element.
num_parallel_calls
(Optional.) A tf.int32 scalar tf.Tensor,
representing the number of elements to process asynchronously in parallel.
If not specified, elements will be processed sequentially. If the value
tf.data.experimental.AUTOTUNE is used, then the number of parallel
calls is set dynamically based on available CPU.
Maps map_func across the elements of this dataset. (deprecated)
Args
map_func
A function mapping a nested structure of tensors (having shapes
and types defined by self.output_shapes and self.output_types) to
another nested structure of tensors.
num_parallel_calls
(Optional.) A tf.int32 scalar tf.Tensor,
representing the number of elements to process asynchronously in parallel.
If not specified, elements will be processed sequentially. If the value
tf.data.experimental.AUTOTUNE is used, then the number of parallel
calls is set dynamically based on available CPU.
Combines consecutive elements of this dataset into padded batches.
This transformation combines multiple consecutive elements of the input
dataset into a single element.
Like tf.data.Dataset.batch, the components of the resulting element will
have an additional outer dimension, which will be batch_size (or
N % batch_size for the last element if batch_size does not divide the
number of input elements N evenly and drop_remainder is False). If
your program depends on the batches having the same outer dimension, you
should set the drop_remainder argument to True to prevent the smaller
batch from being produced.
Unlike tf.data.Dataset.batch, the input elements to be batched may have
different shapes, and this transformation will pad each component to the
respective shape in padded_shapes. The padded_shapes argument
determines the resulting shape for each dimension of each component in an
output element:
If the dimension is a constant (e.g. tf.compat.v1.Dimension(37)), the
component will be padded out to that length in that dimension.
If the dimension is unknown (e.g. tf.compat.v1.Dimension(None)), the
component will be padded out to the maximum length of all elements in that
dimension.
Args
batch_size
A tf.int64 scalar tf.Tensor, representing the number of
consecutive elements of this dataset to combine in a single batch.
padded_shapes
A nested structure of tf.TensorShape or tf.int64 vector
tensor-like objects representing the shape to which the respective
component of each input element should be padded prior to batching. Any
unknown dimensions (e.g. tf.compat.v1.Dimension(None) in a
tf.TensorShape or -1 in a tensor-like object) will be padded to the
maximum size of that dimension in each batch.
padding_values
(Optional.) A nested structure of scalar-shaped
tf.Tensor, representing the padding values to use for the respective
components. Defaults are 0 for numeric types and the empty string for
string types.
drop_remainder
(Optional.) A tf.bool scalar tf.Tensor, representing
whether the last batch should be dropped in the case it has fewer than
batch_size elements; the default behavior is not to drop the smaller
batch.
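For example, a brief sketch that pads a ragged component with the default value 0:
A = tf.data.Dataset.range(1, 5).map(lambda x: tf.fill([x], x))
# ==> [ [1], [2, 2], [3, 3, 3], [4, 4, 4, 4] ]
B = A.padded_batch(2, padded_shapes=[None])
# ==> [ [[1, 0], [2, 2]], [[3, 3, 3, 0], [4, 4, 4, 4]] ]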
Reduces the input dataset to a single element.
The transformation calls reduce_func successively on every element of
the input dataset until the dataset is exhausted, aggregating information in
its internal state. The initial_state argument is used for the initial
state and the final state is returned as the result.
For example:
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1)  # ==> 5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y)  # ==> 10
Args
initial_state
An element representing the initial state of the
transformation.
reduce_func
A function that maps (old_state, input_element) to
new_state. It must take two arguments and return a new element.
The structure of new_state must match the structure of
initial_state.
Returns
A dataset element corresponding to the final state of the transformation.
Repeats this dataset count times.
Args
count
(Optional.) A tf.int64 scalar tf.Tensor, representing the
number of times the dataset should be repeated. The default behavior (if
count is None or -1) is for the dataset to be repeated indefinitely.
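For example:
dataset = tf.data.Dataset.range(3).repeat(2)
# ==> [ 0, 1, 2, 0, 1, 2 ]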
Creates a Dataset that includes only 1/num_shards of this dataset.
Be sure to shard before you use any randomizing operator (such as
shuffle).
Generally it is best if the shard operator is used early in the dataset
pipeline. For example, when reading from a set of TFRecord files, shard
before converting the dataset to input samples. This avoids reading every
file on every worker. The following is an example of an efficient
sharding strategy within a complete pipeline:
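# A sketch of such a pipeline; FLAGS values and parser_fn are
# hypothetical placeholders.
d = tf.data.Dataset.list_files(FLAGS.pattern)
d = d.shard(FLAGS.num_workers, FLAGS.worker_index)  # shard on file names, before parsing
d = d.repeat(FLAGS.num_epochs)
d = d.shuffle(FLAGS.shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
                 cycle_length=FLAGS.num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=FLAGS.num_map_threads)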
Raises
InvalidArgumentError
if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't
guaranteed to be caught upon dataset creation. (e.g. providing a
placeholder tensor bypasses the early checking, and will instead result
in an error during a session.run call.)
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with buffer_size elements, then randomly
samples elements from this buffer, replacing the selected elements with new
elements. For perfect shuffling, a buffer size greater than or equal to the
full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but buffer_size is
set to 1,000, then shuffle will initially select a random element from
only the first 1,000 elements in the buffer. Once an element is selected,
its space in the buffer is replaced by the next (i.e. 1,001-st) element,
maintaining the 1,000 element buffer.
Args
buffer_size
A tf.int64 scalar tf.Tensor, representing the number of
elements from this dataset from which the new dataset will sample.
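For example (the output order is random; the sequence shown is only illustrative):
dataset = tf.data.Dataset.range(5).shuffle(buffer_size=5)
# ==> e.g. [ 3, 0, 4, 1, 2 ]  (a random permutation of the input elements)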
Creates a Dataset that skips count elements from this dataset.
Args
count
A tf.int64 scalar tf.Tensor, representing the number of
elements of this dataset that should be skipped to form the new dataset.
If count is greater than the size of this dataset, the new dataset
will contain no elements. If count is -1, skips the entire dataset.
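For example:
dataset = tf.data.Dataset.range(5).skip(2)
# ==> [ 2, 3, 4 ]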
Creates a Dataset with at most count elements from this dataset.
Args
count
A tf.int64 scalar tf.Tensor, representing the number of
elements of this dataset that should be taken to form the new dataset.
If count is -1, or if count is greater than the size of this
dataset, the new dataset will contain all elements of this dataset.
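For example:
dataset = tf.data.Dataset.range(5).take(2)
# ==> [ 0, 1 ]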
Splits elements of a dataset into multiple elements.
For example, if elements of the dataset are shaped [B, a0, a1, ...],
where B may vary for each input element, then for each element in the
dataset, the unbatched dataset will contain B consecutive elements
of shape [a0, a1, ...].
# NOTE: The following example uses `{ ... }` to represent the contents
# of a dataset.
ds = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
ds.unbatch() == { 'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd' }
Combines (nests of) input elements into a dataset of (nests of) windows.
A "window" is a finite dataset of flat elements of size size (or possibly
fewer if there are not enough input elements to fill the window and
drop_remainder evaluates to false).
The shift argument determines the number of input elements by which the
window moves on each iteration, and the stride argument determines the
stride between input elements within a window.
For example, letting {...} represent a Dataset:
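# A sketch of a few parameter settings, using the {...} convention above.
tf.data.Dataset.range(7).window(2)
# ==> { {0, 1}, {2, 3}, {4, 5}, {6} }
tf.data.Dataset.range(7).window(3, shift=2, stride=1, drop_remainder=True)
# ==> { {0, 1, 2}, {2, 3, 4}, {4, 5, 6} }
tf.data.Dataset.range(7).window(3, shift=1, stride=2, drop_remainder=True)
# ==> { {0, 2, 4}, {1, 3, 5}, {2, 4, 6} }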
Returns a new tf.data.Dataset with the given options set.
The options are "global" in the sense they apply to the entire dataset.
If options are set multiple times, they are merged as long as different
options do not use different non-default values.
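A brief sketch of setting and merging options; the Options attribute names below reflect the experimental TF 2.x API of this era and are assumptions rather than guarantees:
options1 = tf.data.Options()
options1.experimental_deterministic = False
options2 = tf.data.Options()
options2.experimental_optimization.apply_default_optimizations = False
# Both option sets apply because they set different, non-conflicting fields.
dataset = dataset.with_options(options1).with_options(options2)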
Creates a Dataset by zipping together the given datasets.
This method has similar semantics to the built-in zip() function
in Python, with the main difference being that the datasets
argument can be an arbitrary nested structure of Dataset objects.
For example:
a = Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = Dataset.range(4, 7)  # ==> [ 4, 5, 6 ]
c = Dataset.range(7, 13).batch(2)  # ==> [ [7, 8], [9, 10], [11, 12] ]
d = Dataset.range(13, 15)  # ==> [ 13, 14 ]

# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
Dataset.zip((a, b))  # ==> [ (1, 4), (2, 5), (3, 6) ]
Dataset.zip((b, a))  # ==> [ (4, 1), (5, 2), (6, 3) ]

# The `datasets` argument may contain an arbitrary number of datasets.
Dataset.zip((a, b, c))  # ==> [ (1, 4, [7, 8]),
                        #       (2, 5, [9, 10]),
                        #       (3, 6, [11, 12]) ]

# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
Dataset.zip((a, d))  # ==> [ (1, 13), (2, 14) ]
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2020-10-01 UTC."],[],[]]