public interface CallableOptionsOrBuilder
Known Indirect Subclasses
Public Methods
abstract boolean | containsFeedDevices (String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract boolean | containsFetchDevices (String key) map<string, string> fetch_devices = 7; |
abstract String | getFeed (int index) Tensors to be fed in the callable. |
abstract com.google.protobuf.ByteString | getFeedBytes (int index) Tensors to be fed in the callable. |
abstract int | getFeedCount () Tensors to be fed in the callable. |
abstract Map<String, String> | getFeedDevices () Use getFeedDevicesMap() instead. |
abstract int | getFeedDevicesCount () The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract Map<String, String> | getFeedDevicesMap () The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract String | getFeedDevicesOrDefault (String key, String defaultValue) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract String | getFeedDevicesOrThrow (String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract List<String> | getFeedList () Tensors to be fed in the callable. |
abstract String | getFetch (int index) Fetches. |
abstract com.google.protobuf.ByteString | getFetchBytes (int index) Fetches. |
abstract int | getFetchCount () Fetches. |
abstract Map<String, String> | getFetchDevices () Use getFetchDevicesMap() instead. |
abstract int | getFetchDevicesCount () map<string, string> fetch_devices = 7; |
abstract Map<String, String> | getFetchDevicesMap () map<string, string> fetch_devices = 7; |
abstract String | getFetchDevicesOrDefault (String key, String defaultValue) map<string, string> fetch_devices = 7; |
abstract String | getFetchDevicesOrThrow (String key) map<string, string> fetch_devices = 7; |
abstract List<String> | getFetchList () Fetches. |
abstract boolean | getFetchSkipSync () By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. |
abstract RunOptions | getRunOptions () Options that will be applied to each run. |
abstract RunOptionsOrBuilder | getRunOptionsOrBuilder () Options that will be applied to each run. |
abstract String | getTarget (int index) Target Nodes. |
abstract com.google.protobuf.ByteString | getTargetBytes (int index) Target Nodes. |
abstract int | getTargetCount () Target Nodes. |
abstract List<String> | getTargetList () Target Nodes. |
abstract TensorConnection | getTensorConnection (int index) Tensors to be connected in the callable. |
abstract int | getTensorConnectionCount () Tensors to be connected in the callable. |
abstract List<TensorConnection> | getTensorConnectionList () Tensors to be connected in the callable. |
abstract TensorConnectionOrBuilder | getTensorConnectionOrBuilder (int index) Tensors to be connected in the callable. |
abstract List<? extends TensorConnectionOrBuilder> | getTensorConnectionOrBuilderList () Tensors to be connected in the callable. |
abstract boolean | hasRunOptions () Options that will be applied to each run. |
Public Methods
public abstract boolean containsFeedDevices (String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory. The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.

For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract boolean containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
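Taken together, the feed_devices and fetch_devices maps are normally filled in through the generated CallableOptions.Builder before the callable is created. The following is a minimal sketch, not taken from this page: it assumes the generated classes live in org.tensorflow.framework and uses the standard protobuf builder methods for repeated and map fields (addFeed, putFeedDevices, and so on); the tensor and device names come from the example above, and the class name is illustrative.

import org.tensorflow.framework.CallableOptions;

public class DeviceBackedCallableSketch {
  public static void main(String[] args) {
    // Declare that feed "a:0" and fetch "y:0" are backed by GPU memory,
    // while "b:0" and "x:0" keep the default host (CPU) backing.
    String gpu = "/job:localhost/replica:0/task:0/device:GPU:0";
    CallableOptions options = CallableOptions.newBuilder()
        .addFeed("a:0")
        .addFeed("b:0")
        .addFetch("x:0")
        .addFetch("y:0")
        .putFeedDevices("a:0", gpu)    // map<string, string> feed_devices = 6
        .putFetchDevices("y:0", gpu)   // map<string, string> fetch_devices = 7
        .build();

    // The accessors documented on this page read the maps back.
    System.out.println(options.containsFeedDevices("a:0"));               // true
    System.out.println(options.getFetchDevicesOrDefault("x:0", "host"));  // "host"
  }
}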
public abstract String getFeed (int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public abstract com.google.protobuf.ByteString getFeedBytes (int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public abstract int getFeedCount ()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public abstract int getFeedDevicesCount ()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory. The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.

For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract Map<String, String> getFeedDevicesMap ()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory. The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.

For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract String getFeedDevicesOrDefault (String key, String defaultValue)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory. The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.

For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract String getFeedDevicesOrThrow (String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory. The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor.

For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract List<String> getFeedList ()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
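Because both CallableOptions and CallableOptions.Builder implement this interface, inspection code can accept a CallableOptionsOrBuilder and rely only on the accessors documented on this page. A small illustrative helper (the class and method names are hypothetical):

import java.util.Map;
import org.tensorflow.framework.CallableOptionsOrBuilder;

final class FeedInspector {
  // Print each feed name together with the device backing it, falling back to
  // the default host (CPU) backing when no feed_devices entry is present.
  static void describeFeeds(CallableOptionsOrBuilder opts) {
    Map<String, String> feedDevices = opts.getFeedDevicesMap();
    for (int i = 0; i < opts.getFeedCount(); i++) {
      String feed = opts.getFeed(i);
      String device = feedDevices.getOrDefault(feed, "host (CPU) memory");
      System.out.println(feed + " -> " + device);
    }
  }
}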
public abstract String getFetch (int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract com.google.protobuf.ByteString getFetchBytes (int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract int getFetchCount ()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract int getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
public abstract Map<String, String> getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
public abstract String getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
public abstract String getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
public abstract List<String> getFetchList ()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract boolean getFetchSkipSync ()
By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit. If this option is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking `Device::Sync()` on the underlying device(s), or by feeding the tensors back to the same Session using `feed_devices` with the same corresponding device name.
bool fetch_skip_sync = 8;
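A minimal sketch of opting out of that synchronization. setFetchSkipSync is the setter protobuf generates for this bool field; the class and helper names are illustrative.

import org.tensorflow.framework.CallableOptions;

final class SkipSyncSketch {
  // Skip the implicit GPU stream sync in RunCallable(); the caller then becomes
  // responsible for synchronizing before reading the fetched GPU-backed tensors.
  static CallableOptions withSkipSync(CallableOptions base) {
    return base.toBuilder()
        .setFetchSkipSync(true)   // bool fetch_skip_sync = 8
        .build();
  }
}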
public abstract RunOptions getRunOptions ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public abstract RunOptionsOrBuilder getRunOptionsOrBuilder ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
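A hedged sketch of attaching per-run options, assuming the generated org.tensorflow.framework.RunOptions class and its TraceLevel enum from config.proto; the class and helper names are illustrative.

import org.tensorflow.framework.CallableOptions;
import org.tensorflow.framework.CallableOptionsOrBuilder;
import org.tensorflow.framework.RunOptions;

final class RunOptionsSketch {
  // Attach RunOptions requesting full tracing to every run of the callable.
  static CallableOptions withFullTrace(CallableOptions base) {
    RunOptions runOptions = RunOptions.newBuilder()
        .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)
        .build();
    return base.toBuilder()
        .setRunOptions(runOptions)   // .tensorflow.RunOptions run_options = 4
        .build();
  }

  // Read the field back through this interface.
  static boolean tracesFully(CallableOptionsOrBuilder opts) {
    return opts.hasRunOptions()
        && opts.getRunOptions().getTraceLevel() == RunOptions.TraceLevel.FULL_TRACE;
  }
}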
public abstract String getTarget (int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract com.google.protobuf.ByteString getTargetBytes (int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract int getTargetCount ()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract List<String> getTargetList ()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract TensorConnection getTensorConnection (int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
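A sketch of adding one such connection, assuming TensorConnection exposes the from_tensor/to_tensor string fields of the underlying proto through the usual generated setters; the class and helper names are illustrative.

import org.tensorflow.framework.CallableOptions;
import org.tensorflow.framework.TensorConnection;

final class TensorConnectionSketch {
  // Rewire the callable so the value produced at "y:0" is substituted for the
  // tensor "a:0" wherever the graph would otherwise read it.
  static CallableOptions connectOutputToInput(CallableOptions base) {
    TensorConnection connection = TensorConnection.newBuilder()
        .setFromTensor("y:0")   // tensor whose value is forwarded
        .setToTensor("a:0")     // tensor whose consumers receive that value
        .build();
    return base.toBuilder()
        .addTensorConnection(connection)   // repeated .tensorflow.TensorConnection tensor_connection = 5
        .build();
  }
}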
public abstract int getTensorConnectionCount ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract List<TensorConnection> getTensorConnectionList ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract boolean hasRunOptions ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;