public interface CallableOptionsOrBuilder
Known Indirect Subclasses
Public Methods
abstract boolean | containsFeedDevices(String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract boolean | containsFetchDevices(String key) map<string, string> fetch_devices = 7; |
abstract String | getFeed(int index) Tensors to be fed in the callable. |
abstract com.google.protobuf.ByteString | getFeedBytes(int index) Tensors to be fed in the callable. |
abstract int | getFeedCount() Tensors to be fed in the callable. |
abstract Map<String, String> | getFeedDevices() Use getFeedDevicesMap() instead. |
abstract int | getFeedDevicesCount() The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract Map<String, String> | getFeedDevicesMap() The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract String | getFeedDevicesOrDefault(String key, String defaultValue) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract String | getFeedDevicesOrThrow(String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
abstract List<String> | getFeedList() Tensors to be fed in the callable. |
abstract String | getFetch(int index) Fetches. |
abstract com.google.protobuf.ByteString | getFetchBytes(int index) Fetches. |
abstract int | getFetchCount() Fetches. |
abstract Map<String, String> | getFetchDevices() Use getFetchDevicesMap() instead. |
abstract int | getFetchDevicesCount() map<string, string> fetch_devices = 7; |
abstract Map<String, String> | getFetchDevicesMap() map<string, string> fetch_devices = 7; |
abstract String | getFetchDevicesOrDefault(String key, String defaultValue) map<string, string> fetch_devices = 7; |
abstract String | getFetchDevicesOrThrow(String key) map<string, string> fetch_devices = 7; |
abstract List<String> | getFetchList() Fetches. |
abstract boolean | getFetchSkipSync() By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. |
abstract RunOptions | getRunOptions() Options that will be applied to each run. |
abstract RunOptionsOrBuilder | getRunOptionsOrBuilder() Options that will be applied to each run. |
abstract String | getTarget(int index) Target Nodes. |
abstract com.google.protobuf.ByteString | getTargetBytes(int index) Target Nodes. |
abstract int | getTargetCount() Target Nodes. |
abstract List<String> | getTargetList() Target Nodes. |
abstract TensorConnection | getTensorConnection(int index) Tensors to be connected in the callable. |
abstract int | getTensorConnectionCount() Tensors to be connected in the callable. |
abstract List<TensorConnection> | getTensorConnectionList() Tensors to be connected in the callable. |
abstract TensorConnectionOrBuilder | getTensorConnectionOrBuilder(int index) Tensors to be connected in the callable. |
abstract List<? extends TensorConnectionOrBuilder> | getTensorConnectionOrBuilderList() Tensors to be connected in the callable. |
abstract boolean | hasRunOptions() Options that will be applied to each run. |
Public Methods
public abstract boolean containsFeedDevices(String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
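As a rough sketch of how these map accessors are used (assuming the generated classes live in the org.tensorflow.framework package, as in the legacy org.tensorflow:proto artifact; adjust the import for your packaging), the generated CallableOptions message implements this interface, so the read-only accessors documented here are available on a built message:

import org.tensorflow.framework.CallableOptions;

public class FeedDevicesExample {
  public static void main(String[] args) {
    // Illustrative device name; use whatever device exists in your cluster.
    String gpu0 = "/job:localhost/replica:0/task:0/device:GPU:0";

    CallableOptions options = CallableOptions.newBuilder()
        .addFeed("a:0")
        .addFeed("b:0")
        .addFetch("x:0")
        .addFetch("y:0")
        // "a:0" will be fed from GPU memory; "b:0" stays on the host (the default).
        .putFeedDevices("a:0", gpu0)
        // "y:0" will be returned in GPU memory; "x:0" stays on the host.
        .putFetchDevices("y:0", gpu0)
        .build();

    System.out.println(options.containsFeedDevices("a:0"));             // true
    System.out.println(options.getFeedDevicesOrDefault("b:0", "host")); // "host"
    System.out.println(options.getFetchDevicesOrThrow("y:0"));          // the GPU device name
    System.out.println(options.getFeedDevicesCount());                  // 1
  }
}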
public abstract boolean containsFetchDevices(String key)
map<string, string> fetch_devices = 7;
public abstract String getFeed(int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public abstract com.google.protobuf.ByteString getFeedBytes(int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public abstract int getFeedCount()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public abstract int getFeedDevicesCount()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract Map<String, String> getFeedDevicesMap()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract String getFeedDevicesOrDefault(String key, String defaultValue)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract String getFeedDevicesOrThrow(String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.
The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:
CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}
means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.
and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.
FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public abstract List<String> getFeedList()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
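A minimal sketch of the repeated-string accessors, under the same package assumption as the sketch above; the fetch and target fields documented below follow the identical pattern:

import org.tensorflow.framework.CallableOptions;

public class FeedListExample {
  public static void main(String[] args) {
    CallableOptions options = CallableOptions.newBuilder()
        .addFeed("input:0")          // hypothetical tensor names, for illustration only
        .addFeed("dropout_rate:0")
        .build();

    // Index-based and list-based reads are equivalent views of the same field.
    for (int i = 0; i < options.getFeedCount(); i++) {
      System.out.println(options.getFeed(i));
    }
    System.out.println(options.getFeedList());   // [input:0, dropout_rate:0]
    System.out.println(options.getFeedBytes(0)); // UTF-8 ByteString view of "input:0"
  }
}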
public abstract String getFetch(int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract com.google.protobuf.ByteString getFetchBytes(int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract int getFetchCount()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract int getFetchDevicesCount()
map<string, string> fetch_devices = 7;
public abstract Map<String, String> getFetchDevicesMap()
map<string, string> fetch_devices = 7;
public abstract String getFetchDevicesOrDefault(String key, String defaultValue)
map<string, string> fetch_devices = 7;
public abstract String getFetchDevicesOrThrow(String key)
map<string, string> fetch_devices = 7;
public abstract List<String> getFetchList()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public abstract boolean getFetchSkipSync()
By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit. If this option is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking `Device::Sync()` on the underlying device(s), or by feeding the tensors back to the same Session using `feed_devices` with the same corresponding device name.
bool fetch_skip_sync = 8;
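A hedged sketch of setting this flag when building the options, under the same package assumption as above (the device name is illustrative; the flag only matters for tensors fetched on a GPU device):

import org.tensorflow.framework.CallableOptions;

public class FetchSkipSyncExample {
  public static void main(String[] args) {
    CallableOptions options = CallableOptions.newBuilder()
        .addFetch("y:0")
        .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        // Skip the implicit GPU-stream synchronization before returning "y:0";
        // the caller then has to make sure the value has been produced (for
        // example by synchronizing the device) before reading it.
        .setFetchSkipSync(true)
        .build();

    System.out.println(options.getFetchSkipSync()); // true
  }
}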
public abstract RunOptions getRunOptions()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public abstract RunOptionsOrBuilder getRunOptionsOrBuilder()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
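A small sketch of setting and checking the run_options submessage, under the same package assumption as above (trace_level and timeout_in_ms are fields of the RunOptions proto):

import org.tensorflow.framework.CallableOptions;
import org.tensorflow.framework.RunOptions;

public class RunOptionsExample {
  public static void main(String[] args) {
    CallableOptions bare = CallableOptions.getDefaultInstance();
    System.out.println(bare.hasRunOptions()); // false: the submessage was never set

    CallableOptions withOptions = CallableOptions.newBuilder()
        .setRunOptions(RunOptions.newBuilder()
            .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)
            .setTimeoutInMs(30_000))
        .build();

    System.out.println(withOptions.hasRunOptions());                  // true
    System.out.println(withOptions.getRunOptions().getTimeoutInMs()); // 30000
  }
}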
public abstract String getTarget(int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract com.google.protobuf.ByteString getTargetBytes(int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract int getTargetCount()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract List<String> getTargetList()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public abstract TensorConnection getTensorConnection(int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract int getTensorConnectionCount()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract List<TensorConnection> getTensorConnectionList()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder(int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
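A sketch of populating and reading the repeated tensor_connection field, under the same package assumption as above; the tensor names are illustrative:

import org.tensorflow.framework.CallableOptions;
import org.tensorflow.framework.TensorConnection;
import org.tensorflow.framework.TensorConnectionOrBuilder;

public class TensorConnectionExample {
  public static void main(String[] args) {
    CallableOptions options = CallableOptions.newBuilder()
        // Create an edge inside the callable so that the value produced at
        // "logits:0" is routed to where "placeholder:0" would normally be read.
        .addTensorConnection(TensorConnection.newBuilder()
            .setFromTensor("logits:0")
            .setToTensor("placeholder:0"))
        .build();

    System.out.println(options.getTensorConnectionCount()); // 1
    for (TensorConnectionOrBuilder tc : options.getTensorConnectionOrBuilderList()) {
      System.out.println(tc.getFromTensor() + " -> " + tc.getToTensor());
    }
  }
}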
public abstract boolean hasRunOptions()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;