public interface ConfigProtoOrBuilder
Known Indirect Subclasses |
Public Methods
abstract boolean | containsDeviceCount (String key) Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. |
abstract boolean | getAllowSoftPlacement () Whether soft placement is allowed. |
abstract ClusterDef | getClusterDef () Optional list of all workers to use in this session. |
abstract ClusterDefOrBuilder | getClusterDefOrBuilder () Optional list of all workers to use in this session. |
abstract Map<String, Integer> | getDeviceCount () Use getDeviceCountMap() instead. |
abstract int | getDeviceCountCount () Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. |
abstract Map<String, Integer> | getDeviceCountMap () Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. |
abstract int | getDeviceCountOrDefault (String key, int defaultValue) Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. |
abstract int | getDeviceCountOrThrow (String key) Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. |
abstract String | getDeviceFilters (int index) When any filters are present sessions will ignore all devices which do not match the filters. |
abstract com.google.protobuf.ByteString | getDeviceFiltersBytes (int index) When any filters are present sessions will ignore all devices which do not match the filters. |
abstract int | getDeviceFiltersCount () When any filters are present sessions will ignore all devices which do not match the filters. |
abstract List<String> | getDeviceFiltersList () When any filters are present sessions will ignore all devices which do not match the filters. |
abstract ConfigProto.Experimental | getExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
abstract ConfigProto.ExperimentalOrBuilder | getExperimentalOrBuilder () .tensorflow.ConfigProto.Experimental experimental = 16; |
abstract GPUOptions | getGpuOptions () Options that apply to all GPUs. |
abstract GPUOptionsOrBuilder | getGpuOptionsOrBuilder () Options that apply to all GPUs. |
abstract GraphOptions | getGraphOptions () Options that apply to all graphs. |
abstract GraphOptionsOrBuilder | getGraphOptionsOrBuilder () Options that apply to all graphs. |
abstract int | getInterOpParallelismThreads () Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. |
abstract int | getIntraOpParallelismThreads () The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. |
abstract boolean | getIsolateSessionState () If true, any resources such as Variables used in the session will not be shared with other sessions. |
abstract boolean | getLogDevicePlacement () Whether device placements should be logged. |
abstract long | getOperationTimeoutInMs () Global timeout for all blocking operations in this session. |
abstract int | getPlacementPeriod () Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically). |
abstract RPCOptions | getRpcOptions () Options that apply when this session uses the distributed runtime. |
abstract RPCOptionsOrBuilder | getRpcOptionsOrBuilder () Options that apply when this session uses the distributed runtime. |
abstract ThreadPoolOptionProto | getSessionInterOpThreadPool (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
abstract int | getSessionInterOpThreadPoolCount () This option is experimental - it may be replaced with a different mechanism in the future. |
abstract List<ThreadPoolOptionProto> | getSessionInterOpThreadPoolList () This option is experimental - it may be replaced with a different mechanism in the future. |
abstract ThreadPoolOptionProtoOrBuilder | getSessionInterOpThreadPoolOrBuilder (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
abstract List<? extends ThreadPoolOptionProtoOrBuilder> | getSessionInterOpThreadPoolOrBuilderList () This option is experimental - it may be replaced with a different mechanism in the future. |
abstract boolean | getShareClusterDevicesInSession () When true, WorkerSessions are created with device attributes from the full cluster. |
abstract boolean | getUsePerSessionThreads () If true, use a new set of threads for this session rather than the global pool of threads. |
abstract boolean | hasClusterDef () Optional list of all workers to use in this session. |
abstract boolean | hasExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
abstract boolean | hasGpuOptions () Options that apply to all GPUs. |
abstract boolean | hasGraphOptions () Options that apply to all graphs. |
abstract boolean | hasRpcOptions () Options that apply when this session uses the distributed runtime. |
Public Methods
public abstract boolean containsDeviceCount (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public abstract boolean getAllowSoftPlacement ()
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if 1. there's no GPU implementation for the OP or 2. no GPU devices are known or registered or 3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
public abstract ClusterDef getClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public abstract ClusterDefOrBuilder getClusterDefOrBuilder ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public abstract Map<String, Integer> getDeviceCount ()
Use getDeviceCountMap() instead.
public abstract int getDeviceCountCount ()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public abstract Map<String, Integer> getDeviceCountMap ()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public abstract int getDeviceCountOrDefault (String key, int defaultValue)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public abstract int getDeviceCountOrThrow (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
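The device_count accessors above follow the standard protobuf-Java pattern for map fields, whose semantics mirror `java.util.Map`. The self-contained sketch below illustrates those semantics with a plain `HashMap`; the `DeviceCounts` class and its sample values are hypothetical stand-ins, not part of the generated API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in illustrating the protobuf map-accessor semantics
// of device_count; the real methods live on the generated ConfigProto.
public class DeviceCounts {
    private final Map<String, Integer> deviceCount = new HashMap<>();

    public DeviceCounts() {
        deviceCount.put("GPU", 1); // use at most one GPU device
    }

    // containsDeviceCount(key): membership test for the map.
    public boolean containsDeviceCount(String key) {
        return deviceCount.containsKey(key);
    }

    // getDeviceCountOrDefault(key, default): fall back when the key is absent,
    // matching "the system picks an appropriate number" for unlisted types.
    public int getDeviceCountOrDefault(String key, int defaultValue) {
        return deviceCount.getOrDefault(key, defaultValue);
    }

    // getDeviceCountOrThrow(key): fail loudly on a missing key.
    public int getDeviceCountOrThrow(String key) {
        Integer v = deviceCount.get(key);
        if (v == null) throw new IllegalArgumentException("no entry for " + key);
        return v;
    }

    public static void main(String[] args) {
        DeviceCounts c = new DeviceCounts();
        System.out.println(c.containsDeviceCount("GPU"));        // true
        System.out.println(c.getDeviceCountOrDefault("CPU", 4)); // 4 (absent)
        System.out.println(c.getDeviceCountOrThrow("GPU"));      // 1
    }
}
```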
public abstract String getDeviceFilters (int index)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public abstract com.google.protobuf.ByteString getDeviceFiltersBytes (int index)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public abstract int getDeviceFiltersCount ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public abstract List<String> getDeviceFiltersList ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
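The "partially specified" filters above can be pictured as prefix-style matching on full device names. The sketch below is a deliberate simplification (the `matchesAnyFilter` helper is hypothetical; real TensorFlow matches device-spec components rather than raw string prefixes), but it conveys how a session keeps only devices matching at least one filter.

```java
import java.util.List;

// Simplified illustration of device_filters: a session keeps only devices
// whose name matches at least one (possibly partial) filter.
// NOTE: real matching is component-wise on the parsed device spec; the
// string-prefix check here is an illustrative assumption.
public class DeviceFilterDemo {
    static boolean matchesAnyFilter(String deviceName, List<String> filters) {
        if (filters.isEmpty()) return true; // no filters: keep every device
        return filters.stream().anyMatch(deviceName::startsWith);
    }

    public static void main(String[] args) {
        List<String> filters = List.of("/job:ps", "/job:worker/replica:3");
        // Kept: matches the "/job:ps" filter.
        System.out.println(matchesAnyFilter(
                "/job:ps/replica:0/task:0/device:CPU:0", filters));
        // Ignored: replica:1 matches neither filter.
        System.out.println(matchesAnyFilter(
                "/job:worker/replica:1/task:0/device:GPU:0", filters));
    }
}
```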
public abstract ConfigProto.Experimental getExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public abstract ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public abstract GPUOptions getGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public abstract GPUOptionsOrBuilder getGpuOptionsOrBuilder ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public abstract GraphOptions getGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public abstract GraphOptionsOrBuilder getGraphOptionsOrBuilder ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public abstract int getInterOpParallelismThreads ()
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in caller's thread. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
public abstract int getIntraOpParallelismThreads ()
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number. If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool. There are notable exceptions to the default behavior described above: 1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL. 2. When connecting to a server, such as a remote `tf.train.Server` instance, then this option will be ignored altogether.
int32 intra_op_parallelism_threads = 2;
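The 0 / positive / negative conventions shared by these two fields can be sketched as a small resolution helper. This is a hypothetical illustration: TensorFlow's real "appropriate number" heuristic is more involved than `availableProcessors()`, and the `RUN_IN_CALLER` sentinel is an invention of this sketch.

```java
// Hypothetical helper showing how the thread-count conventions read:
//  > 0  -> use exactly that many pool threads,
//  == 0 -> let the system pick (here: available cores, an assumption),
//  < 0  -> no pool at all; operations run in the caller's thread.
public class ThreadCountDemo {
    static final int RUN_IN_CALLER = 0; // sentinel: zero pool threads

    static int effectiveThreads(int configured) {
        if (configured > 0) return configured;          // explicit setting wins
        if (configured == 0)
            return Runtime.getRuntime().availableProcessors(); // system-picked
        return RUN_IN_CALLER;                           // negative: caller's thread
    }

    public static void main(String[] args) {
        System.out.println(effectiveThreads(8));  // 8
        System.out.println(effectiveThreads(0));  // core count of this machine
        System.out.println(effectiveThreads(-1)); // 0: run in caller's thread
    }
}
```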
public abstract boolean getIsolateSessionState ()
If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.
bool isolate_session_state = 15;
public abstract boolean getLogDevicePlacement ()
Whether device placements should be logged.
bool log_device_placement = 8;
public abstract long getOperationTimeoutInMs ()
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
int64 operation_timeout_in_ms = 11;
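The deadline semantics described above can be illustrated with a plain `ExecutorService`: a blocking call that outlives the deadline is abandoned with a `TimeoutException`. The 50 ms / 500 ms values below are arbitrary choices for the sketch, not defaults of the option.

```java
import java.util.concurrent.*;

// Illustrates operation_timeout_in_ms-style deadline behavior: wait at most
// deadlineMs for a blocking operation, and report whether it timed out.
public class TimeoutDemo {
    static boolean timedOut(long workMs, long deadlineMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> op = pool.submit(() -> {
                Thread.sleep(workMs); // stand-in for a blocking operation
                return "done";
            });
            op.get(deadlineMs, TimeUnit.MILLISECONDS);
            return false; // finished within the deadline
        } catch (TimeoutException e) {
            return true;  // deadline hit: operation abandoned
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(timedOut(500, 50)); // true: work outlives the deadline
        System.out.println(timedOut(0, 500));  // false: finishes in time
    }
}
```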
public abstract int getPlacementPeriod ()
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
int32 placement_period = 3;
public abstract RPCOptions getRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public abstract RPCOptionsOrBuilder getRpcOptionsOrBuilder ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public abstract ThreadPoolOptionProto getSessionInterOpThreadPool (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
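The large-pool/small-pool idea described above (a small pool capping the inter-op parallelism of low-priority work, without touching the large compute pool) can be demonstrated with plain `java.util.concurrent` pools. Pool sizes and task counts here are arbitrary illustrative choices; this sketch only models the capping behavior, not TensorFlow's actual pool selection via `RunOptions`.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the session_inter_op_thread_pool idea: route low-priority work
// to a small pool so its parallelism is bounded independently of the large
// pool used for regular compute.
public class PoolsDemo {
    // Runs `tasks` short jobs on a pool of `poolSize` threads and returns
    // the peak number of jobs observed running at once.
    static int peakConcurrency(int poolSize, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                int now = running.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try { Thread.sleep(50); } catch (InterruptedException ignored) {}
                running.decrementAndGet();
                done.countDown();
            });
        }
        try { done.await(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return peak.get();
    }

    public static void main(String[] args) {
        // Large pool for regular compute; small pool serializes low-priority work.
        System.out.println("large pool peak: " + peakConcurrency(4, 8));
        System.out.println("small pool peak: " + peakConcurrency(1, 8));
    }
}
```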
public abstract int getSessionInterOpThreadPoolCount ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public abstract List<ThreadPoolOptionProto> getSessionInterOpThreadPoolList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public abstract ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public abstract List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public abstract boolean getShareClusterDevicesInSession ()
When true, WorkerSessions are created with device attributes from the full cluster. This is helpful when a worker wants to partition a graph (for example during a PartitionedCallOp).
bool share_cluster_devices_in_session = 17;
public abstract boolean getUsePerSessionThreads ()
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
bool use_per_session_threads = 9;
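The deprecation note above amounts to a config rewrite. In proto text format, the two fragments below would be equivalent (a sketch assuming all other fields keep their defaults; the value 4 is an arbitrary example):

```
# Deprecated form:
use_per_session_threads: true
inter_op_parallelism_threads: 4

# Preferred equivalent: one per-session pool whose num_threads
# equals inter_op_parallelism_threads.
session_inter_op_thread_pool {
  num_threads: 4
}
```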
public abstract boolean hasClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public abstract boolean hasExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public abstract boolean hasGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public abstract boolean hasGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public abstract boolean hasRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;