public final class ConfigProto
Session configuration parameters. The system picks appropriate values for fields that are not set.
tensorflow.ConfigProto
Nested Classes
class | ConfigProto.Builder | Session configuration parameters. | |
class | ConfigProto.Experimental | Everything inside Experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. | |
interface | ConfigProto.ExperimentalOrBuilder |
Constants
int | ALLOW_SOFT_PLACEMENT_FIELD_NUMBER | |
int | CLUSTER_DEF_FIELD_NUMBER | |
int | DEVICE_COUNT_FIELD_NUMBER | |
int | DEVICE_FILTERS_FIELD_NUMBER | |
int | EXPERIMENTAL_FIELD_NUMBER | |
int | GPU_OPTIONS_FIELD_NUMBER | |
int | GRAPH_OPTIONS_FIELD_NUMBER | |
int | INTER_OP_PARALLELISM_THREADS_FIELD_NUMBER | |
int | INTRA_OP_PARALLELISM_THREADS_FIELD_NUMBER | |
int | ISOLATE_SESSION_STATE_FIELD_NUMBER | |
int | LOG_DEVICE_PLACEMENT_FIELD_NUMBER | |
int | OPERATION_TIMEOUT_IN_MS_FIELD_NUMBER | |
int | PLACEMENT_PERIOD_FIELD_NUMBER | |
int | RPC_OPTIONS_FIELD_NUMBER | |
int | SESSION_INTER_OP_THREAD_POOL_FIELD_NUMBER | |
int | SHARE_CLUSTER_DEVICES_IN_SESSION_FIELD_NUMBER | |
int | USE_PER_SESSION_THREADS_FIELD_NUMBER |
Public Methods
boolean | containsDeviceCount (String key) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
boolean | equals (Object obj) |
boolean | getAllowSoftPlacement () Whether soft placement is allowed. |
ClusterDef | getClusterDef () Optional list of all workers to use in this session. |
ClusterDefOrBuilder | getClusterDefOrBuilder () Optional list of all workers to use in this session. |
static ConfigProto | getDefaultInstance () |
ConfigProto | getDefaultInstanceForType () |
static final com.google.protobuf.Descriptors.Descriptor | getDescriptor () |
Map<String, Integer> | getDeviceCount () Use getDeviceCountMap() instead. |
int | getDeviceCountCount () Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
Map<String, Integer> | getDeviceCountMap () Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
int | getDeviceCountOrDefault (String key, int defaultValue) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
int | getDeviceCountOrThrow (String key) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
String | getDeviceFilters (int index) When any filters are present sessions will ignore all devices which do not match the filters. |
com.google.protobuf.ByteString | getDeviceFiltersBytes (int index) When any filters are present sessions will ignore all devices which do not match the filters. |
int | getDeviceFiltersCount () When any filters are present sessions will ignore all devices which do not match the filters. |
com.google.protobuf.ProtocolStringList | getDeviceFiltersList () When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Experimental | getExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.ExperimentalOrBuilder | getExperimentalOrBuilder () .tensorflow.ConfigProto.Experimental experimental = 16; |
GPUOptions | getGpuOptions () Options that apply to all GPUs. |
GPUOptionsOrBuilder | getGpuOptionsOrBuilder () Options that apply to all GPUs. |
GraphOptions | getGraphOptions () Options that apply to all graphs. |
GraphOptionsOrBuilder | getGraphOptionsOrBuilder () Options that apply to all graphs. |
int | getInterOpParallelismThreads () Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. |
int | getIntraOpParallelismThreads () The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. |
boolean | getIsolateSessionState () If true, any resources such as Variables used in the session will not be shared with other sessions. |
boolean | getLogDevicePlacement () Whether device placements should be logged. |
long | getOperationTimeoutInMs () Global timeout for all blocking operations in this session. |
int | getPlacementPeriod () Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically). |
RPCOptions | getRpcOptions () Options that apply when this session uses the distributed runtime. |
RPCOptionsOrBuilder | getRpcOptionsOrBuilder () Options that apply when this session uses the distributed runtime. |
int | getSerializedSize () |
ThreadPoolOptionProto | getSessionInterOpThreadPool (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
int | getSessionInterOpThreadPoolCount () This option is experimental - it may be replaced with a different mechanism in the future. |
List<ThreadPoolOptionProto> | getSessionInterOpThreadPoolList () This option is experimental - it may be replaced with a different mechanism in the future. |
ThreadPoolOptionProtoOrBuilder | getSessionInterOpThreadPoolOrBuilder (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
List<? extends ThreadPoolOptionProtoOrBuilder> | getSessionInterOpThreadPoolOrBuilderList () This option is experimental - it may be replaced with a different mechanism in the future. |
boolean | getShareClusterDevicesInSession () When true, WorkerSessions are created with device attributes from the full cluster. |
final com.google.protobuf.UnknownFieldSet | getUnknownFields () |
boolean | getUsePerSessionThreads () If true, use a new set of threads for this session rather than the global pool of threads. |
boolean | hasClusterDef () Optional list of all workers to use in this session. |
boolean | hasExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
boolean | hasGpuOptions () Options that apply to all GPUs. |
boolean | hasGraphOptions () Options that apply to all graphs. |
boolean | hasRpcOptions () Options that apply when this session uses the distributed runtime. |
int | hashCode () |
final boolean | isInitialized () |
static ConfigProto.Builder | newBuilder () |
static ConfigProto.Builder | newBuilder (ConfigProto prototype) |
ConfigProto.Builder | newBuilderForType () |
static ConfigProto | parseDelimitedFrom (InputStream input) |
static ConfigProto | parseDelimitedFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static ConfigProto | parseFrom (ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static ConfigProto | parseFrom (com.google.protobuf.CodedInputStream input) |
static ConfigProto | parseFrom (byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static ConfigProto | parseFrom (ByteBuffer data) |
static ConfigProto | parseFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static ConfigProto | parseFrom (com.google.protobuf.ByteString data) |
static ConfigProto | parseFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static ConfigProto | parseFrom (com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static com.google.protobuf.Parser<ConfigProto> | parser () |
ConfigProto.Builder | toBuilder () |
void | writeTo (com.google.protobuf.CodedOutputStream output) |
Inherited Methods
Constants
public static final int ALLOW_SOFT_PLACEMENT_FIELD_NUMBER
Constant Value: 7
public static final int CLUSTER_DEF_FIELD_NUMBER
Constant Value: 14
public static final int DEVICE_COUNT_FIELD_NUMBER
Constant Value: 1
public static final int DEVICE_FILTERS_FIELD_NUMBER
Constant Value: 4
public static final int EXPERIMENTAL_FIELD_NUMBER
Constant Value: 16
public static final int GPU_OPTIONS_FIELD_NUMBER
Constant Value: 6
public static final int GRAPH_OPTIONS_FIELD_NUMBER
Constant Value: 10
public static final int INTER_OP_PARALLELISM_THREADS_FIELD_NUMBER
Constant Value: 5
public static final int INTRA_OP_PARALLELISM_THREADS_FIELD_NUMBER
Constant Value: 2
public static final int ISOLATE_SESSION_STATE_FIELD_NUMBER
Constant Value: 15
public static final int LOG_DEVICE_PLACEMENT_FIELD_NUMBER
Constant Value: 8
public static final int OPERATION_TIMEOUT_IN_MS_FIELD_NUMBER
Constant Value: 11
public static final int PLACEMENT_PERIOD_FIELD_NUMBER
Constant Value: 3
public static final int RPC_OPTIONS_FIELD_NUMBER
Constant Value: 13
public static final int SESSION_INTER_OP_THREAD_POOL_FIELD_NUMBER
Constant Value: 12
public static final int SHARE_CLUSTER_DEVICES_IN_SESSION_FIELD_NUMBER
Constant Value: 17
public static final int USE_PER_SESSION_THREADS_FIELD_NUMBER
Constant Value: 9
Public Methods
public boolean containsDeviceCount (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
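As a usage sketch, the map accessors above pair with the `putDeviceCount` setter generated on `ConfigProto.Builder` (this assumes the generated `org.tensorflow.framework.ConfigProto` class is on the classpath; the package name can differ across TensorFlow Java releases):

```java
import org.tensorflow.framework.ConfigProto;

public class DeviceCountSketch {
    public static void main(String[] args) {
        // Cap the session at 4 CPU devices and 1 GPU device.
        ConfigProto config = ConfigProto.newBuilder()
                .putDeviceCount("CPU", 4)
                .putDeviceCount("GPU", 1)
                .build();

        // Read back through the generated map accessors.
        boolean hasGpu = config.containsDeviceCount("GPU");   // true
        int gpus = config.getDeviceCountOrDefault("GPU", 0);  // 1
        int tpus = config.getDeviceCountOrDefault("TPU", 0);  // 0: unset, so the default applies
        System.out.println(hasGpu + " " + gpus + " " + tpus);
    }
}
```

Device types absent from the map are left for the system to size, per the field comment above.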
public boolean equals (Object obj)
public boolean getAllowSoftPlacement ()
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if 1. there's no GPU implementation for the OP or 2. no GPU devices are known or registered or 3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
public ClusterDef getClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ClusterDefOrBuilder getClusterDefOrBuilder ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()
public int getDeviceCountCount ()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public Map<String, Integer> getDeviceCountMap ()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public int getDeviceCountOrDefault (String key, int defaultValue)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public int getDeviceCountOrThrow (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public String getDeviceFilters (int index)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public com.google.protobuf.ByteString getDeviceFiltersBytes (int index)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public int getDeviceFiltersCount ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public com.google.protobuf.ProtocolStringList getDeviceFiltersList ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
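The repeated-field accessors above can be exercised as in this sketch (`addDeviceFilters` is the corresponding setter on `ConfigProto.Builder`; the `org.tensorflow.framework` package is an assumption):

```java
import org.tensorflow.framework.ConfigProto;

public class DeviceFiltersSketch {
    public static void main(String[] args) {
        // Restrict the session to parameter-server devices and one worker replica,
        // using the partially specified filter forms shown in the field comment.
        ConfigProto config = ConfigProto.newBuilder()
                .addDeviceFilters("/job:ps")
                .addDeviceFilters("/job:worker/replica:3")
                .build();

        System.out.println(config.getDeviceFiltersCount()); // 2
        System.out.println(config.getDeviceFilters(0));     // "/job:ps"
        System.out.println(config.getDeviceFiltersList());  // the whole filter list
    }
}
```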
public ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public GPUOptions getGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GPUOptionsOrBuilder getGpuOptionsOrBuilder ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GraphOptions getGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public GraphOptionsOrBuilder getGraphOptionsOrBuilder ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public int getInterOpParallelismThreads ()
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in caller's thread. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
public int getIntraOpParallelismThreads ()
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number. If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool. There are notable exceptions to the default behavior describe above: 1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL. 2. When connecting to a server, such as a remote `tf.train.Server` instance, then this option will be ignored altogether.
int32 intra_op_parallelism_threads = 2;
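A minimal threading configuration combining the two fields might look like the following sketch (the setters live on `ConfigProto.Builder`; the thread counts are illustrative, not recommendations):

```java
import org.tensorflow.framework.ConfigProto;

public class ThreadingSketch {
    public static void main(String[] args) {
        ConfigProto config = ConfigProto.newBuilder()
                // Pool shared by independent (potentially blocking) ops.
                .setInterOpParallelismThreads(2)
                // Pool used to parallelize work inside a single op.
                .setIntraOpParallelismThreads(4)
                .build();

        System.out.println(config.getInterOpParallelismThreads()); // 2
        System.out.println(config.getIntraOpParallelismThreads()); // 4
    }
}
```

Note the caveat above: in an ordinary process, the first session created fixes these pool sizes for all later sessions.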
public boolean getIsolateSessionState ()
If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.
bool isolate_session_state = 15;
public boolean getLogDevicePlacement ()
Whether device placements should be logged.
bool log_device_placement = 8;
public long getOperationTimeoutInMs ()
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
int64 operation_timeout_in_ms = 11;
public com.google.protobuf.Parser<ConfigProto> getParserForType ()
public int getPlacementPeriod ()
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
int32 placement_period = 3;
public RPCOptions getRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public RPCOptionsOrBuilder getRpcOptionsOrBuilder ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public int getSerializedSize ()
public ThreadPoolOptionProto getSessionInterOpThreadPool (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public int getSessionInterOpThreadPoolCount ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<ThreadPoolOptionProto> getSessionInterOpThreadPoolList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public boolean getShareClusterDevicesInSession ()
When true, WorkerSessions are created with device attributes from the full cluster. This is helpful when a worker wants to partition a graph (for example during a PartitionedCallOp).
bool share_cluster_devices_in_session = 17;
public final com.google.protobuf.UnknownFieldSet getUnknownFields ()
public boolean getUsePerSessionThreads ()
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
bool use_per_session_threads = 9;
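The deprecation note above says the same effect is achieved with a single-element session_inter_op_thread_pool. A hedged sketch of that replacement (assuming `ThreadPoolOptionProto` lives in the same generated `org.tensorflow.framework` package):

```java
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.ThreadPoolOptionProto;

public class PerSessionThreadsSketch {
    public static void main(String[] args) {
        int interOpThreads = 8; // whatever inter_op_parallelism_threads would have been

        // Instead of setting use_per_session_threads = true,
        // configure one explicit per-session pool.
        ConfigProto config = ConfigProto.newBuilder()
                .addSessionInterOpThreadPool(
                        ThreadPoolOptionProto.newBuilder()
                                .setNumThreads(interOpThreads)
                                .build())
                .build();

        System.out.println(config.getSessionInterOpThreadPoolCount());             // 1
        System.out.println(config.getSessionInterOpThreadPool(0).getNumThreads()); // 8
    }
}
```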
public boolean hasClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public boolean hasExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public boolean hasGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public boolean hasGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public boolean hasRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public int hashCode ()
public final boolean isInitialized ()
public static ConfigProto parseDelimitedFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException |
---|
public static ConfigProto parseFrom (ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException |
---|
public static ConfigProto parseFrom (byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException |
---|
public static ConfigProto parseFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException |
---|
public static ConfigProto parseFrom (com.google.protobuf.ByteString data)
Throws
InvalidProtocolBufferException |
---|
public static ConfigProto parseFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException |
---|
public static ConfigProto parseFrom (com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException |
---|
public static com.google.protobuf.Parser<ConfigProto> parser ()
public void writeTo (com.google.protobuf.CodedOutputStream output)
Throws
IOException |
---|
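A common pattern is to build a ConfigProto, serialize it, and hand the bytes to a session. The serialize/parse pair on this page round-trips as in this sketch (passing the bytes to a session constructor, e.g. the TF 1.x Java client's `new Session(graph, config.toByteArray())`, is an assumption about the consuming API):

```java
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;
import org.tensorflow.framework.ConfigProto;

public class RoundTripSketch {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        ConfigProto config = ConfigProto.newBuilder()
                .setAllowSoftPlacement(true)
                .setLogDevicePlacement(true)
                .build();

        // Serialize (toByteString is inherited from the protobuf message base class)...
        ByteString bytes = config.toByteString();

        // ...and parse back with the static parseFrom overload documented above.
        ConfigProto restored = ConfigProto.parseFrom(bytes);
        System.out.println(restored.getAllowSoftPlacement());  // true
        System.out.println(restored.getLogDevicePlacement());  // true
    }
}
```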