ConfigProto.Builder

public static final class ConfigProto.Builder

 Session configuration parameters.
 The system picks appropriate values for fields that are not set.
 
Protobuf type tensorflow.ConfigProto
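
The message is normally assembled through this builder and then passed to a session at creation time. A minimal sketch of typical usage follows; the import package is an assumption (generated TensorFlow protobuf classes live in different packages depending on the TensorFlow Java artifact, e.g. org.tensorflow.framework in older releases), so adjust it to your build.

// Minimal sketch: assembling a ConfigProto with this builder.
// Assumption: the generated classes are in org.tensorflow.framework.
import org.tensorflow.framework.ConfigProto;

public class ConfigProtoBuilderSketch {
  public static void main(String[] args) {
    ConfigProto config = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)      // bool allow_soft_placement = 7
        .setLogDevicePlacement(false)     // bool log_device_placement = 8
        .setInterOpParallelismThreads(0)  // 0 lets the system pick an appropriate number
        .build();
    System.out.println(config);          // prints the message in proto text format
  }
}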

Public Methods

ConfigProto.Builder
addAllDeviceFilters (Iterable<String> values)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
ConfigProto.Builder
addAllSessionInterOpThreadPool (Iterable<? extends ThreadPoolOptionProto > values)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
addDeviceFilters (String value)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
ConfigProto.Builder
addDeviceFiltersBytes (com.google.protobuf.ByteString value)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
ConfigProto.Builder
addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
ConfigProto.Builder
addSessionInterOpThreadPool ( ThreadPoolOptionProto value)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
addSessionInterOpThreadPool (int index, ThreadPoolOptionProto value)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
addSessionInterOpThreadPool ( ThreadPoolOptionProto.Builder builderForValue)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
addSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ThreadPoolOptionProto.Builder
addSessionInterOpThreadPoolBuilder (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ThreadPoolOptionProto.Builder
addSessionInterOpThreadPoolBuilder ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto
build ()
ConfigProto
buildPartial ()
ConfigProto.Builder
clear ()
ConfigProto.Builder
clearAllowSoftPlacement ()
 Whether soft placement is allowed.
ConfigProto.Builder
clearClusterDef ()
 Optional list of all workers to use in this session.
ConfigProto.Builder
clearDeviceCount ()
ConfigProto.Builder
clearDeviceFilters ()
 When any filters are present sessions will ignore all devices which do not
 match the filters.
ConfigProto.Builder
clearExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.Builder
clearField (com.google.protobuf.Descriptors.FieldDescriptor field)
ConfigProto.Builder
clearGpuOptions ()
 Options that apply to all GPUs.
ConfigProto.Builder
clearGraphOptions ()
 Options that apply to all graphs.
ConfigProto.Builder
clearInterOpParallelismThreads ()
 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
ConfigProto.Builder
clearIntraOpParallelismThreads ()
 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
ConfigProto.Builder
clearIsolateSessionState ()
 If true, any resources such as Variables used in the session will not be
 shared with other sessions.
ConfigProto.Builder
clearLogDevicePlacement ()
 Whether device placements should be logged.
ConfigProto.Builder
clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof)
ConfigProto.Builder
clearOperationTimeoutInMs ()
 Global timeout for all blocking operations in this session.
ConfigProto.Builder
clearPlacementPeriod ()
 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
ConfigProto.Builder
clearRpcOptions ()
 Options that apply when this session uses the distributed runtime.
ConfigProto.Builder
clearSessionInterOpThreadPool ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
clearShareClusterDevicesInSession ()
 When true, WorkerSessions are created with device attributes from the
 full cluster.
ConfigProto.Builder
clearUsePerSessionThreads ()
 If true, use a new set of threads for this session rather than the global
 pool of threads.
ConfigProto.Builder
clone ()
boolean
containsDeviceCount (String key)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
boolean
getAllowSoftPlacement ()
 Whether soft placement is allowed.
ClusterDef
getClusterDef ()
 Optional list of all workers to use in this session.
ClusterDef.Builder
getClusterDefBuilder ()
 Optional list of all workers to use in this session.
ClusterDefOrBuilder
getClusterDefOrBuilder ()
 Optional list of all workers to use in this session.
ConfigProto
getDefaultInstanceForType ()
static final com.google.protobuf.Descriptors.Descriptor
getDescriptor ()
com.google.protobuf.Descriptors.Descriptor
getDescriptorForType ()
Map<String, Integer>
getDeviceCount ()
Use getDeviceCountMap() instead.
int
getDeviceCountCount ()
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
Map<String, Integer>
getDeviceCountMap ()
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
int
getDeviceCountOrDefault (String key, int defaultValue)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
int
getDeviceCountOrThrow (String key)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
String
getDeviceFilters (int index)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
com.google.protobuf.ByteString
getDeviceFiltersBytes (int index)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
int
getDeviceFiltersCount ()
 When any filters are present sessions will ignore all devices which do not
 match the filters.
com.google.protobuf.ProtocolStringList
getDeviceFiltersList ()
 When any filters are present sessions will ignore all devices which do not
 match the filters.
ConfigProto.Experimental
getExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.Experimental.Builder
getExperimentalBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.ExperimentalOrBuilder
getExperimentalOrBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
GPUOptions
getGpuOptions ()
 Options that apply to all GPUs.
GPUOptions.Builder
getGpuOptionsBuilder ()
 Options that apply to all GPUs.
GPUOptionsOrBuilder
getGpuOptionsOrBuilder ()
 Options that apply to all GPUs.
GraphOptions
getGraphOptions ()
 Options that apply to all graphs.
GraphOptions.Builder
getGraphOptionsBuilder ()
 Options that apply to all graphs.
GraphOptionsOrBuilder
getGraphOptionsOrBuilder ()
 Options that apply to all graphs.
int
getInterOpParallelismThreads ()
 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
int
getIntraOpParallelismThreads ()
 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
boolean
getIsolateSessionState ()
 If true, any resources such as Variables used in the session will not be
 shared with other sessions.
boolean
getLogDevicePlacement ()
 Whether device placements should be logged.
Map<String, Integer>
getMutableDeviceCount ()
Use alternate mutation accessors instead.
long
getOperationTimeoutInMs ()
 Global timeout for all blocking operations in this session.
int
getPlacementPeriod ()
 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
RPCOptions
getRpcOptions ()
 Options that apply when this session uses the distributed runtime.
RPCOptions.Builder
getRpcOptionsBuilder ()
 Options that apply when this session uses the distributed runtime.
RPCOptionsOrBuilder
getRpcOptionsOrBuilder ()
 Options that apply when this session uses the distributed runtime.
ThreadPoolOptionProto
getSessionInterOpThreadPool (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ThreadPoolOptionProto.Builder
getSessionInterOpThreadPoolBuilder (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
List< ThreadPoolOptionProto.Builder >
getSessionInterOpThreadPoolBuilderList ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
int
getSessionInterOpThreadPoolCount ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
List< ThreadPoolOptionProto >
getSessionInterOpThreadPoolList ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ThreadPoolOptionProtoOrBuilder
getSessionInterOpThreadPoolOrBuilder (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
List<? extends ThreadPoolOptionProtoOrBuilder >
getSessionInterOpThreadPoolOrBuilderList ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
boolean
getShareClusterDevicesInSession ()
 When true, WorkerSessions are created with device attributes from the
 full cluster.
boolean
getUsePerSessionThreads ()
 If true, use a new set of threads for this session rather than the global
 pool of threads.
boolean
hasClusterDef ()
 Optional list of all workers to use in this session.
boolean
hasExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
boolean
hasGpuOptions ()
 Options that apply to all GPUs.
boolean
hasGraphOptions ()
 Options that apply to all graphs.
boolean
hasRpcOptions ()
 Options that apply when this session uses the distributed runtime.
final boolean
isInitialized ()
ConfigProto.Builder
mergeClusterDef ( ClusterDef value)
 Optional list of all workers to use in this session.
ConfigProto.Builder
mergeExperimental ( ConfigProto.Experimental value)
.tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.Builder
mergeFrom (com.google.protobuf.Message other)
ConfigProto.Builder
mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
ConfigProto.Builder
mergeGpuOptions ( GPUOptions value)
 Options that apply to all GPUs.
ConfigProto.Builder
mergeGraphOptions ( GraphOptions value)
 Options that apply to all graphs.
ConfigProto.Builder
mergeRpcOptions ( RPCOptions value)
 Options that apply when this session uses the distributed runtime.
final ConfigProto.Builder
mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
ConfigProto.Builder
putAllDeviceCount (Map<String, Integer> values)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
ConfigProto.Builder
putDeviceCount (String key, int value)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
ConfigProto.Builder
removeDeviceCount (String key)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
ConfigProto.Builder
removeSessionInterOpThreadPool (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
setAllowSoftPlacement (boolean value)
 Whether soft placement is allowed.
ConfigProto.Builder
setClusterDef ( ClusterDef.Builder builderForValue)
 Optional list of all workers to use in this session.
ConfigProto.Builder
setClusterDef ( ClusterDef value)
 Optional list of all workers to use in this session.
ConfigProto.Builder
setDeviceFilters (int index, String value)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
ConfigProto.Builder
setExperimental ( ConfigProto.Experimental value)
.tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.Builder
setExperimental ( ConfigProto.Experimental.Builder builderForValue)
.tensorflow.ConfigProto.Experimental experimental = 16;
ConfigProto.Builder
setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
ConfigProto.Builder
setGpuOptions ( GPUOptions.Builder builderForValue)
 Options that apply to all GPUs.
ConfigProto.Builder
setGpuOptions ( GPUOptions value)
 Options that apply to all GPUs.
ConfigProto.Builder
setGraphOptions ( GraphOptions.Builder builderForValue)
 Options that apply to all graphs.
ConfigProto.Builder
setGraphOptions ( GraphOptions value)
 Options that apply to all graphs.
ConfigProto.Builder
setInterOpParallelismThreads (int value)
 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
ConfigProto.Builder
setIntraOpParallelismThreads (int value)
 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
ConfigProto.Builder
setIsolateSessionState (boolean value)
 If true, any resources such as Variables used in the session will not be
 shared with other sessions.
ConfigProto.Builder
setLogDevicePlacement (boolean value)
 Whether device placements should be logged.
ConfigProto.Builder
setOperationTimeoutInMs (long value)
 Global timeout for all blocking operations in this session.
ConfigProto.Builder
setPlacementPeriod (int value)
 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
ConfigProto.Builder
setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
ConfigProto.Builder
setRpcOptions ( RPCOptions value)
 Options that apply when this session uses the distributed runtime.
ConfigProto.Builder
setRpcOptions ( RPCOptions.Builder builderForValue)
 Options that apply when this session uses the distributed runtime.
ConfigProto.Builder
setSessionInterOpThreadPool (int index, ThreadPoolOptionProto value)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
setSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
ConfigProto.Builder
setShareClusterDevicesInSession (boolean value)
 When true, WorkerSessions are created with device attributes from the
 full cluster.
final ConfigProto.Builder
setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
ConfigProto.Builder
setUsePerSessionThreads (boolean value)
 If true, use a new set of threads for this session rather than the global
 pool of threads.

Inherited Methods

Public Methods

public ConfigProto.Builder addAllDeviceFilters (Iterable<String> values)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;
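
As a hedged illustration (the filter strings are placeholders and `builder` is assumed to be a ConfigProto.Builder obtained from ConfigProto.newBuilder()):

// Sketch: restrict this session to devices matching the placeholder filters.
builder.addAllDeviceFilters(java.util.Arrays.asList("/job:ps", "/job:worker/replica:3"));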

public ConfigProto.Builder addAllSessionInterOpThreadPool (Iterable<? extends ThreadPoolOptionProto > values)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
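
A minimal sketch of the "one large pool, one small pool" layout described above, assuming a ConfigProto.Builder named `builder`; the thread counts and the global_name string are illustrative only, not recommendations.

// Sketch: one large shared pool for regular compute, one small pool for low-priority work.
ThreadPoolOptionProto largePool = ThreadPoolOptionProto.newBuilder()
    .setNumThreads(16)
    .setGlobalName("shared_compute_pool")  // reuse one large pool across sessions
    .build();
ThreadPoolOptionProto smallPool = ThreadPoolOptionProto.newBuilder()
    .setNumThreads(1)                      // caps inter-op parallelism of low-priority work
    .build();
builder.addAllSessionInterOpThreadPool(java.util.Arrays.asList(largePool, smallPool));

A RunOptions supplied to an individual Run call can then select one of these pools, as noted in the description above.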

public ConfigProto.Builder addDeviceFilters (String value)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public ConfigProto.Builder addDeviceFiltersBytes (com.google.protobuf.ByteString value)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public ConfigProto.Builder addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

public ConfigProto.Builder addSessionInterOpThreadPool ( ThreadPoolOptionProto value)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder addSessionInterOpThreadPool (int index, ThreadPoolOptionProto value)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder addSessionInterOpThreadPool ( ThreadPoolOptionProto.Builder builderForValue)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder addSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ThreadPoolOptionProto.Builder addSessionInterOpThreadPoolBuilder (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ThreadPoolOptionProto.Builder addSessionInterOpThreadPoolBuilder ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto build ()

public ConfigProto buildPartial ()

public ConfigProto.Builder clear ()

public ConfigProto.Builder clearAllowSoftPlacement ()

 Whether soft placement is allowed. If allow_soft_placement is true,
 an op will be placed on CPU if
   1. there's no GPU implementation for the OP
 or
   2. no GPU devices are known or registered
 or
   3. need to co-locate with reftype input(s) which are from CPU.
 
bool allow_soft_placement = 7;

public ConfigProto.Builder clearClusterDef ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public ConfigProto.Builder clearDeviceCount ()

public ConfigProto.Builder clearDeviceFilters ()

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public ConfigProto.Builder clearExperimental ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public ConfigProto.Builder clearField (com.google.protobuf.Descriptors.FieldDescriptor field)

public ConfigProto.Builder clearGpuOptions ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public ConfigProto.Builder clearGraphOptions ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public ConfigProto.Builder clearInterOpParallelismThreads ()

 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
 0 means the system picks an appropriate number.
 Negative means all operations are performed in caller's thread.
 Note that the first Session created in the process sets the
 number of threads for all future sessions unless use_per_session_threads is
 true or session_inter_op_thread_pool is configured.
 
int32 inter_op_parallelism_threads = 5;

public ConfigProto.Builder clearIntraOpParallelismThreads ()

 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
 0 means the system picks an appropriate number.
 If you create an ordinary session, e.g., from Python or C++,
 then there is exactly one intra op thread pool per process.
 The first session created determines the number of threads in this pool.
 All subsequent sessions reuse/share this one global pool.
 There are notable exceptions to the default behavior described above:
 1. There is an environment variable  for overriding this thread pool,
    named TF_OVERRIDE_GLOBAL_THREADPOOL.
 2. When connecting to a server, such as a remote `tf.train.Server`
    instance, then this option will be ignored altogether.
 
int32 intra_op_parallelism_threads = 2;

public ConfigProto.Builder clearIsolateSessionState ()

 If true, any resources such as Variables used in the session will not be
 shared with other sessions. However, when clusterspec propagation is
 enabled, this field is ignored and sessions are always isolated.
 
bool isolate_session_state = 15;

public ConfigProto.Builder clearLogDevicePlacement ()

 Whether device placements should be logged.
 
bool log_device_placement = 8;

public ConfigProto.Builder clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof)

public ConfigProto.Builder clearOperationTimeoutInMs ()

 Global timeout for all blocking operations in this session.  If non-zero,
 and not overridden on a per-operation basis, this value will be used as the
 deadline for all blocking operations.
 
int64 operation_timeout_in_ms = 11;

public ConfigProto.Builder clearPlacementPeriod ()

 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
 
int32 placement_period = 3;

public ConfigProto.Builder clearRpcOptions ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public ConfigProto.Builder clearSessionInterOpThreadPool ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder clearShareClusterDevicesInSession ()

 When true, WorkerSessions are created with device attributes from the
 full cluster.
 This is helpful when a worker wants to partition a graph
 (for example during a PartitionedCallOp).
 
bool share_cluster_devices_in_session = 17;

public ConfigProto.Builder clearUsePerSessionThreads ()

 If true, use a new set of threads for this session rather than the global
 pool of threads. Only supported by direct sessions.
 If false, use the global threads created by the first session, or the
 per-session thread pools configured by session_inter_op_thread_pool.
 This option is deprecated. The same effect can be achieved by setting
 session_inter_op_thread_pool to have one element, whose num_threads equals
 inter_op_parallelism_threads.
 
bool use_per_session_threads = 9;

public ConfigProto.Builder clone ()

public boolean containsDeviceCount (String key)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;
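
A short sketch of the device_count map accessors, assuming a ConfigProto.Builder named `builder`; the values are arbitrary examples.

// Sketch: cap GPU usage for this session and query the map.
builder.putDeviceCount("GPU", 0);                          // use no GPU devices
boolean hasCpuEntry = builder.containsDeviceCount("CPU");  // false unless explicitly set
int cpuLimit = builder.getDeviceCountOrDefault("CPU", -1); // -1 is just a "not set" sentinel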

public boolean getAllowSoftPlacement ()

 Whether soft placement is allowed. If allow_soft_placement is true,
 an op will be placed on CPU if
   1. there's no GPU implementation for the OP
 or
   2. no GPU devices are known or registered
 or
   3. need to co-locate with reftype input(s) which are from CPU.
 
bool allow_soft_placement = 7;

public ClusterDef getClusterDef ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public ClusterDef.Builder getClusterDefBuilder ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public ClusterDefOrBuilder getClusterDefOrBuilder ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public ConfigProto getDefaultInstanceForType ()

public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()

public com.google.protobuf.Descriptors.Descriptor getDescriptorForType ()

public Map<String, Integer> getDeviceCount ()

Use getDeviceCountMap() instead.

public int getDeviceCountCount ()

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public Map<String, Integer> getDeviceCountMap ()

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public int getDeviceCountOrDefault (String key, int defaultValue)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public int getDeviceCountOrThrow (String key)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public String getDeviceFilters (int index)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public com.google.protobuf.ByteString getDeviceFiltersBytes (int index)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public int getDeviceFiltersCount ()

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public com.google.protobuf.ProtocolStringList getDeviceFiltersList ()

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public ConfigProto.Experimental getExperimental ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public ConfigProto.Experimental.Builder getExperimentalBuilder ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public GPUOptions getGpuOptions ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public GPUOptions.Builder getGpuOptionsBuilder ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public GPUOptionsOrBuilder getGpuOptionsOrBuilder ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public GraphOptions getGraphOptions ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public GraphOptions.Builder getGraphOptionsBuilder ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public GraphOptionsOrBuilder getGraphOptionsOrBuilder ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public int getInterOpParallelismThreads ()

 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
 0 means the system picks an appropriate number.
 Negative means all operations are performed in caller's thread.
 Note that the first Session created in the process sets the
 number of threads for all future sessions unless use_per_session_threads is
 true or session_inter_op_thread_pool is configured.
 
int32 inter_op_parallelism_threads = 5;

public int getIntraOpParallelismThreads ()

 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
 0 means the system picks an appropriate number.
 If you create an ordinary session, e.g., from Python or C++,
 then there is exactly one intra op thread pool per process.
 The first session created determines the number of threads in this pool.
 All subsequent sessions reuse/share this one global pool.
 There are notable exceptions to the default behavior described above:
 1. There is an environment variable  for overriding this thread pool,
    named TF_OVERRIDE_GLOBAL_THREADPOOL.
 2. When connecting to a server, such as a remote `tf.train.Server`
    instance, then this option will be ignored altogether.
 
int32 intra_op_parallelism_threads = 2;

public boolean getIsolateSessionState ()

 If true, any resources such as Variables used in the session will not be
 shared with other sessions. However, when clusterspec propagation is
 enabled, this field is ignored and sessions are always isolated.
 
bool isolate_session_state = 15;

public boolean getLogDevicePlacement ()

 Whether device placements should be logged.
 
bool log_device_placement = 8;

public Map<String, Integer> getMutableDeviceCount ()

Use alternate mutation accessors instead.

public long getOperationTimeoutInMs ()

 Global timeout for all blocking operations in this session.  If non-zero,
 and not overridden on a per-operation basis, this value will be used as the
 deadline for all blocking operations.
 
int64 operation_timeout_in_ms = 11;

public int getPlacementPeriod ()

 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
 
int32 placement_period = 3;

public RPCOptions getRpcOptions ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public RPCOptions.Builder getRpcOptionsBuilder ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public RPCOptionsOrBuilder getRpcOptionsOrBuilder ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public ThreadPoolOptionProto getSessionInterOpThreadPool (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ThreadPoolOptionProto.Builder getSessionInterOpThreadPoolBuilder (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public List< ThreadPoolOptionProto.Builder > getSessionInterOpThreadPoolBuilderList ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public int getSessionInterOpThreadPoolCount ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public List< ThreadPoolOptionProto > getSessionInterOpThreadPoolList ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public List<? extends ThreadPoolOptionProtoOrBuilder > getSessionInterOpThreadPoolOrBuilderList ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public boolean getShareClusterDevicesInSession ()

 When true, WorkerSessions are created with device attributes from the
 full cluster.
 This is helpful when a worker wants to partition a graph
 (for example during a PartitionedCallOp).
 
bool share_cluster_devices_in_session = 17;

public boolean getUsePerSessionThreads ()

 If true, use a new set of threads for this session rather than the global
 pool of threads. Only supported by direct sessions.
 If false, use the global threads created by the first session, or the
 per-session thread pools configured by session_inter_op_thread_pool.
 This option is deprecated. The same effect can be achieved by setting
 session_inter_op_thread_pool to have one element, whose num_threads equals
 inter_op_parallelism_threads.
 
bool use_per_session_threads = 9;

public boolean hasClusterDef ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public boolean hasExperimental ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public boolean hasGpuOptions ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public boolean hasGraphOptions ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public boolean hasRpcOptions ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public final boolean isInitialized ()

public ConfigProto.Builder mergeClusterDef ( ClusterDef value)

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;
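
As a hedged sketch (ClusterDef and JobDef come from the same generated package; the job name, task index, and address are placeholders), a cluster description can be merged into an existing builder like this:

// Sketch: merge a one-worker cluster definition into the config.
ClusterDef cluster = ClusterDef.newBuilder()
    .addJob(JobDef.newBuilder()
        .setName("worker")
        .putTasks(0, "localhost:2222"))  // placeholder task index and address
    .build();
builder.mergeClusterDef(cluster);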

public ConfigProto.Builder mergeExperimental ( ConfigProto.Experimental value)

.tensorflow.ConfigProto.Experimental experimental = 16;

public ConfigProto.Builder mergeFrom (com.google.protobuf.Message other)

public ConfigProto.Builder mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)

Throws
IOException
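
A minimal sketch of round-tripping a serialized ConfigProto through this overload; CodedInputStream and ExtensionRegistryLite are standard protobuf runtime classes, and the catch block is there because this overload throws IOException.

// Sketch: parse serialized ConfigProto bytes back into a builder.
byte[] bytes = ConfigProto.newBuilder().setAllowSoftPlacement(true).build().toByteArray();
ConfigProto.Builder parsed = ConfigProto.newBuilder();
try {
  parsed.mergeFrom(
      com.google.protobuf.CodedInputStream.newInstance(bytes),
      com.google.protobuf.ExtensionRegistryLite.getEmptyRegistry());
} catch (java.io.IOException e) {
  throw new RuntimeException("Malformed ConfigProto bytes", e);
}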

public ConfigProto.Builder mergeGpuOptions ( GPUOptions value)

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public ConfigProto.Builder mergeGraphOptions ( GraphOptions value)

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public ConfigProto.Builder mergeRpcOptions ( RPCOptions value)

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public final ConfigProto.Builder mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)

public ConfigProto.Builder putAllDeviceCount (Map<String, Integer> values)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public ConfigProto.Builder putDeviceCount (String key, int value)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public ConfigProto.Builder removeDeviceCount (String key)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public ConfigProto.Builder removeSessionInterOpThreadPool (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder setAllowSoftPlacement (boolean value)

 Whether soft placement is allowed. If allow_soft_placement is true,
 an op will be placed on CPU if
   1. there's no GPU implementation for the OP
 or
   2. no GPU devices are known or registered
 or
   3. need to co-locate with reftype input(s) which are from CPU.
 
bool allow_soft_placement = 7;
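
For illustration, assuming `builder` is a ConfigProto.Builder:

// Sketch: let ops without a usable GPU kernel (or with no available GPU) fall back to CPU.
builder.setAllowSoftPlacement(true);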

public ConfigProto.Builder setClusterDef ( ClusterDef.Builder builderForValue)

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public ConfigProto.Builder setClusterDef ( ClusterDef value)

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public ConfigProto.Builder setDeviceFilters (int index, String value)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public ConfigProto.Builder setExperimental ( ConfigProto.Experimental value)

.tensorflow.ConfigProto.Experimental experimental = 16;

public ConfigProto.Builder setExperimental ( ConfigProto.Experimental.Builder builderForValue)

.tensorflow.ConfigProto.Experimental experimental = 16;

public ConfigProto.Builder setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

public ConfigProto.Builder setGpuOptions ( GPUOptions.Builder builderForValue)

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public ConfigProto.Builder setGpuOptions ( GPUOptions value)

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public ConfigProto.Builder setGraphOptions ( GraphOptions.Builder builderForValue)

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public ConfigProto.Builder setGraphOptions ( GraphOptions value)

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public ConfigProto.Builder setInterOpParallelismThreads (int value)

 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
 0 means the system picks an appropriate number.
 Negative means all operations are performed in caller's thread.
 Note that the first Session created in the process sets the
 number of threads for all future sessions unless use_per_session_threads is
 true or session_inter_op_thread_pool is configured.
 
int32 inter_op_parallelism_threads = 5;
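
A short sketch with arbitrary example sizes (0 would let the system choose); setIntraOpParallelismThreads is documented separately below.

// Sketch: pin the inter-op pool to 2 threads and the intra-op pool to 4.
builder.setInterOpParallelismThreads(2)
       .setIntraOpParallelismThreads(4);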

public ConfigProto.Builder setIntraOpParallelismThreads (int value)

 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
 0 means the system picks an appropriate number.
 If you create an ordinary session, e.g., from Python or C++,
 then there is exactly one intra op thread pool per process.
 The first session created determines the number of threads in this pool.
 All subsequent sessions reuse/share this one global pool.
 There are notable exceptions to the default behavior described above:
 1. There is an environment variable  for overriding this thread pool,
    named TF_OVERRIDE_GLOBAL_THREADPOOL.
 2. When connecting to a server, such as a remote `tf.train.Server`
    instance, then this option will be ignored altogether.
 
int32 intra_op_parallelism_threads = 2;

public ConfigProto.Builder setIsolateSessionState (boolean value)

 If true, any resources such as Variables used in the session will not be
 shared with other sessions. However, when clusterspec propagation is
 enabled, this field is ignored and sessions are always isolated.
 
bool isolate_session_state = 15;

public ConfigProto.Builder setLogDevicePlacement (boolean value)

 Whether device placements should be logged.
 
bool log_device_placement = 8;
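
A one-liner, again assuming a ConfigProto.Builder named `builder`:

// Sketch: log each op's device assignment, which helps when debugging placement.
builder.setLogDevicePlacement(true);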

public ConfigProto.Builder setOperationTimeoutInMs (long value)

 Global timeout for all blocking operations in this session.  If non-zero,
 and not overridden on a per-operation basis, this value will be used as the
 deadline for all blocking operations.
 
int64 operation_timeout_in_ms = 11;
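
A one-line sketch with an arbitrary 60-second deadline:

// Sketch: fail any blocking operation in this session that runs longer than 60 seconds.
builder.setOperationTimeoutInMs(60_000L);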

public ConfigProto.Builder setPlacementPeriod (int value)

 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
 
int32 placement_period = 3;

public ConfigProto.Builder setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)

public ConfigProto.Builder setRpcOptions ( RPCOptions value)

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public ConfigProto.Builder setRpcOptions ( RPCOptions.Builder builderForValue)

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public ConfigProto.Builder setSessionInterOpThreadPool (int index, ThreadPoolOptionProto value)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder setSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public ConfigProto.Builder setShareClusterDevicesInSession (boolean value)

 When true, WorkerSessions are created with device attributes from the
 full cluster.
 This is helpful when a worker wants to partition a graph
 (for example during a PartitionedCallOp).
 
bool share_cluster_devices_in_session = 17;

public final ConfigProto.Builder setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)

public ConfigProto.Builder setUsePerSessionThreads (boolean value)

 If true, use a new set of threads for this session rather than the global
 pool of threads. Only supported by direct sessions.
 If false, use the global threads created by the first session, or the
 per-session thread pools configured by session_inter_op_thread_pool.
 This option is deprecated. The same effect can be achieved by setting
 session_inter_op_thread_pool to have one element, whose num_threads equals
 inter_op_parallelism_threads.
 
bool use_per_session_threads = 9;
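
Since this field is deprecated, a hedged sketch of the replacement suggested above (a single-element session_inter_op_thread_pool whose num_threads matches the desired inter-op parallelism; the thread count is an arbitrary example):

// Sketch: per-session threads via session_inter_op_thread_pool instead of use_per_session_threads.
builder.addSessionInterOpThreadPool(
    ThreadPoolOptionProto.newBuilder().setNumThreads(8));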