Session configuration parameters. The system picks appropriate values for fields that are not set.
tensorflow.ConfigProto
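A minimal usage sketch (assumptions: the generated classes live in org.tensorflow.framework, which varies by TensorFlow Java release, and the serialized proto is what session-creation APIs typically accept):

// import org.tensorflow.framework.ConfigProto;   // package path is an assumption
ConfigProto config = ConfigProto.newBuilder()
    .setAllowSoftPlacement(true)         // fall back to CPU when no GPU kernel exists
    .setLogDevicePlacement(true)         // log where each op is placed
    .setInterOpParallelismThreads(2)     // 0 lets the system choose
    .build();
byte[] serialized = config.toByteArray();   // usually handed over when a session is created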
Public Methods
ConfigProto.Builder | addAllDeviceFilters (Iterable<String> values) When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Builder | addAllSessionInterOpThreadPool (Iterable<? extends ThreadPoolOptionProto> values) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | addDeviceFilters (String value) When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Builder | addDeviceFiltersBytes (com.google.protobuf.ByteString value) When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Builder | addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
ConfigProto.Builder | addSessionInterOpThreadPool (ThreadPoolOptionProto value) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | addSessionInterOpThreadPool (int index, ThreadPoolOptionProto value) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | addSessionInterOpThreadPool (ThreadPoolOptionProto.Builder builderForValue) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | addSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue) This option is experimental - it may be replaced with a different mechanism in the future. |
ThreadPoolOptionProto.Builder | addSessionInterOpThreadPoolBuilder (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
ThreadPoolOptionProto.Builder | addSessionInterOpThreadPoolBuilder () This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto | build () |
ConfigProto | buildPartial () |
ConfigProto.Builder | clear () |
ConfigProto.Builder | clearAllowSoftPlacement () Whether soft placement is allowed. |
ConfigProto.Builder | clearClusterDef () Optional list of all workers to use in this session. |
ConfigProto.Builder | clearDeviceCount () Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
ConfigProto.Builder | clearDeviceFilters () When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Builder | clearExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.Builder | clearField (com.google.protobuf.Descriptors.FieldDescriptor field) |
ConfigProto.Builder | clearGpuOptions () Options that apply to all GPUs. |
ConfigProto.Builder | clearGraphOptions () Options that apply to all graphs. |
ConfigProto.Builder | clearInterOpParallelismThreads () Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. |
ConfigProto.Builder | clearIntraOpParallelismThreads () The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. |
ConfigProto.Builder | clearIsolateSessionState () If true, any resources such as Variables used in the session will not be shared with other sessions. |
ConfigProto.Builder | clearLogDevicePlacement () Whether device placements should be logged. |
ConfigProto.Builder | clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof) |
ConfigProto.Builder | clearOperationTimeoutInMs () Global timeout for all blocking operations in this session. |
ConfigProto.Builder | clearPlacementPeriod () Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically). |
ConfigProto.Builder | clearRpcOptions () Options that apply when this session uses the distributed runtime. |
ConfigProto.Builder | clearSessionInterOpThreadPool () This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | clearShareClusterDevicesInSession () When true, WorkerSessions are created with device attributes from the full cluster. |
ConfigProto.Builder | clearUsePerSessionThreads () If true, use a new set of threads for this session rather than the global pool of threads. |
ConfigProto.Builder | clone () |
boolean | containsDeviceCount (String key) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
boolean | getAllowSoftPlacement () Whether soft placement is allowed. |
ClusterDef | getClusterDef () Optional list of all workers to use in this session. |
ClusterDef.Builder | getClusterDefBuilder () Optional list of all workers to use in this session. |
ClusterDefOrBuilder | getClusterDefOrBuilder () Optional list of all workers to use in this session. |
ConfigProto | getDefaultInstanceForType () |
static final com.google.protobuf.Descriptors.Descriptor | getDescriptor () |
com.google.protobuf.Descriptors.Descriptor | getDescriptorForType () |
Map<String, Integer> | getDeviceCount () Use getDeviceCountMap() instead. |
int | getDeviceCountCount () Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
Map<String, Integer> | getDeviceCountMap () Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
int | getDeviceCountOrDefault (String key, int defaultValue) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
int | getDeviceCountOrThrow (String key) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
String | getDeviceFilters (int index) When any filters are present sessions will ignore all devices which do not match the filters. |
com.google.protobuf.ByteString | getDeviceFiltersBytes (int index) When any filters are present sessions will ignore all devices which do not match the filters. |
int | getDeviceFiltersCount () When any filters are present sessions will ignore all devices which do not match the filters. |
com.google.protobuf.ProtocolStringList | getDeviceFiltersList () When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Experimental | getExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.Experimental.Builder | getExperimentalBuilder () .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.ExperimentalOrBuilder | getExperimentalOrBuilder () .tensorflow.ConfigProto.Experimental experimental = 16; |
GPUOptions | getGpuOptions () Options that apply to all GPUs. |
GPUOptions.Builder | getGpuOptionsBuilder () Options that apply to all GPUs. |
GPUOptionsOrBuilder | getGpuOptionsOrBuilder () Options that apply to all GPUs. |
GraphOptions | getGraphOptions () Options that apply to all graphs. |
GraphOptions.Builder | getGraphOptionsBuilder () Options that apply to all graphs. |
GraphOptionsOrBuilder | getGraphOptionsOrBuilder () Options that apply to all graphs. |
int | getInterOpParallelismThreads () Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. |
int | getIntraOpParallelismThreads () The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. |
boolean | getIsolateSessionState () If true, any resources such as Variables used in the session will not be shared with other sessions. |
boolean | getLogDevicePlacement () Whether device placements should be logged. |
Map<String, Integer> | getMutableDeviceCount () Use alternate mutation accessors instead. |
long | getOperationTimeoutInMs () Global timeout for all blocking operations in this session. |
int | getPlacementPeriod () Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically). |
RPCOptions | getRpcOptions () Options that apply when this session uses the distributed runtime. |
RPCOptions.Builder | getRpcOptionsBuilder () Options that apply when this session uses the distributed runtime. |
RPCOptionsOrBuilder | getRpcOptionsOrBuilder () Options that apply when this session uses the distributed runtime. |
ThreadPoolOptionProto | getSessionInterOpThreadPool (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
ThreadPoolOptionProto.Builder | getSessionInterOpThreadPoolBuilder (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
List<ThreadPoolOptionProto.Builder> | getSessionInterOpThreadPoolBuilderList () This option is experimental - it may be replaced with a different mechanism in the future. |
int | getSessionInterOpThreadPoolCount () This option is experimental - it may be replaced with a different mechanism in the future. |
List<ThreadPoolOptionProto> | getSessionInterOpThreadPoolList () This option is experimental - it may be replaced with a different mechanism in the future. |
ThreadPoolOptionProtoOrBuilder | getSessionInterOpThreadPoolOrBuilder (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
List<? extends ThreadPoolOptionProtoOrBuilder> | getSessionInterOpThreadPoolOrBuilderList () This option is experimental - it may be replaced with a different mechanism in the future. |
boolean | getShareClusterDevicesInSession () When true, WorkerSessions are created with device attributes from the full cluster. |
boolean | getUsePerSessionThreads () If true, use a new set of threads for this session rather than the global pool of threads. |
boolean | hasClusterDef () Optional list of all workers to use in this session. |
boolean | hasExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
boolean | hasGpuOptions () Options that apply to all GPUs. |
boolean | hasGraphOptions () Options that apply to all graphs. |
boolean | hasRpcOptions () Options that apply when this session uses the distributed runtime. |
final boolean | isInitialized () |
ConfigProto.Builder | mergeClusterDef (ClusterDef value) Optional list of all workers to use in this session. |
ConfigProto.Builder | mergeExperimental (ConfigProto.Experimental value) .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.Builder | mergeFrom (com.google.protobuf.Message other) |
ConfigProto.Builder | mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
ConfigProto.Builder | mergeGpuOptions (GPUOptions value) Options that apply to all GPUs. |
ConfigProto.Builder | mergeGraphOptions (GraphOptions value) Options that apply to all graphs. |
ConfigProto.Builder | mergeRpcOptions (RPCOptions value) Options that apply when this session uses the distributed runtime. |
final ConfigProto.Builder | mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields) |
ConfigProto.Builder | putAllDeviceCount (Map<String, Integer> values) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
ConfigProto.Builder | putDeviceCount (String key, int value) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
ConfigProto.Builder | removeDeviceCount (String key) Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. |
ConfigProto.Builder | removeSessionInterOpThreadPool (int index) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | setAllowSoftPlacement (boolean value) Whether soft placement is allowed. |
ConfigProto.Builder | setClusterDef (ClusterDef.Builder builderForValue) Optional list of all workers to use in this session. |
ConfigProto.Builder | setClusterDef (ClusterDef value) Optional list of all workers to use in this session. |
ConfigProto.Builder | setDeviceFilters (int index, String value) When any filters are present sessions will ignore all devices which do not match the filters. |
ConfigProto.Builder | setExperimental (ConfigProto.Experimental value) .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.Builder | setExperimental (ConfigProto.Experimental.Builder builderForValue) .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.Builder | setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
ConfigProto.Builder | setGpuOptions (GPUOptions.Builder builderForValue) Options that apply to all GPUs. |
ConfigProto.Builder | setGpuOptions (GPUOptions value) Options that apply to all GPUs. |
ConfigProto.Builder | setGraphOptions (GraphOptions.Builder builderForValue) Options that apply to all graphs. |
ConfigProto.Builder | setGraphOptions (GraphOptions value) Options that apply to all graphs. |
ConfigProto.Builder | setInterOpParallelismThreads (int value) Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. |
ConfigProto.Builder | setIntraOpParallelismThreads (int value) The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. |
ConfigProto.Builder | setIsolateSessionState (boolean value) If true, any resources such as Variables used in the session will not be shared with other sessions. |
ConfigProto.Builder | setLogDevicePlacement (boolean value) Whether device placements should be logged. |
ConfigProto.Builder | setOperationTimeoutInMs (long value) Global timeout for all blocking operations in this session. |
ConfigProto.Builder | setPlacementPeriod (int value) Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically). |
ConfigProto.Builder | setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value) |
ConfigProto.Builder | setRpcOptions (RPCOptions value) Options that apply when this session uses the distributed runtime. |
ConfigProto.Builder | setRpcOptions (RPCOptions.Builder builderForValue) Options that apply when this session uses the distributed runtime. |
ConfigProto.Builder | setSessionInterOpThreadPool (int index, ThreadPoolOptionProto value) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | setSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue) This option is experimental - it may be replaced with a different mechanism in the future. |
ConfigProto.Builder | setShareClusterDevicesInSession (boolean value) When true, WorkerSessions are created with device attributes from the full cluster. |
final ConfigProto.Builder | setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields) |
ConfigProto.Builder | setUsePerSessionThreads (boolean value) If true, use a new set of threads for this session rather than the global pool of threads. |
Inherited Methods
Public Methods
public ConfigProto.Builder addAllDeviceFilters (Iterable<String> values)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
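For illustration, a hedged sketch of populating this repeated field (the filter strings are taken from the example above):

// Only devices matching these partial specifications remain visible to the session.
ConfigProto.Builder builder = ConfigProto.newBuilder()
    .addAllDeviceFilters(java.util.Arrays.asList("/job:ps", "/job:worker/replica:3"));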
public ConfigProto.Builder addAllSessionInterOpThreadPool (Iterable<? extends ThreadPoolOptionProto> values)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
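A sketch of the large-pool/small-pool arrangement described above; the pool sizes and global_name values are illustrative assumptions, and the setNumThreads/setGlobalName setters follow the generated naming for ThreadPoolOptionProto's num_threads and global_name fields:

// One large pool for regular compute, one small pool for low-priority background work.
ConfigProto.Builder builder = ConfigProto.newBuilder()
    .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
        .setNumThreads(16)
        .setGlobalName("large_compute_pool"))   // shared name avoids duplicate large pools
    .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
        .setNumThreads(1)
        .setGlobalName("low_priority_pool"));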
public ConfigProto.Builder addDeviceFilters (String value)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public ConfigProto.Builder addDeviceFiltersBytes (com.google.protobuf.ByteString value)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public ConfigProto.Builder addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public ConfigProto.Builder addSessionInterOpThreadPool (ThreadPoolOptionProto value)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder addSessionInterOpThreadPool (int index, ThreadPoolOptionProto value)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder addSessionInterOpThreadPool (ThreadPoolOptionProto.Builder builderForValue)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder addSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ThreadPoolOptionProto.Builder addSessionInterOpThreadPoolBuilder (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ThreadPoolOptionProto.Builder addSessionInterOpThreadPoolBuilder ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder clearAllowSoftPlacement ()
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if 1. there's no GPU implementation for the OP or 2. no GPU devices are known or registered or 3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
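As a brief sketch, soft placement is often enabled together with placement logging so the fallbacks are visible:

// Ops without a GPU kernel fall back to CPU instead of failing; placements are logged.
ConfigProto config = ConfigProto.newBuilder()
    .setAllowSoftPlacement(true)
    .setLogDevicePlacement(true)
    .build();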
public ConfigProto.Builder clearClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
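A hedged sketch of attaching a cluster definition; ClusterDef and JobDef are assumed to be the generated classes for tensorflow.ClusterDef and tensorflow.JobDef, and the job name and task addresses are purely illustrative:

// Single "worker" job with two tasks; addresses are placeholders.
ClusterDef cluster = ClusterDef.newBuilder()
    .addJob(JobDef.newBuilder()
        .setName("worker")
        .putTasks(0, "localhost:2222")
        .putTasks(1, "localhost:2223"))
    .build();
ConfigProto config = ConfigProto.newBuilder()
    .setClusterDef(cluster)
    .build();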
public ConfigProto.Builder clearDeviceFilters ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public ConfigProto.Builder clearExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.Builder clearGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
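A sketch of nesting GPU options; the GPUOptions setters shown (setAllowGrowth, setPerProcessGpuMemoryFraction) follow the generated naming for its allow_growth and per_process_gpu_memory_fraction fields and should be treated as an assumption here:

// Grow GPU memory on demand and cap each process at roughly half of each visible GPU.
ConfigProto config = ConfigProto.newBuilder()
    .setGpuOptions(GPUOptions.newBuilder()
        .setAllowGrowth(true)
        .setPerProcessGpuMemoryFraction(0.5))
    .build();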
public ConfigProto.Builder clearGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public ConfigProto.Builder clearInterOpParallelismThreads ()
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in caller's thread. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
public ConfigProto.Builder clearIntraOpParallelismThreads ()
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number. If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool. There are notable exceptions to the default behavior described above: 1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL. 2. When connecting to a server, such as a remote `tf.train.Server` instance, then this option will be ignored altogether.
int32 intra_op_parallelism_threads = 2;
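For example, both pools can be pinned explicitly (the values are illustrative; 0 keeps the system default):

ConfigProto config = ConfigProto.newBuilder()
    .setInterOpParallelismThreads(4)   // pool that schedules independent (blocking) ops
    .setIntraOpParallelismThreads(8)   // pool used inside individual ops, e.g. large matmuls
    .build();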
public ConfigProto.Builder clearIsolateSessionState ()
If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.
bool isolate_session_state = 15;
public ConfigProto.Builder clearLogDevicePlacement ()
Whether device placements should be logged.
bool log_device_placement = 8;
public ConfigProto.Builder clearOperationTimeoutInMs ()
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
int64 operation_timeout_in_ms = 11;
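For example, a 60-second deadline (the value is in milliseconds) might look like:

// Any blocking call in this session fails once 60 seconds have elapsed.
ConfigProto config = ConfigProto.newBuilder()
    .setOperationTimeoutInMs(60_000L)
    .build();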
public ConfigProto.Builder clearPlacementPeriod ()
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
int32 placement_period = 3;
public ConfigProto.Builder clearRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public ConfigProto.Builder clearSessionInterOpThreadPool ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder clearShareClusterDevicesInSession ()
When true, WorkerSessions are created with device attributes from the full cluster. This is helpful when a worker wants to partition a graph (for example during a PartitionedCallOp).
bool share_cluster_devices_in_session = 17;
public ConfigProto.Builder clearUsePerSessionThreads ()
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
bool use_per_session_threads = 9;
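A sketch of the documented replacement: a single session_inter_op_thread_pool entry whose num_threads matches the desired inter-op parallelism (setNumThreads follows the generated naming for ThreadPoolOptionProto's num_threads field and is an assumption here):

// Per-session pool of 4 threads instead of the deprecated use_per_session_threads flag.
ConfigProto config = ConfigProto.newBuilder()
    .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder().setNumThreads(4))
    .build();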
public boolean containsDeviceCount (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
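As a sketch, the map can be populated and queried through the builder methods listed on this page:

ConfigProto.Builder builder = ConfigProto.newBuilder()
    .putDeviceCount("GPU", 0)    // expose no GPU devices to this session
    .putDeviceCount("CPU", 4);   // use at most four CPU devices
boolean limitsGpus = builder.containsDeviceCount("GPU");   // true
ConfigProto config = builder.build();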
public boolean getAllowSoftPlacement ()
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if 1. there's no GPU implementation for the OP or 2. no GPU devices are known or registered or 3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
public ClusterDef getClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ClusterDef.Builder getClusterDefBuilder ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ClusterDefOrBuilder getClusterDefOrBuilder ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType ()
public int getDeviceCountCount ()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public Map<String, Integer> getDeviceCountMap ()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public int getDeviceCountOrDefault (String key, int defaultValue)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public int getDeviceCountOrThrow (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public String getDeviceFilters (int index)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public com.google.protobuf.ByteString getDeviceFiltersBytes (int index)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public int getDeviceFiltersCount ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public com.google.protobuf.ProtocolStringList getDeviceFiltersList ()
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public ConfigProto.Experimental getExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.Experimental.Builder getExperimentalBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public GPUOptions getGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GPUOptions.Builder getGpuOptionsBuilder ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GPUOptionsOrBuilder getGpuOptionsOrBuilder ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GraphOptions getGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public GraphOptions.Builder getGraphOptionsBuilder ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public GraphOptionsOrBuilder getGraphOptionsOrBuilder ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public int getInterOpParallelismThreads ()
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in caller's thread. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
public int getIntraOpParallelismThreads ()
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number. If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool. There are notable exceptions to the default behavior described above: 1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL. 2. When connecting to a server, such as a remote `tf.train.Server` instance, then this option will be ignored altogether.
int32 intra_op_parallelism_threads = 2;
public boolean getIsolateSessionState ()
If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.
bool isolate_session_state = 15;
public boolean getLogDevicePlacement ()
Whether device placements should be logged.
bool log_device_placement = 8;
public Map<String, Integer> getMutableDeviceCount ()
Use alternate mutation accessors instead.
public long getOperationTimeoutInMs ()
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
int64 operation_timeout_in_ms = 11;
public int getPlacementPeriod ()
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
int32 placement_period = 3;
public RPCOptions getRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public RPCOptions.Builder getRpcOptionsBuilder ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public RPCOptionsOrBuilder getRpcOptionsOrBuilder ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public ThreadPoolOptionProto getSessionInterOpThreadPool (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ThreadPoolOptionProto.Builder getSessionInterOpThreadPoolBuilder (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<ThreadPoolOptionProto.Builder> getSessionInterOpThreadPoolBuilderList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public int getSessionInterOpThreadPoolCount ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<ThreadPoolOptionProto> getSessionInterOpThreadPoolList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList ()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public boolean getShareClusterDevicesInSession ()
When true, WorkerSessions are created with device attributes from the full cluster. This is helpful when a worker wants to partition a graph (for example during a PartitionedCallOp).
bool share_cluster_devices_in_session = 17;
public boolean getUsePerSessionThreads ()
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
bool use_per_session_threads = 9;
public boolean hasClusterDef ()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public boolean hasExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
public boolean hasGpuOptions ()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public boolean hasGraphOptions ()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public boolean hasRpcOptions ()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public final boolean isInitialized ()
public ConfigProto.Builder mergeClusterDef (ClusterDef value)
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ConfigProto.Builder mergeExperimental (ConfigProto.Experimental value)
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.Builder mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException |
public ConfigProto.Builder mergeGpuOptions (GPUOptions value)
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public ConfigProto.Builder mergeGraphOptions (GraphOptions value)
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public ConfigProto.Builder mergeRpcOptions (RPCOptions value)
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public final ConfigProto.Builder mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
public ConfigProto.Builder putAllDeviceCount (Map<String, Integer> values)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public ConfigProto.Builder putDeviceCount (String key, int value)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public ConfigProto.Builder removeDeviceCount (String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public ConfigProto.Builder removeSessionInterOpThreadPool (int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder setAllowSoftPlacement (boolean value)
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if 1. there's no GPU implementation for the OP or 2. no GPU devices are known or registered or 3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
public ConfigProto.Builder setClusterDef (ClusterDef.Builder builderForValue)
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ConfigProto.Builder setClusterDef (ClusterDef value)
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ConfigProto.Builder setDeviceFilters (int index, String value)
When any filters are present sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps" "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public ConfigProto.Builder setExperimental (ConfigProto.Experimental value)
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.Builder setExperimental (ConfigProto.Experimental.Builder builderForValue)
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.Builder setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public ConfigProto.Builder setGpuOptions (GPUOptions.Builder builderForValue)
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public ConfigProto.Builder setGpuOptions (GPUOptions value)
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public ConfigProto.Builder setGraphOptions (GraphOptions.Builder builderForValue)
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public ConfigProto.Builder setGraphOptions (GraphOptions value)
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public ConfigProto.Builder setInterOpParallelismThreads (int value)
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in caller's thread. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
public ConfigProto.Builder setIntraOpParallelismThreads (int value)
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number. If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool. There are notable exceptions to the default behavior described above: 1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL. 2. When connecting to a server, such as a remote `tf.train.Server` instance, then this option will be ignored altogether.
int32 intra_op_parallelism_threads = 2;
public ConfigProto.Builder setIsolateSessionState (boolean value)
If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.
bool isolate_session_state = 15;
public ConfigProto.Builder setLogDevicePlacement (boolean value)
Whether device placements should be logged.
bool log_device_placement = 8;
public ConfigProto.Builder setOperationTimeoutInMs (long value)
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
int64 operation_timeout_in_ms = 11;
public ConfigProto.Builder setPlacementPeriod (int value)
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
int32 placement_period = 3;
public ConfigProto.Builder setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
public ConfigProto.Builder setRpcOptions (RPCOptions value)
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public ConfigProto.Builder setRpcOptions (RPCOptions.Builder builderForValue)
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public ConfigProto.Builder setSessionInterOpThreadPool (int index, ThreadPoolOptionProto value)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder setSessionInterOpThreadPool (int index, ThreadPoolOptionProto.Builder builderForValue)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads: - For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation. - Using this setting is normally not needed in training, but may help some serving use cases. - It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ConfigProto.Builder setShareClusterDevicesInSession (boolean value)
When true, WorkerSessions are created with device attributes from the full cluster. This is helpful when a worker wants to partition a graph (for example during a PartitionedCallOp).
bool share_cluster_devices_in_session = 17;
public final ConfigProto.Builder setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
public ConfigProto.Builder setUsePerSessionThreads (boolean value)
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
bool use_per_session_threads = 9;