public static final class GPUOptions.Builder
Protobuf type tensorflow.GPUOptions
Public Methods
GPUOptions.Builder | addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
GPUOptions | build () |
GPUOptions | buildPartial () |
GPUOptions.Builder | clear () |
GPUOptions.Builder | clearAllocatorType () The type of GPU allocation strategy to use. |
GPUOptions.Builder | clearAllowGrowth () If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions.Builder | clearDeferredDeletionBytes () Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
GPUOptions.Builder | clearExperimental () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | clearField (com.google.protobuf.Descriptors.FieldDescriptor field) |
GPUOptions.Builder | clearForceGpuCompatible () Force all tensors to be gpu_compatible. |
GPUOptions.Builder | clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof) |
GPUOptions.Builder | clearPerProcessGpuMemoryFraction () Fraction of the available GPU memory to allocate for each process. |
GPUOptions.Builder | clearPollingActiveDelayUsecs () In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
GPUOptions.Builder | clearPollingInactiveDelayMsecs () This field is deprecated and ignored. |
GPUOptions.Builder | clearVisibleDeviceList () A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
GPUOptions.Builder | clone () |
String | getAllocatorType () The type of GPU allocation strategy to use. |
com.google.protobuf.ByteString | getAllocatorTypeBytes () The type of GPU allocation strategy to use. |
boolean | getAllowGrowth () If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions | getDefaultInstanceForType () |
long | getDeferredDeletionBytes () Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
static final com.google.protobuf.Descriptors.Descriptor | getDescriptor () |
com.google.protobuf.Descriptors.Descriptor | getDescriptorForType () |
GPUOptions.Experimental | getExperimental () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Experimental.Builder | getExperimentalBuilder () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.ExperimentalOrBuilder | getExperimentalOrBuilder () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
boolean | getForceGpuCompatible () Force all tensors to be gpu_compatible. |
double | getPerProcessGpuMemoryFraction () Fraction of the available GPU memory to allocate for each process. |
int | getPollingActiveDelayUsecs () In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
int | getPollingInactiveDelayMsecs () This field is deprecated and ignored. |
String | getVisibleDeviceList () A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
com.google.protobuf.ByteString | getVisibleDeviceListBytes () A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
boolean | hasExperimental () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
final boolean | isInitialized () |
GPUOptions.Builder | mergeExperimental ( GPUOptions.Experimental value) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | mergeFrom (com.google.protobuf.Message other) |
GPUOptions.Builder | mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
final GPUOptions.Builder | mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields) |
GPUOptions.Builder | setAllocatorType (String value) The type of GPU allocation strategy to use. |
GPUOptions.Builder | setAllocatorTypeBytes (com.google.protobuf.ByteString value) The type of GPU allocation strategy to use. |
GPUOptions.Builder | setAllowGrowth (boolean value) If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions.Builder | setDeferredDeletionBytes (long value) Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
GPUOptions.Builder | setExperimental ( GPUOptions.Experimental.Builder builderForValue) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | setExperimental ( GPUOptions.Experimental value) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
GPUOptions.Builder | setForceGpuCompatible (boolean value) Force all tensors to be gpu_compatible. |
GPUOptions.Builder | setPerProcessGpuMemoryFraction (double value) Fraction of the available GPU memory to allocate for each process. |
GPUOptions.Builder | setPollingActiveDelayUsecs (int value) In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
GPUOptions.Builder | setPollingInactiveDelayMsecs (int value) This field is deprecated and ignored. |
GPUOptions.Builder | setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value) |
final GPUOptions.Builder | setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields) |
GPUOptions.Builder | setVisibleDeviceList (String value) A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
GPUOptions.Builder | setVisibleDeviceListBytes (com.google.protobuf.ByteString value) A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
Inherited Methods
Public Methods
public GPUOptions.Builder addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public GPUOptions.Builder clearAllocatorType ()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder clearAllowGrowth ()
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
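The allow_growth flag above is toggled through the builder like any other proto3 field; a minimal sketch, assuming the generated org.tensorflow.framework.GPUOptions bindings are on the classpath:

```java
import org.tensorflow.framework.GPUOptions;

public class AllowGrowthSketch {
    public static void main(String[] args) {
        // Enable on-demand growth instead of pre-allocating the whole region.
        GPUOptions opts = GPUOptions.newBuilder()
                .setAllowGrowth(true)
                .build();

        // clearAllowGrowth() resets the field to its proto3 default (false).
        GPUOptions cleared = opts.toBuilder()
                .clearAllowGrowth()
                .build();

        System.out.println(opts.getAllowGrowth());     // true
        System.out.println(cleared.getAllowGrowth());  // false
    }
}
```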
public GPUOptions.Builder clearDeferredDeletionBytes ()
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public GPUOptions.Builder clearExperimental ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder clearForceGpuCompatible ()
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public GPUOptions.Builder clearPerProcessGpuMemoryFraction ()
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
public GPUOptions.Builder clearPollingActiveDelayUsecs ()
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public GPUOptions.Builder clearPollingInactiveDelayMsecs ()
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public GPUOptions.Builder clearVisibleDeviceList ()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
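The 'visible' to 'virtual' remapping described above is carried by a plain string field; a minimal sketch of the "5,3" example, again assuming the generated org.tensorflow.framework bindings:

```java
import org.tensorflow.framework.GPUOptions;

public class VisibleDeviceListSketch {
    public static void main(String[] args) {
        // Map visible GPUs 5 and 3 to "/device:GPU:0" and "/device:GPU:1".
        GPUOptions opts = GPUOptions.newBuilder()
                .setVisibleDeviceList("5,3")
                .build();

        System.out.println(opts.getVisibleDeviceList());  // 5,3
    }
}
```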
public String getAllocatorType ()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public com.google.protobuf.ByteString getAllocatorTypeBytes ()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public boolean getAllowGrowth ()
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public long getDeferredDeletionBytes ()
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType ()
public GPUOptions.Experimental getExperimental ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Experimental.Builder getExperimentalBuilder ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.ExperimentalOrBuilder getExperimentalOrBuilder ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public boolean getForceGpuCompatible ()
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public double getPerProcessGpuMemoryFraction ()
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
public int getPollingActiveDelayUsecs ()
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public int getPollingInactiveDelayMsecs ()
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public String getVisibleDeviceList ()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public com.google.protobuf.ByteString getVisibleDeviceListBytes ()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public boolean hasExperimental ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public final boolean isInitialized ()
public GPUOptions.Builder mergeExperimental ( GPUOptions.Experimental value)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException |
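A hedged sketch of this overload round-tripping a serialized message; a truncated or malformed stream would surface as the IOException above (class names assume the generated org.tensorflow.framework bindings):

```java
import java.io.IOException;

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import org.tensorflow.framework.GPUOptions;

public class MergeFromSketch {
    public static void main(String[] args) throws IOException {
        // Serialize a message, then rebuild it through mergeFrom.
        byte[] wire = GPUOptions.newBuilder()
                .setDeferredDeletionBytes(1_048_576L)
                .build()
                .toByteArray();

        GPUOptions parsed = GPUOptions.newBuilder()
                .mergeFrom(CodedInputStream.newInstance(wire),
                           ExtensionRegistryLite.getEmptyRegistry())
                .build();

        System.out.println(parsed.getDeferredDeletionBytes());  // 1048576
    }
}
```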
public final GPUOptions.Builder mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
public GPUOptions.Builder setAllocatorType (String value)
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder setAllocatorTypeBytes (com.google.protobuf.ByteString value)
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder setAllowGrowth (boolean value)
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public GPUOptions.Builder setDeferredDeletionBytes (long value)
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public GPUOptions.Builder setExperimental ( GPUOptions.Experimental.Builder builderForValue)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder setExperimental ( GPUOptions.Experimental value)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
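Either setExperimental overload populates the nested message; the sketch below passes a nested Builder (build() is applied internally) and assumes use_unified_memory is among the experimental fields in your proto version, which may differ:

```java
import org.tensorflow.framework.GPUOptions;

public class ExperimentalSketch {
    public static void main(String[] args) {
        // Pass a nested Builder directly; no explicit build() needed on it.
        GPUOptions opts = GPUOptions.newBuilder()
                .setExperimental(GPUOptions.Experimental.newBuilder()
                        .setUseUnifiedMemory(true))
                .build();

        System.out.println(opts.hasExperimental());                        // true
        System.out.println(opts.getExperimental().getUseUnifiedMemory());  // true
    }
}
```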
public GPUOptions.Builder setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public GPUOptions.Builder setForceGpuCompatible (boolean value)
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public GPUOptions.Builder setPerProcessGpuMemoryFraction (double value)
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
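This fraction typically reaches the runtime attached to a session ConfigProto; a minimal sketch, assuming org.tensorflow.framework.ConfigProto is available alongside GPUOptions:

```java
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;

public class MemoryFractionSketch {
    public static void main(String[] args) {
        // Cap each process at roughly 40% of the available GPU memory.
        ConfigProto config = ConfigProto.newBuilder()
                .setGpuOptions(GPUOptions.newBuilder()
                        .setPerProcessGpuMemoryFraction(0.4))
                .build();

        System.out.println(
                config.getGpuOptions().getPerProcessGpuMemoryFraction());  // 0.4
    }
}
```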
public GPUOptions.Builder setPollingActiveDelayUsecs (int value)
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public GPUOptions.Builder setPollingInactiveDelayMsecs (int value)
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public GPUOptions.Builder setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
public final GPUOptions.Builder setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
public GPUOptions.Builder setVisibleDeviceList (String value)
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public GPUOptions.Builder setVisibleDeviceListBytes (com.google.protobuf.ByteString value)
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;