Can I see which GPUs are available with MXNet? Is there something similar to TensorFlow's tf.test.gpu_device_name() in MXNet?
The definitive way to check whether your GPU is being utilized is the nvidia-smi command. My favorite arguments are:
nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 1
If you just want to test whether GPU support is available (which is what tf.test.gpu_device_name() does), the following function can help:
import mxnet as mx

def gpu_device(gpu_number=0):
    try:
        _ = mx.nd.array([1, 2, 3], ctx=mx.gpu(gpu_number))
    except mx.MXNetError:
        return None
    return mx.gpu(gpu_number)
This function returns None if the requested GPU device isn't available, or the relevant context if it is. You can also use it to check whether the system has any GPU support at all:
if not gpu_device():
    print('No GPU device found!')
Check whether MXNet lists the GPU:
import mxnet as mx
mx.context.num_gpus()
To use the library on the GPU, pass the argument mx.gpu(0) wherever a context is required. The 0 is the GPU index; with multiple GPUs there will be more indices.
In case you have built from source:
>>> from mxnet.runtime import feature_list
>>> feature_list()
[✔ CUDA, ✔ CUDNN, ✖ NCCL, ✔ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✖ OPENMP, ✖ SSE, ✔ F16C, ✔ JEMALLOC, ✖ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ BLAS_MKL, ✖ BLAS_APPLE, ✔ LAPACK, ✔ MKLDNN, ✔ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, ✖ CXX14, ✖ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✖ DEBUG]
Here, CUDA and CUDNN are enabled in the build flags, indicating that the library was built with GPU support!