multi-gpu Questions
1
I am on Windows 10 and am using multiple GPUs to train a machine learning model based on a GAN algorithm. You can check the full code over here:
Here, ...
Fineman asked 17/8, 2020 at 12:16
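The snippet doesn't name the framework, so purely as an illustration, here is a minimal sketch of a multi-GPU GAN setup with tf.distribute.MirroredStrategy, assuming TensorFlow/Keras; note that NCCL is unavailable on Windows, so a cross-device op is chosen explicitly:

import tensorflow as tf

# NCCL is unavailable on Windows, so pick the cross-device op explicitly.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())

with strategy.scope():
    # Both networks are built inside the scope so their variables are mirrored.
    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
        tf.keras.layers.Dense(784, activation="tanh"),
    ])
    discriminator = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(1),
    ])
    g_opt = tf.keras.optimizers.Adam(1e-4)
    d_opt = tf.keras.optimizers.Adam(1e-4)

The custom GAN train step would then be dispatched per replica with strategy.run(...).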
1
I have a server (Ubuntu 16.04) with 4 GPUs. My team shares this, and our current approach is to containerize all of our work with Docker, and to restrict containers to GPUs using something like $ N...
5
In Mac OS X, every display gets a unique CGDirectDisplayID number assigned to it. You can use CGGetActiveDisplayList() or [NSScreen screens] to access them, among others. Per Apple's docs:
A dis...
Atomics asked 18/5, 2010 at 23:26
1
Solved
I have access to six 24GB GPUs.
When I try to load some HuggingFace models, for example the following
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pr...
Diazo asked 15/2, 2023 at 11:33
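For checkpoints too large for one 24 GB card, the usual approach is Accelerate's device_map="auto", which shards the weights across all visible GPUs at load time. A minimal sketch; the model name below is a placeholder, since the snippet's checkpoint is cut off:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-xl"  # placeholder; the question's checkpoint is truncated
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Requires the `accelerate` package; spreads layers across all six GPUs.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto")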
5
Solved
When I launch my main script on the cluster in ddp mode (2 GPUs), PyTorch Lightning duplicates whatever is executed in the main script, e.g. prints or other logic. I need some extended training ...
Fusil asked 18/2, 2021 at 14:13
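Under DDP, Lightning launches one full copy of the script per GPU, so rank-guarding is the usual remedy. A minimal sketch using the rank_zero_only helper (its import path varies across Lightning versions):

from pytorch_lightning.utilities import rank_zero_only

@rank_zero_only
def log_once(msg):
    # Runs only on global rank 0; the other DDP processes skip it.
    print(msg)

log_once("this prints once, not once per GPU")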
2
Let's begin with the premise that I'm new to TensorFlow and deep learning in general.
I have a TF 2.0 Keras-style model trained using tf.Model.train(), two available GPUs, and I'm looki...
Relate asked 20/11, 2019 at 9:50
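In TF 2.x the idiomatic multi-GPU route is tf.distribute.MirroredStrategy: build and compile the model inside the strategy scope, and model.fit() replicates it across the GPUs. A minimal sketch with a stand-in model:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # picks up both GPUs by default
print("replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# The global batch is split across the replicas automatically:
# model.fit(dataset, epochs=10)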
2
Solved
My server has two GPUs. How can I use both of them for training at the same time to maximize their computing power? Is my code below correct? Does it allow my model to be properly trained?
class MyMode...
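Since the class in the question is cut off, here is a minimal sketch of the common single-machine pattern, torch.nn.DataParallel, with a stand-in model:

import torch
import torch.nn as nn

class MyModel(nn.Module):  # stand-in for the truncated class in the question
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)

model = MyModel()
if torch.cuda.device_count() > 1:
    # Replicates the model on each GPU and splits each batch between them.
    model = nn.DataParallel(model)
model = model.to("cuda")

For serious training, torch.nn.parallel.DistributedDataParallel is generally preferred for performance.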
3
I want my model to run on multiple GPUs, sharing parameters but with different batches of data.
Can I do something like that with model.fit()? Is there any other alternative?
It asked 18/7, 2017 at 12:2
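For Keras of that era, keras.utils.multi_gpu_model does exactly this: one set of shared weights, with each GPU fed a different slice of the batch (in some 2.0.x releases the import lives under keras.utils.training_utils). A minimal sketch with a stand-in model:

from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

model = Sequential([Dense(1, input_shape=(10,))])  # stand-in model
# Weights are shared; each of the 2 GPUs receives batch_size/2 samples per step.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer="adam", loss="mse")
# parallel_model.fit(x, y, batch_size=256, epochs=10)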
0
I have multiple GPU devices and want to run a PyTorch model on them. I have already tried the MULTI-GPU EXAMPLES and DATA PARALLELISM approaches in my code via
device = torch.device("cuda:0,1,2")
model = torch...
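torch.device names exactly one device, so "cuda:0,1,2" raises an invalid-device-string error; fanning a model out across several GPUs goes through DataParallel's device_ids instead. A minimal sketch with a stand-in model:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # stand-in for the truncated model
# A torch.device addresses a single GPU; multi-GPU fan-out is DataParallel's job.
model = nn.DataParallel(model, device_ids=[0, 1, 2])
model = model.to("cuda:0")  # parameters live on the first listed device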
1
I was trying to set up DDP (distributed data parallel) on a DGX A100, but it doesn't work. Whenever I try to run it, it simply hangs. My code is super simple, just spawning 4 processes for 4 GPUs (for the sake ...
Gingery asked 5/3, 2021 at 18:49
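Hangs like this are often a rank/device mismatch or an NCCL transport problem. A minimal spawn-based sketch with the usual guards and NCCL debugging enabled (the address and port are placeholders):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # placeholder rendezvous address
    os.environ["MASTER_PORT"] = "29500"      # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)  # a missing set_device is a classic cause of NCCL hangs
    dist.barrier()
    print(f"rank {rank} ready")
    dist.destroy_process_group()

if __name__ == "__main__":
    os.environ["NCCL_DEBUG"] = "INFO"  # surfaces why collectives stall
    mp.spawn(worker, args=(4,), nprocs=4)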
2
Solved
I want to train a model on several GPUs using TensorFlow 2.0. In the TensorFlow tutorial for distributed training (https://www.tensorflow.org/guide/distributed_training), the tf.data datagenerator ...
Brummett asked 4/12, 2019 at 22:51
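With MirroredStrategy, a tf.data pipeline can either be passed straight to model.fit() (batches are split automatically) or distributed explicitly for a custom training loop. A minimal sketch of the explicit route, using toy in-memory tensors:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 64 * strategy.num_replicas_in_sync

# Toy dataset; a real pipeline would read and preprocess files here.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([1024, 10]), tf.random.normal([1024, 1]))
).batch(GLOBAL_BATCH)

# Each replica receives GLOBAL_BATCH / num_replicas examples per step.
dist_dataset = strategy.experimental_distribute_dataset(dataset)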
5
Solved
I'm working on a business project that is written in Java, and it needs huge computation power to compute business markets. Simple math, but with a huge amount of data.
We ordered some CUDA GPUs to try...
3
I'd like to know the possible ways to implement batch normalization layers with synchronized batch statistics when training with multiple GPUs.
Caffe: Maybe there are some variants of Caffe that coul...
Ursal asked 27/3, 2017 at 21:42
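The question predates it, but for reference, current PyTorch ships this natively: SyncBatchNorm all-reduces batch statistics across the DDP process group. A minimal sketch:

import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
# Rewrites every BatchNorm*d layer into a SyncBatchNorm that all-reduces
# batch statistics across the default DDP process group.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
# model = nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[rank])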
2
I have a server with multiple GPUs and want to make full use of them during model inference inside a Java app.
By default TensorFlow seizes all available GPUs but uses only the first one.
I can ...
Lineman asked 13/12, 2017 at 18:35
3
Solved
Would using multiple GPUs in Vulkan be something like creating many command queues and then dividing command buffers between them?
There are two problems:
In OpenGL, we use GLEW to get functions. With more...
1
Solved
I have created 3 virtual GPUs (I have 1 physical GPU) and am trying to speed up vectorization on images. However, using the code provided below, with manual placement from the official docs (here), I got strange results: training...
Eriha asked 26/12, 2019 at 9:18
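Virtual devices are carved out of one physical GPU via tf.config; since they share the same physical compute, little speedup should be expected for compute-bound work. A minimal sketch using the current API names (in TF 2.0 these lived under tf.config.experimental):

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Split the single physical GPU into three 1 GB logical devices.
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024)] * 3,
    )
print(tf.config.list_logical_devices("GPU"))  # GPU:0, GPU:1, GPU:2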
5
Solved
I have a standard TensorFlow Estimator with some model and want to run it on multiple GPUs instead of just one. How can this be done using data parallelism?
I searched the TensorFlow docs but did...
Johanajohanan asked 10/11, 2017 at 13:22
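The question predates it slightly, but later TF 1.x releases let an Estimator consume a distribution strategy through RunConfig. A minimal sketch, with the model_fn left as a commented stand-in:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
# train_distribute replicates the model_fn across GPUs with data parallelism.
config = tf.estimator.RunConfig(train_distribute=strategy)
# estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=config)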
0
I am trying to run a subclassed Keras model on multiple GPUs. The code runs as expected; however, the following "warning" crops up during execution:
"Efficient allreduce is ...
Allonym asked 2/7, 2019 at 0:22
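That warning is typically triggered by sparse (IndexedSlices) gradients, e.g. from embedding layers, for which the allreduce has no fast path; it is usually benign. If needed, the cross-device op can be swapped, as in this minimal sketch:

import tensorflow as tf

# ReductionToOneDevice sidesteps the allreduce path that lacks a fast lane
# for sparse (IndexedSlices) gradients such as embedding updates.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice())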
1
Solved
In order to leverage the GPUs on a system, I'd like to be able to draw a block diagram and understand the connections represented by "nvidia-smi topo -m" output.
Here is an example output:
Can...
2
I have a 4-GPU machine on which I run TensorFlow (GPU) with Keras. Some of my classification problems take several hours to complete.
nvidia-smi reports a Volatile GPU-Util that never exceeds 25% ...
Illness asked 15/11, 2017 at 2:33
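Flat, low GPU utilization usually points at the input pipeline rather than the GPUs. As one hedged illustration in current TF, tf.data prefetching overlaps host-side batch preparation with device compute (use tf.data.experimental.AUTOTUNE on older versions):

import tensorflow as tf

# Overlap batch preparation on the CPU with compute on the GPUs.
dataset = (tf.data.Dataset.from_tensor_slices(
               (tf.random.normal([1024, 64, 64, 3]), tf.zeros([1024])))
           .batch(256)
           .prefetch(tf.data.AUTOTUNE))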
2
Solved
I have a CUDA stream which someone handed to me - a cudaStream_t value. The CUDA Runtime API does not seem to indicate how I can obtain the index of the device with which this stream is associated....
Huh asked 17/7, 2015 at 11:28
2
Solved
I'm having a problem with ffmpeg video encoding using GPU (CUDA).
I have two NVIDIA GTX 1050 Ti cards.
The problem comes when I try to do multiple parallel encodings. With more than 2 processes, ffmpeg die...
1
Solved
When I try the following code sample for using TensorFlow with Ray, TensorFlow fails to detect the GPUs on my machine when invoked by the "remote" worker, but it does find the GPUs when invoked "l...
Snigger asked 27/1, 2018 at 1:26
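Ray deliberately hides GPUs from tasks that don't reserve them: a remote function only sees the GPUs requested via num_gpus, which Ray exposes through CUDA_VISIBLE_DEVICES. A minimal sketch:

import ray
import tensorflow as tf

ray.init()

@ray.remote(num_gpus=1)
def check_gpus():
    # Ray sets CUDA_VISIBLE_DEVICES to the GPUs reserved for this task;
    # without num_gpus=1, the worker would see no GPUs at all.
    return tf.config.list_physical_devices("GPU")

print(ray.get(check_gpus.remote()))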
3
Solved
Following the upgrade to Keras 2.0.9, I have been using the multi_gpu_model utility, but I can't save my models or best weights using
model.save('path')
The error I get is
TypeError: can’t pi...
Abundant asked 9/11, 2017 at 20:18
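The usual workaround is to keep a handle on the single-GPU template model and call save() on that, since the multi-GPU wrapper holds unpicklable replica state while sharing the same weights. A minimal sketch with a stand-in model:

from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

base_model = Sequential([Dense(1, input_shape=(10,))])  # stand-in model
parallel_model = multi_gpu_model(base_model, gpus=2)
parallel_model.compile(optimizer="adam", loss="mse")
# ... train parallel_model ...
base_model.save("model.h5")  # weights are shared, so saving the template works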
0
I want to use two Python source files: one developed with TensorFlow and the other developed using PyTorch. I want to run each of these in a thread with a separate GPU. The i...
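Mixing both frameworks in one process via threads is fragile; separate processes pinned to separate GPUs with CUDA_VISIBLE_DEVICES are the safer route. A minimal sketch; the script names are hypothetical:

import os
import subprocess

def launch(script, gpu):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    # Each framework runs in its own process, pinned to one GPU.
    return subprocess.Popen(["python", script], env=env)

procs = [launch("tf_job.py", 0), launch("torch_job.py", 1)]  # hypothetical scripts
for p in procs:
    p.wait()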