Tensorflow Doesn't Allocate Full Gpu Memory

TensorFlow allocates all of the GPU memory by default, but with my new settings it only sees 9588 MiB out of 11264 MiB. With my old settings I got roughly 11,000 MiB, as expected.

Solution 1:

You are probably using the WDDM driver, under which Windows reserves a portion of the VRAM for the display subsystem. Switching the GPU to the TCC driver avoids this reservation.

Here is the page on TCC: https://docs.nvidia.com/gameworks/content/developertools/desktop/nsight/tesla_compute_cluster.htm

Here is a related question: How can I use 100% of VRAM on a secondary GPU from a single process on windows 10?
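If your GPU supports it, the driver model can be inspected and switched with nvidia-smi. A sketch, assuming GPU index 0 and an administrator prompt; note that TCC is only available on certain GPUs (e.g. Tesla/Quadro), requires a reboot to take effect, and a TCC GPU can no longer drive a display:

nvidia-smi --query-gpu=driver_model.current --format=csv   # show WDDM or TCC
nvidia-smi -i 0 -dm 1                                      # switch GPU 0 to TCC (0 = WDDM, 1 = TCC)
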

Solution 2:

# Option A: plain TensorFlow 1.x -- cap this process at 20% of GPU memory
import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

# Option B: the same cap, but registering the session with Keras
from keras import backend as K
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2
session = tf.Session(config=config)
K.set_session(session)

This works well in my case.
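The snippets above use the TensorFlow 1.x session API, which no longer exists in TensorFlow 2.x. A rough equivalent under TF 2.x, assuming at least one visible GPU (the 2048 MiB limit is an illustrative value, not from the original answer):

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Either let the allocation grow on demand instead of grabbing
    # everything up front...
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # ...or set a hard cap instead, the closest analogue of
    # per_process_gpu_memory_fraction (do not combine with memory growth
    # on the same device):
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=2048)])
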
