How can I solve ‘ran out of GPU memory’ in TensorFlow?

I was encountering out-of-memory errors when training a small CNN on a GTX 970. Somewhat by fluke, I discovered that telling TensorFlow to allocate GPU memory as needed (instead of up front) resolved all my issues. This can be done with the following Python code:

    import tensorflow as tf

    # Allocate GPU memory on demand instead of reserving it all up front
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)

By default, TensorFlow pre-allocates roughly 90% of the available GPU memory up front. For reasons I never pinned down, this later caused out-of-memory errors even though the model fit entirely in GPU memory. With the code above, I no longer hit OOM errors.
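
If you would rather cap how much memory TensorFlow grabs instead of letting it grow on demand, the TF 1.x API also exposes `per_process_gpu_memory_fraction`. A minimal sketch (the 0.7 fraction is just an example value I picked for illustration, not a recommendation):

    import tensorflow as tf

    # Reserve at most ~70% of the GPU's memory for this process
    # (0.7 is an arbitrary example value)
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))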

Note: If the model is too big to fit in GPU memory, this probably won’t help!
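
For what it's worth, the `ConfigProto`/`Session` code above is the TensorFlow 1.x API. If you are on TensorFlow 2.x, the equivalent (as far as I know) is to enable memory growth per physical device, and it has to run before any GPUs are initialized:

    import tensorflow as tf

    # Enable on-demand GPU memory allocation in TF 2.x
    # (must be called before the GPUs are initialized)
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)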
