Get the total and available free GPU memory using PyTorch

PyTorch can give you total, reserved, and allocated memory info:

import torch

t = torch.cuda.get_device_properties(0).total_memory  # total memory on device 0
r = torch.cuda.memory_reserved(0)   # memory reserved by PyTorch's caching allocator
a = torch.cuda.memory_allocated(0)  # memory actually occupied by tensors
f = r - a  # free memory inside the reserved pool
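
Note that f is only the free memory inside the pool PyTorch has already reserved, not the free memory on the device. If your PyTorch version provides it, torch.cuda.mem_get_info wraps cudaMemGetInfo and reports device-wide free and total memory in bytes:

free_b, total_b = torch.cuda.mem_get_info(0)  # device-wide free/total bytes for GPU 0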

Python bindings to NVIDIA's NVML library can give you the info for the whole GPU (0 in this case means the first GPU device):

from pynvml import *
nvmlInit()                         # initialize NVML once per process
h = nvmlDeviceGetHandleByIndex(0)  # handle for GPU 0
info = nvmlDeviceGetMemoryInfo(h)  # all values are in bytes
print(f'total    : {info.total}')
print(f'free     : {info.free}')
print(f'used     : {info.used}')

To install the bindings: pip install pynvml
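
If you query repeatedly or in a long-running process, you can release NVML's state afterwards with the matching shutdown call from the same bindings:

nvmlShutdown()  # optional cleanup once you are done querying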

You can also check nvidia-smi for memory info; its query flags make it easy to script, as sketched after this list.
You can use nvtop, but (at the time of writing) this tool needs to be built from source.
Another tool that shows memory usage is gpustat (pip3 install gpustat).
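
If you prefer to stay in Python without extra bindings, here is a minimal sketch that parses nvidia-smi's CSV query output (it assumes nvidia-smi is on your PATH; the values come back in MiB):

import subprocess

out = subprocess.check_output([
    'nvidia-smi',
    '--query-gpu=memory.total,memory.free,memory.used',
    '--format=csv,noheader,nounits',
], text=True)

for gpu_id, line in enumerate(out.strip().splitlines()):
    total, free, used = (int(x) for x in line.split(', '))
    print(f'GPU {gpu_id}: total={total} MiB, free={free} MiB, used={used} MiB')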

If you would like to query the memory from C++/CUDA:

#include <iostream>
#include "cuda.h"
#include "cuda_runtime_api.h"

using namespace std;

int main( void ) {
    int num_gpus;
    size_t free, total;
    cudaGetDeviceCount( &num_gpus );
    for ( int gpu_id = 0; gpu_id < num_gpus; gpu_id++ ) {
        cudaSetDevice( gpu_id );          // make gpu_id the current device
        int id;
        cudaGetDevice( &id );             // confirm which device is active
        cudaMemGetInfo( &free, &total );  // free/total device memory in bytes
        cout << "GPU " << id << " memory: free=" << free << ", total=" << total << endl;
    }
    return 0;
}
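
To build it, compile with nvcc (the filename meminfo.cu is just an example) and run the binary:

nvcc meminfo.cu -o meminfo
./meminfo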
