shared-memory
Why use shm_open?
If you open and mmap() a regular file, data will end up in that file. If you only need to share a memory region and don't need to persist the data (which would incur extra I/O overhead), use shm_open(). Such a memory region would also allow you to store other kinds of objects such as mutexes … Read more
Is C++11 atomic usable with mmap?
I’m two months late, but I’m having the exact same problem right now and I think I’ve found some sort of an answer. The short version is that it should work, but I’m not sure if I’d depend on it. Here’s what I found: The C++11 standard defines a new memory model, but it has … Read more
wait and notify in C/C++ shared memory
Instead of the Java object that you would use to wait/notify, you need two objects: a mutex and a condition variable. These are initialized with pthread_mutex_init and pthread_cond_init. Where you would have synchronized on the Java object, use pthread_mutex_lock and pthread_mutex_unlock (note that in C you have to pair these manually). If you don’t … Read more
Delete all SYSTEM V shared memory and semaphores on UNIX-like systems
Here, save and try this script (kill_ipcs.sh) on your shell:

```shell
#!/bin/bash
ME=`whoami`
IPCS_S=`ipcs -s | egrep "0x[0-9a-f]+ [0-9]+" | grep $ME | cut -f2 -d" "`
IPCS_M=`ipcs -m | egrep "0x[0-9a-f]+ [0-9]+" | grep $ME | cut -f2 -d" "`
IPCS_Q=`ipcs -q | egrep "0x[0-9a-f]+ [0-9]+" | grep $ME | cut -f2 -d" "`
```

for … Read more
Why do I need a memory barrier?
Barrier #2 guarantees that the write to _complete gets committed immediately. Otherwise it could remain in a queued state meaning that the read of _complete in B would not see the change caused by A even though B effectively used a volatile read. Of course, this example does not quite do justice to the problem … Read more
When/why use an MVar over a TVar
MVar:
- can be empty
- used to implement synchronization patterns between threads
- allows one-way communication between threads
- can be faster than TVar in some cases

TVar:
- can not be empty
- atomic transactions
- “shared memory” between threads; can be used to implement, for example, a lookup cache from which multiple threads can read/write
- access is linear time … Read more
Use shared GPU memory with TensorFlow?
Shared memory is an area of the main system RAM reserved for graphics. References: https://en.wikipedia.org/wiki/Shared_graphics_memory https://www.makeuseof.com/tag/can-shared-graphics-finally-compete-with-a-dedicated-graphics-card/ This type of memory is what integrated graphics, e.g. the Intel HD series, typically use. This is not on your NVIDIA GPU, and CUDA can’t use it. Tensorflow can’t use it when running on GPU because CUDA can’t use it, … Read more
Does using .reset() on a std::shared_ptr delete all instances
When you use .reset(), you are eliminating one owner of the pointer, but all of the other owners are still around. Here is an example:

```cpp
#include <memory>
#include <cstdio>

class Test {
public:
    ~Test() { std::puts("Test destroyed."); }
};

int main() {
    std::shared_ptr<Test> p = std::make_shared<Test>();
    std::shared_ptr<Test> q = p;
    std::puts("p.reset()...");
    p.reset();
    std::puts("q.reset()...");
    q.reset();
    std::puts("done");
```

… Read more
When to use Pipes vs When to use Shared Memory
Essentially, pipes – whether named or anonymous – are used like message passing. Someone sends a piece of information to the recipient and the recipient can receive it. Shared memory is more like publishing data – someone puts data in shared memory and the readers (potentially many) must use synchronization e.g. via semaphores to learn … Read more