Does this imply that "there is no case where using a non-lock-free atomic type would be a better choice than a lock-free atomic type when the latter is available"? (Mainly in terms of performance rather than ease of use.)
No, and in general that is not true.
Suppose you have two cores and three threads that are ready to run. Threads A and B access the same collection and will contend heavily, while thread C accesses completely different data and will contend with neither of them.
If threads A and B use locks, whichever of them loses the race for the lock quickly blocks and gets descheduled, and thread C takes its place on that core. Whichever of A or B remains scheduled then runs with almost no contention at all.
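A minimal sketch of that lock-based scenario might look like the following. (The names `shared_data`, `worker_ab`, and `worker_c` are illustrative, not from the question; the point is only that a blocked worker frees its core for thread C.)

```cpp
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
std::vector<int> shared_data;

void worker_ab(int id) {
    for (int i = 0; i < 1'000'000; ++i) {
        // If the other worker holds the lock, this thread blocks here and the
        // scheduler is free to run thread C on this core instead.
        std::lock_guard<std::mutex> guard(m);
        shared_data.push_back(id);
    }
}

void worker_c() {
    // Touches completely different data, so it never contends with A or B.
    volatile long local = 0;
    for (int i = 0; i < 1'000'000; ++i) local += i;
}

int main() {
    std::thread a(worker_ab, 0), b(worker_ab, 1), c(worker_c);
    a.join(); b.join(); c.join();
}
```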
By contrast, with a lock-free collection, neither thread ever blocks, so the scheduler never gets a chance to deschedule thread A or B. It is entirely possible that A and B will run concurrently through their entire timeslices, ping-ponging the same cache lines between their cores' caches the whole time.
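For contrast, here is a sketch of the lock-free version of the same contention, reduced to a CAS retry loop on a single `std::atomic` counter (again, the names are illustrative). Both threads stay runnable the whole time, so the cache line holding `counter` bounces between the two cores on every retry:

```cpp
#include <atomic>
#include <thread>

std::atomic<long> counter{0};

void worker_ab() {
    for (int i = 0; i < 1'000'000; ++i) {
        long expected = counter.load(std::memory_order_relaxed);
        // Neither thread ever blocks in this loop, so the scheduler has no
        // reason to deschedule either of them; a failed CAS just refreshes
        // 'expected' and retries, dragging the cache line back and forth.
        while (!counter.compare_exchange_weak(expected, expected + 1,
                                              std::memory_order_relaxed)) {
            // retry
        }
    }
}

int main() {
    std::thread a(worker_ab), b(worker_ab);
    a.join(); b.join();
}
```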
In general, locks are more efficient than lock-free code, which is why locks are used so much more often in threaded code. However, std::atomic types are generally not used in contexts like this, and it would likely be a mistake to use a std::atomic type in a context where you have reason to think a lock would be more efficient.