Can a size_type ever be larger than std::size_t?

Yes, and this could be useful in some cases.

Suppose a program needs to address more storage than will fit in the machine's virtual address space. With an allocator whose pointer type refers to memory-mapped storage, mapping it into the address space on demand when a pointer object is indirected, the program can address arbitrarily large amounts of memory.

This remains conformant with 18.2:6, which only requires size_t to be large enough to contain the size of any object; by contrast, 17.6.3.5:2 Table 28 defines size_type as a type that can represent the size of the largest object in the allocation model, which need not be an actual object in the C++ memory model.

Note that the requirements in 17.6.3.5:2 Table 28 do not require that allocating multiple objects yields an array; for allocate(n) the requirement is:

Memory is allocated for n objects of type T

and for deallocate the assertion is:

All n T objects in the area pointed to by p shall be destroyed prior to this call.

Note area, not array. Another relevant passage is 17.6.3.5:4:

The X::pointer, X::const_pointer, X::void_pointer, and X::const_void_pointer types shall satisfy the requirements of NullablePointer (17.6.3.3). No constructor, comparison operator, copy operation, move operation, or swap operation on these types shall exit via an exception. X::pointer and X::const_pointer shall also satisfy the requirements for a random access iterator (24.2).

There is no requirement here that (&*p) + n should be the same as p + n.

It’s perfectly legitimate for a model expressible within another model to contain objects that are not representable in the outer model; compare non-standard models in mathematical logic.
