VLAs do have some overhead (compared to “ordinary” named compile-time-sized arrays).
Firstly, a VLA has run-time length, and yet the language provides you with the means to obtain the actual size of the array at run time (using `sizeof`). This immediately means that the actual size of the array has to be stored somewhere, which results in some insignificant per-array memory overhead. However, since VLAs can only be declared as automatic objects, this memory overhead is not something anyone would ever notice. It is just like declaring an extra local variable of integral type.
Secondly, a VLA is normally allocated on the stack, but because of its variable size, in the general case its exact location in memory is not known at compile time. For this reason the underlying implementation usually has to implement it as a pointer to a memory block. This introduces some additional memory overhead (for the pointer), which is again completely insignificant for the reasons described above. It also introduces a slight performance overhead, since we have to read the pointer value in order to find the actual array. This is the same overhead you get when accessing `malloc`-ed arrays (and don’t get with named compile-time-sized arrays).
Since the size of a VLA is a run-time integer value, it can, of course, be passed as a command-line argument. The VLA doesn’t care where its size comes from.
VLAs were introduced as run-time-sized arrays with low allocation/deallocation cost. They fit between “ordinary” named compile-time-sized arrays (which have virtually zero allocation/deallocation cost, but fixed size) and `malloc`-ed arrays (which have run-time size, but relatively high allocation/deallocation cost).
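A side-by-side sketch of the two run-time-sized options (the helper names `sum_with_vla` and `sum_with_malloc` are made up for this comparison):

```c
#include <stdlib.h>

/* Run-time size, cheap allocation: typically one
   stack-pointer adjustment, freed automatically at scope exit. */
int sum_with_vla(size_t n) {
    int buf[n];
    int s = 0;
    for (size_t i = 0; i < n; ++i) { buf[i] = (int)i; s += buf[i]; }
    return s;
}

/* Run-time size, costlier allocation: a heap allocator call,
   plus an explicit free. */
int sum_with_malloc(size_t n) {
    int *buf = malloc(n * sizeof *buf);
    if (!buf) return -1;
    int s = 0;
    for (size_t i = 0; i < n; ++i) { buf[i] = (int)i; s += buf[i]; }
    free(buf);
    return s;
}
```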
VLAs obey [almost] the same scope-dependent lifetime rules as automatic (i.e. local) objects, which means that in the general case they can’t replace `malloc`-ed arrays. Their applicability is limited to situations where you need a quick run-time-sized array with a typical automatic lifetime.
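The lifetime difference can be sketched like this (hypothetical helper names):

```c
#include <stdlib.h>

/* Heap storage survives the function that created it,
   so the pointer can safely be returned to the caller. */
int *make_heap_array(size_t n) {
    int *p = malloc(n * sizeof *p);
    if (p)
        for (size_t i = 0; i < n; ++i)
            p[i] = (int)i;
    return p;   /* OK: caller must eventually free(p) */
}

/* A VLA has automatic lifetime, like any local variable. */
int vla_last(size_t n) {
    int scratch[n];
    for (size_t i = 0; i < n; ++i)
        scratch[i] = (int)i;
    return scratch[n - 1];
    /* returning `scratch` itself would be undefined behavior:
       the array is gone once this function returns */
}
```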