Good question. Actually, I always show these 3 pictures:
[three plots of the growth curves over n = [0; 10], n = [0; 100], and n = [0; 1000]]
So O(N*log(N)) is far better than O(N^2): it is much closer to O(N) than to O(N^2).
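If you cannot show the pictures, a tiny sketch like this (plain Python, no plotting library) gives the same feel by printing the values at the upper end of each range:

```python
import math

# N*log2(N) and N^2 at the upper end of each plotted range.
for n in (10, 100, 1000):
    print(f"N = {n:4d}:  N*log2(N) ~ {n * math.log2(n):9.0f}   N^2 = {n * n:9d}")
```

At N = 1000, N*log2(N) is only about 10·N, while N^2 is 1000·N.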
But your O(N^2) algorithm is faster for N < 100 in real life. There are a lot of reasons why it can be faster: maybe it benefits from better memory allocation or other “non-algorithmic” effects, maybe the O(N*log(N)) algorithm requires a data-preparation phase, or maybe the O(N^2) iterations are simply shorter. In any case, Big-O notation is only meaningful for large enough N.
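As a toy illustration of the constant-factor effect (the costs 100 and 5 below are made up, not measurements of any real algorithm):

```python
import math

# Hypothetical per-step costs: the N*log(N) algorithm pays 100 time units
# per step, the N^2 algorithm only 5.
def t_nlogn(n): return 100 * n * math.log2(n)
def t_quad(n):  return 5 * n * n

# Find the first N where the asymptotically better algorithm actually wins.
crossover = next(n for n in range(2, 10_000) if t_nlogn(n) < t_quad(n))
print(f"N^2 is faster up to N = {crossover - 1}, N*log(N) wins from N = {crossover}")
```

With those arbitrary constants the quadratic algorithm wins all the way up to N ≈ 143, even though its complexity class is worse.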
If you want to demonstrate why one algorithm is faster for small N, you can measure the execution time of one iteration and the constant overhead of both algorithms, then use those measurements to correct the theoretical plot.
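A minimal sketch of that correction, assuming you have already measured a constant overhead and a per-iteration time for each algorithm (the numbers below are placeholders, not real measurements):

```python
import math

# Hypothetically measured for each algorithm (placeholder numbers, in seconds):
#   overhead  - constant setup cost, e.g. allocation or a data-preparation phase
#   per_iter  - execution time of one iteration of the main loop
ALGO_NLOGN = {"overhead": 50e-6, "per_iter": 2.0e-6, "growth": lambda n: n * math.log2(n)}
ALGO_QUAD  = {"overhead": 1e-6,  "per_iter": 0.1e-6, "growth": lambda n: n * n}

def predicted_time(algo, n):
    # Corrected theoretical running time: overhead + per-iteration time * growth(N)
    return algo["overhead"] + algo["per_iter"] * algo["growth"](n)

for n in (10, 100, 1000):
    print(f"N = {n:4d}: O(N*log N) ~ {predicted_time(ALGO_NLOGN, n) * 1e6:9.1f} us, "
          f"O(N^2) ~ {predicted_time(ALGO_QUAD, n) * 1e6:9.1f} us")
```

With those placeholder constants the quadratic algorithm is still ahead at N = 100 but loses badly by N = 1000; evaluating predicted_time over a fine range of N gives the corrected curves.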
Or just measure the execution time of both algorithms for different values of N and plot the empirical data.
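A sketch of the empirical approach using timeit; insertion sort and Python's built-in sorted are used here purely as stand-ins for an O(N^2) and an O(N*log(N)) algorithm, so swap in your own two implementations:

```python
import random
import timeit

def insertion_sort(a):
    """Stand-in O(N^2) algorithm."""
    a = list(a)                      # sort a copy so every run starts from unsorted data
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def nlogn_sort(a):
    """Stand-in O(N*log(N)) algorithm."""
    return sorted(a)

RUNS = 20
for n in (10, 100, 1000):
    data = [random.random() for _ in range(n)]
    t_quad = timeit.timeit(lambda: insertion_sort(data), number=RUNS) / RUNS
    t_nlogn = timeit.timeit(lambda: nlogn_sort(data), number=RUNS) / RUNS
    print(f"N = {n:4d}: O(N^2) ~ {t_quad * 1e6:9.1f} us   O(N*log N) ~ {t_nlogn * 1e6:9.1f} us")
```

Plotting those measured times against N shows directly where the two curves cross on your machine (with these particular stand-ins the C-implemented built-in sort wins even for tiny N; the point is the measurement technique, not the specific algorithms).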