As xmoex said:
O(1) constitutes constant memory usage, so the amount of input is inconsequential.
O(n) constitutes linear memory usage, so more input means linearly more memory.
O(n*n) constitutes quadratic memory usage, so more input means quadratically more memory (growing in proportion to n^2).
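A minimal sketch of the three classes above (the function names are just illustrative):

```python
def constant_space(nums):
    # O(1) space: a fixed number of variables, no matter how long nums is
    total = 0
    for x in nums:
        total += x
    return total

def linear_space(nums):
    # O(n) space: the result list grows in proportion to the input
    return [x * 2 for x in nums]

def quadratic_space(nums):
    # O(n*n) space: an n-by-n table holding every pair sum
    return [[a + b for b in nums] for a in nums]
```

Note that all three also differ in time complexity, which is measured separately, as described below.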
This measure of memory (space) complexity is in most cases completely independent of time complexity. To judge the quality of an algorithm it is important to know how it behaves with respect to both, but they must be calculated separately. One may be more important than the other depending on your use case and the circumstances of the problem.
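To see why the two are judged separately, here is a hedged example: two ways to detect a duplicate in a list, one favoring memory and one favoring time (names are hypothetical):

```python
def has_duplicate_low_memory(nums):
    # O(1) extra space but O(n^2) time: compare every pair directly
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

def has_duplicate_fast(nums):
    # O(n) extra space but O(n) time: remember every value seen so far
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Which one is "better" depends entirely on whether memory or time is the scarcer resource in your situation.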