The reason the two other answers contradict each other is that they’re both right (and worth reading), but they apply to different situations.
If you’re releasing a library on PyPI, you should declare whatever dependencies you know about, but not pin to a specific version. For example, if you know you need `somepkg >= 1.2`, but `1.4` is broken, then you can write something like `somepkg >= 1.2, != 1.4`. If one of the things you know is that `somepkg` follows SemVer, then you can add a `< 2`.
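Concretely, that constraint ends up as a single version specifier in your project metadata. A minimal sketch, assuming a setuptools-based `setup.py` (`mylibrary` and `somepkg` are placeholder names):

```python
# setup.py -- a minimal sketch of a library's metadata (setuptools).
# The specifier encodes everything we know: we need at least 1.2,
# we know 1.4 is broken, and since somepkg follows SemVer we also
# exclude the (potentially incompatible) 2.x series.
from setuptools import setup

setup(
    name="mylibrary",
    version="0.1.0",
    install_requires=[
        "somepkg >= 1.2, != 1.4, < 2",
    ],
)
```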
If you’re building something like a web app that you deploy yourself, then you should pin all of your exact dependencies, and use a service like pyup.io or requires.io to notify you when new versions are released. This way you can stay up to date, while making sure that the versions you deploy are the same as the versions you tested against.
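The usual mechanics for this are pip’s requirements files: install into a clean virtualenv, record exactly what got installed, and reinstall from that record when you deploy. A sketch of the workflow (package names and versions are illustrative):

```bash
# In a clean virtualenv, install only what the app directly needs:
pip install somewebframework somepkg

# Record the exact versions of everything that got installed,
# including transitive dependencies:
pip freeze > requirements.txt

# requirements.txt now contains hard pins like:
#   somewebframework==4.2.1
#   somepkg==1.3
#   requests==2.31.0

# On the deployment machine, reproduce exactly those versions:
pip install -r requirements.txt
```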
Notice that these two pieces of advice complement each other: it’s just a fact that if app A uses library B, then either the author of A or the author of B can pin B’s dependencies, but not both. So we have to pick one. The underlying principle here is that pinning is best done as late as possible, i.e., by the author of A, who can see their whole system; the job of library B is to pass on some useful hints to help A make these decisions. In particular, if all of the libraries that A depends on record exactly what they know about their underlying dependencies, then A can make sensible decisions about what to do when those constraints overlap.

For example, if dependency B depends on `requests >= 1.0, != 1.2` and dependency C depends on `requests >= 1.1`, then we can guess that 1.1 or 1.3 might be good versions to pin. But if dependency B depends on `requests == 1.1` and dependency C depends on `requests == 1.2`, then we’re just stuck.
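If you want to check this kind of overlap mechanically, the `packaging` library on PyPI can intersect version specifier sets. A small sketch reusing the versions from the example above:

```python
# A sketch of checking whether two libraries' constraints on the same
# dependency are compatible, using the `packaging` library
# (pip install packaging).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

b_wants = SpecifierSet(">= 1.0, != 1.2")  # library B's hint
c_wants = SpecifierSet(">= 1.1")          # library C's hint
combined = b_wants & c_wants              # intersection of both sets

# 1.1 and 1.3 satisfy both constraints, so either is a reasonable pin:
for candidate in ["1.1", "1.2", "1.3"]:
    print(candidate, Version(candidate) in combined)
# -> 1.1 True / 1.2 False / 1.3 True

# With two hard pins, the intersection is empty: no version works.
stuck = SpecifierSet("== 1.1") & SpecifierSet("== 1.2")
print(Version("1.1") in stuck, Version("1.2") in stuck)  # False False
```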