Earlier versions of nginx (before 1.1.4), which already powered a huge number of the most visited websites worldwide (and some still do even nowadays, if the server headers are to be believed), didn't even support `keepalive` on the upstream side, because there is very little benefit to doing so in the datacentre setting, unless you have non-trivial latency between your various hosts; see https://serverfault.com/a/883019/110020 for some explanation.
Basically, unless you know you specifically need keepalive between your upstream and front-end, chances are it's only making your architecture less resilient and worse off.
(Note that your current solution is also wrong, because a change in the IP address will likewise go undetected, since you're doing hostname resolution only at config reload; so, even if nginx does start, it'll basically stop working once the IP addresses of the upstream servers change.)
Potential solutions, pick one:

- The best solution would seem to be to just get rid of `upstream` `keepalive`, as likely unnecessary in a datacentre environment, and to use variables with `proxy_pass` for up-to-date DNS resolution for each request (nginx is still smart enough to do the caching of such resolutions).

- Another option would be to get a paid version of nginx through a commercial subscription, which has a `resolve` parameter for the `server` directive within the `upstream` context.

- Finally, another thing to try might be to use a `set` variable and/or a `map` to specify the servers within `upstream`; this is neither confirmed nor denied to have been implemented, e.g., it may or may not work.
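The first option might look roughly like the following sketch; the hostname and resolver address are placeholders, not values from your setup:

```nginx
# Sketch of option 1: drop the upstream block (and its keepalive) and
# put a variable into proxy_pass, which makes nginx resolve the name
# through the configured resolver at request time, not only at reload.
http {
    resolver 127.0.0.1 valid=30s;  # placeholder DNS server; valid= caps nginx's DNS cache

    server {
        listen 80;

        location / {
            set $backend "backend.example.com";  # hypothetical upstream hostname
            proxy_pass http://$backend:8080;
        }
    }
}
```

Note that a `resolver` directive is mandatory here: with a variable in `proxy_pass`, nginx cannot fall back to the system resolver used at configuration load time.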
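For the second (commercial-subscription) option, a sketch could look like this; note that the `resolve` parameter also requires a shared-memory `zone` in the `upstream` block, and the names below are again placeholders:

```nginx
# Sketch of option 2: the commercial `resolve` parameter on `server`.
http {
    resolver 127.0.0.1 valid=30s;   # placeholder DNS server

    upstream backend {
        zone backend 64k;                    # shared memory zone, required for `resolve`
        server backend.example.com resolve;  # re-resolved as DNS records expire
        keepalive 16;                        # keepalive can stay, if you actually need it
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # required for upstream keepalive
            proxy_set_header Connection "";
        }
    }
}
```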