If you have enough RAM, it's not too hard to handle 1M or more connections on Linux. These guys handled 12 million connections with a Java application on a single box, running a stock CentOS kernel with just a few sysctl tweaks:
# Raise the system-wide and per-process limits on open file descriptors
# (every connection costs one fd; fs.nr_open must be raised first,
# otherwise the ulimit call below will fail):
sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 20000000
# Give TCP a large overall memory budget (in 4 KB pages), but keep the
# per-socket buffers tiny (min/default/max, in bytes):
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
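Note that sysctl -w only changes the running kernel; everything above is gone after a reboot. A minimal sketch of making the settings persistent, assuming a distribution that reads /etc/sysctl.d/ at boot (the file name and user name here are placeholders):

# /etc/sysctl.d/90-many-connections.conf -- applied by 'sysctl --system'
fs.file-max = 12000500
fs.nr_open = 20000500
net.ipv4.tcp_mem = 10000000 10000000 10000000
net.ipv4.tcp_rmem = 1024 4096 16384
net.ipv4.tcp_wmem = 1024 4096 16384
net.core.rmem_max = 16384
net.core.wmem_max = 16384

# /etc/security/limits.conf -- per-process fd limit for the server's user,
# capped by fs.nr_open; only affects sessions that go through pam_limits
appuser soft nofile 20000000
appuser hard nofile 20000000

For a daemon started by systemd, set LimitNOFILE=20000000 in its unit file instead, since such services bypass pam_limits.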
They also spread the network adapter's interrupts across cores via the /proc/irq/ interface (see the sketch below) and reserved huge pages so the JVM could work with them:
# with the usual 2 MB pages on x86_64, 30720 huge pages = 60 GB
sysctl -w vm.nr_hugepages=30720
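The IRQ balancing means pinning each receive/transmit queue of the NIC to its own core instead of letting every interrupt land on one CPU. A rough sketch, assuming a NIC named eth0; the IRQ numbers are illustrative and have to be read from /proc/interrupts on the actual machine:

# List the IRQs belonging to the NIC's queues
grep eth0 /proc/interrupts
# Pin one queue per core; the value is a hex bitmask of allowed CPUs
echo 1 > /proc/irq/60/smp_affinity   # queue 0 -> CPU0
echo 2 > /proc/irq/61/smp_affinity   # queue 1 -> CPU1
echo 4 > /proc/irq/62/smp_affinity   # queue 2 -> CPU2

Keep in mind that the irqbalance daemon, if running, may override manual pinning. On the JVM side, reserving huge pages is not enough by itself: HotSpot only uses them when started with the -XX:+UseLargePages flag.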
With two 6-core CPUs at 57% load, they served 1 Gbps of traffic over 12M connections in 2013.
But you need a HUGE amount of RAM for that. The test above ran on a server with 96GB of RAM, of which 36GB were used by the kernel just for the buffers of the 12M sockets, i.e. about 3KB per connection.
To serve 1M connections with similar settings you'll need a server with at least 8GB of RAM, 3-4GB of which will go to socket buffers alone.
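To see how much memory the kernel is actually holding in TCP buffers on a given box, check /proc/net/sockstat: the mem counter there is measured in 4 KB pages (the numbers below are made up for illustration):

cat /proc/net/sockstat
# sockets: used 1000070
# TCP: inuse 1000000 orphan 0 tw 15 alloc 1000050 mem 786432
# ...
# "mem 786432" means 786432 * 4 KB = 3 GB currently held by TCP buffers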