Since this is ‘getting started’ my answer will stick with a simple implementation rather than a highly scalable one. It’s best to first feel comfortable with the simple approach before making things more complicated.
1 – Binding and listening
Your code seems fine to me; personally I use:
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 4444));
Rather than going the DNS route, but I don’t think there is a real problem either way.
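For reference, a minimal bind-and-listen sequence might look like this (the port number and backlog are arbitrary examples, not requirements):

```csharp
using System.Net;
using System.Net.Sockets;

// Create a TCP socket, bind it to all local interfaces on port 4444,
// and start listening with a backlog of 10 pending connections.
var serverSocket = new Socket(AddressFamily.InterNetwork,
                              SocketType.Stream, ProtocolType.Tcp);
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 4444));
serverSocket.Listen(10);
```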
1.5 – Accepting client connections
Just mentioning this for completeness’ sake; I am assuming you are doing this, since otherwise you wouldn’t get to step 2.
2 – Receiving data
I would make the buffer a little longer than 255 bytes, unless you can expect all your server messages to be at most 255 bytes. I think you’d want a buffer that is likely to be larger than a single TCP segment (roughly 1500 bytes on a typical Ethernet link) so you can avoid doing multiple reads to receive a single block of data.
I’d say picking 1500 bytes should be fine, or maybe even 2048 for a nice round number.
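A receive loop over such a buffer could look like the sketch below; `clientSocket` is assumed to be a connected Socket returned by Accept, and `ProcessFragment` is a hypothetical handler of yours:

```csharp
// Sketch of a receive loop with a 2048-byte buffer.
var buffer = new byte[2048];
int received;
while ((received = clientSocket.Receive(buffer)) > 0)
{
    // Receive may return fewer bytes than were sent in one Send call,
    // so treat 'received' bytes as a fragment, not a complete message.
    ProcessFragment(buffer, received); // hypothetical handler
}
// Receive returning 0 means the client closed the connection gracefully.
```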
Alternately, maybe you can avoid using a byte[] to store data fragments, and instead wrap your server-side client socket in a NetworkStream, wrapped in a BinaryReader, so that you can read the components of your message directly from the socket without worrying about buffer sizes.
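A sketch of that wrapping, again assuming `clientSocket` is an accepted socket (the one-byte-code-then-length layout is just an illustrative format, not a given):

```csharp
using System.IO;
using System.Net.Sockets;

// Wrap the accepted socket so message components can be read directly,
// without managing a byte[] buffer yourself.
using var stream = new NetworkStream(clientSocket, ownsSocket: true);
using var reader = new BinaryReader(stream);

byte messageCode   = reader.ReadByte();  // e.g. a one-byte message type
int  payloadLength = reader.ReadInt32(); // then a length field, and so on
// Note: BinaryReader reads multi-byte values as little-endian.
```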
3 – Sending data and specifying data length
Your approach will work just fine, but it obviously requires that you can calculate the length of the packet before you start sending it.
Alternately, you can design your message format (the order of its components) so that at any time the client can determine whether more data should follow (for example, code 0x01 means an int and a string come next, code 0x02 means 16 bytes come next, etc.). Combined with the NetworkStream approach on the client side, this can be a very effective approach.
To be on the safe side you may want to add validation of the components being received to make sure you only process sane values. For example, if you receive an indication for a string of length 1TB you may have had a packet corruption somewhere, and it may be safer to close the connection and force the client to re-connect and ‘start over’. This approach gives you a very good catch-all behaviour in case of unexpected failures.
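The validation idea could be sketched like this; `reader` is assumed to be a BinaryReader over the connection, and `MaxMessageLength` is an assumed application-level limit you would pick yourself:

```csharp
// Read a length-prefixed payload, with a sanity check on the length.
const int MaxMessageLength = 64 * 1024; // assumed application limit

int length = reader.ReadInt32();
if (length < 0 || length > MaxMessageLength)
{
    // Almost certainly corruption or a misbehaving client: close the
    // connection and force the client to re-connect and start over.
    clientSocket.Close();
    return;
}
byte[] payload = reader.ReadBytes(length);
```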
4/5 – Closing the client and the server
Personally I would opt for just Close without further messages; when a connection is closed you will get an exception on any blocking read/write at the other end of the connection, which you will have to cater for.
Since you have to cater for ‘unknown disconnections’ anyway to get a robust solution, making disconnecting any more complicated is generally pointless.
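That catering might look like the following sketch on the receiving end, assuming `clientSocket` and `buffer` exist as above; note that a graceful close and an abrupt one surface differently but should be cleaned up the same way:

```csharp
using System.Net.Sockets;

try
{
    int received = clientSocket.Receive(buffer);
    if (received == 0)
    {
        // Orderly shutdown by the other end.
        clientSocket.Close();
    }
}
catch (SocketException)
{
    // Abrupt disconnection; clean up exactly as for a normal close.
    clientSocket.Close();
}
```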
6 – Unknown disconnections
I would not trust even the socket status… it is possible for a connection to die somewhere along the path between client and server without either end noticing.
The only guaranteed way to detect a connection that has died unexpectedly is when you next try to send something along it. At that point you will always get an exception indicating failure if anything has gone wrong with the connection.
As a result, the only fool-proof way to detect all unexpected disconnections is to implement a ‘ping’ mechanism, where ideally the client and the server will periodically send a message to the other end that only results in a response message indicating that the ‘ping’ was received.
To optimise out needless pings, you may want to have a ‘time-out’ mechanism that only sends a ping when no other traffic has been received from the other end for a set amount of time (for example, if the last message from the server is more than x seconds old, the client sends a ping to make sure the connection has not died without notification).
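The idle-timeout check could be sketched as follows; `lastReceivedUtc` is assumed to be updated on every incoming message, and `PingIntervalSeconds` and `pingMessage` are hypothetical names for your ‘x seconds’ and ping packet:

```csharp
using System;

const int PingIntervalSeconds = 30; // the 'x seconds' from the text

// Only ping when the connection has been quiet for a while.
if ((DateTime.UtcNow - lastReceivedUtc).TotalSeconds > PingIntervalSeconds)
{
    // Any Send on a dead connection throws a SocketException,
    // which is exactly the detection point described above.
    clientSocket.Send(pingMessage);
}
```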
More advanced
If you want high scalability you will have to look into asynchronous methods for all the socket operations (Accept / Send / Receive). These are the ‘Begin/End’ variants, but they are a lot more complicated to use.
I recommend against trying this until you have the simple version up and working.
Also note that if you are not planning to scale further than a few dozen clients this is not actually going to be a problem regardless. Async techniques are really only necessary if you intend to scale into the thousands or hundreds of thousands of connected clients while not having your server die outright.
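For a taste of what the Begin/End pattern looks like, here is a sketch of an asynchronous accept; `AcceptCallback` is a hypothetical name, and error handling is omitted for brevity:

```csharp
using System;
using System.Net.Sockets;

// Queue an asynchronous accept; the callback runs on a pool thread.
serverSocket.BeginAccept(AcceptCallback, serverSocket);

static void AcceptCallback(IAsyncResult ar)
{
    var listener = (Socket)ar.AsyncState;
    Socket client = listener.EndAccept(ar);

    // Immediately queue the next accept so new clients are not blocked.
    listener.BeginAccept(AcceptCallback, listener);

    // ... start an asynchronous BeginReceive on 'client' here ...
}
```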
I probably have forgotten a whole bunch of other important suggestions, but this should be enough to get you a fairly robust and reliable implementation to start with.