What is a distributed cache?

  1. Yes, half the data on server A and half on server B would be a distributed cache. There are many methods of distributing the data, though some form of hashing of the keys seems to be the most popular (see the first sketch after this list).

  2. The terms server and node are generally interchangeable. A node is a single unit of some collection, often called a cluster, while a server usually refers to a single piece of hardware. In Erlang, you can run multiple instances of the Erlang runtime on a single server, and thus have multiple Erlang nodes, but generally you'd want one node per server for more optimal scheduling. (For non-distributed languages and platforms you have to manage your processes based on your needs.)

  3. If a server goes down and it is a cache server, then the data has to come from its original source. A cache is usually a memory-based database designed for quick retrieval; the data in the cache sticks around only so long as it is being used regularly, and will eventually be purged. But for distributed systems where you need persistence, a common technique is to keep multiple copies. For example: you have servers A, B, C, D, E, and F. Data 1 would be put on A, with copies on B and C. Couchbase and Riak do this. Data 2 could go on B, with copies on C and D. That way, if any one server goes down, you still have two copies (see the replication sketch below).
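As a rough illustration of point 1, here is a minimal sketch (in Python, with hypothetical server names that are not from the original answer) of picking a cache server by hashing the key:

    import hashlib

    # Hypothetical cache servers; the names are placeholders.
    SERVERS = ["server-a", "server-b"]

    def server_for_key(key: str) -> str:
        """Pick a server by hashing the key and taking the hash modulo the server count."""
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    if __name__ == "__main__":
        for key in ["user:42", "session:abc", "page:/home"]:
            print(key, "->", server_for_key(key))

Note that simple modulo hashing like this remaps most keys whenever the number of servers changes, which is one reason consistent hashing is often preferred in real distributed caches.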

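And as a sketch of the replication idea in point 3 (a hypothetical helper, not the actual Couchbase or Riak implementation), each piece of data gets a primary server chosen by hashing, plus copies on the next servers in the ring:

    import hashlib

    SERVERS = ["A", "B", "C", "D", "E", "F"]
    REPLICAS = 3  # one primary plus two copies, as in the example above

    def placement(key: str) -> list[str]:
        """Return the servers that hold this key: a primary chosen by hashing
        the key, plus the next (REPLICAS - 1) servers, wrapping around the list."""
        primary = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % len(SERVERS)
        return [SERVERS[(primary + i) % len(SERVERS)] for i in range(REPLICAS)]

    if __name__ == "__main__":
        print("data-1 ->", placement("data-1"))
        print("data-2 ->", placement("data-2"))

With three copies spread across consecutive servers, losing any single server still leaves two copies of every key.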