gensim Doc2Vec vs tensorflow Doc2Vec

Old question, but an answer would be useful for future visitors. So here are some of my thoughts.

There are some problems in the tensorflow implementation:

  • window is the one-sided size, so window=5 actually spans 5*2+1 = 11 words.
  • Note that with the PV-DM version of doc2vec, batch_size is the number of documents, so train_word_dataset has shape batch_size * context_window, while train_doc_dataset and train_labels have shape batch_size (see the shape sketch after this list).
  • More importantly, sampled_softmax_loss is not negative sampling loss; they are two different approximations of the full softmax loss.
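
A minimal sketch of those two points, assuming TensorFlow 2.x; the sizes (vocab_size, num_docs, batch_size, num_sampled, …) are illustrative, not values from the question's code. It lays out the PV-DM batch shapes and puts tf.nn.sampled_softmax_loss next to tf.nn.nce_loss, which is the built-in candidate-sampling loss closest to word2vec/doc2vec-style negative sampling:

```python
import tensorflow as tf

# Illustrative sizes, not taken from the question's code.
vocab_size = 10_000
num_docs = 1_000
embedding_size = 128
batch_size = 32               # in PV-DM, one document id per example
window = 4
context_window = 2 * window   # window is one-sided, so the context holds 2*window words

# Fake batch: context word ids, document ids, and the target word to predict.
train_word_dataset = tf.random.uniform((batch_size, context_window),
                                       maxval=vocab_size, dtype=tf.int32)
train_doc_dataset = tf.random.uniform((batch_size,), maxval=num_docs, dtype=tf.int32)
train_labels = tf.random.uniform((batch_size, 1), maxval=vocab_size, dtype=tf.int64)

word_embeddings = tf.Variable(tf.random.uniform((vocab_size, embedding_size), -1.0, 1.0))
doc_embeddings = tf.Variable(tf.random.uniform((num_docs, embedding_size), -1.0, 1.0))
output_weights = tf.Variable(
    tf.random.truncated_normal((vocab_size, embedding_size), stddev=0.05))
output_biases = tf.Variable(tf.zeros((vocab_size,)))

# PV-DM: combine the context word vectors with the document vector.
context_vecs = tf.reduce_mean(
    tf.nn.embedding_lookup(word_embeddings, train_word_dataset), axis=1)
doc_vecs = tf.nn.embedding_lookup(doc_embeddings, train_doc_dataset)
hidden = (context_vecs + doc_vecs) / 2.0

# Sampled softmax, the approximation used in the question's code ...
sampled_softmax = tf.nn.sampled_softmax_loss(
    weights=output_weights, biases=output_biases,
    labels=train_labels, inputs=hidden,
    num_sampled=64, num_classes=vocab_size)

# ... versus NCE, the candidate-sampling loss closest to the negative sampling
# used by word2vec/doc2vec (and by gensim by default).
nce = tf.nn.nce_loss(
    weights=output_weights, biases=output_biases,
    labels=train_labels, inputs=hidden,
    num_sampled=64, num_classes=vocab_size)

print(sampled_softmax.shape, nce.shape)  # each is a per-example loss of shape (batch_size,)
```

The two calls take the same arguments, but they approximate the softmax differently, so swapping one for the other changes both the objective and the speed.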

So for the OP’s listed questions:

  1. This tensorflow implementation of doc2vec works and is correct in its own way, but it differs from both the gensim implementation and the original paper.
  2. window is the one-sided size, as noted above. If the document is shorter than the context size, the smaller of the two is used.
  3. There are many reasons why the gensim implementation is faster. First, gensim is heavily optimized; all of its operations are faster than naive Python operations, especially data I/O. Second, some preprocessing steps, such as min_count filtering, reduce the dataset size. Most importantly, gensim uses negative sampling, which is much faster than sampled_softmax_loss; I suspect this is the main reason (see the gensim sketch after this list).
  4. Is it easier to find something when there are many of them? Just kidding 😉
    It’s true that this non-convex optimization problem has many solutions, so the model simply finds a local optimum. Interestingly, in neural networks most local optima are “good enough”. It has been observed that stochastic gradient descent tends to find better local optima than large-batch gradient descent, although this is still an open question in current research.
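
To make point 3 concrete, here is a minimal gensim sketch; the corpus and parameter values are illustrative, not tuned. hs=0 together with negative=5 selects negative sampling (gensim's default), and min_count drops rare words before training:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; in practice this would be your real documents.
raw_docs = [
    "the quick brown fox jumps over the lazy dog",
    "doc2vec learns a vector for every document",
    "gensim trains with negative sampling by default",
]
documents = [TaggedDocument(words=text.split(), tags=[i])
             for i, text in enumerate(raw_docs)]

model = Doc2Vec(
    documents,
    dm=1,            # PV-DM, the same variant discussed above
    vector_size=100,
    window=5,        # one-sided window, matching the gensim convention
    min_count=1,     # raise this on real data to shrink the vocabulary
    negative=5,      # negative sampling ...
    hs=0,            # ... instead of hierarchical softmax
    epochs=20,
    workers=4,
)

print(model.dv[0][:5])   # first few dimensions of the first document's vector
```

Note that model.dv is the gensim 4.x name; in gensim 3.x the document vectors live under model.docvecs instead.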

