What is the difference between Word2Vec and GloVe?


Word2Vec: a feed-forward neural network model for learning word embeddings. The Skip-gram variant takes each word in the corpus as input, passes it through a hidden (embedding) layer, and from there predicts the surrounding context words. Once trained, the embedding for a particular word is obtained by feeding the word in and taking the hidden-layer values as its final embedding vector.
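
To make this concrete, here is a minimal sketch of training a Skip-gram model with the gensim library (assuming gensim 4.x; the two-sentence corpus is made up purely for illustration):

```python
# Minimal Skip-gram sketch with gensim (assumed installed, 4.x API).
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (illustrative only).
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "chased", "the", "cat"],
]

# sg=1 selects the Skip-gram architecture; vector_size is the
# dimensionality of the hidden (embedding) layer.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

# Once trained, the hidden-layer weights for a word are its embedding.
vector = model.wv["cat"]   # a 50-dimensional numpy array
print(vector.shape)        # (50,)
```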

GloVe: GloVe is based on matrix factorization of a word-context matrix. It first constructs a large (words x contexts) matrix of co-occurrence counts: for each “word” (the rows), you count how frequently it appears in each “context” (the columns) across a large corpus. Since the number of possible contexts is very large, this matrix is huge and sparse. It is then factorized to yield a lower-dimensional (words x features) matrix, where each row is the vector representation of a word. In general, this is done by minimizing a “reconstruction loss”, which seeks the lower-dimensional representation that explains as much of the variance in the high-dimensional data as possible.
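
The sketch below illustrates this general matrix-factorization idea on a toy corpus: build the (words x contexts) co-occurrence matrix and reduce it with a truncated SVD. Note this is a simplification for illustration only; GloVe’s actual objective is a weighted least-squares fit to the log co-occurrence counts, not a plain SVD.

```python
# Simplified illustration of factorizing a word-context matrix.
# NOT GloVe's real objective -- just the (words x contexts) ->
# (words x features) reduction described above.
import numpy as np

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a symmetric window of size 2.
window = 2
X = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                X[idx[w], idx[sent[j]]] += 1

# Factorize: keep the top-k singular directions as word features.
k = 3
U, S, Vt = np.linalg.svd(X, full_matrices=False)
word_vectors = U[:, :k] * S[:k]   # one k-dimensional row per word
print(word_vectors.shape)         # (len(vocab), 3)
```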

Word vectors represent, in an abstract way, different facets of the meaning of a word. Some notable properties are:

  1. Such word vectors are good at answering analogy questions (“king is to man as queen is to ___”); the relationships between words are captured by the distances (offsets) between their vectors.
  2. We can also use element-wise addition of word vectors to ask questions such as ‘German + airlines’ (see the sketch after this list).
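
Both properties can be queried with gensim’s most_similar. A sketch, assuming the pretrained “word2vec-google-news-300” vectors from gensim’s downloader (a large download, on the order of gigabytes):

```python
# Analogy and vector-addition queries over pretrained vectors
# (assumes gensim and its downloader data are available).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Analogy via vector offsets: king - man + woman ~= queen
print(vectors.most_similar(positive=["king", "woman"],
                           negative=["man"], topn=1))

# Element-wise addition: 'German' + 'airlines' is expected to surface
# airline names such as Lufthansa among the nearest neighbours.
print(vectors.most_similar(positive=["German", "airlines"], topn=3))
```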

[Figure: word2vec analogy using word vectors]

