Abstract:
We develop an asymptotic theory for the partition function of the word embedding model WORD2VEC. The proof involves a study of the properties of certain matrices, their determinants, and the distributions of random normal vectors as their dimension tends to infinity. The conditions imposed are mild enough to cover practically important situations. The implication is that for any word $i$ in a vocabulary $\mathcal{W}$, the context vector $\mathbf{c}_i$ is a reflection of the word vector $\mathbf{w}_i$ in approximately half of the dimensions, i.e., $\mathbf{c}_i \approx D\mathbf{w}_i$ for a diagonal sign matrix $D$ with roughly half of its entries equal to $-1$. This allows us to halve the number of trainable parameters in static word embedding models.
Keywords and phrases: word embeddings, partition function, neural networks, WORD2VEC, asymptotic distribution.
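To make the parameter-halving claim concrete, the following is a minimal NumPy sketch of tying the context vectors to the word vectors through a fixed sign reflection, so that only one embedding matrix is trained. The mask `s`, the matrices `W` and `C`, and the skip-gram-style dot-product `score` are illustrative assumptions, not the paper's construction; the paper's asymptotic argument is what determines the actual reflection.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, dim = 10_000, 100

# Trainable word vectors w_i: the only free embedding parameters.
W = rng.normal(scale=0.1, size=(vocab_size, dim))

# Hypothetical fixed sign mask with roughly half the entries -1:
# the abstract's "reflection in approximately half of the dimensions".
s = rng.choice([-1.0, 1.0], size=dim)

# Context vectors are tied to word vectors, c_i = s * w_i, so no
# separate context embedding matrix is stored or trained.
C = W * s

def score(i: int, j: int) -> float:
    """Skip-gram-style compatibility of word i with context j."""
    return float(W[i] @ C[j])

print(score(3, 7))
```

With the tie $\mathbf{c}_i = D\mathbf{w}_i$, only $W$ carries trainable parameters, which is where the factor-of-two saving in embedding parameters comes from.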