Embedding Words in Non-Vector Space with Unsupervised Graph Learning
M. Ryabinin, S. Popov, L. Prokhorenkova, and E. Voita. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7317--7331. Online, Association for Computational Linguistics, (November 2020)
Abstract
It has become a de-facto standard to represent words as elements of a vector space (word2vec, GloVe). While this approach is convenient, it is unnatural for language: words form a graph with a latent hierarchical structure, and this structure has to be revealed and encoded by word embeddings. We introduce GraphGlove: unsupervised graph word representations which are learned end-to-end. In our setting, each word is a node in a weighted graph and the distance between words is the shortest path distance between the corresponding nodes. We adopt a recent method learning a representation of data in the form of a differentiable weighted graph and use it to modify the GloVe training algorithm. We show that our graph-based representations substantially outperform vector-based methods on word similarity and analogy tasks. Our analysis reveals that the structure of the learned graphs is hierarchical and similar to that of WordNet, the geometry is highly non-trivial and contains subgraphs with different local topology.
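The core idea in the abstract, representing each word as a node in a weighted graph and measuring word similarity as shortest-path distance between nodes, can be illustrated with a minimal sketch. This is not the paper's GraphGlove implementation (which learns the graph end-to-end from co-occurrence statistics); the tiny vocabulary graph and its edge weights below are purely hypothetical, and the distance is computed with ordinary Dijkstra search.

```python
import heapq

def shortest_path_distance(graph, source, target):
    """Dijkstra shortest-path distance on a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns the length of the cheapest path, or inf if unreachable.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Hypothetical toy vocabulary graph; in GraphGlove such a graph and its
# weights would be learned, not hand-specified as here.
vocab_graph = {
    "cat":    [("feline", 0.25), ("dog", 1.0)],
    "feline": [("cat", 0.25), ("animal", 0.5)],
    "dog":    [("cat", 1.0), ("animal", 0.75)],
    "animal": [("feline", 0.5), ("dog", 0.75)],
}

# "cat" -> "feline" -> "animal" (0.25 + 0.5) beats "cat" -> "dog" -> "animal"
print(shortest_path_distance(vocab_graph, "cat", "animal"))  # 0.75
```

Unlike a vector-space embedding, where distance is fixed by the ambient geometry, the graph representation lets local topology differ across regions, which is what the paper's analysis examines.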
%0 Conference Paper
%1 ryabinin-etal-2020-embedding
%A Ryabinin, Max
%A Popov, Sergei
%A Prokhorenkova, Liudmila
%A Voita, Elena
%B Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
%C Online
%D 2020
%I Association for Computational Linguistics
%K graph representation
%P 7317--7331
%T Embedding Words in Non-Vector Space with Unsupervised Graph Learning
%U https://www.aclweb.org/anthology/2020.emnlp-main.594
%X It has become a de-facto standard to represent words as elements of a vector space (word2vec, GloVe). While this approach is convenient, it is unnatural for language: words form a graph with a latent hierarchical structure, and this structure has to be revealed and encoded by word embeddings. We introduce GraphGlove: unsupervised graph word representations which are learned end-to-end. In our setting, each word is a node in a weighted graph and the distance between words is the shortest path distance between the corresponding nodes. We adopt a recent method learning a representation of data in the form of a differentiable weighted graph and use it to modify the GloVe training algorithm. We show that our graph-based representations substantially outperform vector-based methods on word similarity and analogy tasks. Our analysis reveals that the structure of the learned graphs is hierarchical and similar to that of WordNet, the geometry is highly non-trivial and contains subgraphs with different local topology.
@inproceedings{ryabinin-etal-2020-embedding,
abstract = {It has become a de-facto standard to represent words as elements of a vector space (word2vec, GloVe). While this approach is convenient, it is unnatural for language: words form a graph with a latent hierarchical structure, and this structure has to be revealed and encoded by word embeddings. We introduce GraphGlove: unsupervised graph word representations which are learned end-to-end. In our setting, each word is a node in a weighted graph and the distance between words is the shortest path distance between the corresponding nodes. We adopt a recent method learning a representation of data in the form of a differentiable weighted graph and use it to modify the GloVe training algorithm. We show that our graph-based representations substantially outperform vector-based methods on word similarity and analogy tasks. Our analysis reveals that the structure of the learned graphs is hierarchical and similar to that of WordNet, the geometry is highly non-trivial and contains subgraphs with different local topology.},
address = {Online},
author = {Ryabinin, Max and Popov, Sergei and Prokhorenkova, Liudmila and Voita, Elena},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
keywords = {graph representation},
month = nov,
pages = {7317--7331},
publisher = {Association for Computational Linguistics},
title = {{E}mbedding {W}ords in {N}on-{V}ector {S}pace with {U}nsupervised {G}raph {L}earning},
url = {https://www.aclweb.org/anthology/2020.emnlp-main.594},
year = 2020
}