Abstract
We explore various methods for computing sentence representations from
pre-trained word embeddings without any training, i.e., using nothing but
random parameterizations. Our aim is to put sentence embeddings on more solid
footing by 1) looking at how much modern sentence embeddings gain over random
methods---as it turns out, surprisingly little; and by 2) providing the field
with more appropriate baselines going forward---which are, as it turns out,
quite strong. We also make important observations about proper experimental
protocol for sentence classification evaluation, together with recommendations
for future research.
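
To make the idea concrete, the following is a minimal sketch (not the paper's exact setup) of a sentence encoder whose only parameters are random and never trained: pre-trained word vectors are mean-pooled and passed through a fixed random projection. The toy embedding table here is a stand-in for real pre-trained vectors such as GloVe or fastText; all names and dimensions are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a pre-trained embedding lookup (vocabulary -> 50-d vectors).
    vocab = ["the", "cat", "sat", "on", "mat", "dogs", "bark"]
    emb_dim, proj_dim = 50, 300
    embeddings = {w: rng.normal(size=emb_dim) for w in vocab}

    # Random projection matrix: initialized once, never updated by training.
    W = rng.uniform(-1.0 / np.sqrt(emb_dim), 1.0 / np.sqrt(emb_dim),
                    size=(proj_dim, emb_dim))

    def encode(sentence: str) -> np.ndarray:
        """Mean-pool word embeddings, then apply the fixed random projection."""
        vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
        if not vecs:
            return np.zeros(proj_dim)
        pooled = np.mean(vecs, axis=0)
        return np.maximum(W @ pooled, 0.0)  # untrained ReLU nonlinearity

    s1 = encode("the cat sat on the mat")
    s2 = encode("dogs bark")
    print(s1.shape, s2.shape)  # (300,) (300,)

In a baseline of this kind, only a lightweight classifier on top of the frozen random representation would be fit to the downstream task; the encoder itself contributes no learned parameters.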