Article in conference proceedings,

Reading Tea Leaves: How Humans Interpret Topic Models

J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei.
NIPS, pages 288--296. Curran Associates, Inc. (2009)

Abstract

Probabilistic topic models are a popular tool for the unsupervised analysis of text, providing both a predictive model of future text and a latent topic representation of the corpus. Practitioners typically assume that the latent space is semantically meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need of quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood. Surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics.

Tags

Users

  • @jaeschke
  • @ans
  • @dblp
  • @dbenz
  • @lopusz_kdd

Comments and Reviews

  • @jaeschke
    12 years ago
    1. Interesting experiments using the "intrusion" technique: add a randomly drawn word to a topic's top words and let the Turkers find it. The more Turkers who spot the intruder, the better the topic (a minimal scoring sketch follows below). 2. Promising future work: develop algorithms that directly incorporate human judgements. Conclusion: a) Let humans decide about the quality of algorithms, not just some measures. b) It /is/ possible to extract coherent topics using existing methods.
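
    The scoring idea behind the word-intrusion test can be summarized in a few lines of code. The following is a minimal sketch, not the authors' implementation: the topic words, intruder pool, simulated guesses, and the helper names build_intrusion_task and model_precision are all hypothetical, with model_precision standing in (simplified) for the per-topic precision measure reported in the paper.

    import random

    def build_intrusion_task(topic_top_words, intruder_pool, k=5, seed=0):
        """Build one word-intrusion task: the top-k words of a topic plus one
        'intruder' word drawn from outside the topic, shuffled together."""
        rng = random.Random(seed)
        candidates = [w for w in intruder_pool if w not in topic_top_words]
        intruder = rng.choice(candidates)
        options = topic_top_words[:k] + [intruder]
        rng.shuffle(options)
        return options, intruder

    def model_precision(guesses, intruder):
        """Fraction of annotators who correctly spotted the intruder;
        a higher value suggests a more coherent, interpretable topic."""
        return sum(g == intruder for g in guesses) / len(guesses)

    # Hypothetical topic and intruder pool (not taken from the paper's corpora).
    topic = ["orbit", "launch", "satellite", "rocket", "nasa"]
    pool = ["spoon", "recipe", "oven"]

    options, intruder = build_intrusion_task(topic, pool)
    print("Shown to annotators:", options)

    # Simulated annotator guesses; in the paper these come from Mechanical Turk.
    guesses = [intruder, intruder, "nasa", intruder]
    print("Model precision:", model_precision(guesses, intruder))

    Topics whose annotators mostly agree on the intruder score close to 1.0; topics where guesses scatter across the options score near chance, which is the intuition the paper's large-scale user studies build on.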