Article

Active Learning in Heteroscedastic Noise

, , and .
Theoretical Computer Science, 411 (29--30): 2712--2728 (June 2010)
DOI: 10.1016/j.tcs.2010.04.007

Abstract

We consider the problem of actively learning the mean values of distributions associated with a finite number of options. The decision maker can select which option to generate the next observation from, the goal being to produce estimates with equally good precision for all the options. If sample means are used to estimate the unknown values, then the optimal solution, assuming that the distributions are known up to a shift, is to sample from each distribution in proportion to its variance. No information other than the distributions' variances is needed to compute the optimal solution. In this paper we propose an incremental algorithm that asymptotically achieves the same loss as an optimal rule. We prove that the excess loss suffered by this algorithm, apart from logarithmic factors, scales as $n^{-3/2}$, which we conjecture to be the optimal rate. The performance of the algorithm is illustrated on a simple problem.
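The paper's own algorithm is not reproduced in this entry, but the allocation idea in the abstract can be sketched as follows: after a short forced-sampling phase, repeatedly sample the option whose empirical variance is largest relative to its current sample count, so that sample counts drift towards being proportional to the true variances. This is a hypothetical illustration, not the authors' algorithm; the function name `active_allocation`, the initialization length `n_init`, and the variance floor are all assumptions made for the sketch.

```python
def active_allocation(samplers, n_total, n_init=2):
    """Hypothetical sketch of variance-proportional allocation: samplers is a
    list of zero-argument callables, each drawing one observation from the
    corresponding option's distribution; returns the final sample counts."""
    K = len(samplers)
    # Forced initialization: a few samples per option so variances are defined.
    data = [[samplers[k]() for _ in range(n_init)] for k in range(K)]

    def emp_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    for _ in range(n_total - K * n_init):
        # A tiny floor on the variance estimate keeps an option from being
        # starved forever when its empirical variance happens to be zero.
        k_star = max(range(K),
                     key=lambda k: max(emp_var(data[k]), 1e-12) / len(data[k]))
        data[k_star].append(samplers[k_star]())
    return [len(d) for d in data]
```

With two Gaussian options of standard deviations 1 and 2 (variances 1 and 4), the counts returned by this sketch should approach a roughly 1:4 split as the budget grows.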
