CPL - Chalmers Publication Library

Linear programs for hypotheses selection in probabilistic inference models

Peter Damaschke (Department of Computer Science and Engineering, Computing Science, Algorithms (Chalmers); Department of Computer Science and Engineering, Computing Science, Bioinformatics (Chalmers)); Anders Bergkvist; Marcel Lüthi
Journal of Machine Learning Research (1533-7928). Vol. 7 (2006), p. 1339-1355.
[Peer-reviewed scientific article]

We consider an optimization problem in probabilistic inference: Given n hypotheses, m possible observations, their conditional probabilities, and a particular observation, select a preferably small subset of hypotheses that excludes the true target only with at most a given error probability. After specifying the optimization goal, we show that this problem can be solved through a linear program in mn variables that indicate the probabilities of discarding a hypothesis given an observation. Moreover, we can compute optimal strategies in which only O(m+n) of these variables take fractional values. The manageable size of the linear programs and the mostly deterministic shape of optimal strategies make the method practicable. We interpret the dual variables as worst-case distributions of hypotheses, and we point out some counterintuitive nonmonotonic behaviour of the variables as a function of the error bound. One of the open problems is the existence of a purely combinatorial algorithm that is faster than generic linear programming. (Slightly adapted from the original abstract.)
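To illustrate the kind of linear program the abstract describes, the sketch below uses variables x[h][o] for the probability of discarding hypothesis h after seeing observation o. It is a hypothetical instantiation, not the paper's exact formulation: it assumes the error bound eps is enforced separately for each hypothesis (sum over o of P(o|h)·x[h][o] ≤ eps) and that the objective is to maximize the expected discarded mass under a known observation marginal P(o). Under these assumptions the LP decomposes into one fractional knapsack per hypothesis, solvable greedily with at most one fractional variable per hypothesis, which echoes (but does not reproduce) the paper's O(m+n) fractionality result.

```python
def select_discard_probs(cond, obs_marginal, eps):
    """Greedy fractional-knapsack sketch of the hypothesis-selection LP.

    cond[h][o]      -- assumed conditional probabilities P(o | h)
    obs_marginal[o] -- assumed observation marginal P(o)
    eps             -- per-hypothesis bound on the probability of
                       wrongly discarding the true hypothesis
    Returns x with x[h][o] = probability of discarding h given o.
    """
    n, m = len(cond), len(obs_marginal)
    x = [[0.0] * m for _ in range(n)]
    for h in range(n):
        budget = eps  # error probability still available for hypothesis h
        # Discard first where the gain per unit of error is largest,
        # i.e. large P(o) relative to P(o | h).
        order = sorted(
            range(m),
            key=lambda o: (obs_marginal[o] / cond[h][o]
                           if cond[h][o] > 0 else float("inf")),
            reverse=True,
        )
        for o in order:
            if cond[h][o] == 0.0:
                x[h][o] = 1.0                  # discarding costs no error
            elif cond[h][o] <= budget:
                x[h][o] = 1.0                  # deterministic discard
                budget -= cond[h][o]
            else:
                x[h][o] = budget / cond[h][o]  # the one fractional value
                break
    return x
```

Because each per-hypothesis subproblem leaves at most one variable strictly between 0 and 1, the resulting strategy is "mostly deterministic" in the sense the abstract mentions; the paper's general LP, with its coupled constraints and dual worst-case distributions, requires genuine linear programming rather than this greedy shortcut.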

Keywords: probabilistic inference, error probability, linear programming, cycle-free graphs, network flows

This record was created 2006-08-25. Last modified 2015-02-11.
CPL Pubid: 21848