### Create reference, various formats (copy and paste)

**Harvard**

Dimitrakakis, C. and Savu-Krohn, C. (2008) *Cost-minimising strategies for data labelling: Optimal stopping and active learning*. In: *Foundations of Information and Knowledge Systems, FoIKS 2008*.

**BibTeX**

```bibtex
@conference{Dimitrakakis2008,
  author    = {Dimitrakakis, Christos and Savu-Krohn, C.},
  title     = {Cost-minimising strategies for data labelling: Optimal stopping and active learning},
  booktitle = {Foundations of Information and Knowledge Systems, FoIKS 2008},
  year      = {2008},
  isbn      = {3540776834},
  doi       = {10.1007/978-3-540-77684-0_9},
  abstract  = {Supervised learning deals with the inference of a distribution over an output or label space conditioned on points in an observation space, given a training dataset D of observation-label pairs. However, in many applications of interest, acquisition of large amounts of observations is easy, while the process of generating labels is time-consuming or costly. One way to deal with this problem is active learning, where points to be labelled are selected with the aim of creating a model with better performance than that of a model trained on an equal number of randomly sampled points. In this paper, we instead propose to deal with the labelling cost directly: the learning goal is defined as the minimisation of a cost which is a function of the expected model performance and the total cost of the labels used. This allows the development of general strategies and specific algorithms for (a) optimal stopping, where the expected cost dictates whether label acquisition should continue, and (b) empirical evaluation, where the cost is used as a performance metric for a given combination of inference, stopping and sampling methods. Though the main focus of the paper is optimal stopping, we also aim to provide the background for further developments and discussion in the related field of active learning. © 2008 Springer-Verlag Berlin Heidelberg.},
}
```
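To use the BibTeX entry, save it to a `.bib` file and cite it by its key. A minimal sketch, assuming the entry is stored in a file named `references.bib` (the file name and surrounding document are illustrative, not part of the export):

```latex
% Minimal document citing the exported entry (key: Dimitrakakis2008).
% Compile with: pdflatex, bibtex, then pdflatex twice.
\documentclass{article}
\begin{document}
As shown by \cite{Dimitrakakis2008}, labelling cost can be treated
directly as part of the learning objective.
\bibliographystyle{plain}
\bibliography{references}
\end{document}
```

Note that some BibTeX styles expect `@inproceedings` rather than `@conference`; the two are treated as aliases by the standard styles.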

**RefWorks**

```
RT Conference Proceedings
SR Electronic
ID 189773
A1 Dimitrakakis, Christos
A1 Savu-Krohn, C.
T1 Cost-minimising strategies for data labelling: Optimal stopping and active learning
YR 2008
T2 Foundations of Information and Knowledge Systems, FoIKS 2008
SN 3540776834
AB Supervised learning deals with the inference of a distribution over an output or label space conditioned on points in an observation space, given a training dataset D of observation-label pairs. However, in many applications of interest, acquisition of large amounts of observations is easy, while the process of generating labels is time-consuming or costly. One way to deal with this problem is active learning, where points to be labelled are selected with the aim of creating a model with better performance than that of a model trained on an equal number of randomly sampled points. In this paper, we instead propose to deal with the labelling cost directly: the learning goal is defined as the minimisation of a cost which is a function of the expected model performance and the total cost of the labels used. This allows the development of general strategies and specific algorithms for (a) optimal stopping, where the expected cost dictates whether label acquisition should continue, and (b) empirical evaluation, where the cost is used as a performance metric for a given combination of inference, stopping and sampling methods. Though the main focus of the paper is optimal stopping, we also aim to provide the background for further developments and discussion in the related field of active learning. © 2008 Springer-Verlag Berlin Heidelberg.
LA eng
DO 10.1007/978-3-540-77684-0_9
LK http://dx.doi.org/10.1007/978-3-540-77684-0_9
OL 30
```