CPL - Chalmers Publication Library

Machine Learning Methods Using Class-specific Subspace Kernel Representations for Large-Scale Applications

Yinan Yu (Department of Signals and Systems, Signal Processing)
Göteborg : Chalmers University of Technology, 2016. ISBN: 978-91-7597-487-3.
[Doctoral thesis]

Kernel techniques rose to popularity along with the success of Support Vector Machines (SVM). Over the last two decades, the kernel idea itself has been extracted from SVM and is now widely studied as an independent subject. Essentially, kernel methods are nonlinear transformation techniques that map data from an input set to a high- (possibly infinite-) dimensional vector space, called a Reproducing Kernel Hilbert Space (RKHS), in which linear models can be applied. The original input set may contain data from different domains and applications, such as tweets, movie ratings, images, or medical measurements. The two spaces are connected by a Positive Semi-Definite (PSD) kernel function, and all computations in the RKHS are evaluated on the low-dimensional input set via the kernel function.

Kernel methods have proven effective in a wide range of applications. However, the computational complexity of most kernel algorithms typically grows cubically, or at least quadratically, with the training set size, because a Gram (kernel) matrix must be constructed and/or inverted. To improve scalability for large-scale training, kernel approximation techniques are employed, in which the kernel matrix is assumed to have a low-rank structure. Essentially, this is equivalent to assuming a subspace model in the RKHS spanned by a subset of the training data. The task is hence to estimate this subspace with respect to some criterion, such as the reconstruction error or, for classification tasks, the discriminative power.

Based on these motivations, this thesis focuses on the development of scalable kernel techniques for supervised classification problems. Inspired by subspace classifiers and kernel clustering models, we propose the CLAss-specific Subspace Kernel (CLASK) representation, in which class-specific kernel functions are applied and individual subspaces are constructed accordingly.
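The low-rank assumption described above can be illustrated with a generic Nyström-style approximation, sketched below. This is a standard technique for kernel matrix approximation, not the thesis's CLASK method; the RBF kernel, the bandwidth `gamma`, and the landmark count `m` are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.05):
    # Gaussian (RBF) kernel, a common PSD kernel function
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_approx(X, m, gamma=0.05, seed=0):
    # Approximate the n x n Gram matrix using m landmark points:
    # K ~ C @ pinv(W) @ C.T, which costs O(n m^2) instead of the
    # O(n^3) needed to factor or invert the full Gram matrix.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)        # n x m cross-kernel block
    W = rbf_kernel(X[idx], X[idx], gamma)   # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
K = rbf_kernel(X, X)                  # exact Gram matrix, 200 x 200
K_hat = nystrom_approx(X, m=50)       # approximation of rank <= 50
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Selecting the landmark subset corresponds to choosing which training points span the subspace model in the RKHS; the criteria studied in the thesis refine this choice beyond uniform sampling.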
In this thesis, an automatic model selection technique is proposed that chooses suitable kernel functions for each class based on a criterion using the subspace projection distance. Moreover, subset selection and transformation techniques based on CLASK are developed to further reduce the model complexity while enhancing the discriminative power for kernel approximation and classification. Furthermore, both a parallel and a sequential framework are proposed to tackle large-scale learning problems.
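The class-specific subspace idea can be sketched as follows: fit one subspace in the RKHS per class and assign a test point to the class whose subspace yields the smallest projection distance. This is a simplified illustration using uncentred kernel PCA with a shared RBF kernel, not the thesis's CLASK algorithm or its multiple-kernel selection criterion; the class name, `rank`, and `gamma` are illustrative.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class KernelSubspaceClassifier:
    """One subspace per class in the RKHS, fit by eigendecomposition of
    that class's Gram matrix; prediction picks the class with the
    smallest squared projection distance (illustrative sketch)."""

    def __init__(self, rank=5, gamma=0.5):
        self.rank, self.gamma = rank, gamma

    def fit(self, X, y):
        self.models = {}
        for c in np.unique(y):
            Xc = X[y == c]
            K = rbf(Xc, Xc, self.gamma)
            lam, U = np.linalg.eigh(K)       # eigenvalues ascending
            lam, U = lam[::-1], U[:, ::-1]   # reorder to descending
            r = min(self.rank, int((lam > 1e-10).sum()))
            # A maps the kernel column k(Xc, x) to the r projection
            # coefficients onto the leading subspace axes
            self.models[c] = (Xc, U[:, :r] / np.sqrt(lam[:r]))
        return self

    def predict(self, X):
        classes = sorted(self.models)
        dists = []
        for c in classes:
            Xc, A = self.models[c]
            Kx = rbf(Xc, X, self.gamma)   # n_c x n_test kernel block
            proj = A.T @ Kx               # projection coefficients
            # squared distance = k(x, x) - ||projection||^2,
            # with k(x, x) = 1 for the RBF kernel
            dists.append(1.0 - (proj ** 2).sum(0))
        return np.array(classes)[np.argmin(np.stack(dists), axis=0)]

# toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(40, 2)),
               rng.normal(size=(40, 2)) + 4.0])
y = np.array([0] * 40 + [1] * 40)
clf = KernelSubspaceClassifier(rank=5, gamma=0.5).fit(X, y)
acc = (clf.predict(X) == y).mean()
```

The projection distance used here for classification is the same quantity that, in the thesis, also drives the per-class kernel selection criterion.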

Keywords: large-scale framework, subspace model, RKHS, class-specific kernels, kernel approximation, classification




This record was created 2016-10-17. Last modified 2016-11-04.
CPL Pubid: 243506

 

Full text

Local full text (freely available)


Departments (Chalmers)

Department of Signals and Systems, Signal Processing

Subject areas

Information and Communication Technology
Probability Theory and Statistics
Computer Vision and Robotics (Autonomous Systems)
Signal Processing


Examination

Date: 2016-11-11
Time: 10:00
Venue: SB-H8, Sven Hultins gata 6
Opponent: Sergios Theodoridis, Department of Informatics and Telecommunications, University of Athens

Part of series

Doktorsavhandlingar vid Chalmers tekniska högskola. Ny serie 4168