Low-rank decomposition meets kernel learning: A generalized Nyström method


Abstract

Low-rank matrix decomposition and kernel learning are two useful techniques for building advanced learning systems. Low-rank decomposition can greatly reduce the computational cost of manipulating large kernel matrices. However, existing approaches are mostly unsupervised and do not incorporate side information such as class labels, making the decomposition less effective for a specific learning task. Kernel learning techniques, on the other hand, aim to construct kernel matrices whose structure is well aligned with the learning target, which improves the generalization performance of kernel methods. However, most kernel learning approaches are computationally very expensive. To obtain the advantages of both techniques and address their limitations, in this paper we propose a novel kernel low-rank decomposition formulation called the generalized Nyström method. Our approach inherits the linear time and space complexity of matrix decomposition, while fully exploiting (partial) label information to compute a task-dependent decomposition. In addition, the resulting low-rank factors generalize to arbitrary new samples, offering great flexibility in inductive learning scenarios. We further extend the algorithm to a multiple kernel learning setup. Experimental results on semi-supervised classification demonstrate the usefulness of the proposed method.
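For context, below is a minimal NumPy sketch of the standard, unsupervised Nyström approximation that the proposed method generalizes: a kernel matrix K is factored as K ≈ C W⁺ Cᵀ from a small set of landmark columns. The RBF kernel, uniform landmark sampling, and all function names here are illustrative assumptions for this sketch; they are not the paper's generalized, label-aware algorithm.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        """Gaussian RBF kernel matrix between rows of X and rows of Y."""
        sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
        return np.exp(-gamma * sq)

    def nystrom_approximation(X, m, gamma=1.0, seed=0):
        """Standard (unsupervised) Nystrom low-rank factorization.

        Samples m landmark points and returns an n x m factor L such that
        K ~= L @ L.T, avoiding the O(n^2) cost of forming the full kernel.
        """
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        idx = rng.choice(n, size=m, replace=False)   # landmark subset
        C = rbf_kernel(X, X[idx], gamma)             # n x m cross-kernel block
        W = C[idx, :]                                # m x m landmark kernel block
        # Build L = C @ W^{-1/2} via the symmetric pseudo-inverse square root of W
        eigval, eigvec = np.linalg.eigh(W)
        keep = eigval > 1e-10
        W_inv_sqrt = eigvec[:, keep] @ np.diag(eigval[keep] ** -0.5) @ eigvec[:, keep].T
        return C @ W_inv_sqrt                        # K ~= L @ L.T

    # Usage: approximate a 5000 x 5000 RBF kernel with 100 landmarks
    X = np.random.default_rng(0).standard_normal((5000, 10))
    L = nystrom_approximation(X, m=100, gamma=0.5, seed=0)

Because only the n x m block C and the m x m block W are ever formed, the time and space cost stays linear in n, which is the property the generalized Nyström method retains while additionally using available labels.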

Keywords

  • Kernel learning;
  • Nyström low-rank decomposition;
  • Large-scale learning algorithms;
  • Multiple kernel learning


Authors: Liang Lan, Kai Zhang, Hancheng Ge, Wei Cheng, Jun Liu, Andreas Rauber, Xiao-Li Li, Jun Wang, Hongyuan Zha

From the ScienceDirect publication Artificial Intelligence: http://ift.tt/2pHrTmi
