NIPS 2002 Workshop
Spectral Methods in Dimensionality Reduction, Clustering, and Classification
Date: Friday or Saturday, Dec. 13 or 14, 2002
Length: One day
Location: Whistler, British Columbia, Canada
Data-driven learning in neural or artificial systems has often been
disparaged as a painfully slow process fraught with local minima.
However, by formulating a learning task as an appropriate algebraic
problem, globally optimal solutions may be computed efficiently in
closed form via an eigendecomposition. Traditionally, this spectral
approach was thought to be applicable only to learning problems with
an essentially linear structure, such as principal component analysis
or linear discriminant analysis. Recently, researchers in machine
learning, statistics, and theoretical computer science have figured
out how to cast a number of important nonlinear learning problems in
terms amenable to spectral methods. These problems include nonlinear
dimensionality reduction, nonparametric clustering, and nonlinear
classification with fully or partially labeled data. Spectral
approaches to these problems offer the potential for dramatic
improvements in efficiency and accuracy relative to traditional
iterative or greedy algorithms. Furthermore, numerical methods for
spectral computations are extremely mature and well understood,
allowing learning algorithms to benefit from a long history of
implementation efficiencies in other fields.
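To make the closed-form character of these methods concrete, here is a
minimal sketch, assuming numpy, of the classic linear example mentioned
above: PCA computed as an eigendecomposition of the sample covariance.
The function name and interface are illustrative, not taken from any
particular package.

    import numpy as np

    def pca_embed(X, k):
        # Principal component analysis via eigendecomposition:
        # a globally optimal linear projection, found in closed form
        # rather than by iterative, local-minimum-prone optimization.
        Xc = X - X.mean(axis=0)            # center the data
        C = Xc.T @ Xc / (len(X) - 1)       # sample covariance matrix
        evals, evecs = np.linalg.eigh(C)   # symmetric eigendecomposition
        top = np.argsort(evals)[::-1][:k]  # k largest eigenvalues
        return Xc @ evecs[:, top]          # project onto leading directions

The single call to a mature, well-understood eigensolver is the point:
the entire learning problem reduces to standard numerical linear algebra.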
The goal of this workshop is to bring together researchers
working on spectral approaches across this broad range of problem
areas, for a series of talks on state-of-the-art research and
discussions of common themes and open questions. Some examples of
questions we hope to discuss include:
- What are the crucial similarities and differences in how spectral
methods have been -- or should be -- applied to different kinds of
problems, such as dimensionality reduction, clustering and
classification? To what extent do these apparently different learning
tasks collapse into a single task when viewed from the perspective of
spectral methods?
- How do spectral approaches relate to Bayesian or other
statistical learning frameworks? Under what circumstances can we
understand spectral methods as fitting a reasonable probabilistic
generative model of the data? What commitments about the structure of
the data environment are embodied in different spectral methods, and
how can these methods be adapted to incorporate additional knowledge?
- How general are spectral methods? How nonlinear or nonparametric
can a learning problem become before spectral methods become
inapplicable? What specific kinds of nonlinear or nonparametric
structures are best treated in this framework?
- Spectral methods applied to nonlinear or nonparametric problems
often include one or more free numerical parameters that set a local
scale over which linearity or some simple parametric assumption can
be assumed to hold. What are the prospects for eliminating these
free parameters, or for developing automatic procedures to choose
them? (A sketch illustrating such a scale parameter follows this list.)
- Spectral methods may sometimes be sensitive to noise in surprising
ways, due to the global nature of the eigendecomposition. To what
extent do theoretical error analyses yield insights into the actual
performance of spectral methods? Do these error analyses suggest
ways of making spectral methods more robust?
- How do spectral methods relate to non-negative matrix factorizations
or aspect models, for which they can be viewed as a least-squares
approximation? Is it possible to reformulate more general low-rank
decompositions in the same framework of eigenmethods?
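As a concrete instance of the free scale parameter raised above, the
following is a minimal sketch, assuming numpy, of a normalized
graph-Laplacian embedding of the kind used in spectral clustering; the
Gaussian width sigma is exactly the sort of hand-tuned local-scale
parameter in question. The function name and interface are illustrative.

    import numpy as np

    def laplacian_embed(X, k, sigma=1.0):
        # Gaussian affinities between all pairs of points; sigma is the
        # free local-scale parameter that must currently be set by hand.
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-D2 / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        d = W.sum(axis=1)                  # node degrees
        # symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
        L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
        evals, evecs = np.linalg.eigh(L)   # eigenvalues in ascending order
        # rows of the k smallest eigenvectors embed the points; clustering
        # them (e.g. with k-means) yields a spectral clustering
        return evecs[:, :k]

Choosing sigma too small or too large can shatter or merge clusters,
which is precisely why automatic procedures for setting such parameters
would be valuable.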
The workshop will last one day. Most of the talks will be invited,
but we also welcome short contributed talks from other researchers.
Please contact the organizers if you are interested.
Organizers:
Sam Roweis
Vin de Silva
Yee Whye Teh