March 13th, Wednesday 14:15, Room 303, Jacobs Building
Approximating large matrices by other smaller matrices is an important
computational building block. Creating such approximations (aka sketches)
becomes significantly more challenging when the input matrices are large. This
is often the case with modern data sets. I will present three types of matrix
sketching techniques and compare them in terms of their running time and
required randomness.
First, approaches that utilize subspace embedding arguments and algorithmic
versions of the Johnson-Lindenstrauss lemma. Second, sampling algorithms which
rely on new concentration results for sums of random matrices. Third, a simple
and deterministic algorithm which extends a well-known streaming algorithm for
finding frequent items in streams. I will also discuss interesting open
problems and possible research directions for improving upon these results.
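As a minimal illustration of the first class of techniques (not code from the talk), the snippet below sketches a tall matrix with a dense Gaussian, Johnson-Lindenstrauss style random projection; the matrix dimensions and sketch size are arbitrary choices made for the demonstration.

    # Illustrative Gaussian (Johnson-Lindenstrauss style) sketch of a tall matrix A.
    # The dimensions n, d and the sketch size k are arbitrary demo values.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 20_000, 50, 200            # n rows, d columns, k sketch rows (k << n)

    A = rng.standard_normal((n, d))      # placeholder input matrix

    # Random projection S: k x n with i.i.d. N(0, 1/k) entries, so that
    # (S A)^T (S A) approximates A^T A in expectation.
    S = rng.standard_normal((k, n)) / np.sqrt(k)
    B = S @ A                            # the sketch: k x d instead of n x d

    # Check how well the sketch preserves the Gram matrix A^T A.
    err = np.linalg.norm(A.T @ A - B.T @ B, 2) / np.linalg.norm(A.T @ A, 2)
    print(f"relative spectral-norm error: {err:.3f}")

A dense Gaussian projection is the simplest instance of this idea; the subspace-embedding approaches discussed in the talk achieve similar guarantees with faster, more structured projections.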