Computer Science Colloquium, 2007-2008

Danny Gutfreund
MIT
January 9, 2008

Title: Worst-case complexity vs. average-case complexity

Abstract:

Worst-case complexity, which measures the performance of algorithms with respect to the most difficult instances of the problem at hand, is a standard and convenient complexity measure. On the other hand, average-case complexity, which measures the performance of algorithms with respect to typical (or simply random) instances, may be a more realistic complexity measure for instances that actually appear in practice. Furthermore, it seems to be the right measure in settings where computational hardness is beneficial (e.g. in cryptography and pseudo-randomness).

A central question in theoretical computer science is: for which classes of computational problems (and against which families of algorithms) are the worst-case and average-case complexities equivalent? This question, and the techniques used to attack it, are related to many other areas of computer science, such as cryptography, pseudo-randomness, error-correcting codes, program checking, and learning.

In this survey talk I will describe some recent results on worst-case/average-case equivalence. I will concentrate on two important classes of computational problems: NP and EXPTIME.

Host: Benny Pinkas