Exercises: (Exercises are not mandatory. They are meant to give students a chance to practice and to make sure that they understand the lectures.)

Some Bibliography:

- Lecture notes of a course given by David Zuckerman.
- Lecture notes of a course given by Salil Vadhan.
- A manuscript on "pairwise independence and derandomization" by Mike Luby and Avi Wigderson.
- A manuscript on "derandomizing BPP" which I wrote based on a course by Avi Wigderson.

The following is a list of topics covered in class. Whenever possible I give a reference to written material on the subject. I use the following notation:

- (***) marks a source which presents the subject essentially in the same way as we did in class.
- (**) marks a source which presents the subject in a way that is close to the way we did it in class.
- (*) marks a source which covers the subject (but the presentation may differ from the way we did it in class).

Introduction: (Lecture 1)
(** Vadhan's lectures 1,2)

- Probabilistic algorithms.
- A probabilistic algorithm for max-cut.
- A probabilistic algorithm for undirected-connectivity.
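The probabilistic max-cut algorithm above can be sketched in a few lines: put each vertex on a random side of the cut, so every edge is cut with probability 1/2 and the expected cut size is |E|/2. The small graph below is an arbitrary example of mine, not one from the lecture; the expectation is verified exactly by enumerating all assignments.

```python
import itertools
import random

# Arbitrary small example graph: 4 vertices, 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_size(assignment, edges):
    """Number of edges whose endpoints lie on different sides."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def random_cut(n, edges):
    """The probabilistic algorithm: one random side per vertex."""
    assignment = [random.randrange(2) for _ in range(n)]
    return cut_size(assignment, edges)

# Verify the expectation exactly by averaging over all 2^n assignments:
# each edge is cut in exactly half of them, so the average is |E|/2.
total = sum(cut_size(a, edges)
            for a in itertools.product([0, 1], repeat=n))
expected = total / 2 ** n
print(expected, len(edges) / 2)  # both are |E|/2 = 2.5
```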

- The pseudorandomness paradigm for derandomization:
  (* Zuckerman's lecture 1)
- The notion of a pseudorandom generator.
- A pseudorandom generator for the MAX-CUT algorithm based on 2-wise independence.
- A pseudorandom generator for small space algorithms derandomizes the algorithm for undirected connectivity.
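The pairwise-independence derandomization of max-cut can be sketched as follows (the graph is the same arbitrary example as above, not from the lecture): index the vertices by distinct nonzero l-bit vectors and let a seed a in {0,1}^l place vertex v on side <a, v> (inner product mod 2). For distinct nonzero vectors these bits are pairwise independent, so the average cut over the 2^l seeds is still |E|/2, and trying all seeds deterministically finds a cut of at least that size.

```python
# Arbitrary small example graph: 4 vertices, 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, l = 4, 3                       # need 2^l - 1 >= n nonzero vectors

vectors = list(range(1, n + 1))   # vertex v gets the nonzero vector v+1

def side(a, v):
    """Side of vertex v under seed a: inner product <a, vectors[v]> mod 2."""
    return bin(a & vectors[v]).count("1") % 2

def cut_size(a):
    return sum(1 for u, v in edges if side(a, u) != side(a, v))

cuts = [cut_size(a) for a in range(2 ** l)]   # enumerate all 2^l seeds
avg = sum(cuts) / len(cuts)                   # exactly |E|/2 = 2.5
best = max(cuts)                              # some seed is >= the average
print(avg, best)
```

Since the average over seeds is exactly |E|/2, the best of the 2^l = O(n) seeds achieves a cut of size at least |E|/2 — a deterministic polynomial-time algorithm.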

Pair-wise and k-wise independence: (Lecture 2)
(** Some of this subject is covered in Zuckerman's lecture 5, ** see also chapters 2,3,4,5 of the manuscript by Mike Luby and Avi Wigderson, ** see also Vadhan's lecture 7)

- Definition of k-wise independent hash families.
- Constructions of 2-wise (and k-wise) independent hash families.
- Corollary: pseudorandom generators against k-wise tests.
- The expected number of collisions when hashing a set: (a
  set of 2^k n-bit strings hashed to m-bit strings has an
  expected number of roughly 2^(2k-m) collisions.)
- Application: dictionaries for 2^k words of length n with storage 2^(2k) and O(1) access time.
- Using less storage: The [FKS] hashing scheme.

Averaging samplers: (Lecture 3)
(* A survey on averaging samplers by Oded Goldreich, ** Vadhan's lecture 2, * Chapters 7,9 in the manuscript of Luby and Wigderson)

- Definitions of complexity classes associated with
probabilistic algorithms (BPP,RP, ZPP) and their basic properties:
- RP intersect coRP=ZPP
- RP is in NP
- BPP is in EXP
- Questions: BPP=P? BPP=EXP? P=NP => BPP=P ?

- The notion of an averaging sampler.
- Chernoff's inequality.
- Chebyshev's inequality and a sampler based on 2-wise independence.
- Deterministic amplification (that is using averaging samplers to do a randomness efficient amplification of success probability)
- Tail-inequalities for k-wise independence and a sampler based on k-wise independence.
- Applications of hashing and sampling:
- BPP is in Sigma_2 (corollary: P=NP => BPP=P).
- The proof of Goldreich and Zuckerman based on
  "strong" averaging samplers.
  (The notes above give the more standard proof; see also the paper of Goldreich and Zuckerman.)
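The Chebyshev-based sampler can be sketched as follows (the field size q, sample count t, and test function f are arbitrary small choices of mine): instead of t independent sample points, use the pairwise-independent points x_i = a + b*i mod q, so the seed is only (a, b) — about 2 log q bits. Pairwise independence makes the variance of the empirical average exactly Var(f)/t, the same as for fully independent samples, so Chebyshev's inequality applies unchanged; the sketch verifies both the mean and the variance exactly by enumerating all seeds.

```python
from itertools import product

q, t = 11, 5                                  # prime field size, #samples
f = [x * x % q / q for x in range(q)]         # arbitrary function into [0,1)

mean = sum(f) / q
var = sum((v - mean) ** 2 for v in f) / q

# One estimate per seed (a,b): average f over the t pairwise-independent
# points a, a+b, a+2b, ..., a+(t-1)b (mod q).
ests = [sum(f[(a + b * i) % q] for i in range(t)) / t
        for a, b in product(range(q), repeat=2)]

est_mean = sum(ests) / len(ests)
est_var = sum((e - est_mean) ** 2 for e in ests) / len(ests)
print(est_mean - mean, est_var - var / t)     # both essentially zero
```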

More applications of hashing: (Lecture 4)
(** Lecture 5 in a course by Oded Goldreich)

- Counting problems and the class #P.
- Oracles and oracle classes.
- Approximate counting: (additive approximation versus relative approximation).
- A relative approximation scheme in BPP^NP.
- Tools: for m = k - O(log n), n-wise independent hash functions h: {0,1}^n -> {0,1}^m have the property that, for every set of size 2^k, a random hash function sends essentially the same number of elements to every bin.
- This also gives another proof that BPP in Sigma_2.

Small sample spaces: (end of Lecture 4, Lecture 5, and part of Lecture 6)
(** Lectures 6,7,8,9 in Zuckerman's notes. Note that this also includes an introduction to error-correcting codes, which is our next topic.)

- Families of tests: (k-wise tests, linear tests, poly-size tests, all tests).
- Main question: how many random bits are needed for an eps-pseudorandom generator outputting m bits against a family of tests.

| Type of tests   | eps=0   | general eps                |
|-----------------|---------|----------------------------|
| k-wise tests    | k log m | k + log log m + log(1/eps) |
| linear tests    | m       | log m + log(1/eps)         |
| poly-size tests | m       | log m + log(1/eps) (existence only; no known explicit construction) |

- The notion of eps-bias [NN].
- Thm: If a distribution on n bits is eps-biased then it is
  eps·2^(n/2)-close to uniform.
- Corollary: for eps=0 no savings can be made when fooling linear (and therefore poly-size) tests.
- Corollary: if a distribution is (eps/2^k)-biased then it is eps-pseudorandom for k-wise tests.

- Fourier analysis (proof of theorem).
- A construction of eps-biased sets with seed 2·log(m/eps)
  [AGHP].
- Corollary: (k,eps)-wise independence with a (k + log m + log(1/eps))-bit seed.

- Improving the corollary from log m to log log m:
- Combinatorial approach based on hashing.
- Algebraic approach: using a (linear) generator for k-wise tests with an (eps/2^k)-biased seed.
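The [AGHP] "powering" construction can be sketched concretely (the field size s = 5 and the irreducible polynomial below are my own small choices): the seed is a pair (x, y) of elements of GF(2^s) — 2s bits — and output bit i is the GF(2) inner product <x^i, y> of the bit representations, for i = 1..m. A parity over a test set T equals <p(x), y> for the nonzero polynomial p = sum over i in T of X^i; since p has at most m roots, the bias is at most m/2^s, and the sketch confirms this by brute force over all seeds and all nonempty linear tests.

```python
S, MOD = 5, 0b100101             # GF(2^5) modulo x^5 + x^2 + 1
m = 7                            # number of output bits

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^5), reducing modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> S:
            a ^= MOD
    return r

def ip(a, b):
    """GF(2) inner product of bit vectors a and b."""
    return bin(a & b).count("1") % 2

def output(x, y):
    """The m output bits <x^1,y>, ..., <x^m,y>, packed into an int."""
    r, p = 0, x
    for i in range(m):
        r |= ip(p, y) << i
        p = gf_mul(p, x)
    return r

outs = [output(x, y) for x in range(2 ** S) for y in range(2 ** S)]
max_bias = 0.0
for T in range(1, 2 ** m):       # every nonempty linear test
    acc = sum(1 if bin(r & T).count("1") % 2 == 0 else -1 for r in outs)
    max_bias = max(max_bias, abs(acc) / len(outs))
print(max_bias)                  # at most m/2^s = 7/32
```

With eps = m/2^s this matches the seed length 2s = 2·log(m/eps) stated above.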

Error correcting codes: (2nd part of Lecture 6 and part of Lecture 7)
(** Zuckerman's lectures 8,9,10)

- The definition of (n,k,d)_q codes and linear codes: (distance, rate, and algorithmic problems).
- The Reed-Solomon code.
- The Hadamard code.
- Concatenation of codes.
- A code based on restricting Hadamard to an eps-bias set.
- List-decoding.
- Combinatorial list-decoding of Hadamard.
- Algorithmic (sublinear time) decoding and list-decoding of Hadamard (The Goldreich-Levin Theorem). (*** See section on "Goldreich-Levin" in the manuscript that I wrote).
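The Reed-Solomon code above can be sketched in a few lines (the parameters q, n, k are my own small choices): view a message (m_0, ..., m_{k-1}) over F_q as a polynomial of degree < k and encode it by its evaluations at n distinct field points. Two distinct degree-< k polynomials agree on at most k-1 points, so distinct codewords differ in at least n-k+1 positions.

```python
q, n, k = 11, 11, 3   # field size, code length, message length

def rs_encode(msg):
    """Evaluate the message polynomial at the points 0, 1, ..., n-1."""
    assert len(msg) == k and n <= q
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q
            for x in range(n)]

# Two messages differing in one coefficient: the difference polynomial
# -3x has a single root, so the codewords agree in exactly one position.
c1 = rs_encode([1, 2, 3])
c2 = rs_encode([1, 5, 3])
agreements = sum(1 for a, b in zip(c1, c2) if a == b)
print(n - agreements)  # Hamming distance, at least n-k+1 = 9
```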

Expander graphs: (Lecture 8)
(*** Vadhan's lecture 9)

- Random walks on connected d-regular non-bipartite graphs converge to the uniform distribution.
- The 2nd eigenvalue governs the rate of convergence.
- Definition of vertex expansion.
- Relations between vertex expansion and 2nd eigenvalue.
- A walk that starts from a random vertex produces a sequence of vertices that is "pseudorandom".
- Expander Chernoff bound (statement only).
- Special case: the probability that a t-step walk stays inside a small set goes down exponentially with t.
- The notion of explicit construction of expander graphs.
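The first two items can be illustrated numerically (my own toy example, not from the lecture): a lazy random walk on the cycle C_n — stay, step left, or step right, each with probability 1/3 — is a walk on a connected, 3-regular (with self-loops), non-bipartite graph. Its eigenvalues are (1 + 2cos(2*pi*k/n))/3, and the L2 distance of the walk's distribution from uniform contracts by a factor of lambda_2 per step, which the sketch checks directly.

```python
import math

n = 8
lam2 = (1 + 2 * math.cos(2 * math.pi / n)) / 3   # second eigenvalue

def step(p):
    """One step of the lazy walk: average over self, left, right."""
    return [(p[(i - 1) % n] + p[i] + p[(i + 1) % n]) / 3
            for i in range(n)]

def l2_dist(p):
    """L2 distance from the uniform distribution."""
    return math.sqrt(sum((x - 1 / n) ** 2 for x in p))

p = [0.0] * n
p[0] = 1.0                       # start at a fixed vertex
dists = []
for _ in range(60):
    p = step(p)
    dists.append(l2_dist(p))

ratio = dists[-1] / dists[-2]    # per-step contraction
print(ratio, lam2)               # the ratio converges to lambda_2
```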

Derandomizing BPP: (lectures 9,10)

- PRG's that fool size-n^{2c} circuits derandomize probabilistic algorithms that run in time n^c.
- PRG's that fool Turing Machines can be useful if inputs are "feasibly generated".
- Explicit PRG's imply the existence of a function f in E that is hard for size 2^{\Omega(l)} circuits.
- The hardness versus randomness paradigm: Construct PRG's from hardness assumptions.
- A candidate PRG: G(x) = (x, f(x)).
- Hardness on the worst case versus hardness on average.
- A generator that outputs m bits and (eps/m)-fools size-m
  prediction tests is pseudorandom against size-m general tests.
  (*** see the relevant chapter in the manuscript that I wrote.)
- The generator G(x1,x2) = (x1, x2, f(x1), f(x2)): proof of correctness using nonuniformity.
- (l,u)-designs.
- The Nisan-Wigderson generator.
  (** See the chapter on the Nisan-Wigderson generator in the manuscript that I wrote. There are also notes in Vadhan's notes.)
- The S-Umans generator. (*** powerpoint presentation)
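The standard polynomial-based construction of the designs used in the Nisan-Wigderson generator can be sketched as follows (q and c are arbitrary small choices of mine): over a prime field F_q, each polynomial of degree < c defines the set {(x, p(x)) : x in F_q} inside the universe F_q x F_q. Each set has size q, there are q^c sets, and two distinct polynomials of degree < c agree on at most c-1 points, so pairwise intersections have size at most c-1.

```python
from itertools import product

q, c = 5, 3   # prime field size; polynomials of degree < c

def design_set(coeffs):
    """The graph of the polynomial with the given coefficients:
    a subset of the universe [q] x [q], of size exactly q."""
    return {(x, sum(a * x ** i for i, a in enumerate(coeffs)) % q)
            for x in range(q)}

polys = list(product(range(q), repeat=c))      # all q^c = 125 polynomials
sets = [design_set(p) for p in polys]

# Pairwise intersections are bounded by the number of agreements of two
# distinct degree-<c polynomials, i.e. at most c-1 = 2.
max_inter = max(len(sets[i] & sets[j])
                for i in range(len(sets)) for j in range(i))
print(len(sets), max_inter)
```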