PAC Learning, VC Dimension of MLPs

Feel free to browse some old lecture notes of mine on this topic, in PostScript form.

To cut to the chase: to guarantee that

    P(generalization error < eps) > 1 - delta

it suffices to draw

    N = max( (8d/eps) log (8d/eps), (4/eps) log (2/delta) )

samples, where d = VC(C) is the VC dimension of the concept class C in use.
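The bound above is easy to evaluate numerically. Here is a minimal sketch; the function name is mine, and since the text does not specify the base of the logarithm, base 2 is assumed (the convention common in PAC bounds):

```python
import math

def pac_sample_bound(eps, delta, d):
    """Sample size sufficient for P(generalization error < eps) > 1 - delta,
    per the bound in the text. Logs are taken base 2 (an assumption; the
    text does not specify the base)."""
    term_vc = (8 * d / eps) * math.log2(8 * d / eps)      # VC-dimension term
    term_conf = (4 / eps) * math.log2(2 / delta)          # confidence term
    return math.ceil(max(term_vc, term_conf))
```

For typical values (say eps = 0.1, delta = 0.05, d = 10) the VC-dimension term dominates, and the bound runs to several thousand samples.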

Also, more recent results (not in the book, for some reason) show that the VC dimension of a multilayer perceptron with W weights and hard-threshold units is roughly W (more precisely, on the order of W log W). It is somewhat larger for sigmoidal units, on the order of W*W, but for simplicity you can just consider hard-threshold units.


Barak Pearlmutter <bap@cs.unm.edu>