PAC Learning under Helpful Distributions
1 Équipe GRAPPA, LIFL UPRESA 8022 du CNRS, Université de Lille 1, 59655 Villeneuve-d'Ascq, France; (email@example.com)
2 Équipe GRAPPA, LIFL UPRESA 8022 du CNRS, Université de Lille 1, 59655 Villeneuve-d'Ascq, France; (firstname.lastname@example.org)
Revised: 20 December 2000
Accepted: 3 April 2001
A PAC teaching model, under helpful distributions, is proposed which introduces the classical ideas of teaching models within the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example of the teaching set, is associated with each distribution; and the running time of the learning algorithm takes this new parameter into account. An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as decision lists and DNF and CNF formulas, are shown to be learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions; note, however, that decision lists and DNF formulas are not known to be learnable in the Goldman and Mathias model. A new simple PAC model, where "simple" refers to Kolmogorov complexity, is also introduced. We show that most learnability results obtained within previously defined simple PAC models can be derived from more general results in our model.
Mathematics Subject Classification: 68Q30 / 68Q32
Key words: PAC learning / teaching model / Kolmogorov complexity.
© EDP Sciences, 2001