What's the definition of a class of weak learners?

Is it that for a **specific** distribution D, every h in H has error(h) <= 0.5 - gamma for some gamma > 0 with respect to D? Or that there exists an algorithm A such that for every distribution D, target function c*, and delta < 0.5 there exists gamma > 0 s.t. with probability 1 - delta, A outputs a hypothesis h in H with error(h) <= 0.5 - gamma with respect to D? And if the latter, does it imply that such a hypothesis necessarily exists in H (with such error for every distribution D)?

I'll just add another question about AdaBoost instead of opening another thread:

What do we do if the error e_t is zero for some t? Then alpha_t will be infinite, and it seems to make no sense to update the distribution weights…

Then we stop. But note that if this is the case, the weights do not matter: a hypothesis with e_t = 0 is consistent with the entire sample, so it has zero error under every distribution over the sample, and we would already have gotten e_t = 0 at t = 1.
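To make the early stop concrete, here is a minimal AdaBoost sketch (not from the scribes; the decision-stump weak learner used in the example below is a placeholder) showing where the e_t = 0 check goes, before alpha_t = 0.5 * ln((1 - e_t) / e_t) would blow up:

```python
import math

def adaboost(X, y, weak_learner, T):
    """X: list of examples, y: labels in {-1, +1},
    weak_learner(X, y, w) -> hypothesis h (a callable)."""
    n = len(X)
    w = [1.0 / n] * n          # D_1: uniform distribution over the sample
    ensemble = []              # list of (alpha_t, h_t) pairs
    for t in range(T):
        h = weak_learner(X, y, w)
        # weighted error of h under the current distribution D_t
        e = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if e == 0:
            # h is correct on every example, so alpha_t would be infinite;
            # stop and return h alone -- it already has zero sample error.
            return [(1.0, h)]
        alpha = 0.5 * math.log((1 - e) / e)
        ensemble.append((alpha, h))
        # multiplicative weight update, then renormalize to get D_{t+1}
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted vote of the ensemble."""
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

For example, with a threshold stump that perfectly separates the data, the loop returns immediately with a single-hypothesis ensemble.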

This is well defined in the lesson and recitation scribes: there is an algorithm A and there exists gamma > 0 such that for every D, c*, and delta, etc. I don't understand what you mean by "such a hypothesis exists".
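For reference, a sketch of the definition with the quantifier order made explicit (phrasing mine, intended to match the scribes):

```latex
$H$ is weakly PAC-learnable if there exist an algorithm $A$ and
$\gamma > 0$ such that for every distribution $D$, every target
function $c^*$, and every $\delta \in (0, 1)$, given enough i.i.d.\
samples from $D$ labeled by $c^*$, with probability at least
$1 - \delta$ the algorithm $A$ outputs $h \in H$ with
\[
  \mathrm{err}_D(h) \;=\; \Pr_{x \sim D}\bigl[h(x) \neq c^*(x)\bigr]
  \;\le\; \frac{1}{2} - \gamma .
\]
```

Note that gamma is quantified before D, c*, and delta, i.e. a single gamma works for all of them.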

I mean: if H is a class of weak learners, is it guaranteed that for every distribution D and target function c* the class H contains some hypothesis h such that error(h) <= 0.5 - gamma for some gamma > 0 with respect to D? I.e., can we assume that H contains a weak hypothesis for every distribution and target function (instead of for a single specific distribution)?

This is not what the definition of a hypothesis class being weakly PAC-learnable means.

But if, with probability 1 - delta > 0.5, an algorithm A outputs a hypothesis h with error < 0.5, doesn't that imply that H must contain such a hypothesis? Otherwise the probability of A outputting such an h would be 0. Am I missing something?

Sorry, I got myself confused. Ignore what I said about "unrealizable" weak-PAC. To answer your question: yes, we assume that this is the case.
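The existence argument in the question can be written out in one line (a sketch, using the notation above):

```latex
Fix $D$ and $c^*$, and pick any $\delta < 1$. Weak learnability gives
\[
  \Pr_{S \sim D^m}\Bigl[\mathrm{err}_D\bigl(A(S)\bigr)
  \le \tfrac{1}{2} - \gamma\Bigr] \;\ge\; 1 - \delta \;>\; 0 ,
\]
and since $A$ only outputs hypotheses in $H$, an event of positive
probability is non-empty: some sample $S$ yields $h = A(S) \in H$ with
$\mathrm{err}_D(h) \le \tfrac{1}{2} - \gamma$.
```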

(I deleted the thread below to avoid confusion for future readers).