No free lunch theorem

The no-free-lunch theorem says that, averaged over all possible target functions, no learning algorithm does better than random guessing on a supervised problem. The argument is that there are vastly many candidate functions, and a training set only pins down the target at the sampled points; outside those points, the function can literally be anything, so any fixed prediction rule is wrong as often as it is right once you average over all the possibilities.
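
As a rough illustration of that counting argument, here is a small sketch in Python. The four-point domain, the train/test split, and the fixed guess of 0 are all arbitrary choices for the illustration; it enumerates every binary labeling that agrees with the training points and checks how a fixed rule does on the unseen points.

```python
# A minimal sketch of the counting argument: on a finite input domain,
# enumerate every possible binary labeling, keep only those consistent
# with the training points, and see how any fixed prediction rule does
# on the unseen points. (Domain size and split are arbitrary choices.)
from itertools import product

domain = list(range(4))            # inputs: 0, 1, 2, 3
train_inputs = [0, 1]              # points we observed
train_labels = [0, 1]              # their labels
test_inputs = [2, 3]               # off-training-set points

# Every possible target function is just one of the 2^4 label assignments.
consistent = [
    f for f in product([0, 1], repeat=len(domain))
    if all(f[x] == y for x, y in zip(train_inputs, train_labels))
]

# A fixed guess for the unseen points (here: always predict 0) is right
# on exactly half of the consistent functions, point by point.
guess = {x: 0 for x in test_inputs}
accuracies = [
    sum(f[x] == guess[x] for x in test_inputs) / len(test_inputs)
    for f in consistent
]
print(sum(accuracies) / len(accuracies))  # 0.5 -- no better than chance
```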

If the no-free-lunch theorem is true, how can any learning algorithm be so successful? This apparent paradox dissolves once you notice that the functions we care about in the real world are not arbitrary: they tend to have nice properties such as smoothness and continuity. The theorem's average over all possible functions is dominated by pathological ones we never actually encounter, so a learned model that exploits these regularities can still generalize.
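
A minimal sketch of that point, using a 1-nearest-neighbor predictor as a stand-in for a learner with a smoothness bias (the targets, sample sizes, and seed below are arbitrary choices): it generalizes on a smooth target but not on one whose values are drawn independently at random.

```python
# A 1-nearest-neighbor predictor does well on a smooth target function
# and fails on a "labels can be anything" target.
import random
import math

random.seed(0)
train_x = [random.uniform(0, 1) for _ in range(50)]
test_x = [random.uniform(0, 1) for _ in range(50)]

def one_nn(x, xs, ys):
    """Predict with the label of the closest training point."""
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
    return ys[i]

smooth = lambda x: math.sin(2 * math.pi * x)                   # smooth target
noise = {x: random.uniform(-1, 1) for x in train_x + test_x}   # arbitrary target

for name, f in [("smooth", smooth), ("random", lambda x: noise[x])]:
    train_y = [f(x) for x in train_x]
    errors = [abs(one_nn(x, train_x, train_y) - f(x)) for x in test_x]
    print(name, sum(errors) / len(errors))
# The smooth target gives a small test error; the random one does not.
```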
