Calibrated Lazy Associative Classification

Adriano Veloso, Wagner Meira Jr., Mohammed Zaki

Classification is an important problem in data mining. Given an example x and a class c, a classifier usually works by estimating the probability of x being a member of c (i.e., the membership probability). Well-calibrated classifiers are those able to provide accurate estimates of class membership probabilities; that is, the estimated probability p(c|x) is close to p̂(c|x), the true, empirical probability of x being a member of c given that the probability estimated by the classifier is p(c|x). Calibration is not a necessary property for producing accurate classifiers, and thus most research has focused on direct accuracy-maximization strategies (e.g., maximum margin) rather than on calibration. However, non-calibrated classifiers are problematic in applications where the reliability associated with a prediction must be taken into account (e.g., cost-sensitive classification, cautious classification, etc.). In these applications, a sensible use of the classifier must be based on the reliability of its predictions, and thus the classifier must be well calibrated. In this paper we show that lazy associative classifiers (LAC) are accurate and well calibrated when using a well-known, sound entropy-minimization method. We explore important applications where these characteristics (i.e., accuracy and calibration) are relevant, and we demonstrate empirically that LAC drastically outperforms other classifiers, such as SVMs, Naive Bayes, and Decision Trees (even after these classifiers are calibrated by specific methods). Additional highlights of LAC include the ability to incorporate reliable predictions to improve training, and the ability to refrain from doubtful predictions.
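The notion of calibration described in the abstract, that the estimated probability p(c|x) should match the empirical frequency p̂(c|x), can be illustrated with a short sketch (not code from the paper): bin the classifier's predicted probabilities and compare each bin's mean prediction to the observed frequency of class c in that bin. The function name and the synthetic data below are illustrative assumptions, not part of the original work.

```python
# Sketch (not from the paper): measuring calibration by binning
# predicted probabilities p(c|x) and comparing each bin's mean
# prediction to the empirical frequency p̂(c|x) of class c.
# All names and data here are synthetic, for illustration only.

def calibration_bins(probs, labels, n_bins=5):
    """Return (mean predicted prob, empirical frequency, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # bin index for p in [0, 1]
        bins[i].append((p, y))
    out = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)  # average prediction in bin
            freq = sum(y for _, y in b) / len(b)    # fraction actually in class c
            out.append((mean_p, freq, len(b)))
    return out

# A classifier is well calibrated when mean_p ≈ freq in every bin:
# among examples predicted with p(c|x) ≈ 0.8, about 80% belong to c.
probs  = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85, 0.7, 0.3]
labels = [0,   0,   1,    1,   1,   1,    0,   0]
for mean_p, freq, n in calibration_bins(probs, labels):
    print(f"predicted {mean_p:.2f}  empirical {freq:.2f}  ({n} examples)")
```

The same per-bin comparison underlies reliability diagrams, and a reject option such as the one the abstract mentions (refraining from doubtful predictions) can be built on top of it by abstaining whenever the calibrated probability falls below a chosen threshold.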


Biblioteca Digital Brasileira de Computação