View Submission - COMPSTAT2018

A0414
**Title:** Posterior probability SVMs for big data problems
**Authors:** Pedro Duarte Silva - Universidade Catolica Portuguesa / Porto (Portugal) **[presenting]**

**Abstract:** Assume that objects belonging to two well-defined groups can be described by random pairs $(x, y)$, where $x$ is an object descriptor and $y$ is a group label. Then, given a training sample of $l$ independent examples drawn from a common distribution and a rich enough Reproducing Kernel Hilbert Space, the Bayes rule that minimizes the misclassification probability can be consistently estimated by standard two-group classification Support Vector Machines (SVMs). However, if misclassification costs differ across groups, or if the training sample proportions do not converge to the true a priori probabilities, standard SVMs do not approximate optimal Bayes rules. Nevertheless, non-standard SVMs that minimize a weighted misclassification loss plus a regularization penalty are consistent estimators of the minimal expected cost rule. Furthermore, posterior probabilities may be consistently estimated from the solutions of a succession of non-standard SVMs with varying weights in the loss function. An $l_1$-norm regularized posterior probability SVM specially adapted to Big Data problems will be described. It is known that $l_1$-norm based SVMs can efficiently handle much larger data sets than $l_2$-norm based SVMs. The computational and statistical properties of this proposal will be illustrated by simulation experiments.
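The succession of weighted SVMs described in the abstract can be sketched in code. The idea is that a weighted SVM with weight $1-\pi$ on group $+1$ errors and $\pi$ on group $-1$ errors approximates the Bayes rule that predicts $+1$ iff $P(y=+1 \mid x) > \pi$, so the weight $\pi$ at which the prediction for $x$ flips estimates its posterior probability. The toy data, the crude subgradient solver `weighted_l1_svm`, the weight grid `pis`, and all parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption for illustration): two Gaussian groups in 2-D,
# labels in {-1, +1}.
n = 400
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(+1.0, 1.0, (n // 2, 2))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

def weighted_l1_svm(X, y, pi, lam=0.01, lr=0.01, epochs=200):
    """Crude subgradient descent on a weighted hinge loss plus l1 penalty:
       (1/l) * sum_i c_i * max(0, 1 - y_i (w.x_i + b)) + lam * ||w||_1,
       with c_i = 1 - pi for y_i = +1 and c_i = pi for y_i = -1."""
    l, d = X.shape
    w = np.zeros(d)
    b = 0.0
    cw = np.where(y > 0, 1.0 - pi, pi)   # per-example weights in the loss
    for _ in range(epochs):
        margin = y * (X @ w + b)
        active = margin < 1.0            # examples violating the margin
        grad_w = -(cw[active, None] * y[active, None] * X[active]).sum(0) / l
        grad_b = -(cw[active] * y[active]).sum() / l
        w -= lr * (grad_w + lam * np.sign(w))   # l1 subgradient
        b -= lr * grad_b
    return w, b

# Succession of weighted SVMs over a grid of weights: the prediction at x
# flips from +1 to -1 near pi = P(y = +1 | x), so take the largest pi on
# the grid at which x is still predicted +1 as its posterior estimate.
pis = np.linspace(0.05, 0.95, 19)
preds = np.array([np.sign(X @ w + b)
                  for w, b in (weighted_l1_svm(X, y, pi) for pi in pis)])
post = np.array([pis[col > 0].max() if (col > 0).any() else 0.0
                 for col in preds.T])
```

Each grid point costs one l1-penalized SVM fit, which is where the scalability of $l_1$-norm SVMs matters: a finer grid gives finer-grained posterior estimates at the price of more fits.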