COMPSTAT 2022
A0674
Title: Prior weights of Dirichlet PDFs
Authors: Audun Josang - University of Oslo (Norway) [presenting]
Abstract: In the proposed model, a Dirichlet PDF has an uninformative prior weight that is initially equal to the domain cardinality $k$ and decreases towards a convergence constant $C$ as the amount of observation evidence increases. The advantage of this approach is that the vacuous Dirichlet PDF (i.e., in the absence of evidence) is always uniform, while at the same time the prior carries low weight relative to the observation evidence irrespective of the domain cardinality. More formally, the evidence of a Dirichlet PDF over a multidimensional domain $X$ is denoted by a vector $\boldsymbol{\alpha}$ with components $\alpha(x) = r(x) + a(x)\,W$, where $r(x) \ge 0$ for all $x \in X$. The prior probability distribution is denoted by the vector $a$, and the observation evidence is represented by the vector $r$. With these parameters defined, the uninformative prior weight $W$ can be expressed as $W = \left(k + C\,k \sum_{x\in X} r(x)\right) / \left(1 + k \sum_{x\in X} r(x)\right)$. The convergence constant $C$ determines the sensitivity of the Dirichlet PDF to new observation evidence: the larger $C$, the less sensitive the Dirichlet PDF becomes to new observation evidence. If we assume that the sensitivity should always be the same irrespective of the domain cardinality, then it is natural to set $C = 2$, which reflects the same sensitivity as for the uninformative prior weight of the Beta PDF over a binary domain.
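As an informal illustration (not part of the submission itself), the weight formula above can be sketched in Python; the function names are hypothetical, and a uniform prior distribution $a(x) = 1/k$ is assumed where none is given:

```python
import numpy as np

def prior_weight(r, C=2.0):
    """Uninformative prior weight W = (k + C*k*sum(r)) / (1 + k*sum(r)).

    `r` is the observation-evidence vector over a domain of
    cardinality k = len(r); C is the convergence constant.
    With no evidence, W = k; as evidence grows, W -> C.
    """
    r = np.asarray(r, dtype=float)
    k = len(r)
    s = r.sum()
    return (k + C * k * s) / (1.0 + k * s)

def dirichlet_alpha(r, a=None, C=2.0):
    """Dirichlet parameters alpha(x) = r(x) + a(x) * W."""
    r = np.asarray(r, dtype=float)
    k = len(r)
    if a is None:
        a = np.full(k, 1.0 / k)  # assumed uniform prior distribution
    return r + a * prior_weight(r, C)

# Vacuous case (k = 3, no evidence): W = k, so alpha is uniform
print(dirichlet_alpha([0, 0, 0]))
# With substantial evidence, W converges towards C = 2
print(prior_weight([100, 50, 25]))
```

Note that for a binary domain ($k = 2$) the formula gives $W = 2$ for any amount of evidence, matching the Beta PDF case mentioned in the abstract.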