Hierarchical mixture modelling with normalized inverse Gaussian priors.

PRUENSTER, Igor
2005-01-01

Abstract

In recent years the Dirichlet process prior has enjoyed great success in the context of Bayesian mixture modeling. The idea of overcoming the discreteness of its realizations by exploiting it in hierarchical models, combined with the development of suitable sampling techniques, represents one of the reasons for its popularity. In this article we propose the normalized inverse-Gaussian (N–IG) process as an alternative to the Dirichlet process for use in Bayesian hierarchical models. The N–IG prior is constructed via its finite-dimensional distributions. This prior, although sharing the discreteness property of the Dirichlet prior, is characterized by a more elaborate and sensible clustering which makes use of all the information contained in the data. Whereas in the Dirichlet case the mass assigned to each observation depends solely on the number of times that it occurred, for the N–IG prior the weight of a single observation depends heavily on the whole number of ties in the sample. Moreover, expressions corresponding to relevant statistical quantities, such as a priori moments and the predictive distributions, are as tractable as those arising from the Dirichlet process. This implies that well-established sampling schemes can be easily extended to cover hierarchical models based on the N–IG process. The mixture of N–IG process and the mixture of Dirichlet process are compared using two examples involving mixtures of normals.
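
To illustrate the Dirichlet-case predictive rule mentioned in the abstract, in which the mass assigned to an existing value depends only on the number of times it has already occurred, the following minimal Python sketch simulates draws via the Blackwell–MacQueen (Chinese restaurant) scheme. The total-mass parameter theta, the standard-normal base measure, and the function name are illustrative choices not taken from the article; the N–IG predictive rule, whose weights depend on the whole number of ties in the sample, is derived in the article and is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def dp_predictive_sample(n_draws, theta=1.0, base_sampler=rng.standard_normal):
    # Draw a sequence from a Dirichlet process via its predictive
    # (Blackwell-MacQueen) rule: an existing value x_j is repeated with
    # probability n_j / (theta + n), so its mass depends only on how many
    # times it has already occurred; a fresh draw from the base measure
    # appears with probability theta / (theta + n).
    draws = []
    for n in range(n_draws):
        if n > 0 and rng.random() < n / (theta + n):
            # Picking uniformly among past draws gives each distinct value
            # probability proportional to its number of occurrences n_j.
            draws.append(draws[rng.integers(n)])
        else:
            # New value from the (illustrative) standard-normal base measure.
            draws.append(float(base_sampler()))
    return draws

sample = dp_predictive_sample(50, theta=2.0)
print(len(set(sample)), "distinct values among", len(sample), "draws")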
Year: 2005
Volume: 100
Pages: 1278–1291
Journal URL: http://www.ingentaconnect.com/content/asa/jasa
Keywords: Bayesian nonparametrics; Density estimation; Dirichlet process; Inverse-Gaussian distribution; Mixture models; Predictive distribution; Semiparametric inference
Authors: A. LIJOI; R.H. MENA; I. PRUENSTER
Files in this record:
JASA.pdf: POSTPRINT (author's final version), 2.72 MB, Adobe PDF, restricted access (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/8534
Citations
  • PubMed Central: not available
  • Scopus: 116
  • Web of Science (ISI): 114