dc.contributor.author |
Fernando, MACS |
en |
dc.contributor.author |
Curran, James |
en |
dc.contributor.author |
Meyer, Renate |
en |
dc.coverage.spatial |
Singapore |
en |
dc.date.accessioned |
2017-04-18T22:21:11Z |
en |
dc.date.issued |
2016-11-22 |
en |
dc.identifier.citation |
18th International Conference on Statistics and Analysis, 2016 |
en |
dc.identifier.uri |
http://hdl.handle.net/2292/32582 |
en |
dc.description.abstract |
Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and potential improvement. In posterior predictive checks, data simulated under the fitted model are compared with the observed data. Predictive accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, these criteria incorporate a bias correction that penalises model complexity. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out, or exact, cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are commonly used to approximate exact LOO-CV; they reuse existing Markov chain Monte Carlo (MCMC) results and thereby avoid expensive refitting. Although information criteria and LOO cannot reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is valid only under specific circumstances. In this paper we illustrate the limitations of information criteria in practical model comparison problems using an example from forensic statistics. In addition, the relationships among the LOO approximation methods and WAIC, together with their limitations, are discussed.
Finally, we provide some recommendations that may help in practical model comparisons with these methods. Keywords: information criteria, cross-validation, importance sampling, predictive accuracy |
en |
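The WAIC and IS-LOO estimators mentioned in the abstract can be sketched directly from a matrix of pointwise log-likelihoods evaluated at posterior draws. The following is a minimal illustrative sketch, not the authors' code: the normal model, the simulated data, and all variable names (`mu_draws`, `log_lik`, etc.) are assumptions standing in for real MCMC output.

```python
import numpy as np

def logsumexp(a, axis=0):
    # numerically stable log(sum(exp(a))) along an axis
    m = a.max(axis=axis)
    return m + np.log(np.exp(a - m).sum(axis=axis))

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=50)  # observed data (simulated here)

# stand-in for MCMC draws of mu under y_i ~ N(mu, 1)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=4000)

# pointwise log-likelihood matrix: S draws x n observations
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
S = log_lik.shape[0]

# WAIC = lppd - p_waic, where p_waic penalises model complexity
lppd = logsumexp(log_lik, axis=0) - np.log(S)
p_waic = log_lik.var(axis=0, ddof=1)
elpd_waic = (lppd - p_waic).sum()

# IS-LOO: importance ratios r_s = 1/p(y_i|theta_s); the weighted
# predictive density reduces to a log harmonic mean of the likelihoods
elpd_isloo = (-(logsumexp(-log_lik, axis=0) - np.log(S))).sum()

print(elpd_waic, elpd_isloo)
```

TIS and PSIS refine this by truncating or smoothing the largest importance ratios before the weighted average, which stabilises the estimate when the raw ratios have heavy tails.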
dc.relation.ispartof |
18th International Conference on Statistics and Analysis |
en |
dc.rights |
Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher. |
en |
dc.rights.uri |
https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm |
en |
dc.title |
Performance and Limitations of Likelihood-based Information Criteria and Leave-one-out Cross-validation Approximation Methods |
en |
dc.type |
Presentation |
en |
dc.rights.holder |
Copyright: The author |
en |
pubs.author-url |
https://www.waset.org/abstracts/57619 |
en |
pubs.finish-date |
2016-11-22 |
en |
pubs.start-date |
2016-11-21 |
en |
dc.rights.accessrights |
http://purl.org/eprint/accessRights/OpenAccess |
en |
pubs.subtype |
Conference Oral Presentation |
en |
pubs.elements-id |
606131 |
en |
pubs.org-id |
Science |
en |
pubs.org-id |
Statistics |
en |
pubs.record-created-at-source-date |
2017-01-18 |
en |