Consistency of estimators of population scaled parameters using composite likelihood

Carsten Wiuf (corresponding author)

Abstract

Composite likelihood methods have become very popular for the analysis of large-scale genomic data sets because of the computational intractability of the basic coalescent process and its generalizations: it is virtually impossible to calculate the likelihood of an observed data set spanning a large chromosomal region without using approximate or heuristic methods. Composite likelihood methods are approximate methods; in the present article, the likelihood is assumed to be a product of likelihoods, one for each of a number of smaller regions that together make up the whole region from which the data are collected. A very general framework for neutral coalescent models is presented and discussed. The framework comprises many of the most popular coalescent models currently used for the analysis of genetic data. Assume the data are collected from a series of consecutive regions of equal size. It is then shown that the observed data form a stationary, ergodic process. General conditions are given under which the maximum composite likelihood estimator of the parameters describing the model (e.g. mutation rates, demographic parameters and the recombination rate) is a consistent estimator as the number of regions tends to infinity.
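
As a minimal sketch of the setup described in the abstract (the symbols $\theta$, $x_i$ and $L_i$ are notation assumed here, not taken from the paper): with the data split into $n$ consecutive regions of equal size, $x_1, \dots, x_n$ denoting the data from the regions and $L_i$ the likelihood for region $i$, the composite log-likelihood and the maximum composite likelihood estimator take the form

$$\ell_C(\theta) \;=\; \sum_{i=1}^{n} \log L_i(\theta; x_i), \qquad \hat{\theta}_n \;=\; \arg\max_{\theta} \, \ell_C(\theta),$$

where $\theta$ collects the population scaled parameters (e.g. mutation rates, demographic parameters and the recombination rate). Consistency, in the sense of the abstract, means that $\hat{\theta}_n$ converges to the true parameter value as the number of regions $n$ tends to infinity; the article establishes this under general conditions, using the stationarity and ergodicity of the observed process.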

Original language: English
Journal: Journal of Mathematical Biology
Volume: 53
Issue number: 5
Pages (from-to): 821-841
Number of pages: 21
ISSN: 0303-6812
DOIs
Publication status: Published - 1 Nov 2006
Externally published: Yes

Keywords

  • Coalescent theory
  • Composite likelihood
  • Consistency
  • Estimator
  • Genomic data
