Composite likelihood approach to finite normal mixture models - Statistics and Actuarial Science - Simon Fraser University
A composite likelihood consists of a combination of valid likelihood objects. It is shown to be a good and practical alternative to the ordinary full likelihood when the full likelihood is intractable, or difficult to evaluate due to complex dependencies. The composite likelihood approach has demonstrated its advantage in a number of applications. In a few important cases the composite likelihood is fully efficient, with estimators identical to those from the full likelihood. In this talk, we propose a composite likelihood method for analyzing multivariate normal mixture models. Some statistical properties of the composite likelihood estimator, consistency and asymptotic normality, are established. A composite likelihood EM algorithm is used to maximize the penalized pairwise log-likelihood function. We prove that the CL-EM algorithm satisfies the ascent property and converges to a stationary point of the objective function. Simulation studies are presented to demonstrate the ...
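The pairwise composite log-likelihood referred to above replaces the full d-dimensional mixture likelihood with a sum of bivariate-margin log-likelihoods over all variable pairs. Below is a minimal sketch of evaluating such an objective for a normal mixture; the parameter values and function names are illustrative, not the authors' implementation, and the penalized CL-EM maximization itself is not shown.

import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def pairwise_composite_loglik(X, weights, means, covs):
    """Pairwise composite log-likelihood of a K-component normal mixture.

    X       : (n, d) data matrix
    weights : (K,) mixing proportions
    means   : (K, d) component means
    covs    : (K, d, d) component covariances
    """
    n, d = X.shape
    total = 0.0
    for j, k in combinations(range(d), 2):          # all variable pairs
        idx = [j, k]
        dens = np.zeros(n)                          # bivariate (j, k) margin of the mixture
        for w, mu, S in zip(weights, means, covs):
            dens += w * multivariate_normal.pdf(
                X[:, idx], mean=mu[idx], cov=S[np.ix_(idx, idx)])
        total += np.log(dens).sum()
    return total

# toy example with d = 3, K = 2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w = np.array([0.6, 0.4])
mu = np.array([[0., 0., 0.], [2., 2., 2.]])
S = np.array([np.eye(3), 0.5 * np.eye(3)])
print(pairwise_composite_loglik(X, w, mu, S))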
Empirical likelihood and extremes
In 1988, Owen introduced empirical likelihood as a nonparametric method for constructing confidence intervals and regions. Since then, empirical likelihood has been studied extensively in the literature due to its generality and effectiveness. It is well known that empirical likelihood has several attractive advantages compared to its competitors such as the bootstrap: determining the shape of confidence regions automatically using only the data; straightforwardly incorporating side information expressed through constraints; and being Bartlett correctable. The main part of this thesis extends the empirical likelihood method to several interesting and important statistical inference situations. This thesis has four components. The first component (Chapter II) proposes a smoothed jackknife empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve in order to overcome the computational difficulty when we have nonlinear constraints in the ...
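For a concrete sense of how an empirical likelihood is computed, the sketch below profiles the empirical likelihood ratio for a univariate mean by solving the usual Lagrange-multiplier equation. This is the textbook construction for a mean, not the smoothed jackknife version developed in the thesis.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean, evaluated at mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # mu lies outside the convex hull of the data
    # Lagrange multiplier lam solves sum_i z_i / (1 + lam * z_i) = 0,
    # subject to 1 + lam * z_i > 0 for all i
    lo = (-1 + 1e-8) / z.max()
    hi = (-1 + 1e-8) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(1)
x = rng.exponential(size=50)
print(el_log_ratio(x, x.mean()))                    # essentially 0 at the sample mean
print(el_log_ratio(x, 1.3), chi2.ppf(0.95, df=1))   # Wilks-type chi-square(1) calibration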
Sieve empirical likelihood ratio tests for nonparametric functions - Kent Academic Repository
Generalized likelihood ratio statistics have been proposed in Fan, Zhang and Zhang [Ann. Statist. 29 (2001) 153-193] as a generally applicable method for testing nonparametric hypotheses about nonparametric functions. The likelihood ratio statistics are constructed based on the assumption that the distributions of stochastic errors are in a certain parametric family. We extend their work to the case where the error distribution is completely unspecified via newly proposed sieve empirical likelihood ratio (SELR) tests. The approach is also applied to test conditional estimating equations on the distributions of stochastic errors. It is shown that the proposed SELR statistics follow asymptotically rescaled χ2-distributions, with the scale constants and the degrees of freedom being independent of the nuisance parameters. This demonstrates that the Wilks phenomenon observed in Fan, Zhang and Zhang [Ann. Statist. 29 (2001) 153-193] continues to hold under more relaxed models and a larger class of ...
Composite likelihood methods in statistical genetics - Lancaster EPrints
Due to the dimension and the dependency structure of genetic data, composite likelihood methods have found their natural place in the statistical methodology involving such data. After a brief description of the type of data one encounters in population genetic studies, we introduce the questions of interest concerning the main genetic parameters in population genetics, and present an up-to-date review on how composite likelihoods have been used to estimate these parameters.
Maximum-Likelihood Estimation of Relatedness | Genetics
These conclusions differ dramatically from those obtained earlier for different maximum-likelihood estimators of relatedness (Ritland 1996a; Lynch and Ritland 1999). Their estimators performed so poorly as to be immediately discarded as useless in practice. This difference arises either from admitting solutions that cannot be directly interpreted biologically as probabilities of identity-by-descent or from the nature of the likelihood function (see appendix b). In contrast, the likelihood estimator investigated here is consistent with the traditional literature on likelihood estimation of relatedness (Thompson 1975) and admits only solutions that are fully interpretable biologically. As a result, it performs much better than previously suggested for maximum-likelihood estimators. The other feature of the likelihood estimator is that it, unlike the others, is biased under some conditions. The degree of bias depends on the actual degree of relatedness between individuals and the nature of ...
Composite likelihood estimation in multivariate data analysis | UBC Department of Statistics
The authors propose two composite likelihood estimation procedures for multivariate models with regression/univariate and dependence parameters. One is a two-stage method based on both univariate and bivariate margins. The other estimates all the parameters simultaneously based on bivariate margins. For some special cases, the authors compare their asymptotic efficiencies with the maximum likelihood method. The performance of the two methods is reasonable, except that the first procedure is inefficient for the regression parameters under strong dependence. The second approach is generally better for the regression parameters, but less efficient for the dependence parameters under weak dependence ...
Zero-inflated Poisson Regression | Stata Data Analysis Examples
zip count child camper, inflate(persons) vuong

Fitting constant-only model:
Iteration 0:   log likelihood = -1347.807
Iteration 1:   log likelihood = -1315.5343
Iteration 2:   log likelihood = -1126.3689
Iteration 3:   log likelihood = -1125.5358
Iteration 4:   log likelihood = -1125.5357
Iteration 5:   log likelihood = -1125.5357

Fitting full model:
Iteration 0:   log likelihood = -1125.5357
Iteration 1:   log likelihood = -1044.8553
Iteration 2:   log likelihood = -1031.8733
Iteration 3:   log likelihood = -1031.6089
Iteration 4:   log likelihood = -1031.6084
Iteration 5:   log likelihood = -1031.6084

Zero-inflated Poisson regression               Number of obs   =    250
                                               Nonzero obs     =    108
                                               Zero obs        =    142
Inflation model = logit                        LR chi2(2)      = 187.85
Log likelihood  = -1031.608                    Prob > chi2     = 0.0000

------------------------------------------------------------------------------
       count |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
count        |
       child |  -1.042838   .0999883  ...
Nuisance parameters, composite likelihoods and a panel of GARCH models
We investigate the properties of the composite likelihood (CL) method for (T × N_T) GARCH panels. The defining feature of a GARCH panel with time series length T is that, while nuisance parameters are allowed to vary across the N_T series, other parameters of interest are assumed to be common. CL pools information across the panel instead of using information available in a single series only. Simulations and empirical analysis illustrate that for reasonably large T, CL performs well. However, due to the estimation error introduced through nuisance parameter estimation, CL is subject to the
Statistical Inference Using Maximum Likelihood Estimation and the Generalized Likelihood Ratio when the True Parameter is on the Boundary of the Parameter Space (Feng, Ziding; McCulloch, Charles E.)
Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed‐form Approximation Approach | The Econometric Society
Yacine Aït‐Sahalia (pp. 223-262). When a continuous‐time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed‐form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models ...
Statistical Diagnosis for General Transformation Model with Right Censored Data Based on Empirical Likelihood
In this work, we consider statistical diagnostics for general transformation models with right censored data based on empirical likelihood. The models are a class of flexible semiparametric survival models and include many popular survival models as special cases. Based on the empirical likelihood methodology, we define some diagnostic statistics. Through simulation studies, we show that our proposed procedure works fairly well.
Maximum Likelihood Estimation in GAUSS - Aptech
Learn how to perform maximum likelihood estimation with the GAUSS Maximum Likelihood MT library using our simple linear regression example.
Size Refinement of Empirical Likelihood Tests in Time Series Models using Sieve Bootstraps | Korea Science
Keywords: time series; empirical likelihood; size of the test; sieve bootstrap.
Log Likelihood Spectral Distance, Entropy Rate Power, and Mutual Information with Applications to ... | Entropy
We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies, and further that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these new results. These new expressions allow the log likelihood ratio to be of interest in applications beyond those of just spectral matching for speech.
Estimating operational risk capital: the challenges of truncation, the hazards of maximum likelihood estimation, and the...
ABSTRACT. In operational risk measurement, the estimation of severity distribution parameters is the main driver of capital estimates, yet this remains a nontrivial challenge for many reasons. Maximum likelihood estimation (MLE) does not adequately meet this challenge because of its well-documented nonrobustness to modest violations of idealized textbook model assumptions: specifically, that the data is independent and identically distributed (iid), which is clearly violated by operational loss data. Yet, even using iid data, capital estimates based on MLE are biased upward, sometimes dramatically, due to Jensen's inequality. This overstatement of the true risk profile increases as the heaviness of the severity distribution tail increases, so dealing with data collection thresholds by using truncated distributions, which have thicker tails, increases MLE-related capital bias considerably. Truncation also augments correlation between a distribution's parameters, and this exacerbates the ...
[R] Survival Analysis - Cox Regression - Confidence Limits for HR based on Likelihood function
Hello! Does anybody know how to make R calculate confidence limits for the Hazard Ratio based on the likelihood function when doing Cox Regression? The p-values and confidence intervals from the Wald statistics sometimes differ from the p-value based on the likelihood function: the likelihood p is < .05 while the Wald 95% CI still includes 1. Any help is appreciated. Will -- Dipl.-Psych. Wilmar Igl c/o Institut für Psychotherapie u. Medizinische Psychologie Arbeitsbereich Rehabilitationswissenschaften Marcusstrasse 9-11 (R. 409), 97070 Würzburg Telefon: 0931/31-2573, FAX: 0931/31-2078 E-mail: wilmar.igl at mail.uni-wuerzburg.de URL: http://www.psychotherapie.uni-wuerzburg.de/mitarbeiter/igl.html ...
Ashley Askew | Department of Statistics
Empirical likelihood is a nonparametric method based on a data-driven likelihood. The flexibility of empirical likelihood facilitates its use in complex settings, which can in turn create computational challenges. Additionally, the Empty Set Problem (ESP), which arises with the Empirical Estimating Equations approach, can pose problems for estimation, as the data are unable to meet the constraints when the true parameter is outside the convex hull. As an alternative to the Newton and quasi-Newton methods conventionally used in empirical likelihood, this dissertation develops and examines various Evolutionary Algorithms for global optimization of empirical likelihood estimates, as well as a comparison of ESP versus non-ESP data sets on an overdetermined problem. Finally, we carry out a preliminary application of composite empirical likelihood methods, noting the impact of the ESP on the subsets of data, and compare the results obtained on all possible combinations against those from convergent subsets ...
Bias Reduction for the Maximum Likelihood Estimator of the Scale Parameter in the Half-Logistic Distribution
We derive an analytic expression for the bias, to O(n^{-1}), of the maximum likelihood estimator of the scale parameter in the half-logistic distribution. Using this expression to bias-correct the estimator is shown to be very effective in terms of bias reduction, without adverse consequences for the estimator's precision. The analytic bias-corrected estimator is also shown to be dramatically superior to the alternative of bootstrap bias correction.
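The analytic correction itself is not reproduced in the abstract; as a point of comparison, here is a minimal sketch of the bootstrap bias correction it is compared against, applied to the scale MLE of the half-logistic distribution (scipy's parameterization; sample size and seed are illustrative).

import numpy as np
from scipy.stats import halflogistic

rng = np.random.default_rng(42)
sigma_true = 2.0
x = halflogistic.rvs(scale=sigma_true, size=30, random_state=rng)

# MLE of the scale parameter (location fixed at 0)
_, sigma_hat = halflogistic.fit(x, floc=0)

# bootstrap bias correction: sigma_bc = 2*sigma_hat - mean of bootstrap MLEs
B = 500
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=x.size, replace=True)
    _, boot[b] = halflogistic.fit(xb, floc=0)
sigma_bc = 2 * sigma_hat - boot.mean()

print(f"MLE: {sigma_hat:.4f}  bootstrap bias-corrected: {sigma_bc:.4f}")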
Joon Y Park - Asymptotic Theory of Maximum Likelihood Estimator for Diffusion Model.pdf - Department of Economics
CIS - Likelihood-based inference for correlated diffusions
@article{CIS-250919,
  Author   = {Kalogeropoulos, Konstantinos and Dellaportas, Petros and Roberts, Gareth O.},
  Title    = {Likelihood-based inference for correlated diffusions},
  Journal  = {Canadian Journal of Statistics},
  Volume   = {39},
  Number   = {1},
  Year     = {2011},
  Pages    = {52--72},
  Keywords = {Cholesky decomposition, Markov chain Monte Carlo, multivariate stochastic volatility models, multivariate CIR, multi-dimensional SDEs ...}
}
Mixture of mixed models: application to the analysis of chromosome pairing in rapeseed haploids
Dempster A., Laird N.M., Rubin D.B. (1977), Maximum likelihood estimation from incomplete data via the EM algorithm, Journal of the Royal Statistical Society, B 39, 1-38. MR 501537, Zbl 0364.62022. Loisel P., Goffinet B., Monod H., Montes De Oca G. (1994), Detecting a major gene in an F2 population, Biometrics, 50, 512-516. MR 1294684, Zbl 0825.62767. Meng X.L., Rubin D.B. (1993), Maximum likelihood estimation via the ECM algorithm: A general framework, Biometrika, 80, 267-278. MR 1243503, Zbl 0778.62022. Self S.G., Liang K. (1987), Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions, Journal of the American Statistical Association, 82, 605-610. MR 898365, Zbl 0639.62020 ...
Improving the accuracy of likelihood-based inference in meta-analysis and meta-regression
Random-effects models are frequently used to synthesise information from different studies in meta-analysis. While likelihood-based inference is attractive both in terms of limiting properties and of implementation, its application in random-effects meta-analysis may result in misleading conclusions, especially when the …
arXiv:1509.00650
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models - Tinbergen
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as s
Genealogical Evidence for Positive Selection in the nef Gene of HIV-1 | Genetics
Maximum likelihood analysis of positive selection in the HIV-1 nef gene: We first reconstructed maximum likelihood trees for samples from within the hemophiliac patient, taking each time point separately and in combination. Three codon-based maximum likelihood models were then applied to see which provided the best fit to these data. Since the positive selection model has two more parameters than the neutral model, the models are nested and their likelihoods can be compared directly using a χ2-test with d.f. = 2. As can be seen in Table 1, the positive selection model has a better fit to the data at 25 mo (P < 0.001), with 20.9% falling into the selected category (ω3 = 8.126). Although positive selection was not significantly favored at 41 mo postinfection (0.1 > P > 0.05), a high value of ω3 (2.671) was obtained for 22.7% of the sites. There was no evidence for positive selection at 11 mo. When successive data points were combined (i.e., 11 plus 25 mo and 25 plus 41 mo) the positive ...
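The nested-model comparison described above (positive selection vs. neutral model, two extra parameters) is an ordinary likelihood ratio test. A minimal sketch with illustrative log-likelihood values, not the values from the study:

from scipy.stats import chi2

# illustrative maximized log-likelihoods for the two nested codon models
loglik_neutral = -2451.7    # null model
loglik_selection = -2440.2  # alternative with 2 extra parameters

lrt = 2 * (loglik_selection - loglik_neutral)   # likelihood ratio statistic
p_value = chi2.sf(lrt, df=2)                    # chi-square test with d.f. = 2
print(f"LRT = {lrt:.2f}, p = {p_value:.4g}")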
Model selection for extended quasi-likelihood models in small samples
We develop a small sample criterion (AICc) for the selection of extended quasi-likelihood models. In contrast to the Akaike information criterion (AIC), AICc provides a more nearly unbiased estimator for the expected Kullback-Leibler information. Consequently, it often selects better models than AIC …
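For reference, the usual small-sample form of the criterion, for a model with $k$ estimated parameters fitted to $n$ observations, is shown below; the paper adapts this idea to extended quasi-likelihood, whereas the formula here is the standard likelihood-based version.

$$\mathrm{AIC} = -2\log L(\hat\theta) + 2k, \qquad \mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}.$$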
Measurement error in the explanatory variable of a binary regression: regression calibration and integrated conditional...
In epidemiology, one approach to investigating the dependence of disease risk on an explanatory variable in the presence of several confounding variables is by fitting a binary regression using a conditional likelihood, thus eliminating the nuisance parameters. When the explanatory variable is measured with error, the estimated regression coefficient is usually biased towards zero. Motivated by the need to correct for this bias in analyses that combine data from a number of case-control studies of lung cancer risk associated with exposure to residential radon, two approaches are investigated. Both employ the conditional distribution of the true explanatory variable given the measured one. The method of regression calibration uses the expected value of the true given measured variable as the covariate. The second approach integrates the conditional likelihood numerically by sampling from the distribution of the true given measured explanatory variable. The two approaches give very similar point estimates
Phylogenetics: Large Scale Maximum Likelihood Analyses - EEBedia
GARLI (Genetic Algorithm for Rapid Likelihood Inference) is a program written by Derrick Zwickl for estimating the phylogeny using maximum likelihood, and is currently one of the best programs to use if you have a large problem (i.e. many taxa). GARLI now (as of version 1.0) gives you considerable choice in substitution models: GTR[+I][+G] or codon models for nucleotides, plus several choices for amino acids. The genetic algorithm (or GA, for short) search strategy used by GARLI is like other heuristic search strategies in that it cannot guarantee that the optimal tree will be found. Thus, as with all heuristic searches, it is a good idea to run GARLI several times (using different pseudorandom number seeds) to see if there is any variation in the estimated tree. By default, GARLI will conduct two independent searches. If you have a multicore processor (newer Intel-based Macs and PCs are dual-core), GARLI can take advantage of this and use all of your CPUs simultaneously. Today you will run GARLI ...
Data fusion for electromagnetic and electrical resistive tomography based on maximum likelihood - Lund University
for electromagnetic (EM) and electrical resistive (ER) tomography. The statistical maximum likelihood criterion is closely linked to the additive Fisher information measure, and it facilitates an appropriate weighting of the measurement data which can be useful with multi-physics inverse problems. The Fisher information is particularly useful for inverse problems which can be linearized similar to the Born approximation. In this paper, a proper scalar product is defined for the measurements and a truncated Singular Value Decomposition (SVD) based algorithm is devised which combines the measurement data of the two imaging modalities in a way that is optimal in the sense of maximum likelihood ...
Time-series count data regression
The count data model studied in the paper extends the Poisson model by allowing for overdispersion and serial correlation. Alternative approaches to estimate nuisance parameters, required for the correction of the Poisson maximum likelihood covariance matrix estimator and for a quasi-likelihood estimator, are studied. The estimators are evaluated by finite sample Monte Carlo experimentation. It is found that the Poisson maximum likelihood estimator with corrected covariance matrix estimators provides reliable inferences for longer time series. Overdispersion test statistics are well-behaved, while conventional portmanteau statistics for white noise have sizes that are too large. Two empirical illustrations are included.
pr.probability - Equivalent method for maximum likelihood estimation of covariance parameters - MathOverflow
My goal is to estimate the parameters of a covariance matrix $\Omega$ by maximizing the following log-likelihood function: $$\log L(\vec\tau, \rho, \sigma \mid W, X) = -m\ln\left|\Omega\right| - \operatorname{Tr}(X^T\Omega(\vec\tau, \rho, \sigma)^{-1}X)$$ with $\Omega = (1-\rho)\operatorname{diag}(\vec\tau^2) + \rho\vec\tau\vec\tau^T + \sigma^2WW^T$, where $X$ and $W$ are known matrices in $\mathbb{R}^{m \times n}$ and $\mathbb{R}^{n \times k}$, respectively, and thus $\Omega$ is in $\mathbb{R}^{n\times n}$. To clarify my notation: by $\text{diag}(\vec\tau^2)$ I mean the matrix which along its diagonal has the elements of the vector $\vec\tau$ squared, and off-diagonal entries equal to 0 (apologies if this is not the most conventional notation). The problem is that the maximization of this likelihood has to be done numerically (at least, I have been unable to derive a closed-form expression for any of the parameters, but I would be very happy to be proven wrong), and each iteration of the ...
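One way to attack this numerically is to hand the negative of the stated objective to a generic optimizer. The sketch below makes several assumptions not in the question: the rows of X are treated as the m observations (so the trace term is written tr(X Ω⁻¹ Xᵀ)), ρ is restricted to [0, 1), and simple box bounds keep τ and σ positive. It is a starting point, not a claim about the best optimizer or parameterization for this problem.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n, k = 200, 5, 3
W = rng.normal(size=(n, k))

def build_omega(tau, rho, sigma):
    return (1 - rho) * np.diag(tau**2) + rho * np.outer(tau, tau) + sigma**2 * W @ W.T

# simulate data from known parameters so the fit can be sanity-checked
tau0, rho0, sigma0 = np.full(n, 2.0), 0.3, 0.5
X = rng.multivariate_normal(np.zeros(n), build_omega(tau0, rho0, sigma0), size=m)

def neg_objective(theta):
    tau, rho, sigma = theta[:n], theta[n], theta[n + 1]
    Omega = build_omega(tau, rho, sigma)
    sign, logdet = np.linalg.slogdet(Omega)
    if sign <= 0:
        return np.inf
    # negative of  -m*log|Omega| - tr(X Omega^{-1} X^T), rows of X as observations
    return m * logdet + np.trace(X @ np.linalg.solve(Omega, X.T))

theta_init = np.concatenate([np.ones(n), [0.1, 1.0]])
bounds = [(1e-6, None)] * n + [(0.0, 0.999), (1e-6, None)]
fit = minimize(neg_objective, theta_init, method="L-BFGS-B", bounds=bounds)
print(fit.x)   # estimated (tau_1..tau_n, rho, sigma)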
High-level primitives for recursive maximum likelihood estimation - Inria
This paper proposes a high-level language constituted of only a few primitives and macros for describing recursive maximum likelihood (ML) estimation algorithms. This language is applicable to estimation problems involving linear Gaussian models, or processes taking values in a finite set. The use of high-level primitives allows the development of highly modular ML estimation algorithms based on only a few numerical blocks. The primitives, which correspond to the combination of different measurements, the extraction of sufficient statistics and the conversion of the status of a variable from unknown to observed, or vice versa, are first defined for linear Gaussian relations specifying mixed deterministic/stochastic information about the system variables. These primitives are used to define other macros and are illustrated by considering the filtering and smoothing problems for linear descriptor systems. In a second stage, the primitives are extended to finite state processes and are used to implement the
We study parameter estimation in linear Gaussian covariance models, which are p-dimensional Gaussian models with linear constraints on the covariance matrix. Maximum likelihood estimation for this class of models leads to a non-convex optimization problem which typically has many local maxima. Using recent results on the asymptotic distribution of extreme eigenvalues of the Wishart distribution, we provide sufficient conditions for any hill climbing method to converge to the global maximum. Although we are primarily interested in the case in which n≫p, the proofs of our results utilize large sample asymptotic theory under the scheme n/p→γ>1. Remarkably, our numerical simulations indicate that our results remain valid for p as small as 2. An important consequence of this analysis is that, for sample sizes n≃14p, maximum likelihood estimation for linear Gaussian covariance models behaves as if it were a convex optimization problem.
Why does Maximum Likelihood estimation maximize probability density instead of probability - Cross Validated
$f(x_i, \theta)$ may not be a probability, it is a density function. In general statistics, we don't want to have to make special exceptions for continuous versus discrete random variables all the time, especially since there is a field of mathematics that gives us a unified approach yet allows us to be rigorous about such things. The rationale for maximizing the product of the densities of a sample, or the likelihood, is much like the rationale for an integral in calculus. Take height, it is a continuous value. And suppose I have some belief about a normal, maximum entropy Gaussian spread underlying this distribution in a population, and it is parametrized by a mean and standard deviation. My height is measured with error, and even if I knew it to an atomic level I could never actually find a probability associated with that single value. The probability that my height is between 5'10" and 5'11" is small, but between 5'10.25" and 5'10.75" is even smaller, and if I squeeze and squeeze this ...
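The point about intervals can be made numerically: the probability assigned to a height interval shrinks toward zero as the interval narrows, while the density at the point stays fixed, which is why the likelihood is built from densities. A small illustration (heights in inches; the mean and SD are arbitrary choices for the example):

from scipy.stats import norm

mu, sd = 70.0, 3.0          # assumed population mean and SD of height, in inches
x = 70.5                    # a measured height

for half_width in (0.5, 0.25, 0.05, 0.005):
    prob = norm.cdf(x + half_width, mu, sd) - norm.cdf(x - half_width, mu, sd)
    print(f"P(height within ±{half_width} of x) = {prob:.6f}")

print("density at x:", norm.pdf(x, mu, sd))   # unchanged as the interval shrinks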
Fast pseudolikelihood maximization for direct-coupling analysis of protein structure from many homologous amino-acid sequences
Direct-coupling analysis is a group of methods to harvest information about coevolving residues in a protein family by learning a generative model in an exponential family from data. In protein families of realistic size, this learning can only be done approximately, and there is a trade-off between inference precision and computational speed. We here show that an earlier introduced l2-regularized pseudolikelihood maximization method called plmDCA can be modified so as to be easily parallelizable, as well as inherently faster on a single processor, at negligible difference in accuracy. We test the new incarnation of the method on 143 protein family/structure pairs from the Protein Families database (PFAM), one of the larger tests of this class of algorithms to date.
Using Extreme Value Theory and Copulas to Evaluate Market Risk
performs the maximum likelihood estimation (MLE) in two steps. The inner step maximizes the log-likelihood with respect to the linear correlation matrix, given a fixed value for the degrees of freedom. That conditional maximization is placed within a 1-D maximization with respect to the degrees of freedom, thus maximizing the log-likelihood over all parameters. The function being maximized in this outer step is known as the profile log-likelihood for the degrees of freedom. In contrast, the following code segment uses an alternative which approximates the profile log-likelihood for the degrees of freedom parameter for large sample sizes. Although this method is often significantly faster than MLE, it should be used with caution because the estimates and confidence limits may not be accurate for small or moderate sample sizes. Specifically, the approximation is derived by differentiating the log-likelihood function with respect to the linear correlation matrix, assuming the degrees of freedom ...
Evidential-EM Algorithm Applied to Progressively Censored Observations
The Evidential-EM (E2M) algorithm is an effective approach for computing maximum likelihood estimates under finite mixture models, especially when there is uncertain information about the data. In this paper we present an extension of the E2M method in a particular case of incomplete data, where the loss of information is due to both mixture models and censored observations. The prior uncertain information is expressed by belief functions, while the pseudo-likelihood function is derived based on imprecise observations and prior knowledge. Then the E2M method is invoked to maximize the generalized likelihood function to obtain the optimal estimation of parameters. Numerical examples show that the proposed method can effectively integrate the uncertain prior information with the current imprecise knowledge conveyed by the observed data.
Likelihood methods to infer balancing selection under K-allele models /by Erkan Ozge Buzbas. :: Electronic Theses and...
A balanced pattern in the allele frequencies of polymorphic loci is a potential sign of selection, particularly of overdominance. Although this type of selection is of some interest in population genetics, there exist no likelihood-based approaches specifically tailored to make inference on selection intensity. To fill this gap, we present likelihood methods to estimate selection intensity under k-allele models with overdominance. The stationary distribution of allele frequencies under a variety of Wright-Fisher k-allele models with selection and parent-independent mutation is well studied. However, the statistical properties of maximum likelihood estimates of parameters under these models are not well understood. We show that under each of these models, there is a point in data space which carries the strongest possible signal for selection, yet, at this point, the likelihood is unbounded. This result remains valid even if all of the mutation parameters are assumed to be known. Therefore, ...
an improvable Rao-Blackwell improvement, inefficient maximum likelihood estimator, and unbiased generalized Bayes estimator |...
In my quest (!) for examples of location problems with no UMVU estimator, I came across a neat paper by Tal Galili [of R Bloggers fame!] and Isaac Meilijson presenting somewhat paradoxical properties of classical estimators in the case of a Uniform U((1-k)θ,(1+k)θ) distribution when 0 < k < 1 is known. For this model, the minimal sufficient statistic…
Margin of error, if all responses identical | Physics Forums - The Fusion of Science and Community
n/N is not naive. It is the maximum likelihood estimator. It is wrong to give more weight to the no-knowledge assumption of a uniform prior distribution than to the data-supported n/N estimate. No-knowledge Bayes techniques are not a good substitute for data. A Bayes prior distribution should be based on something applicable to the experiment being done (prior knowledge, a conservative assumption, etc.). It is better to directly use the data and a maximum likelihood estimator than to influence the results with a no-knowledge Bayes prior. You might also consider the technique of bootstrapping if you are not happy using the MLE directly. I don't know if the result will be different ...
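To make the contrast concrete: with N identical "yes" responses the MLE of the proportion is n/N = 1, while a uniform Beta(1,1) prior gives a posterior mean of (n+1)/(N+2). A tiny sketch with illustrative numbers:

N = 50          # respondents
n = 50          # all gave the same answer

p_mle = n / N                     # maximum likelihood estimate
p_bayes = (n + 1) / (N + 2)       # posterior mean under a uniform Beta(1,1) prior

print(p_mle, p_bayes)             # 1.0 vs about 0.981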
Optimal Forecasting of Noncausal Autoregressive Time Series - Munich Personal RePEc Archive
Andrews, B., R.A. Davis, and F.J. Breidt (2006). Maximum likelihood estimation for all-pass time series models. Journal of Multivariate Analysis 97, 1638-1659. Breidt, F.J., R.A. Davis, K.S. Lii, and M. Rosenblatt (1991). Maximum likelihood estimation for noncausal autoregressive processes. Journal of Multivariate Analysis 36, 175-198. Breidt, F.J., R.A. Davis, and A.A. Trindade (2001). Least absolute deviation estimation for all-pass time series models. The Annals of Statistics 29, 919-946. Breidt, F.J. and N.-J. Hsu (2005). Best mean square prediction for moving averages. Statistica Sinica 15, 427-446. Clements, M.P., and J. Smith (1999). A Monte Carlo study of the forecasting performance of empirical SETAR models. Journal of Applied Econometrics 14, 123-141. Diebold, F.X., T. Gunther, and A. Tay (1998). Evaluating density forecasts with applications to financial risk management. International Economic Review 39, 863-883. Diebold, F.X., and R.S. Mariano (1995). Comparing ...
Download Analysis of Genetic Association Studies by Gang Zheng PDF - ONESTOPSOCKS.COM Library
To find the MLE, the likelihood function is first obtained, which is given by L(θ; x1, …, xn) = ∏_{i=1}^{n} f(xi; θ), where f(x; θ) is the PDF or the distribution function. We often use L(θ) for the likelihood function. An estimate of θ, denoted by θ̂, is the MLE for θ if it maximizes the likelihood function over the parameter space Θ; for example, Θ = (0, 1) for the binomial probability p. Then the MLE θ̂ satisfies L(θ̂) = max_{θ∈Θ} L(θ). We may also write θ̂ = arg max_{θ∈Θ} L(θ) = arg max_{θ∈Θ} l(θ), where l(θ) = log L(θ) is the log-likelihood function. … 0 ≤ x ≤ n. Let Xi be the number of ith outcomes of a multinomial random variable with n trials. The mean and variance of Xi are given by E(Xi) = npi and Var(Xi) = npi(1 − pi). The covariance of two outcomes Xi and Xj is given by Cov(Xi, Xj) = −npipj for i ≠ j. Thus, Corr(Xi, Xj) = −√(pipj / ((1 − pi)(1 − pj))). The Normal Distribution: The normal distribution is the most commonly used distribution in statistics. Let X be a random variable that ...
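The arg-max definition above can be checked numerically in a simple case. A minimal sketch for an i.i.d. normal sample, where the MLE is known in closed form (sample mean, and the root mean squared deviation with divisor n):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(loc=5.0, scale=2.0, size=200)

def neg_loglik(theta):
    mu, log_sigma = theta
    return -norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

# closed-form MLEs for comparison
print(mu_hat, x.mean())
print(sigma_hat, np.sqrt(((x - x.mean())**2).mean()))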
CRAN - Package brglm2
Estimation and inference from generalized linear models based on various methods for bias reduction and maximum penalized likelihood with powers of the Jeffreys prior as penalty. The brglmFit fitting method can achieve reduction of estimation bias by solving either the mean bias-reducing adjusted score equations in Firth (1993) <doi:10.1093/biomet/80.1.27> and Kosmidis and Firth (2009) <doi:10.1093/biomet/asp055>, or the median bias-reduction adjusted score equations in Kenne et al. (2016) <arXiv:1604.04768>, or through the direct subtraction of an estimate of the bias of the maximum likelihood estimator from the maximum likelihood estimates as in Cordeiro and McCullagh (1991) <http://www.jstor.org/stable/2345592>. See Kosmidis et al. (2019) <doi:10.1007/s11222-019-09860-6> for more details. Estimation in all cases takes place via a quasi Fisher scoring algorithm, and S3 methods for the construction of confidence intervals for the reduced-bias estimates are provided. In the special case of ...
maximum likelihood estimator!
Tried to solve a problem on MLE of the type below: to find the MLE of [the equations were rendered as images in the original post and are not recoverable].
ab08028 - Page 2 - Annabel Beichman, UCLA
I am currently using fastsimcoal2 to model European and Asian demography. A relatively recent development in population genetics is the use of maximum likelihood approaches to estimate demographic parameters from the site frequency spectrum (SFS). The SFS gives the number of SNPs observed at given frequencies in a sample. The distribution of these frequencies is affected by the demographic history of the population. For example, population expansion leads to long external branches on coalescent trees and consequently to an abundance of low-frequency variants. Population contraction leads to long internal coalescent branches and a skew toward intermediate frequency variants. Programs such as fastsimcoal2 (Excoffier et al. 2013) have developed methods to estimate the likelihood of an observed SFS under a particular set of demographic parameters. fastsimcoal2 uses a maximum likelihood approach to estimate demographic parameters from the site frequency spectrum. The user provides a template file ...
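As a toy version of the idea: under the standard neutral model the expected unfolded SFS is proportional to 1/i for frequency class i, so an observed SFS can be scored with a multinomial log-likelihood against those expected proportions. This is only a sketch of the SFS-likelihood idea with made-up counts, not how fastsimcoal2 computes its expectations (it simulates them under the user's demographic model).

import numpy as np
from scipy.stats import multinomial

def neutral_sfs_proportions(n_samples):
    """Expected proportions of the unfolded SFS under the standard neutral model."""
    i = np.arange(1, n_samples)          # frequency classes 1 .. n-1
    p = 1.0 / i
    return p / p.sum()

observed_sfs = np.array([210, 95, 64, 47, 40, 31, 29, 24, 22, 38])  # illustrative counts
p_neutral = neutral_sfs_proportions(len(observed_sfs) + 1)

loglik = multinomial.logpmf(observed_sfs, n=observed_sfs.sum(), p=p_neutral)
print(f"log-likelihood of the observed SFS under neutrality: {loglik:.2f}")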
Courses | Department of Economics
Introduction to econometrics as it is applied in microeconomics and macroeconomics (modular). Topics related to the analysis of microeconomic data include maximum likelihood estimation and hypothesis testing; cross-section and panel data linear models and robust inference; models for discrete choice; truncation, censoring and sample selection models; and models for event counts and duration data. Topics related to the analysis of macroeconomic data include basic linear and nonlinear time series models; practical issues with likelihood-based inference; forecasting; structural identification based on timing restrictions and heteroskedasticity; and computational methods for hypothesis testing and model comparison. Prerequisite: Graduate student standing or permission of the instructor ...
March 28, 2017 [Binghamton University Department of Mathematical Sciences]
Inference concerning Gaussian graphical models involves pairwise conditional dependencies on Gaussian random variables. In such a situation, regularization of a certain form is often employed to treat an overparameterized model, imposing challenges to inference. The common practice of inference uses either a regularized model, as in inference after model selection, or bias reduction, known as de-biasing. While the first ignores statistical uncertainty inherent in regularization, the second reduces the bias introduced by regularization at the expense of increased variance. In this paper, we propose a constrained maximum likelihood method for inference, with a focus on alleviating the impact of regularization on inference. Particularly, for composite hypotheses, we unregularize hypothesized parameters whereas regularizing nuisance parameters through an $L_0$-constraint controlling their degree of sparseness. This approach is analogous to semiparametric likelihood inference in a high-dimensional ...
The empirical likelihood method introduced by Owen (1988, 1990) is a powerful nonparametric method for statistical inference. It has been one of the most researched methods in statistics in the last twenty-five years and remains a very active area of research today. There is now a large body of literature on the empirical likelihood method which covers its applications in many areas of statistics (Owen, 2001). One important problem affecting the empirical likelihood method is its poor accuracy, especially for small-sample and/or high-dimensional applications. The poor accuracy can be alleviated by using high-order empirical likelihood methods such as the Bartlett-corrected empirical likelihood, but it cannot be completely resolved by high-order asymptotic methods alone. Since the work of Tsao (2004), the impact of the convex hull constraint in the formulation of the empirical likelihood on finite-sample accuracy has been better understood, and methods have been developed to break this constraint in
CisASE: A likelihood-based method for detecting putative cis-regulated allele-specific expression in RNA sequencing data
Liu, Zhi; Gui, Tuantuan; Wang, Zhen; Li, Hong; Fu, Yunhe; Dong, Xiao; Li, Yixue (2016). CisASE: A likelihood-based method for detecting putative cis-regulated allele-specific expression in RNA sequencing data. Motivation: Allele-specific expression (ASE) is a useful way to identify cis-acting regulatory variation, which provides opportunities to develop new therapeutic strategies that activate beneficial alleles or silence mutated alleles at specific loci. However, multiple problems hinder the identification of ASE in next-generation sequencing (NGS) data. Results: We developed cisASE, a likelihood-based method for detecting ASE on single nucleotide variant (SNV), exon and gene levels from sequencing data without requiring phasing or parental information. cisASE uses matched DNA-seq data to control ...
Likelihood inference in non-linear term structure models: the importance of the lower bound | Bank of England
This paper shows how to use adaptive particle filtering and Markov chain Monte Carlo methods to estimate quadratic term structure models (QTSMs) by likelihood inference. The procedure is applied to a quadratic model for the United States during the recent financial crisis.
Generalized linear mixed model analysis using quasi-likelihood - Memorial University Research Repository
When investigating the relationship between two or more variables, regression is a commonly used method of analysis. Linear regression, in particular, is used when the expected value of the response is a linear function of the explanatory variables. If it is not a linear function, generalized linear regression is used. Furthermore, when the data is not independent, mixed models are used. There are various ways to analyze linear mixed models and generalized linear mixed models. In this thesis, we focus on the moment method of analysis, simulated approaches and the quasi-likelihood method of analysis. Analysis is conducted on simulated data for a linear mixed model, simulated data for a generalized linear mixed model and on a real data set. The real data set is a clustered data set of the number of times a person visits a physician in a given year.
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm
OBJECTIVE We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. METHODS We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. RESULTS Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our ...
Functional Data Analysis to Guide a Conditional Likelihood Regression by Juana Maribel Herrera Hernandez
In this study we focus on exploring whether social characteristics modify the relationship between air pollution and hospitalizations due to asthma or chronic obstructive pulmonary disease (COPD) in El Paso, TX. A case-crossover design with conditional regression analysis was used; here the controls and the case are the same subject at different times, which has the advantage of removing confounding by subject characteristics that do not change over time. Social characteristics are included in the models as interactions with the pollutants; the variables included are age, sex, ethnicity, and insurance status as an indicator of socio-economic status. The pollutant lags were chosen using the historical functional linear model to estimate the association between the response and the pollutant at all lags simultaneously. The regression coefficient function was calculated by P-splines with the smoothing parameter chosen by a modified ridge trace method. We included single-pollutant analyses for NO2 and PM2.5 for both asthma
Regime-switching Stochastic Volatility Model: Estimation and Calibration to VIX options
We develop and implement a method for maximum likelihood estimation of a regime-switching stochastic volatility model. Our model uses a continuous time stochastic process for the stock dynamics with the instantaneous variance driven by a Cox-Ingersoll-Ross (CIR) process and each parameter modulated by a hidden Markov chain. We propose an extension of the EM algorithm through the Baum-Welch implementation to estimate our model and filter the hidden state of the Markov chain while using the VIX index to invert the latent volatility state. Using Monte Carlo simulations, we test the convergence of our algorithm and compare it with an approximate likelihood procedure where the volatility state is replaced by the VIX index. We found that our method is more accurate than the approximate procedure. Then, we apply Fourier methods to derive a semi-analytical expression of S&P 500 and VIX option prices, which we calibrate to market data. We show that the model is sufficiently rich to encapsulate important features
Svein-Erik Hamran - Department of Technology Systems
Berger, Tor; Tollisen, Steffen & Hamran, Svein Erik (2010). ISAR imaging of small aircraft. In ISAR imaging, the relative motion between the target and the radar must be known precisely to produce focused radar images. The translational motion of the target must be compensated for in order to use only the rotational motion around a fixed centre point for the imaging of the target. Different methods for range alignment can be used, and the success depends much on the nature of the data. If strong reflectors are dominant throughout the observation period, a prominent point processing (PPP) method may produce good results. If not, other methods may apply. The advantage of using PPP is simplicity in concept and its processing speed. Another method, in general slower than PPP, is based on maximum likelihood estimation of translational motion. A maximum likelihood algorithm for translational motion estimation (TME) is described. Data from the German research radar TIRA at FGAN-FHR is ...
Inference on variance components near boundary in linear mixed effect models
Sakamoto, Wataru (2019). In making inference on variance components in linear mixed effect models, variance component parameters may be located on some boundary of a constrained parameter space, and hence the usual asymptotic theory on parameter estimation, test statistics, and information criteria may not hold. We illustrate such boundary issues on variance components, and introduce some methodologies and properties along with the literature. The maximum likelihood estimator of the variance parameter vector near some boundary is asymptotically distributed as a projection of a normal random vector onto the boundary. The null distribution of the likelihood ratio test statistic is complicated, and hence it has been studied from both asymptotic and numerical aspects. Moreover, a boundary issue in model selection using information criteria is also essential and is closely related to ...
Likelihood-ratio test - Wikipedia
the likelihood ratio is therefore a statistic. The likelihood ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true).. The numerator corresponds to the likelihood of an observed outcome under the null hypothesis. The denominator corresponds to the maximum likelihood of an observed outcome varying parameters over the whole parameter space. The numerator of this ratio is less than the denominator. The likelihood ratio hence is between 0 and 1. Low values of the likelihood ratio mean that the observed result was less likely to occur under the null hypothesis as compared to the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as the alternative, and the null hypothesis cannot be ...
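In symbols, the statistic described above (numerator constrained to the null hypothesis, denominator maximized over the whole parameter space) is usually written as

$$\Lambda(x) = \frac{\sup_{\theta \in \Theta_0} L(\theta \mid x)}{\sup_{\theta \in \Theta} L(\theta \mid x)}, \qquad 0 \le \Lambda(x) \le 1,$$

and the test rejects the null hypothesis when $\Lambda(x)$ falls below a cutoff chosen to give the desired significance level.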
Phylogenetics: Morphology and Partitioning in MrBayes - EEBedia
In your first run, you wrote down the harmonic mean of the likelihoods sampled during the MCMC analysis. This value is an estimate of the (log of the) marginal likelihood (the denominator on the right side of Bayes' Rule). It turns out that the harmonic mean estimate is always an overestimate of the quantity it is supposed to be estimating, and a variety of better ways of estimating marginal likelihoods have been invented recently. MrBayes provides one of these better methods, known as the stepping-stone method, which you have heard (will hear) about in lecture. Why estimate the marginal likelihood, you ask? The marginal likelihood turns out to be one of the primary ways to compare models in Bayesian statistics. In the Bayesian framework, the effects of the prior have to be included because model performance is affected by the choice of prior distributions: if you choose a prior that presents an opinion very different from the opinion provided by your data, the resulting tug-of-war between prior ...
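For concreteness, the harmonic mean estimator referred to above can be computed directly from the log-likelihood values sampled during the MCMC run. The sketch below uses a numerically stable log-sum-exp and fabricated sample values purely for illustration, and, as noted above, this estimator is known to overestimate the marginal likelihood.

import numpy as np
from scipy.special import logsumexp

def log_harmonic_mean_estimate(log_likelihoods):
    """Log marginal likelihood via the harmonic mean of sampled likelihoods:
    log m_hat = -( logsumexp(-loglik) - log N )."""
    loglik = np.asarray(log_likelihoods)
    return -(logsumexp(-loglik) - np.log(loglik.size))

# fabricated posterior-sample log-likelihoods, only to show the calculation
rng = np.random.default_rng(3)
sampled_logliks = -5000 + rng.gamma(shape=2.0, scale=3.0, size=10_000)
print(log_harmonic_mean_estimate(sampled_logliks))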
PLOS ONE: RAxML and FastTree: Comparing Two Methods for Large-Scale Maximum Likelihood Phylogeny Estimation
Statistical methods for phylogeny estimation, especially maximum likelihood (ML), offer high accuracy with excellent theoretical properties. However, RAxML, the current leading method for large-scale ML estimation, can require weeks or longer when used on datasets with thousands of molecular sequences. Faster methods for ML estimation, among them FastTree, have also been developed, but their relative performance to RAxML is not yet fully understood. In this study, we explore the performance with respect to ML score, running time, and topological accuracy, of FastTree and RAxML on thousands of alignments (based on both simulated and biological nucleotide datasets) with up to 27,634 sequences. We find that when RAxML and FastTree are constrained to the same running time, FastTree produces topologically much more accurate trees in almost all cases. We also find that when RAxML is allowed to run to completion, it provides an advantage over FastTree in terms of the ML score, but does not produce
EM Algorithm for Data with Missing Values :: SAS/STAT(R) 13.2 User's Guide
is the associated covariance matrix. A sample covariance matrix is computed at each step of the EM algorithm. If the covariance matrix is singular, the linearly dependent variables for the observed data are excluded from the likelihood function. That is, for each observation with linear dependency among its observed variables, the dependent variables are excluded from the likelihood function. Note that this can result in an unexpected change in the likelihood between iterations prior to the final convergence. See Schafer (1997, pp. 163-181) for a detailed description of the EM algorithm for multivariate normal data. By default, PROC MI uses the means and standard deviations from available cases as the initial estimates for the EM algorithm. The correlations are set to zero. These estimates provide a good starting value with positive definite covariance matrix. For a discussion of suggested starting values for the algorithm, see Schafer (1997, p. 169). You can specify the convergence criterion ...
Dahlia Nadkarni
I belong to the Pattern Theory group at Brown. My current research focuses on analyzing neuron spike train data, modeled as a multivariate binary time series, to infer an underlying neuron network using conditional inference techniques. Conditional inference focuses on the parameters of interest, while being robust to various background effects that influence neuron firing rates. I also compare the performance of conditional likelihood estimation to other classical approaches such as MLE and Bayesian estimation using the full likelihood, as well as non-parametric approaches using Dirichlet process priors. I have also worked on data analysis and modeling projects in the fields of civic data, epidemiology, genetics & evolution, and petrophysics. Further details about my recent projects can be found here ...
AP Statistics Curriculum 2007 Bayesian Prelim - Socr
the posterior density of the population parameter μ. For this we utilize the likelihood function of our data given our parameter, f(x | μ), and, importantly, a density f(μ) that describes our prior belief in μ. Bayes' rule is associated with probability statements that relate conditional and marginal properties of two random events. These statements are often written in the form "the probability of A, given B", denoted P(A|B) = P(B|A)*P(A)/P(B), where P(B) ≠ 0. P(A) is often known as the prior probability (or the marginal probability); P(A|B) is known as the posterior probability (conditional probability); P(B|A) is the conditional probability of B given A (also known as the likelihood function); and P(B) is the prior on B and acts as the normalizing constant. In the Bayesian framework, the posterior probability is equal to the prior belief on A times the likelihood function given by ...
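A small conjugate example of the prior-times-likelihood update described above, using a Beta prior for a binomial proportion; the numbers are illustrative and not from the wiki page:

from scipy.stats import beta

# prior belief about a proportion mu: Beta(a, b)
a, b = 2.0, 2.0
# observed data: k successes in n trials (binomial likelihood)
n, k = 20, 14

# posterior is again Beta, with updated parameters
a_post, b_post = a + k, b + (n - k)

print("posterior mean:", a_post / (a_post + b_post))
print("95% credible interval:", beta.ppf([0.025, 0.975], a_post, b_post))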
[PDF] The Logistic-Exponential Survival Distribution | Semantic Scholar
For various parameter combinations, the logistic-exponential survival distribution belongs to four common classes of survival distributions: increasing failure rate, decreasing failure rate, bathtub-shaped failure rate, and upside-down bathtub-shaped failure rate. Graphical comparison of this new distribution with other common survival distributions is seen in a plot of the skewness versus the coefficient of variation. The distribution can be used as a survival model or as a device to determine the distribution class from which a particular data set is drawn. As the three-parameter version is less mathematically tractable, our major results concern the two-parameter version. Boundaries for the maximum likelihood estimators of the parameters are derived in this article. Also, a fixed-point method to find the maximum likelihood estimators for complete and censored data sets has been developed. The two-parameter and the three-parameter versions of the logistic-exponential distribution are applied to two
A cautious revisitation of early ant evolution - MYRMECOS
Abstract: Martialinae are pale, eyeless and probably hypogaeic predatory ants. Morphological character sets suggest a close relationship to the ant subfamily Leptanillinae. Recent analyses based on molecular sequence data suggest that Martialinae are the sister group to all extant ants. However, across molecular studies and different reconstruction methods, the position of Martialinae remains ambiguous. While this sister-group relationship was well supported by Bayesian partitioned analyses, maximum likelihood approaches could not unequivocally resolve the position of Martialinae. By re-analysing a previously published molecular data set, we show that the maximum likelihood approach is highly appropriate to resolve deep ant relationships, especially between Leptanillinae, Martialinae and the remaining ant subfamilies. Based on improved alignments, alignment masking, and tree reconstructions with a sufficient number of bootstrap replicates, our results strongly reject a placement of ...
r - Create bins for lognormal data for cluster analysis - Cross Validated
First off, there are two broad types of latent class models: supervised and unsupervised. The unsupervised stream is the earliest version and dates back to the post-WWII world and the work of Paul Lazarsfeld, a Columbia sociologist. Lazarsfeld's tradition was later picked up by more recent sociologists like Leo Goodman and the late Clifford Clogg. Clogg formalized his views with an LC software tool called MLLSA (pronounced like "melissa"). In unsupervised LC, the inputs are categorical and the replications are based on the cross-classification of the levels taken combinatorially across all factors. Supervised LC, on the other hand, typically takes the form of a finite mixture model and relies on maximum likelihood estimation, which is invariant to scaling. This workstream began with Heckman, saw implementation in sociology with grade-of-membership models and in marketing with late-80s papers by Bill Dillon (LADI, latent discriminant models) and Wagner Kamakura. Replications are typically at ...
Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes | BMC Medical...
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC.
Looking for help: assembling a list of neuroscience methods intro papers - Reach & Touch Lab
Myung, I. J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47(1), 90-100. http://doi.org/10.1016/S0022-2496(02)00028-7. Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177-190. http://doi.org/10.1016/j.jneumeth.2007.03.024. Pernet, C. R., Chauveau, N., Gaspar, C., & Rousselet, G. A. (2011). LIMO EEG: A Toolbox for Hierarchical LInear MOdeling of ElectroEncephaloGraphic Data. Computational Intelligence and Neuroscience, 2011, 1-11. http://doi.org/10.1155/2011/831409. Nakagawa, S., & Hauber, M. E. (2011). Great challenges with few subjects: Statistical strategies for neuroscientists. Neuroscience & Biobehavioral Reviews, 35(3), 462-473. http://doi.org/10.1016/j.neubiorev.2010.06.003. Cumming, G., Fidler, F., & Vaux, D. L. (2007). Error bars in experimental biology. The Journal of Cell Biology, 177(1), 7. Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E.-J. (2011). ...
CRAN - Package GB2
Package GB2 explores the Generalized Beta distribution of the second kind. Density, cumulative distribution function, quantiles and moments of the distributions are given. Functions for the full log-likelihood, the profile log-likelihood and the scores are provided. Formulas for various indicators of inequality and poverty under the GB2 are implemented. The GB2 is fitted by the methods of maximum pseudo-likelihood estimation using the full and profile log-likelihood, and non-linear least squares estimation of the model parameters. Various plots for the visualization and analysis of the results are provided. Variance estimation of the parameters is provided for the method of maximum pseudo-likelihood estimation. A mixture distribution based on the compounding property of the GB2 is presented (denoted as compound in the documentation). This mixture distribution is based on the discretization of the distribution of the underlying random scale parameter. The discretization can be left or right ...
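As a rough companion to the package description, here is a hedged Python sketch of the GB2 log-density and a pseudo-maximum-likelihood fit by direct numerical optimization; the parameterization (a, b, p, q) follows the usual GB2 convention, and none of this is the package's own code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

# Hedged sketch of the GB2 log-density and a (possibly weighted) pseudo-ML fit,
# in the spirit of fitting by the full log-likelihood.  Parameter names (a, b, p, q)
# follow the usual GB2 convention; the toy data are illustrative.

def gb2_logpdf(y, a, b, p, q):
    z = np.log(y) - np.log(b)
    return (np.log(a) + (a * p - 1) * np.log(y) - a * p * np.log(b)
            - betaln(p, q) - (p + q) * np.logaddexp(0.0, a * z))

def fit_gb2(y, w=None):
    """Maximize the (weighted) pseudo log-likelihood over log-parameters."""
    w = np.ones_like(y) if w is None else w
    nll = lambda th: -np.sum(w * gb2_logpdf(y, *np.exp(th)))
    res = minimize(nll, x0=np.zeros(4), method="Nelder-Mead",
                   options={"maxiter": 5000})
    return np.exp(res.x)                       # (a, b, p, q)

y = np.random.default_rng(2).lognormal(mean=9.0, sigma=0.7, size=500)  # toy incomes
print(fit_gb2(y))
```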
Scale and Dispersion Parameters :: SAS/STAT(R) 13.1 User's Guide
Note that for normal linear models, PROC GLIMMIX by default estimates the parameters by restricted maximum likelihood, whereas PROC GENMOD estimates the parameters by maximum likelihood. As a consequence, the scale parameter in the Parameter Estimates table of the GLIMMIX procedure coincides for these models with the mean-squared error estimate of the GLM or REG procedures. To obtain maximum likelihood estimates in a normal linear model in the GLIMMIX procedure, specify the NOREML option in the PROC GLIMMIX statement. ...
Feasible estimation of generalized linear mixed models (GLMM) with weak dependency between groups
This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models with the random effects being correlated between groups. The core idea is to deal with the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time. ...
DNA Barcoding: Marine Klee-diagrams (3)
In an attempt to confirm these findings across a wide range of fish species and to further test the capabilities of the indicator vector method, I conducted a parallel analysis of 6 representative mtDNA genes (ATPase 6, Cytochrome b, Cytochrome Oxidase I, II, III, NADH dehydrogenase I), imposing an identical order of sequences on all data subsets. They were organized based on the topology of a Maximum Likelihood tree generated in RAxML with a concatenated dataset of all mtDNA sequences obtained. A partitioned maximum likelihood analysis was performed with the GTRMIX option. The resulting topology was used to re-order all single gene data sets. ...
A likelihood ratio test for species membership based on DNA sequence data | Philosophical Transactions of the Royal Society B:...
DNA barcoding as an approach for species identification is rapidly increasing in popularity. However, it remains unclear which statistical procedures should accompany the technique to provide a measure of uncertainty. Here we describe a likelihood ratio test which can be used to test if a sampled sequence is a member of an a priori specified species. We investigate the performance of the test using coalescence simulations, as well as using the real data from butterflies and frogs representing two kinds of challenge for DNA barcoding: extremely low and extremely high levels of sequence variability. ...
Adjusted Empirical Likelihood for Varying Coefficient Partially Linear Models with Censored Data
Journal of Mathematics is a peer-reviewed, Open Access journal that publishes original research articles as well as review articles on all aspects of both pure and applied mathematics.
Statistical tests to identify appropriate types of nucleotide sequence recoding in molecular phylogenetics | BMC Bioinformatics...
Under a Markov model of evolution, recoding, or lumping, of the four nucleotides into fewer groups may permit analysis under simpler conditions but may unfortunately yield misleading results unless the evolutionary process of the recoded groups remains Markovian. If a Markov process is lumpable, then the evolutionary process of the recoded groups is Markovian. We consider stationary, reversible, and homogeneous Markov processes on two taxa and compare three tests for lumpability: one using an ad hoc test statistic, which is based on an index that is evaluated using a bootstrap approximation of its distribution; one that is based on a test proposed specifically for Markov chains; and one using a likelihood-ratio test. We show that the likelihood-ratio test is more powerful than the index test, which is more powerful than that based on the Markov chain test statistic. We also show that for stationary processes on binary trees with more than two taxa, the tests can be applied to all pairs. Finally, we show
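As a complement to the statistical tests described above, the sketch below checks the classical (strong) lumpability condition of Kemeny and Snell directly on a known transition matrix: a chain is lumpable with respect to a partition when the probability of moving into any block is the same from every state within a given block. This structural check is not one of the three tests compared in the article, which work from observed sequence data.

```python
import numpy as np

# Hedged sketch: a direct check of strong lumpability (Kemeny & Snell) for a
# transition matrix P and a partition of the states into blocks.  All names and
# the example matrix are illustrative.

def is_lumpable(P, blocks, tol=1e-10):
    """blocks: list of lists of state indices forming the partition."""
    for B in blocks:                        # target block
        block_prob = P[:, B].sum(axis=1)    # P(next state in B | current state)
        for A in blocks:                    # source block
            if np.ptp(block_prob[A]) > tol: # probabilities must agree within A
                return False
    return True

# Example: recode 4 nucleotide states {A, C, G, T} into purines {0, 2} and pyrimidines {1, 3}.
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])
print(is_lumpable(P, [[0, 2], [1, 3]]))     # True for this symmetric example
```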
Comparison of statistical methods for analysis of clustered binary observations
Heo, Moonseong; Leon, Andrew C. (2005). When correlated observations are obtained in a randomized controlled trial, the assumption of independence among observations within cluster likely will not hold because the observations share the same cluster (e.g. clinic, physician, or subject). Further, the outcome measurements of interest are often binary. The objective of this paper is to compare the performance of four statistical methods for analysis of clustered binary observations: namely (1) full likelihood method; (2) penalized quasi-likelihood method; (3) generalized estimating equation method; (4) fixed-effects logistic regression method. The first three methods take correlations into account in inferential processes whereas the last method does not. Type I error rate, power, bias, and ...
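A hedged sketch of two of the four approaches, the GEE with an exchangeable working correlation and the naive fixed-effects logistic regression that ignores clustering, is given below using statsmodels; the simulated data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hedged sketch of two of the four approaches compared in the article:
# (3) a GEE with exchangeable working correlation, which accounts for within-cluster
# correlation, and (4) ordinary (fixed-effects) logistic regression, which ignores it.
# The simulated data and column names are hypothetical.

rng = np.random.default_rng(3)
n_clusters, m = 30, 20
cluster = np.repeat(np.arange(n_clusters), m)
trt = np.repeat(rng.integers(0, 2, n_clusters), m)          # cluster-level treatment
u = np.repeat(rng.normal(0.0, 0.8, n_clusters), m)          # shared cluster effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 0.7 * trt + u))))
df = pd.DataFrame({"y": y, "trt": trt, "cluster": cluster})

gee = sm.GEE.from_formula("y ~ trt", groups="cluster", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
logit = smf.logit("y ~ trt", data=df).fit(disp=0)

# Point estimates are similar, but the naive standard error is typically too small.
print(gee.params["trt"], gee.bse["trt"])
print(logit.params["trt"], logit.bse["trt"])
```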
Statistical aspects of genetic mapping in autopolyploids, by M. I. Ripol, G. A. Churchill et al.
Many plant species of agricultural importance are polyploid, having more than two copies of each chromosome per cell. In this paper, we describe statistical methods for genetic map construction in autopolyploid species with particular reference to the use of molecular markers. The first step is to determine the dosage of each DNA fragment (electrophoretic band) from its segregation ratio. Fragments present in a single dose can be used to construct framework maps for individual chromosomes. Fragments present in multiple doses can often be used to link the single chromosome maps into homologous groups and provide additional ordering information. Marker phenotype probabilities were calculated for pairs of markers arranged in different configurations among the homologous chromosomes. These probabilities were used to compute a maximum likelihood estimator of the recombination fraction between pairs of markers. A likelihood ratio test for linkage of multidose markers was derived. The information
Sensors | Free Full-Text | Spectral and Spatial-Based Classification for Broad-Scale Land Cover Mapping Based on Logistic...
Improvement of satellite sensor characteristics motivates the development of new techniques for satellite image classification. Spatial information seems to be critical in classification processes, especially for heterogeneous and complex landscapes such as those observed in the Mediterranean basin. In our study, a spectral classification method for LANDSAT-5 TM imagery that uses several binomial logistic regression models was developed, evaluated and compared to the familiar parametric maximum likelihood algorithm. The classification approach based on logistic regression modelling was extended to a contextual one by using autocovariates to consider spatial dependencies of every pixel with its neighbours. Finally, the maximum likelihood algorithm was upgraded to a contextual one by considering typicality, a measure which indicates the strength of class membership. The use of logistic regression for broad-scale land cover classification presented higher overall accuracy (75.61%), although not statistically
Likelihood ratio test of model specification - MATLAB lratiotest
This MATLAB function returns a logical value (h) with the rejection decision from conducting a likelihood ratio test of model specification.
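The generic test behind lratiotest is simple enough to sketch directly: twice the gap between the unrestricted and restricted maximized log-likelihoods is referred to a chi-squared distribution whose degrees of freedom equal the number of restrictions. The Python sketch below mirrors that logic with illustrative log-likelihood values; it is not the MATLAB implementation.

```python
from scipy.stats import chi2

# Minimal sketch of the generic likelihood ratio test: reject the restricted model
# when 2*(logL_unrestricted - logL_restricted) exceeds the chi-squared critical value
# with dof equal to the number of restrictions.  The numbers below are illustrative.

def lr_test(loglik_unrestricted, loglik_restricted, dof, alpha=0.05):
    stat = 2.0 * (loglik_unrestricted - loglik_restricted)
    pvalue = chi2.sf(stat, dof)
    return pvalue < alpha, pvalue, stat        # (reject?, p-value, statistic)

# e.g. a larger model vs. a nested restriction with one constrained parameter:
print(lr_test(-1305.4, -1307.9, dof=1))        # illustrative log-likelihood values
```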
Question: What Does A High Positive Likelihood Ratio Mean? - virtual planet ice land
Log-likelihood values cannot be used alone as an index of fit because they are a function of sample size but can be used to compare the fit of different coefficients. Because you want to maximize the log-likelihood, the higher value is better. For example, a log-likelihood value of -3 is better than -7. ...
Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants [Source Code] -...
Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber and extended the R package robustbase with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation
Specifications of Models for Cross-Classified Counts: Comparisons of the Log-Linear Models and Marginal Models Perspectives
Becker, Mark P.; Perkins, Susan; Yang, Ilsoon (1998). Log-linear models are useful for analyzing cross-classifications of counts arising in sociology, but it has been argued that in some cases, an alternative approach for formulating models - one based on simultaneously modeling univariate marginal logits and marginal associations - can lead to models that are more directly relevant for addressing the kinds of questions arising in those cases. In this article, the authors explore some of the similarities and differences between the log-linear models approach to modeling categorical data and a marginal modeling approach. It has been noted in past literature that the model of statistical independence is conveniently represented within both approaches to specifying models for cross-classifications of counts. The ...
Dr. Pavlo Baturin Profile
Material decomposition in absorption-based X-ray CT imaging suffers certain inefficiencies when differentiating among soft tissue materials. To address this problem, decomposition techniques turn to spectral CT, which has gained popularity over the last few years. Although proven to be more effective, such techniques are primarily limited to the identification of contrast agents and soft and bone-like materials. In this work, we introduce a novel conditional likelihood, material-decomposition method capable of identifying any type of material objects scanned by spectral CT. The method takes advantage of the statistical independence of spectral data to assign likelihood values to each of the materials on a pixel-by-pixel basis. It results in likelihood images for each material, which can be further processed by setting certain conditions or thresholds, to yield a final material-diagnostic image. The method can also utilize phase-contrast CT (PCI) data, where measured absorption and phase-shift ...
ESE design project: Studying the progression of diabetes - Home
For our ESE 499 senior design project, we studied the progression of diabetes primarily based on BMI but also age. With a dataset of 64,496 individuals in hand, we formulated a hazard function model for the progression of an individual from well to diabetes to death. We used Maximum Likelihood Estimation to derive log-likelihood optimal model parameters. We then used these parameters to solve for transition probabilities based on BMI and diabetes status. We verify these results by changing exogenous variables and starting conditions. We also formulate a moving average model to calculate the risk multiplier based on BMI and age. Our analysis shows that individuals with higher BMIs are more likely to get diabetes and are more likely to die from the disease once developed. ...
Machine Learning and Statistics Seminar | The Faculty of Mathematics and Computer Science
Covariance matrix estimation is essential in many areas of modern Statistics and Machine Learning including Graphical Models, Classification/Discriminant Analysis, Principal Component Analysis, and many others. Classical statistics suggests using the Sample Covariance Matrix (SCM), which is the Maximum Likelihood Estimator (MLE) in Gaussian populations. Real-world data, however, usually exhibits heavy-tailed behavior and/or contains outliers, making the SCM inefficient or even useless. This problem and many similar ones gave rise to the Robust Statistics field in the early 1960s, where the main goal was to develop estimators stable under reasonable deviations from the basic Gaussian assumptions. One of the most prominent robust covariance matrix estimators was introduced and thoroughly studied by D. Tyler in the mid-1980s. This important representative of the family of M-estimators can be defined as an MLE of a certain population. The problem of robust covariance estimation becomes even more involved in ...
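Tyler's estimator mentioned above is usually computed by a simple fixed-point iteration, sketched below; the data are a toy heavy-tailed sample and the trace normalization is one common convention (the estimator is only defined up to scale), not the only possible choice.

```python
import numpy as np

# Hedged sketch of Tyler's M-estimator of scatter via its standard fixed-point
# iteration: Sigma <- (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i), followed by
# a trace normalization.  Data are assumed centered; names and data are illustrative.

def tyler_scatter(X, n_iter=100, tol=1e-8):
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        d = np.einsum("ij,jk,ik->i", X, inv, X)         # x_i^T Sigma^{-1} x_i
        new = (p / n) * (X.T * (1.0 / d)) @ X           # weighted outer products
        new *= p / np.trace(new)                        # fix the scale
        if np.linalg.norm(new - sigma, ord="fro") < tol:
            return new
        sigma = new
    return sigma

X = np.random.default_rng(4).standard_t(df=3, size=(500, 4))  # heavy-tailed toy sample
print(tyler_scatter(X).round(2))
```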
Fuzzy Logic
The degree of truth is in fact a frequency probability -- that of a competent speaker of the language using the fuzzy descriptor in question to describe the candidate element which may be in question, of the fuzzy set which is in some sense induced by the fuzzy descriptor. Seen in this way, the membership function is akin to a likelihood function -- a semantic likelihood function in this case -- induced by a (measurable, frequentist) uncertainty in the use of fuzzy terms. In the same way that likelihood -- which varies over parameter space, as distinct from the sample space (from which come the data) to which it is related -- is distinct from, though related to, probability, the membership function over a universe of discourse is not a probability distribution, but it is related to the sample space of yes/no responses that would be obtained when asking any speaker whether he/she would use a fuzzy descriptor (e.g. tall) to describe any particular candidate element (e.g. height value) for a ...
Assessment Materials in Econometrics | The Economics Network
Fifteen detailed lecture handouts in PDF are archived here along with 11 exercise sheets with answers. The lecture topics are: Sets and Boolean Algebra, The Binomial Distribution, The Multinomial Distribution, The Poisson Distribution, The Binomial Moment Generating Function, The Normal Moment Generating Function, Characteristic Functions and the Uncertainty Principle, The Bivariate Normal Distribution, The Multivariate Normal Distribution, Conditional Expectations and Linear Regression, Sampling Distributions, Maximum Likelihood Estimation, Regression estimation via Maximum Likelihood, Cochran's Theorem, and Stochastic Convergence. ...
Phylogeny | Migale
PhyML is phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm to perform Nearest Neighbor Interchanges (NNIs), in order to improve a reasonable starting tree topology. Since the original publication (Guindon and Gascuel 2003), PhyML has been widely used (>1,250 citations in ISI Web of Science), due to its simplicity and a fair accuracy/speed compromise. In the meantime, research around PhyML has continued. We designed an efficient algorithm to search the tree space using Subtree Pruning and Regrafting (SPR) topological moves (Hordijk and Gascuel 2005), and proposed a fast branch test based on an approximate likelihood ratio test (Anisimova and Gascuel 2006). However, these novelties were not included in the official version of PhyML, and we found that improvements were still needed in order to make them effective in some practical cases. PhyML 3.0 achieves this task. It implements new algorithms to search the space of tree topologies with ...
Estimation and prediction for spatial generalized linear mixed models using high order Laplace approximation
Evangelou, Evangelos; Zhu, Zhengyuan; Smith, Richard L. (2011). Estimation and prediction in generalized linear mixed models are often hampered by intractable high dimensional integrals. This paper provides a framework to solve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects is increasing at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation, which is maximized using a quasi-Newton algorithm. Finally, we define the second order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our ...
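The first-order Laplace approximation that such expansions build on can be sketched in a few lines: an intractable integral of exp{h(u)} is replaced by exp{h(û)}·sqrt(2π/(−h''(û))) evaluated at the mode û. The example below applies it to a one-dimensional random-effect integral of Poisson-log-normal type with illustrative values; the paper's higher-order corrections and the multivariate case are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

# Hedged sketch of the first-order Laplace approximation:
#   integral of exp(h(u)) du  ~  exp(h(u_hat)) * sqrt(2*pi / (-h''(u_hat))),
# where u_hat maximizes h.  Names and the example integrand are illustrative.

def laplace_1d(h, d2h, bracket=(-10, 10)):
    opt = minimize_scalar(lambda u: -h(u), bounds=bracket, method="bounded")
    u_hat = opt.x
    return np.exp(h(u_hat)) * np.sqrt(2 * np.pi / -d2h(u_hat))

# Example: one likelihood contribution of a Poisson model with a normal random effect,
# h(u) = y*u - exp(u) - u**2 / (2*tau2), for y = 3 and tau2 = 1 (illustrative values).
y, tau2 = 3, 1.0
h = lambda u: y * u - np.exp(u) - u**2 / (2 * tau2)
d2h = lambda u: -np.exp(u) - 1 / tau2

# Compare the approximation with direct numerical integration.
print(laplace_1d(h, d2h), quad(lambda u: np.exp(h(u)), -10, 10)[0])
```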
Likelihood Ratio Outlier Detection on Genomic Sequences - alibi-detect 0.7.1 documentation
The outlier detector described by Ren et al. (2019) in Likelihood Ratios for Out-of-Distribution Detection uses the likelihood ratio between 2 generative models as the outlier score. One model is trained on the original data while the other is trained on a perturbed version of the dataset. This is based on the observation that the likelihood score for an instance under a generative model can be heavily affected by population-level background statistics. The second generative model is therefore trained to capture the background statistics still present in the perturbed data while the semantic features have been erased by the perturbations. The perturbations are added using independent and identically distributed Bernoulli draws with rate μ, which substitute a feature with one of the other possible feature values with equal probability. Each feature in the genome dataset can take 4 values (one of the ACGT nucleobases). This means that a perturbed feature is swapped with one of the other ...
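A minimal sketch of the perturbation scheme and the resulting likelihood-ratio score is given below; the generative models themselves are not built here, and the sequence data are toy values.

```python
import numpy as np

# Hedged sketch of the perturbation scheme described above: each position of a
# genomic sequence (values 0..3 standing for A, C, G, T) is independently replaced,
# with probability mu, by one of the other three values chosen uniformly.  The
# outlier score is the likelihood ratio between the two generative models, which
# are assumed to exist elsewhere and are not built in this sketch.

def perturb(x, mu, n_values=4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x).copy()
    flip = rng.random(x.shape) < mu                     # Bernoulli(mu) per position
    shift = rng.integers(1, n_values, size=x.shape)     # never zero, so the value changes
    x[flip] = (x[flip] + shift[flip]) % n_values        # uniform over the other values
    return x

def llr_score(x, log_prob_full, log_prob_background):
    """Likelihood-ratio score: log p_full(x) - log p_background(x)."""
    return log_prob_full(x) - log_prob_background(x)

seq = np.random.default_rng(5).integers(0, 4, size=250)  # toy ACGT sequence
print(perturb(seq, mu=0.2)[:20])
```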
Repositorio da Producao Cientifica e Intelectual da Unicamp: Global Convergence Of Diluted Iterations In Maximum-likelihood...
In this paper we address convergence issues of the Diluted RpR algorithm [1], used to obtain the maximum likelihood estimate for the density matrix in quantum state tomography. We give a new interpretation to the diluted RpR iterations that allows us to prove the global convergence under weaker assumptions. Thus, we propose a new algorithm which is globally convergent and suitable for practical implementation. ...