Delaware
United States Dept. of Health and Human Services: A cabinet department in the Executive Branch of the United States Government concerned with administering those agencies and offices having programs pertaining to health and human services.
Washington
Durable Medical Equipment: Devices which are very resistant to wear and may be used over a long period of time. They include items such as wheelchairs, hospital beds, artificial limbs, etc.
United States
State Government: The level of governmental organization and function below that of the national or country-wide government.
Medicaid: Federal program, created by Public Law 89-97, Title XIX, a 1965 amendment to the Social Security Act, administered by the states, that provides health care benefits to indigent and medically indigent persons.
Models, Statistical: Statistical formulations or analyses which, when applied to data and found to fit the data, are then used to verify the assumptions and parameters used in the analysis. Examples of statistical models are the linear model, binomial model, polynomial model, two-parameter model, etc.
Hepatitis, Infectious Canine: A contagious disease caused by canine adenovirus (ADENOVIRUSES, CANINE) infecting the LIVER, the EYE, the KIDNEY, and other organs in dogs, other canids, and bears. Symptoms include FEVER; EDEMA; VOMITING; and DIARRHEA.
Dog Diseases: Diseases of the domestic dog (Canis familiaris). This term does not include diseases of wild dogs, WOLVES; FOXES; and other Canidae for which the heading CARNIVORA is used.
Algorithms: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.
Dogs: The domestic dog, Canis familiaris, comprising about 400 breeds, of the carnivore family CANIDAE. They are worldwide in distribution and live in association with people. (Walker's Mammals of the World, 5th ed, p1065)
Software: Sequential operating programs and data which instruct the functioning of a digital computer.
Data Interpretation, Statistical: Application of statistical procedures to analyze specific observed or assumed facts from a particular study.
Statistics as Topic: The science and art of collecting, summarizing, and analyzing data that are subject to random variation. The term is also applied to the data themselves and to the summarization of the data.
PubMed: A bibliographic database that includes MEDLINE as its primary subset. It is produced by the National Center for Biotechnology Information (NCBI), part of the NATIONAL LIBRARY OF MEDICINE. PubMed, which is searchable through NLM's Web site, also includes access to additional citations to selected life sciences journals not in MEDLINE, and links to other resources such as the full-text of articles at participating publishers' Web sites, NCBI's molecular biology databases, and PubMed Central.
Periodicals as Topic: A publication issued at stated, more or less regular, intervals.
Prognosis: A prediction of the probable outcome of a disease based on an individual's condition and the usual course of the disease as seen in similar situations.
Books
Glioma: Benign and malignant central nervous system neoplasms derived from glial cells (i.e., astrocytes, oligodendrocytes, and ependymocytes). Astrocytes may give rise to astrocytomas (ASTROCYTOMA) or glioblastoma multiforme (see GLIOBLASTOMA). Oligodendrocytes give rise to oligodendrogliomas (OLIGODENDROGLIOMA) and ependymocytes may undergo transformation to become EPENDYMOMA; CHOROID PLEXUS NEOPLASMS; or colloid cysts of the third ventricle. (From Escourolle et al., Manual of Basic Neuropathology, 2nd ed, p21)
Publishing: "The business or profession of the commercial production and issuance of literature" (Webster's 3d). It includes the publisher, publication processes, editing and editors. Production may be by conventional printing methods or by electronic publishing.
Access to Information: Individual's rights to obtain and use information collected or generated by others.
Dengue: An acute febrile disease transmitted by the bite of AEDES mosquitoes infected with DENGUE VIRUS. It is self-limiting and characterized by fever, myalgia, headache, and rash. SEVERE DENGUE is a more virulent form of dengue.
Neurosciences: The scientific disciplines concerned with the embryology, anatomy, physiology, biochemistry, pharmacology, etc., of the nervous system.
Developmental Biology: The field of biology which deals with the process of the growth and differentiation of an organism.
Journal Impact Factor: A quantitative measure of the frequency on average with which articles in a journal have been cited in a given period of time.
Dengue Virus: A species of the genus FLAVIVIRUS which causes an acute febrile and sometimes hemorrhagic disease in man. Dengue is mosquito-borne and four serotypes are known.
Marketing: Activity involved in transfer of goods from producer to consumer or in the exchange of services.
Consumer Satisfaction: Customer satisfaction or dissatisfaction with a benefit or service received.
Restaurants
Commerce: The interchange of goods or commodities, especially on a large scale, between different countries or between populations within the same country. It includes trade (the buying, selling, or exchanging of commodities, whether wholesale or retail) and business (the purchase and sale of goods to make a profit). (From Random House Unabridged Dictionary, 2d ed, p411, p2005 & p283)
Pharmacies: Facilities for the preparation and dispensing of drugs.
Marketing of Health Services: Application of marketing principles and techniques to maximize the use of health care resources.
Computer Simulation: Computer-based representation of physical systems and phenomena such as chemical processes.
Models, Molecular: Models used experimentally or theoretically to study molecular shape, electronic properties, or interactions; includes analogous molecules, computer-generated graphics, and mechanical structures.
Tosylarginine Methyl Ester: Arginine derivative which is a substrate for many proteolytic enzymes. As a substrate for the esterase from the first component of complement, it inhibits the action of C(1) on C(4).
Bayes Theorem: A theorem in probability theory named for Thomas Bayes (1702-1761). In epidemiology, it is used to obtain the probability of disease in a group of people with some characteristic on the basis of the overall rate of that disease and of the likelihood of that characteristic in healthy and diseased individuals. The most familiar application is in clinical decision analysis where it is used for estimating the probability of a particular diagnosis given the appearance of some symptoms or test result.
Pasteurellosis, Pneumonic: Bovine respiratory disease found in animals that have been shipped or exposed to CATTLE recently transported. The major agent responsible for the disease is MANNHEIMIA HAEMOLYTICA and less commonly, PASTEURELLA MULTOCIDA or HAEMOPHILUS SOMNUS. All three agents are normal inhabitants of the bovine nasal pharyngeal mucosa but not the LUNG. They are considered opportunistic pathogens following STRESS, PHYSIOLOGICAL and/or a viral infection. The resulting bacterial fibrinous BRONCHOPNEUMONIA is often fatal.
Gene Library: A large collection of DNA fragments cloned (CLONING, MOLECULAR) from a given organism, tissue, organ, or cell type. It may contain complete genomic sequences (GENOMIC LIBRARY) or complementary DNA sequences, the latter being formed from messenger RNA and lacking intron sequences.
Linear Models: Statistical models in which the value of a parameter for a given value of a factor is assumed to be equal to a + bx, where a and b are constants. The models predict a linear regression.
Hypericum: Genus of perennial plants in the family CLUSIACEAE (sometimes classified as Hypericaceae). Herbal and homeopathic preparations are used for depression, neuralgias, and a variety of other conditions. Hypericum contains flavonoids; GLYCOSIDES; mucilage, TANNINS; volatile oils (OILS, ESSENTIAL), hypericin and hyperforin.
Psychological Theory: Principles applied to the analysis and explanation of psychological or behavioral phenomena.
Forecasting: The prediction or projection of the nature of future problems or existing conditions based upon the extrapolation or interpretation of existing scientific data or by the application of scientific methodology.
England
Likelihood Functions: Functions constructed from a statistical model and a set of observed data which give the probability of that data for various values of the unknown model parameters. Those parameter values that maximize the probability are the maximum likelihood estimates of the parameters.
Reproducibility of Results: The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.
Models, Biological: Theoretical representations that simulate the behavior or activity of biological processes or diseases. For disease models in living animals, DISEASE MODELS, ANIMAL is available. Biological models include the use of mathematical equations, computers, and other electronic equipment.
Mendelian Randomization Analysis: The use of the GENETIC VARIATION of known functions or phenotypes to correlate the causal effects of those functions or phenotypes with a disease outcome.
American Cancer Society: A voluntary organization concerned with the prevention and treatment of cancer through education and research.
Anatomy, Regional: The anatomical study of specific regions or parts of organisms, emphasizing the relationship between the various structures (e.g. muscles, nerves, skeletal, cardiovascular, etc.).
Exocrine Glands: Glands of external secretion that release their secretions to the body's cavities, organs, or surface, through a duct.
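The Bayes Theorem entry above mentions estimating the probability of a diagnosis from a test result. As a minimal Python sketch of that calculation, using invented prevalence, sensitivity, and specificity values (not figures taken from any source on this page):

    # Bayes' theorem for a diagnostic test; the numbers below are made up for illustration.
    prevalence = 0.01     # prior probability of the disease
    sensitivity = 0.95    # P(test positive | disease)
    specificity = 0.90    # P(test negative | no disease)

    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive   # P(disease | positive test)
    print(f"P(disease | positive test) = {ppv:.3f}")   # about 0.09 despite the 95% sensitivity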

A computational screen for methylation guide snoRNAs in yeast.

Small nucleolar RNAs (snoRNAs) are required for ribose 2'-O-methylation of eukaryotic ribosomal RNA. Many of the genes for this snoRNA family have remained unidentified in Saccharomyces cerevisiae, despite the availability of a complete genome sequence. Probabilistic modeling methods akin to those used in speech recognition and computational linguistics were used to computationally screen the yeast genome and identify 22 methylation guide snoRNAs, snR50 to snR71. Gene disruptions and other experimental characterization confirmed their methylation guide function. In total, 51 of the 55 ribose methylated sites in yeast ribosomal RNA were assigned to 41 different guide snoRNAs.

Influence of sampling on estimates of clustering and recent transmission of Mycobacterium tuberculosis derived from DNA fingerprinting techniques.

The availability of DNA fingerprinting techniques for Mycobacterium tuberculosis has led to attempts to estimate the extent of recent transmission in populations, using the assumption that groups of tuberculosis patients with identical isolates ("clusters") are likely to reflect recently acquired infections. It is never possible to include all cases of tuberculosis in a given population in a study, and the proportion of isolates found to be clustered will depend on the completeness of the sampling. Using stochastic simulation models based on real and hypothetical populations, the authors demonstrate the influence of incomplete sampling on the estimates of clustering obtained. The results show that as the sampling fraction increases, the proportion of isolates identified as clustered also increases and the variance of the estimated proportion clustered decreases. Cluster size is also important: the underestimation of clustering for any given sampling fraction is greater, and the variability in the results obtained is larger, for populations with small clusters than for those with the same number of individuals arranged in large clusters. Considerable caution is therefore needed when interpreting the results of studies on clustering of M. tuberculosis isolates, particularly when sampling fractions are small.
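To make the sampling effect concrete, here is a minimal Python sketch that repeatedly samples a fraction of isolates from a hypothetical population of genotype clusters (not the real or simulated populations used in the study) and recomputes the proportion clustered under the usual "n minus one" definition:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical population: 200 clusters of size 5 plus 500 unique (unclustered) isolates.
    population = np.array(np.repeat(np.arange(200), 5).tolist() + list(range(1000, 1500)))

    def observed_clustering(sample_ids):
        # "n - 1" definition: clustered cases = all cases in clusters minus one per cluster.
        ids, counts = np.unique(sample_ids, return_counts=True)
        clustered = counts[counts > 1].sum() - (counts > 1).sum()
        return clustered / len(sample_ids)

    for frac in (0.2, 0.5, 0.8, 1.0):
        n = int(frac * len(population))
        estimates = [observed_clustering(rng.choice(population, n, replace=False))
                     for _ in range(200)]
        print(f"sampling fraction {frac:.1f}: mean clustering "
              f"{np.mean(estimates):.2f}, SD {np.std(estimates):.3f}")

Running this shows the pattern the abstract describes: the estimated proportion clustered rises toward its complete-sampling value, and its variability shrinks, as the sampling fraction grows.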

Capture-recapture models including covariate effects.

Capture-recapture methods are used to estimate the incidence of a disease, using a multiple-source registry. Usually, log-linear methods are used to estimate population size, assuming that not all sources of notification are dependent. Where there are categorical covariates, a stratified analysis can be performed. The multinomial logit model has occasionally been used. In this paper, the authors compare log-linear and logit models with and without covariates, and use simulated data to compare estimates from different models. The crude estimate of population size is biased when the sources are not independent. Analyses adjusting for covariates produce less biased estimates. In the absence of covariates, or where all covariates are categorical, the log-linear model and the logit model are equivalent. The log-linear model cannot include continuous variables. To minimize potential bias in estimating incidence, covariates should be included in the design and analysis of multiple-source disease registries.
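As a toy illustration of the capture-recapture idea (hypothetical two-source counts; the paper itself deals with more sources, covariates, and log-linear or logit fits), the independence model for two sources reduces to the classical Lincoln-Petersen/Chapman estimator:

    # Two-source capture-recapture with invented counts (the log-linear independence model
    # reduces to the Lincoln-Petersen / Chapman estimator in this special case).
    n_both = 40       # cases notified by both sources
    n_only_a = 160    # source A only
    n_only_b = 60     # source B only

    n_a = n_both + n_only_a
    n_b = n_both + n_only_b

    # Chapman's nearly unbiased version of the Lincoln-Petersen estimator.
    n_hat = (n_a + 1) * (n_b + 1) / (n_both + 1) - 1
    n_missed = n_hat - (n_both + n_only_a + n_only_b)
    print(f"estimated total cases: {n_hat:.0f}, of which about {n_missed:.0f} were missed by both sources")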

Sequence specificity, statistical potentials, and three-dimensional structure prediction with self-correcting distance geometry calculations of beta-sheet formation in proteins.

A statistical analysis of a representative data set of 169 known protein structures was used to analyze the specificity of residue interactions between spatially neighboring strands in beta-sheets. Pairwise potentials were derived from the frequency of residue pairs in nearest, second-nearest, and third-nearest contacts across neighboring beta-strands compared to the expected frequency of residue pairs in a random model. A pseudo-energy function based on these statistical pairwise potentials recognized native beta-sheets among possible alternative pairings. The native pairing was found within the three lowest energies in 73% of the cases in the training data set and in 63% of beta-sheets in a test data set of 67 proteins, which were not part of the training set. The energy function was also used to detect tripeptides, which occur frequently in beta-sheets of native proteins. The majority of native partners of tripeptides were distributed in a low energy range. Self-correcting distance geometry (SECODG) calculations using distance constraint sets derived from possible low-energy pairings of beta-strands uniquely identified the native pairing of the beta-sheet in pancreatic trypsin inhibitor (BPTI). These results will be useful for predicting the structure of proteins from their amino acid sequence as well as for the design of proteins containing beta-sheets.
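A knowledge-based pairwise potential of this general kind is typically a log-odds score of observed versus expected pair frequencies. The Python sketch below uses random counts standing in for real contact statistics and is meant only to show the bookkeeping, not the authors' actual derivation:

    import numpy as np

    rng = np.random.default_rng(1)
    residues = list("ACDEFGHIKLMNPQRSTVWY")

    # Toy contact counts between residue types across neighboring beta-strands
    # (random numbers stand in for counts tallied from a real structure data set).
    counts = rng.integers(1, 50, size=(20, 20))
    counts = counts + counts.T                      # contacts are symmetric

    # Expected counts under a random-pairing model with the same residue composition.
    totals = counts.sum(axis=1, keepdims=True)
    expected = totals @ totals.T / counts.sum()

    # Knowledge-based pairwise potential: low (negative) energy for over-represented pairs.
    potential = -np.log(counts / expected)

    i, j = residues.index("V"), residues.index("I")
    print(f"E(V,I) = {potential[i, j]:+.2f} (negative means a favorable contact)")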

Pair potentials for protein folding: choice of reference states and sensitivity of predicted native states to variations in the interaction schemes.

We examine the similarities and differences between two widely used knowledge-based potentials, which are expressed as contact matrices (consisting of 210 elements) that give a scale for interaction energies between the naturally occurring amino acid residues. These are the Miyazawa-Jernigan contact interaction matrix M and the potential matrix S derived by Skolnick J et al., 1997, Protein Sci 6:676-688. Although the correlation between the two matrices is good, there is a relatively large dispersion between the elements. We show that when Thr is chosen as a reference solvent within the Miyazawa and Jernigan scheme, the dispersion between the M and S matrices is reduced. The resulting interaction matrix B gives hydrophobicities that are in very good agreement with experiment. The small dispersion between the S and B matrices, which arises due to differing reference states, is shown to have a dramatic effect on the predicted native states of lattice models of proteins. These findings and other arguments are used to suggest that for reliable predictions of protein structures, pairwise additive potentials are not sufficient. We also establish that optimized protein sequences can tolerate relatively large random errors in the pair potentials. We conjecture that three-body interactions may be needed to predict the folds of proteins in a reliable manner.

Cloning, overexpression, purification, and physicochemical characterization of a cold shock protein homolog from the hyperthermophilic bacterium Thermotoga maritima.

Thermotoga maritima (Tm) expresses a 7 kDa monomeric protein whose 18 N-terminal amino acids show 81% identity to N-terminal sequences of cold shock proteins (Csps) from Bacillus caldolyticus and Bacillus stearothermophilus. There were only trace amounts of the protein in Thermotoga cells grown at 80 degrees C. Therefore, to perform physicochemical experiments, the gene was cloned in Escherichia coli. A DNA probe was produced by PCR from genomic Tm DNA with degenerate primers developed from the known N-terminus of TmCsp and the known C-terminus of CspB from Bacillus subtilis. Southern blot analysis of genomic Tm DNA allowed the construction of a partial gene library, which was used as a template for PCRs with gene- and vector-specific primers to identify the complete DNA sequence. As reported for other csp genes, the 5' untranslated region of the mRNA was anomalously long; it contained the putative Shine-Dalgarno sequence. The coding part of the gene contained 198 bp, i.e., 66 amino acids. The sequence showed 61% identity to CspB from B. caldolyticus and high similarity to all other known Csps. Computer-based homology modeling allowed the conclusion that TmCsp represents a beta-barrel similar to CspB from B. subtilis and CspA from E. coli. As indicated by spectroscopic analysis, analytical gel permeation chromatography, and mass spectrometry, overexpression of the recombinant protein yielded authentic TmCsp with a molecular weight of 7,474 Da. This was in agreement with the results of analytical ultracentrifugation confirming the monomeric state of the protein. The temperature-induced equilibrium transition at 87 degrees C exceeds the maximum growth temperature of Tm and represents the maximal Tm-value reported for Csps so far.

pKa calculations for class A beta-lactamases: influence of substrate binding.

Beta-Lactamases are responsible for bacterial resistance to beta-lactams and are thus of major clinical importance. However, the identity of the general base involved in their mechanism of action is still unclear. Two candidate residues, Glu166 and Lys73, have been proposed to fulfill this role. Previous studies support the proposal that Glu166 acts during the deacylation, but there is no consensus on the possible role of this residue in the acylation step. Recent experimental data and theoretical considerations indicate that Lys73 is protonated in the free beta-lactamases, showing that this residue is unlikely to act as a proton abstractor. On the other hand, it has been proposed that the pKa of Lys73 would be dramatically reduced upon substrate binding and would thus be able to act as a base. To check this hypothesis, we performed continuum electrostatic calculations for five wild-type and three beta-lactamase mutants to estimate the pKa of Lys73 in the presence of substrates, both in the Henri-Michaelis complex and in the tetrahedral intermediate. In all cases, the pKa of Lys73 was computed to be above 10, showing that it is unlikely to act as a proton abstractor, even when a beta-lactam substrate is bound in the enzyme active site. The pKa of Lys234 is also raised in the tetrahedral intermediate, thus confirming a probable role of this residue in the stabilization of the tetrahedral intermediate. The influence of the beta-lactam carboxylate on the pKa values of the active-site lysines is also discussed.

Simplified methods for pKa and acid pH-dependent stability estimation in proteins: removing dielectric and counterion boundaries.

Much computational research aimed at understanding ionizable group interactions in proteins has focused on numerical solutions of the Poisson-Boltzmann (PB) equation, incorporating protein exclusion zones for solvent and counterions in a continuum model. Poor agreement with measured pKas and pH-dependent stabilities for a (protein, solvent) relative dielectric boundary of (4,80) has led to the adoption of an intermediate (20,80) boundary. It is now shown that a simple Debye-Huckel (DH) calculation, removing both the low dielectric and counterion exclusion regions associated with protein, is equally effective in general pKa calculations. However, a broad-based discrepancy to measured pH-dependent stabilities is maintained in the absence of ionizable group interactions in the unfolded state. A simple model is introduced for these interactions, with a significantly improved match to experiment that suggests a potential utility in predicting and analyzing the acid pH-dependence of protein stability. The methods are applied to the relative pH-dependent stabilities of the pore-forming domains of colicins A and N. The results relate generally to the well-known preponderance of surface ionizable groups with solvent-mediated interactions. Although numerical PB solutions do not currently have a significant advantage for overall pKa estimations, development based on consideration of microscopic solvation energetics in tandem with the continuum model could combine the large ΔpKa values of a subset of ionizable groups with the overall robustness of the DH model.

  • A Bayesian analysis of the data under this best supported model points to an origin of our species ≈141 thousand years ago (Kya), an exit out-of-Africa ≈51 Kya, and a recent colonization of the Americas ≈10.5 Kya. (pnas.org)
  • One of the main objectives of this book is to provide comprehensive explanations of the concepts and derivations of the AIC and related criteria, including Schwarz's Bayesian information criterion (BIC), together with a wide range of practical examples of model selection and evaluation criteria. (springer.com)
  • A generalized information criterion (GIC) and a bootstrap information criterion are presented, which provide unified tools for modeling and model evaluation for a diverse range of models, including various types of nonlinear models and model estimation procedures such as robust estimation, the maximum penalized likelihood method and a Bayesian approach. (springer.com)
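For readers who want the bare mechanics behind AIC and BIC, the following Python sketch fits polynomials of increasing degree by least squares and computes both criteria from the Gaussian log-likelihood (simulated data chosen for illustration, not an example from the books cited above):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x = rng.uniform(-2, 2, n)
    y = 1.0 + 0.8 * x + rng.normal(0, 1.0, n)       # data generated from a linear model

    def gaussian_ic(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / len(y)
        k = X.shape[1] + 1                          # coefficients plus the error variance
        loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
        return -2 * loglik + 2 * k, -2 * loglik + np.log(len(y)) * k   # AIC, BIC

    for degree in (1, 2, 5):
        X = np.vander(x, degree + 1)                # polynomial design matrix
        aic, bic = gaussian_ic(y, X)
        print(f"polynomial degree {degree}: AIC {aic:.1f}, BIC {bic:.1f}")

With data generated from a straight line, both criteria typically favor the degree-1 model, with BIC penalizing the extra terms more heavily.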
  • This course aims to expand our "Bayesian toolbox" with more general models, and computational techniques to fit them. (coursera.org)
  • We will learn how to construct, fit, assess, and compare Bayesian statistical models to answer scientific questions involving continuous, binary, and count data. (coursera.org)
  • PyMC is a Python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo (homepage: http://pymc-devs.github.com/pymc; license: MIT/X; programming language: Python). (debian.org)
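PyMC automates this kind of fitting; as a rough illustration of what a Markov chain Monte Carlo fit does under the hood, here is a self-contained random-walk Metropolis sampler in plain Python/NumPy (a toy Bernoulli model with a uniform prior; this is not PyMC's own API):

    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.binomial(1, 0.3, size=50)            # 50 Bernoulli(0.3) observations

    def log_posterior(theta):
        if not 0 < theta < 1:
            return -np.inf
        # uniform prior on theta, Bernoulli likelihood
        return data.sum() * np.log(theta) + (len(data) - data.sum()) * np.log(1 - theta)

    samples, theta = [], 0.5
    for _ in range(5000):
        proposal = theta + rng.normal(0, 0.1)       # random-walk proposal
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal                        # accept; otherwise keep the current value
        samples.append(theta)

    posterior = np.array(samples[1000:])            # discard burn-in
    print(f"posterior mean {posterior.mean():.3f}, 95% interval "
          f"{np.percentile(posterior, [2.5, 97.5]).round(3)}")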
  • Forecasting in large macroeconomic panels using Bayesian Model Averaging ," Staff Reports 163, Federal Reserve Bank of New York. (repec.org)
  • Forecasting in Large Macroeconomic Panels using Bayesian Model Averaging ," Discussion Papers in Economics 04/16, Department of Economics, University of Leicester. (repec.org)
  • It then covers a random effects model estimated using the EM algorithm and concludes with a Bayesian Poisson model using Metropolis-Hastings sampling. (routledge.com)
  • He also develops or co-develops a number of R packages including varian, a package to conduct Bayesian scale-location structural equation models, and MplusAutomation, a popular package that links R to the commercial Mplus software. (springer.com)
  • A variety of issues on model fittings and model diagnostics are addressed, and many criteria for outlier detection and influential observation identification are created within likelihood and Bayesian frameworks. (booktopia.com.au)
  • In this paper we propose to use graph cuts in a Bayesian framework for automatic initialization and to propagate multiple mean parametric models, derived from principal component analysis of shape and posterior probability information of the prostate region, to segment the prostate. (archives-ouvertes.fr)
  • We are collecting empirical temperature climate data to develop local models describing stream temperature and streamflows in headwater streams in Spread Creek, a Tributary to the Upper Snake River, WY. (usgs.gov)
  • Emphasis is on an integrative approach, combining field and laboratory studies to provide data for mathematical models of ecological and evolutionary dynamics. (usgs.gov)
  • The project is integrating downscaled and regionalized climate models (e.g., stream temperature) with riverscape data, fine-scale aquatic species vulnerability assessments, population genetic connectivity, and remotely sensed riparian and aquatic habitat connectivity analyses. (usgs.gov)
  • This kind of analysis can best be done with detailed mechanistic models, but these models require extensive data and advanced estimation procedures. (usgs.gov)
  • The papers in this book cover issues related to the development of novel statistical models for the analysis of data. (springer.com)
  • They offer solutions for relevant problems in statistical data analysis and contain the explicit derivation of the proposed models as well as their implementation. (springer.com)
  • The book assembles the selected and refereed proceedings of the biannual conference of the Italian Classification and Data Analysis Group (CLADAG), a section of the Italian Statistical Society. (springer.com)
  • The course covers: basic probability and random variables, models for discrete and continuous data, estimation of model parameters, assessment of goodness-of-fit, model selection, confidence interval and test construction. (massey.ac.nz)
  • For each recently published glioma model study, the original Kaplan-Meier curves are shown on the left-hand side and the validation in TCGA data on the right-hand side. (nih.gov)
  • Using DNA data from 50 nuclear loci sequenced in African, Asian and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) as compared with alternative multiregional evolution or assimilation scenarios. (pnas.org)
  • However, because past demographic events are likely to have greatly affected current patterns of genetic diversity, genetic data are difficult to interpret without a general demographic model that can explain neutral variability ( 3 ). (pnas.org)
  • I highly recommend this book to anyone who is seriously engaged in the statistical analysis of data or in teaching statistics. (waterstones.com)
  • Real-world data often require more sophisticated models to reach realistic conclusions. (coursera.org)
  • How to select an appropriate model given data from a clinical study? (le.ac.uk)
  • How to assess whether a model fits data well? (le.ac.uk)
  • The material also covers the inclusion of different types of covariate data in statistical models and introduces the ideas of statistical interaction and capturing non-linear effects of continuous covariates. (le.ac.uk)
  • This module will introduce you to multilevel modelling for the analysis of hierarchical and repeated measures data for both continuous and binary outcomes. (le.ac.uk)
  • All standout statistical models are built on strong and relevant data so before you start to think about modelling, it is necessary to extract and verify the relevant data. (experian.co.uk)
  • Once the modelling team has acquired the data, they need to fully understand it and design a development sample. (experian.co.uk)
  • The ideal scenario is where there is a lot of data available to build linear or logistic regression models. (experian.co.uk)
  • Experian's modelling teams have access to high quality data and the experience and skills to use several different modelling methodologies. (experian.co.uk)
  • Sometimes, though, we are able to compare model predictions with real data - predicted sales versus actual sales, for example. (kdnuggets.com)
  • A statistical model is a class of mathematical model , which embodies a set of assumptions concerning the generation of some sample data , and similar data from a larger population . (kdnuggets.com)
  • A statistical model represents, often in considerably idealized form, the data-generating process. (kdnuggets.com)
  • There are other important categorizations as well, for instance between time-series or longitudinal modeling, in which our data span two or more points in time, and cross-sectional modeling, in which we only have data for one slice in time. (kdnuggets.com)
  • Marketing mix modeling uses time-series data whereas most marketing research surveys are cross sectional. (kdnuggets.com)
  • Some multi-level models fall between these cracks by combining cross-sectional data with time-series or longitudinal data in one model. (kdnuggets.com)
  • Though complex, models for spatial and spatiotemporal data are relevant to specialized corners of marketing research. (kdnuggets.com)
  • Seeking a Research Scientist who will employ skills and experience to improve, create and innovate data-driven modeling approaches for our price and promotion solutions, while anticipating and charting future research needs. (kdnuggets.com)
  • The assumptions embodied by a statistical model describe a set of probability distributions, some of which are assumed to adequately approximate the distribution from which a particular data set is sampled. (wikipedia.org)
  • An admissible model must be consistent with all the data points. (wikipedia.org)
  • Thus, a straight line (height_i = b0 + b1 * age_i) cannot be the equation for a model of the data. (wikipedia.org)
  • The line cannot be the equation for a model, unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. (wikipedia.org)
  • The error term, ε_i, must be included in the equation, so that the model is consistent with all the data points. (wikipedia.org)
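A minimal worked version of that height-versus-age example (simulated numbers chosen only for illustration) fits the intercept b0 and slope b1 by least squares and treats what is left over as the error term:

    import numpy as np

    rng = np.random.default_rng(4)
    age = rng.uniform(5, 18, 100)
    height = 80 + 6 * age + rng.normal(0, 5, 100)   # height_i = b0 + b1 * age_i + eps_i

    X = np.column_stack([np.ones_like(age), age])   # design matrix: intercept and age
    b0, b1 = np.linalg.lstsq(X, height, rcond=None)[0]
    residuals = height - (b0 + b1 * age)

    print(f"fitted line: height = {b0:.1f} + {b1:.2f} * age")
    print(f"residual SD (estimate of the error term's spread): {residuals.std(ddof=2):.2f}")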
  • For the first time, recent methodologic research on boosting functional data and on the application of boosting techniques in advanced survival modelling is reviewed. (hindawi.com)
  • In the paper titled "A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data" A. Bommert et al. (hindawi.com)
  • The cross-country data have been employed for the period 1980-2000 for fitting the model. (jhu.edu)
  • The Bank of England has constructed a 'suite of statistical forecasting models' (the 'Suite') providing judgement-free statistical forecasts of inflation and output growth as inputs into the forecasting process, and to offer measures of relevant news in the data. (repec.org)
  • The book provides not only a clear understanding of principles of model construction but also a working knowledge of how to implement these models using real data. (eastwestcenter.org)
  • Supported by numerous tables and graphs, using real survey data, as well as an appendix of computer programs for the statistical packages SAS, BMDP, and LIMDEP, the book is an ideal primer for understanding and using statistical models in analytical work. (eastwestcenter.org)
  • Statistical Tools- R, SPSS , EXCEL,MINITAB I provide a bunch of services on statistics and data analytics. (freelancer.com)
  • Carry out a variety of advanced statistical analyses including generalized additive models, mixed effects models, multiple imputation, machine learning, and missing data techniques using R. Each chapter starts with conceptual background information about the techniques, includes multiple examples using R to achieve results, and concludes with a case study. (springer.com)
  • Written by Matt and Joshua F. Wiley, Advanced R Statistical Programming and Data Models shows you how to conduct data analysis using the popular R language. (springer.com)
  • In statistics and data science, Joshua focuses on biostatistics and is interested in reproducible research and graphical displays of data and statistical models. (springer.com)
  • Such is the importance of this wealth of data that we have devised a reliable statistical model to enable the courts to evaluate fingerprint evidence within a framework similar to that which underpins DNA evidence. (redorbit.com)
  • The main focus is on the statistical method: choosing the one that is appropriate for your data followed by the model specification and interpretation of the results. (imperial.ac.uk)
  • some familiarity with statistics and SPSS, to at least the level of Introduction to Statistics Using SPSS or Data Management & Statistical Analysis Using SPSS . (imperial.ac.uk)
  • A statistical modeling approach is proposed for use in searching large microarray data sets for genes that have a transcriptional response to a stimulus. (pnas.org)
  • Corresponding data analyses provide gene-specific information, and the approach provides a means for evaluating the statistical significance of such information. (pnas.org)
  • If the statistical model provides an adequate representation of the expression data for a specific gene, then the corresponding model parameter estimates can provide certain response characteristics for that gene. (pnas.org)
  • For comparison, we have used statistical modeling to look for regularly oscillating profiles within these large data sets. (pnas.org)
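One simple way to score "regularly oscillating" profiles, sketched below with simulated expression values rather than the study's microarray data, is to regress each profile on sine and cosine terms at the assumed period and compare the variance explained:

    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(0, 48, 3.0)                        # sampling times in hours (hypothetical design)
    period = 24.0                                    # assumed cycle length

    def oscillation_r2(y):
        # Regress expression on cos/sin terms of the assumed period (plus an intercept).
        X = np.column_stack([np.ones_like(t),
                             np.cos(2 * np.pi * t / period),
                             np.sin(2 * np.pi * t / period)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        fitted = X @ beta
        return 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()

    cycling = 2 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.5, t.size)
    flat = rng.normal(0, 0.5, t.size)
    print(f"R^2 for a cycling gene: {oscillation_r2(cycling):.2f}, for a flat gene: {oscillation_r2(flat):.2f}")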
  • As part of these analyses, you prepare and explore data, select variables, and create diagnostic plots related to model specification and model assumption validation. (sas.com)
  • Conduct simple regression analysis and basic ANOVA tests and assess the applicability of the models used to the data. (southampton.ac.uk)
  • Wang "downloaded quarterly accounting data for all firms in Compustat, the most widely-used dataset in corporate finance that contains data on over 20,000 firms from SEC filings" and looked at the statistical distribution of leading digits in various pieces of financial information. (columbia.edu)
  • To overcome the problem of data snooping, we extend the scheme based on the use of the reality check with modifications suited to comparing nested models. (aimsciences.org)
  • Some applications of the proposed procedure to simulated and real data sets show that it allows the selection of parsimonious neural network models with the highest predictive accuracy. (aimsciences.org)
  • The topics range from investigating information processing in chemical and biological networks to studying statistical and information-theoretic techniques for analyzing chemical structures to employing data analysis and machine learning techniques for QSAR/QSPR. (wiley.com)
  • This book is intended for postgraduates and statisticians whose research involves longitudinal study, multivariate analysis and statistical diagnostics, and also for scientists who analyze longitudinal data and repeated measures. (booktopia.com.au)
  • The authors provide theoretical details on the model fittings and also emphasize the application of growth curve models to practical data analysis, which are reflected in the analysis of practical examples given in each chapter. (booktopia.com.au)
  • The author brings a fresh approach to the understanding of statistical concepts by integrating Minitab software throughout, providing valuable insight into computer simulation and problem-solving techniques. Rosenblatt treats the subject matter clearly by carefully wording the explanations and by having readers work with computer-generated data whose properties they specify themselves. (booktopia.com.au)
  • Furthermore, the model could be used as an instrument in analysis of the quality of experimental data. (mdpi.com)
  • The results obtained by applying the model with six parameters for deviations of rank sums suggest that the data of experiment no. 8 are questionable. (mdpi.com)
  • In this Mastery Series, you'll choose three courses (out of five) to learn how to apply linear models to all sorts of data - regression for continuous data, then extensions for categorical and count data, as well as more complex data structures like clustered and hierarchical data. (statistics.com)
  • In this Mastery Series (choose 3 of 5 courses), you'll learn regression, and generalized linear models (GLM) extensions of linear models to cover categorical and count data, plus mixed models to cover clustered and hierarchical data. (statistics.com)
  • This Mastery Series is for you if you are a researcher or analyst who needs to construct statistical models of data. (statistics.com)
  • This course will teach you how multiple linear regression models are derived, assumptions in the models, how to test whether data meets assumptions, and develop strategies for building and understanding useful models. (statistics.com)
  • This course will explain the theory of generalized linear models (GLM), outline the algorithms used for GLM estimation, and explain how to determine which algorithm to use for a given data analysis. (statistics.com)
  • This course will teach you the basic theory of linear and non-linear mixed effects models, hierarchical linear models, algorithms used for estimation, primarily for models involving normally distributed errors, and examples of data analysis. (statistics.com)
  • This course will teach you regression models for count data (models whose response or dependent variable is a count or rate), Poisson regression, the foundation for modeling counts, and extensions and modifications to the basic model; a bare-bones Poisson-regression sketch follows this group of course listings. (statistics.com)
  • This course will show you how to use R to create statistical models and use them to analyze data. (statistics.com)
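Picking up the count-data item above, a bare-bones Poisson regression can be fitted by maximizing the log-likelihood directly; the Python sketch below uses simulated counts and SciPy's general-purpose optimizer rather than the R-based tooling the courses describe:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    n = 500
    x = rng.normal(size=n)
    y = rng.poisson(np.exp(0.5 + 0.8 * x))          # counts generated with a log link

    X = np.column_stack([np.ones(n), x])            # intercept and one covariate

    def neg_loglik(beta):
        eta = X @ beta
        return -(y * eta - np.exp(eta)).sum()       # Poisson log-likelihood up to a constant

    fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
    print(f"estimated intercept {fit.x[0]:.2f}, slope {fit.x[1]:.2f} (true values 0.5 and 0.8)")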
  • This paper describes the risk of injury to the rider in a crash using a statistical model based on real-world accident data. (sae.org)
  • Understanding this kind of data requires powerful statistical techniques for capturing the structure of the neural population responses and their relation with external stimuli or behavioral observations. (frontiersin.org)
  • These statistical models are often used to test hypotheses and make inferences about ecological theories and management decisions based on available data. (ucsb.edu)
  • Working directly with NCEAS informatics staff, we will produce a web‐based guide regarding the utility of each package for particular applications that includes annotated model code for each package, the data sets used in the applications, and peer‐reviewed articles. (ucsb.edu)
  • The data derived from these studies will be used for statistical analyses to more accurately predict drug efficacy. (meduniwien.ac.at)
  • This will involve statistical techniques to filter out relevant biomarkers from the plethora of data. (meduniwien.ac.at)
  • Objective To examine the appropriateness of different statistical models in analysing falls count data. (bmj.com)
  • The evaluation procedure presented in this paper provides a defensible guideline to appropriately model falls or similar count data with excess zeros. (bmj.com)
  • The model as developed predicted EtO exposures within 1.1 parts per million (ppm) of the validation data set with a standard deviation of 3.7ppm. (cdc.gov)
  • The authors conclude that the model as developed outperformed the panel of industrial hygienists relative to the validation data in terms of both bias and precision. (cdc.gov)
  • Published in the Proceedings of the National Academy of Sciences , the model now gives researchers a tool that extends past observing static networks at a single snapshot in time, which is hugely beneficial since network data are usually dynamic. (phys.org)
  • The model is really flexible, and we are already starting to use it with fMRI data to understand how regions of the brain interconnect and change over time," said Fuchen Liu, a Ph.D. student in the Department of Statistics and Data Science. (phys.org)
  • In this work, we consider statistical diagnostics for general transformation models with right censored data based on empirical likelihood. (scirp.org)
  • Wang, S. , Deng, X. and Zheng, L. (2014) Statistical Diagnosis for General Transformation Model with Right Censored Data Based on Empirical Likelihood. (scirp.org)
  • Li, J.B., Huang, Z.S. and Lian, H. (2013) Empirical Likelihood Influence for General Transformation Models with Right Censored Data. (scirp.org)
  • Dabrowska, D. and Doksum, K. (1988) Partial Likelihood in Transformation Models with Censored Data. (scirp.org)
  • Just a couple of general comments: (1) Any model that makes probabilistic predictions can be judged on its own terms by comparing to actual data. (andrewgelman.com)
  • The paper is mostly about computation but it has an interesting discussion of some general ideas about how to model this sort of data. (andrewgelman.com)
  • 2) our setup maps directly onto the 2 parameter IRT model from educational testing, about which much is known… In this sense our approach is a little more model-driven than data-driven (i.e., contrast naive MDS or factor analysis or clustering etc). (andrewgelman.com)
  • My experience is that anything too data-driven in this field tends to run into trouble within political science because, while it is one thing to toss more elaborate statistical setups at the roll call data, they tend to lack the clear theoretical underpinnings of the Euclidean spatial voting model. (andrewgelman.com)
  • What behavioral/political assumptions or processes suggest that we ought to do this when we model the data? (andrewgelman.com)
  • We illustrate the methodology by evaluating five baseline models using data from 29 buildings. (osti.gov)
  • Doubt over the trustworthiness of published empirical results is not unwarranted and is often a result of statistical mis-specification: invalid probabilistic assumptions imposed on data. (cambridge.org)
  • Organized in distinct sections which will provide both introduction and advanced understanding according to the level of the reader, this book will prove a valuable resource to either statisticians involved in ICU studies, or ICU physicians who need to model statistical data. (wiley.com)
  • From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. (spie.org)
  • This volume contains the Proceedings of the Advanced Symposium on Multivariate Modeling and Data Analysis held at the 64th Annual Meeting of the Virginia Academy of Sciences (VAS)--American Statistical Association's Virginia Chapter at James Madison University in Harrisonburg. (springer.com)
  • Papers presented at the Symposium by eminent researchers in the field were geared not just for specialists in statistics; an attempt has been made to achieve a well-balanced and uniform coverage of different areas in multivariate modeling and data analysis. (springer.com)
  • In this workshop series, we introduce various types of regression models and how they are implemented in R. We will cover linear regression, ANOVA, ANCOVA and mixed effects models for continuous response data, logistic regression for binary response data, and Poisson and Negative Binomial regression for count response data. (eventbrite.ca)
  • Abstract: The Traumatic Brain Injury Model Systems National Data and Statistical Center (NDSC) provides innovative technologies, training, and resources to the Traumatic Brain Injury Model Systems (TBIMS). (craighospital.org)
  • The key contribution of this paper was that it can make sense to fit such an additive model, rather than trying to model the autocorrelation structure of the data directly. (andrewgelman.com)
  • In any case, you'd probably have to adapt this to your particular problem, but that's fine: you could fit the model, then create random simulations from the fitted model and see how they differ from the data, in order to get insight into where to go next. (andrewgelman.com)
  • It also teaches how to look at the data with graphs, a very important part of teaching statistical data analysis to non-statisticians (and biostatistics graduate students!). (cambridge.org)
  • an excellent reference book for health researchers who are unfamiliar with details of any statistical methodology. (waterstones.com)
  • Regardless of the modelling methodology employed, the key here is to build the most predictive, accurate models whilst ensuring that the results do not disregard business logic or objectives. (experian.co.uk)
  • The modern term "statistical learning" for this fusion of methodology from different scientific areas could already be found in the scientific literature (see Vapnik [1, 2]), but its meaning was slightly different from what it is today. (hindawi.com)
  • During recent years, considerable research has been devoted to exploring this combination of state-of-the-art statistical methodology with machine learning techniques. (hindawi.com)
  • This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole--building energy savings. (osti.gov)
  • All statistical hypothesis tests and all statistical estimators are derived from statistical models. (wikipedia.org)
  • The concept of power in statistical theory is defined as the probability of rejecting the null hypothesis given that the null hypothesis is false. (gsu.edu)
  • In the context of structural equation modeling, the null hypothesis is defined by the specification of fixed and free elements in relevant parameter matrices of the model equations. (gsu.edu)
  • The null hypothesis is assessed by forming a discrepancy function between the model-implied set of moments (mean vector and/or covariance matrix) and the sample moments. (gsu.edu)
  • Various discrepancy functions can be formed depending on the particular minimization algorithm being used (e.g. maximum likelihood), but the goal remains the same - namely to derive a test statistic that has a known distribution, and then compare the obtained value of the test statistic against tabled values in order to render a decision vis-a-vis the null hypothesis. (gsu.edu)
  • Note that if the null hypothesis is true for that parameter, then the likelihood ratio chi-square for the model would be zero with degrees-of-freedom equaling the degrees-of-freedom of the model. (gsu.edu)
  • If the null hypothesis is false for that parameter, then the likelihood ratio chi-square will be some positive number reflecting the specification error incurred by fixing that parameter to the value chosen in the initial model. (gsu.edu)
  • This number is the noncentrality parameter (NCP) of the noncentral chi-square distribution , which is the distribution of the test statistic when the null hypothesis is false. (gsu.edu)
  • That is, the square of the T-value (in LISREL) or the Wald test (in EQS) can be used to assess the power of an estimated parameter in the model, against a null hypothesis that the value of the parameter is zero. (gsu.edu)
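In practice, once the degrees of freedom and the noncentrality parameter are in hand, the power computation is a one-liner with the noncentral chi-square distribution; a small Python sketch (with an illustrative NCP value, not one taken from the text above) is:

    from scipy.stats import chi2, ncx2

    df = 1             # one constrained parameter
    alpha = 0.05
    ncp = 6.5          # noncentrality parameter, e.g. the likelihood-ratio chi-square
                       # obtained from the restricted model (illustrative value)

    critical = chi2.ppf(1 - alpha, df)              # rejection threshold under the null hypothesis
    power = ncx2.sf(critical, df, ncp)              # P(reject H0 | H0 false with this NCP)
    print(f"critical value {critical:.2f}, power {power:.2f}")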
  • You'll delve into the preconditions or hypothesis for various statistical tests and techniques and work through concrete examples using R for a variety of these next-level analytics. (springer.com)
  • Empirical evaluation of the competing models was performed using model selection criteria and goodness-of-fit through simulation. (bmj.com)
  • This is probably due to the broad range of applications of the Gibbs ensemble theory in equilibrium statistical mechanics, whose form is exponential, and also to its usefulness for curve fitting with two tunable parameters. (mdpi.com)
  • A stochastic model, on the other hand, possesses some inherent randomness and we can only estimate the answer. (kdnuggets.com)
  • First, estimate the model of interest. (gsu.edu)
  • Third, re-estimate the initial model with each estimated parameter fixed at its estimated value and choose an "alternative" fixed value for the parameter of interest. (gsu.edu)
  • The proposed model is employed to estimate the transition probabilities, the factors that contribute to transitions in economic performance, and other relevant characteristics. (jhu.edu)
  • Evolutionary divergence of humans from chimpanzees likely occurred some 8 million years ago rather than the 5 million year estimate widely accepted by scientists, a new statistical model suggests. (phys.org)
  • Such modeling techniques, which are widely used in science and commerce, take into account more overall information than earlier processes used to estimate evolutionary history using just a few individual fossil dates, Martin said. (phys.org)
  • His research activities focused first on mass spectrometry and then moved to chemometrics - mainly the application of multivariate statistical analysis to chemistry-related problems, such as spectra-structure relationships and structure-property relationships. (wiley.com)
  • Multivariate statistical analysis has come a long way and currently it is in an evolutionary stage in the era of high-speed computation and computer technology. (springer.com)
  • The set Θ defines the parameters of the model. (wikipedia.org)
  • In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution. (wikipedia.org)
  • In general terms, this approach involves modeling the association of a generic response with a specific experimental variable, for example, timing, cell type, temperature, or drug dosage, using a set of interpretable parameters. (pnas.org)
  • For example, model parameters may describe the magnitude, duration, or timing of the response. (pnas.org)
  • The key to this emerging statistical standard is that it uses sensitivities to process and environmental device parameters to holistically model variations around nominal operating points. (edacafe.com)
  • An ensemble formulation for the Gompertz growth function within the framework of statistical mechanics is presented, where the two growth parameters are assumed to be statistically distributed. (mdpi.com)
  • This is an important step towards improving the reliability of predictive models in precision medicine and assisting the development of individualised treatments. (meduniwien.ac.at)
  • This authoritative, state-of-the-art reference is ideal for graduate students, researchers and practitioners in business and finance seeking to broaden their skills of understanding of econometric time series models. (ecampus.com)
  • NEW YORK (GenomeWeb) - Using a pan-cancer analysis called allele-specific copy number analysis of tumors (ASCAT), researchers at the Francis Crick Institute, the University of Leuven, and their colleagues developed a new type of statistical model, which they were able to use to identify 27 new tumor suppressing genes. (genomeweb.com)
  • Working professionals, researchers, or students who are familiar with R and basic statistical techniques such as linear regression and who want to learn how to use R to perform more advanced analytics. (springer.com)
  • Through consulting at Elkhart Group Limited and former work at the UCLA Statistical Consulting Group, he has supported a wide array of clients ranging from graduate students, to experienced researchers, and biotechnology companies. (springer.com)
  • Researchers involved in the study have devised a statistical model to enable the weight of fingerprint evidence to be quantified, paving the way for its full inclusion in the criminal identification process. (redorbit.com)
  • NEW YORK: A novel statistical model that can accurately predict the time and duration of floods has been developed by researchers, including one of Indian origin. (dailyexcelsior.com)
  • Researchers said the model can help mitigate potential risk imposed by longer duration floods on critical infrastructure systems such as flood control dams, bridges and power plants. (dailyexcelsior.com)
  • Researchers at Carnegie Mellon University have developed a new dynamic statistical model to visualize changing patterns in networks, including gene expression during developmental periods of the brain. (phys.org)
  • The Stata statistical software package is again used to perform the analyses, this time employing the much improved version 10 with its intuitive point and click as well as character-based commands. (whsmith.co.uk)
  • Via external validation with an independent dataset, one can assess how well the model performs. (nih.gov)
  • This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. (nih.gov)
  • Thus, for any estimated model, it is a simple matter to look at these indices in relation to tabled values of the noncentral chi-square distribution in order to assess power. (gsu.edu)
  • A series of chromatographic response functions were proposed and implemented in order to assess and validate the models. (mdpi.com)
  • A secondary objective is to provide a theoretical basis for the analysis and extension of information criteria via a statistical functional approach. (springer.com)
  • This power-based approach to model modification was advocated by Kaplan (1990 , with subsequent commentary). (gsu.edu)
  • We offer a novel approach to this problem: instead of focusing on index measures, we develop a model that predicts the entire distribution of party vote-shares and, thus, does not require any index measure. (ssrn.com)
  • Featuring an approach that focuses on model specification and interpretation, this innovative work, designed specifically for students and professionals in need of a working knowledge of the subject, is a practice-oriented guide to learning how to use these models in analytical work. (eastwestcenter.org)
  • We illustrate our approach by applying it to an extended model of the three stage cascade, which forms the core of the ERK signal transduction pathway. (psu.edu)
  • This user-friendly approach integrates statistical and graphical analysis tools that are available in SAS/STAT and provides complete statistical solutions without writing SAS code or using a point-and-click approach. (sas.com)
  • Using a statistical approach to timing analysis allows designers to unlock the true potential of smaller process technologies by reducing the pessimism that can rob chip performance in traditional design methodologies. (edacafe.com)
  • The thermodynamics approach considered an energy balance among the different cell activities at some fixed time [ 9 ] and a stochastic model incorporating environmental fluctuations was investigated in [ 10 ]. (mdpi.com)
  • In spite of the above studies and interests in the Gompertz function itself, a statistical ensemble approach to the model is still lacking. (mdpi.com)
  • A common approach is to adopt some low-dimensional equations for a resolved vector and to model the effects of unresolved variables by some kind of noise, the result being a stochastic model. (cea.fr)
  • In this talk I will describe a model reduction approach that uses an optimization procedure to fit a canonical statistical model to an underlying Hamiltonian dynamics. (cea.fr)
  • field components may be modeled as narrow band random processes. (ni.com)
  • Today's Significance paper, which publishes in advance of the full study in the Journal of the Royal Statistical Society: Series A later this year, highlights this subjectivity in current processes, calling for changes in the way such key evidence is allowed to be presented. (redorbit.com)
  • Modeling strategies that omit interactions may result in misleading estimates of absolute treatment benefit for individual patients with the potential hazard of suboptimal decision making. (nih.gov)
  • simulation tools based on statistical models are sometimes mistaken for deterministic models by naive users because of their user-friendly interfaces. (kdnuggets.com)
  • For model stability, the authors investigate, analytically and in a simulation study, various stability measures and conclude that the Pearson correlation has the best properties. (hindawi.com)
  • The simulative stochastic model checker MC2 has been inspired by the idea of approximative LTL checking of deterministic simulation runs, proposed in [AP. (psu.edu)
  • Recent tool features include time-bounded reachability analysis for uniform CTMDPs and CSL model checking by discrete-event simulation. (psu.edu)
  • The use of the statistical software Minitab is integrated throughout the book, giving readers valuable experience with computer simulation and problem-solving techniques. (booktopia.com.au)
  • A statistical model is presented for computing probabilities that proteins are present in a sample on the basis of peptides assigned to tandem mass (MS/MS) spectra acquired from a proteolytic digest of the sample. (nih.gov)
  • Using peptide assignments to spectra generated from a sample of 18 purified proteins, as well as complex H. influenzae and Halobacterium samples, the model is shown to produce probabilities that are accurate and have high power to discriminate correct from incorrect protein identifications. (nih.gov)
  • A logistic regression modeling technique was used to clarify the relationship among the probabilities of minor, serious, and fatal injury to the rider and the influence of risk factors in accidents: opposing vehicle contact point, motorcycle contact point, opposing vehicle speed, motorcycle speed, relative heading angle of impact, and helmet use. (sae.org)
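As a schematic of the kind of model described (simulated crash records and invented coefficients, not the paper's accident data), a logistic regression for serious-injury risk can be fitted by maximizing the Bernoulli log-likelihood:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    n = 400
    speed = rng.uniform(10, 80, n)                  # hypothetical impact speed (km/h)
    helmet = rng.integers(0, 2, n)                  # 1 = helmet worn
    logit = -4 + 0.08 * speed - 1.2 * helmet        # invented "true" effects
    serious = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([np.ones(n), speed, helmet])

    def neg_loglik(beta):
        eta = X @ beta
        return np.sum(np.log1p(np.exp(eta)) - serious * eta)   # negative Bernoulli log-likelihood

    fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
    odds_ratios = np.exp(fit.x[1:])
    print(f"odds ratio per km/h of speed: {odds_ratios[0]:.2f}, for helmet use: {odds_ratios[1]:.2f}")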
  • We present a probabilistic extension of logic programs below that allows for both relational probabilistic models and compact descriptions of conditional probabilities. (ubc.ca)
  • A relational probability model ( RPM ) or probabilistic relational model is a model in which the probabilities are specified on the relations, independently of the actual individuals. (ubc.ca)
  • The biomarkers identified in this way can then be used to develop models to predict the subgroups of patients for whom treatment with a newly developed drug will be more effective than the standard treatment. (meduniwien.ac.at)
  • Three steps were presented to be used in developing a model to predict exposure levels of ethylene-oxide (75218) (EtO) in the sterilization industry. (cdc.gov)
  • When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. (osti.gov)
  • In addition, the UMass team will develop, implement, and evaluate a model that predicts relations in a similarly integrated way and will extend the integrated model to predict events along with coreference and relations. (umass.edu)
  • The aim of these examples is to help the student to conceptually appreciate the problem and realistically formulate a simple mathematical model for its solution. (abebooks.com)