Data Interpretation, Statistical: Application of statistical procedures to analyze specific observed or assumed facts from a particular study.
Plant Bark: The outer layer of the woody parts of plants.
Reproducibility of Results: The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.
Software: Sequential operating programs and data which instruct the functioning of a digital computer.
Oligonucleotide Array Sequence Analysis: Hybridization of a nucleic acid sample to a very large set of OLIGONUCLEOTIDE PROBES, which have been attached individually in columns and rows to a solid support, to determine a BASE SEQUENCE, or to detect variations in a gene sequence, GENE EXPRESSION, or for GENE MAPPING.
Algorithms: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.
Computer Simulation: Computer-based representation of physical systems and phenomena such as chemical processes.
Gene Expression Profiling: The determination of the pattern of genes expressed at the level of GENETIC TRANSCRIPTION, under specific circumstances or in a specific cell.
Computational Biology: A field of biology concerned with the development of techniques for the collection and manipulation of biological data, and the use of such data to make biological discoveries or predictions. This field encompasses all computational methods and theories for solving biological problems, including manipulation of models and datasets.
Sequence Analysis, DNA: A multistage process that includes cloning, physical mapping, subcloning, determination of the DNA SEQUENCE, and information analysis.
Polymerase Chain Reaction: In vitro method for producing large amounts of specific DNA or RNA fragments of defined length and sequence from small amounts of short oligonucleotide flanking sequences (primers). The essential steps include thermal denaturation of the double-stranded target molecules, annealing of the primers to their complementary sequences, and extension of the annealed primers by enzymatic synthesis with DNA polymerase. The reaction is efficient, specific, and extremely sensitive. Uses for the reaction include disease diagnosis, detection of difficult-to-isolate pathogens, mutation analysis, genetic testing, DNA sequencing, and analyzing evolutionary relationships.
Sensitivity and Specificity: Binary classification measures used to assess test results. Sensitivity, or recall, is the proportion of actual positives correctly identified; specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)
Observer Variation: The failure by the observer to measure or identify a phenomenon accurately, resulting in an error. Sources of this error may be the observer missing an abnormality, faulty technique resulting in an incorrect test measurement, or misinterpretation of the data. The two varieties are inter-observer variation (the amount observers vary from one another when reporting on the same material) and intra-observer variation (the amount one observer varies between observations when reporting more than once on the same material).
Image Interpretation, Computer-Assisted: Methods developed to aid in the interpretation of ultrasound, radiographic images, etc., for the diagnosis of disease.
Radiology: A specialty concerned with the use of x-rays and other forms of radiant energy in the diagnosis and treatment of disease.
Time Factors: Elements of limited time intervals, contributing to particular results or situations.
Diagnostic Errors: Incorrect diagnoses after clinical examination or technical diagnostic procedures.
Models, Biological: Theoretical representations that simulate the behavior or activity of biological processes or diseases. For disease models in living animals, DISEASE MODELS, ANIMAL is available. Biological models include the use of mathematical equations, computers, and other electronic equipment.
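The Sensitivity and Specificity entry above reduces to two simple proportions on a confusion matrix; a minimal sketch (the counts below are hypothetical, for illustration only):

```python
def sensitivity(tp, fn):
    # Proportion of actual positives correctly identified (recall)
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of actual negatives correctly identified
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts for a diagnostic test
tp, fn, tn, fp = 90, 10, 80, 20
print(sensitivity(tp, fn))  # 0.9
print(specificity(tn, fp))  # 0.8
```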

*  Uncategorized - Statistical Data
Statistics as a science deals with all aspects of data including collection, organization, interpretation and conclusion making ... This means that where there are large or even small bodies of data there are always statistical methods applied to that data to ... This means that statistical data and analysis as well as predictions are all just informed probabilities and they will never be ... Then smart data analysts use statistical analysis and try to find every customer buying patterns so they can then intervene and ...
*  Data interpretation Jobs - Data interpretation Job Vacancy -
Find Latest Data interpretation Job vacancies for Freshers & Experienced across Top Companies. ... Apply to 34 Data interpretation Jobs on, UAE's Best Online Job Portal. ... Summary: You will be responsible for solving problems that require data science (big data/coding/machine learning/statistical) ... Keyskills: data... Data Engineer. Summary: Global client of ours is seeking a Lead Data Engineer to join their dedicated ...
*  Some benefits of dichotomization in psychiatric and criminological research - PDF
SOME NOTES ON STATISTICAL INTERPRETATION. Below I provide some basic notes on statistical interpretation for some selected ... Quantitative Data Analysis: Choosing a statistical test Prepared by the Office of Planning, Assessment, Research and Quality ... 1 SOME NOTES ON STATISTICAL INTERPRETATION Below I provide some basic notes on statistical interpretation for some selected ... there are also statistical solutions. Statistical solutions Our aim is not to give a detailed exposition of statistical ...
*  Basic descriptive and inferential statistical data analysis using Stata
4. Interpretation of output from Stata Entry Requirement: Professional 3-year Bachelor's degree/Advanced Diploma or Equivalent ... 1. Descriptive statistical methods for categorical and continuous data. 2. Constructing graphs and charts to display data. 3. ... Basic descriptive and inferential statistical data analysis using Stata. Estimated cost: 0 - R500 ... with the necessary skills to perform basic analysis of categorical and continuous quantitative data using Stata. This will be ...
*  Learning From Data: Semiparametric Models Versus Faith-based... : Epidemiology
... but also ignoring the data themselves, and thus failing to adapt a model to the new information the data provide. Statistical ... In this case, such estimators have meaningful interpretation even if some model assumptions are incorrect when those arising ... by analyses of a majority of researchers using statistical inference from data to support scientific hypotheses. ... what can be learned from data and what aspects of some models are simply not deducible from data. ...
*  Statistical Interpretation of Toxicity Data | SpringerLink
Many of the topics discussed in this chapter pertain to experimental data in general, but the context of their use and examples ... The discussion focuses on the statistical interpretation of data rather than on the statistical procedures used in the data ... Safety Factor Toxicity Data Linear Extrapolation Tumor Rate Statistical Interpretation These keywords were added by machine and ... Gaylor D.W. (1987) Statistical Interpretation of Toxicity Data. In: Tardiff R.G., Rodricks J.V. (eds) Toxic Substances and ...
*  Browsing Faculty and Researcher Data and Papers by Subject "Data Interpretation, Statistical"
... Statistical evaluation of coincident prolactin and luteinizing hormone pulses during the normal menstrual cycle  Steiner, ... First, we sought to develop statistical criteria by which it could be established that the coincident occurrence of pulses of ...
*  ASQ/ANSI/ISO 16269-7:2001: Statistical interpretation of data - Part 7: Median - Estimation and confidence intervals | ASQ
ASQ/ANSI/ISO 16269-7:2001: Statistical interpretation of data - Part 7: Median - Estimation and confidence intervals. PDF, 20 ... ASQ/ANSI/ISO 16269-4:2010: Statistical interpretation of data - Part 4: Detection and treatment of outliers ...
*  Statistical interpretation of medical data of patients with alcohol abuse - Open Medicine - Volume 7, Issue 4 (2012) - PSJD -...
Multivariate statistical methods (cluster analysis and principal components analysis) were used to assess the data collection. ... 18] D. L. Massart and L. Kaufman: The interpretation of analytical chemical data by the use of cluster analysis, J. Wiley & ... 4] D. A. Leon and J. McCambridge: "Liver cirrhosis mortality rates in Britain from 1950 to 2002: an analysis of routine data", ... An attempt is made to assess a set of biochemical, kinetic and anthropometric data for patients suffering from alcohol abuse ( ...
*  WHO IRIS: Epidemiological and sociocultural study of burn patients in Alexandria, Egypt
Data Interpretation, Statistical. Demography. Social Factors. Subject: Burns. URI: ...
*  Volume 3, Issue 1, January 2008: Knowledge Translation and Systematic Reviews | National Rehabilitation Information Center
Data Collection. *Data Interpretation, Statistical. *Decision Making. *Decision Support Techniques. *Dissemination. *Education/ ... The data collection has started in August 2006 and the results will be published independently of the study's outcome. TRIAL ... We describe a Health Canada-funded randomized trial in which quantitative and qualitative data will be gathered in 20 general ... National Spinal Cord Injury Statistical Center (NSCISC). Project Number: H133A060039. ...
*  Center for Theoretical & Mathematical Sciences » People
Data Interpretation, Statistical • Developmental Biology • Diet • Dietary Supplements • Diffusion • Dinitrophenols • Diptera • ... Statistical • Models, Theoretical • Molecular Biology • Molecular Epidemiology • Molecular Sequence Data • Molting • ...
*  Ruth A. Anderson, Professor Emerita
Administrative Personnel • Communication • Data Interpretation, Statistical • Decision Making • Decision Making, Organizational ... Day, L. and Anderson, R. A. and Mueller, C. and Hunt-McKinney, S. and Porter K. and Corazzini, K. N., Data collection and ... Thomas, J. B. and McDaniel Jr, R. R. and Anderson, R. A., Hospitals as interpretation systems., Health services research, vol. ... Hughes, L. C. and Anderson, R. A., Issues regarding aggregation of data in nursing systems research., Journal of nursing ...
*  Pankaj K. Agarwal, RJR Nabisco Professor of Computer Science and Professor of Mathematics and Chair of Computer Science and...
Data Interpretation, Statistical • DNA Transposable Elements • Ecology • Ecosystem • Forecasting • Models, Biological • Models ... and data structures.. Keywords: Adaptation, Biological • Algorithms • Amino Acid Sequence • Approximation algorithms • Base ... Chemical • Models, Molecular • Models, Statistical • Models, Theoretical • Navigation • Plant Transpiration • Plants • Protein ...
*  General anesthesia with sevoflurane decreases myocardial blood volume and hyperemic blood flow in healthy humans.
Data related to the influence of general anesthesia on the normal myocardial circulation are limited. In t ... Data Interpretation, Statistical. Echocardiography / methods. Female. Heart / drug effects*. Humans. Hyperemia / ... Data related to the influence of general anesthesia on the normal myocardial circulation are limited. In this study, we ...
*  Analysis of 87 patients with Löfgren's syndrome and the pattern of seasonality of subacute sarcoidosis.
Data Interpretation, Statistical. Female. Humans. Male. Middle Aged. Sarcoidosis / diagnosis, pathology*. Seasons*. Sex Factors ...
*  Informative Dorfman screening.
Data Interpretation, Statistical*. Gonorrhea / epidemiology*. Humans. Mass Screening / methods*. Nebraska / epidemiology. Risk ... Previous Document: Local multiplicity adjustment for the spatial scan statistic using the Gumbel distribution.. Next Document: ... We apply our methods to chlamydia and gonorrhea data collected recently in Nebraska as part of the Infertility Prevention ...
*  Measurement invariance of the perceived stress scale and latent mean differences across gender and time.
Data Interpretation, Statistical. Factor Analysis, Statistical. Female. Humans. Longitudinal Studies. Male. Middle Aged. ... These findings may aid in the interpretation of results when examining stressors and counter-stress in clinical samples where ...
*  Variance matters: the shape of a datum.
... choice data are most often plotted and analyzed as logarithmic transforms of ratios of responses and of ratios of reinforcers ... Data Interpretation, Statistical*. Discrimination (Psychology) / physiology. Humans. Models, Psychological*. Models, ... However, linear regression of this type requires that the log choice data be normally distributed, of equal variance for each ... In the quantitative analysis of behaviour, choice data are most often plotted and analyzed as logarithmic transforms of ratios ...
*  Incision-to-delivery interval and neonatal wellbeing during cesarean section.
Data Interpretation, Statistical. Female. Fetal Hypoxia / etiology*. Gestational Age. Humans. Infant, Newborn. Pregnancy. ...
*  Saúde Pública - Data on the migration of health-care workers: sources, uses, and challenges
Data interpretation, Statistical; Developing countries; Developed countries; (source: MeSH, NLM). ... There is also a conflict between the wide range of potential sources of data and the poor statistical evidence on the migration ... Using existing data more effectively. Many countries claim they lack data on health-personnel migration, but many sources could ... Each data source has its own strengths and limitations. It is better to use existing data than wait for an ideal system to be ...
*  A systematic review of models to predict recruitment to multicentre clinical trials
Clinical Trials Data Monitoring Committees; Data Interpretation, Statistical; Statistics; Poisson Distribution; normal ... Data was extracted from the identified papers using a pre-defined data-extraction form. Data was extracted by KB initially and ... As more trial specific accrual data becomes available, the prediction can be refined by bringing this data into the calculation ... Summary of Study Data Extraction. summary of data extraction from selected papers. ...
*  Attitudes to Discrimination in Scotland
Data interpretation: statistical modelling. For the more complex analysis in this report we have used logistic regression ... The data in the report are taken from a module of questions asked in the 2002 Scottish Social Attitudes Survey. The survey ... Data were weighted to take account of the fact that not all households or individuals had the same probability of selection for ... All the percentages presented in this report are based on weighted data, the unweighted sample sizes are shown in the tables. ...
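The logistic regression mentioned above turns a linear combination of covariates into a probability; a minimal sketch of the prediction step (the coefficients here are made up, not the report's fitted model):

```python
import math

def predict_prob(coefs, xs):
    """Predicted probability from a logistic regression:
    coefs = (intercept, b1, b2, ...), xs = covariate values."""
    eta = coefs[0] + sum(b * x for b, x in zip(coefs[1:], xs))
    return 1.0 / (1.0 + math.exp(-eta))

# With a zero linear predictor the predicted probability is 0.5
print(predict_prob((0.0, 1.0), (0.0,)))  # 0.5
```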

Carl Barks: "United States Social Security Death Index," index, FamilySearch (
Generalizability theory: Generalizability theory, or G Theory, is a statistical framework for conceptualizing, investigating, and designing reliable observations. It is used to determine the reliability (i.
Mac OS X Server 1.0
Cellular microarray: A cellular microarray is a laboratory tool that allows for the multiplex interrogation of living cells on the surface of a solid support. The support, sometimes called a "chip", is spotted with varying materials, such as antibodies, proteins, or lipids, which can interact with the cells, leading to their capture on specific spots.
Clonal Selection Algorithm: In artificial immune systems, clonal selection algorithms are a class of algorithms inspired by the clonal selection theory of acquired immunity, which explains how B and T lymphocytes improve their response to antigens over time, a process called affinity maturation. These algorithms focus on the Darwinian attributes of the theory, where selection is inspired by the affinity of antigen-antibody interactions, reproduction is inspired by cell division, and variation is inspired by somatic hypermutation.
Interval boundary element method: The interval boundary element method is the classical boundary element method with interval parameters.
Gene signature: A gene signature is a group of genes in a cell whose combined expression pattern ... Itadani H, Mizuarai S, Kotani H. Can systems biology understand pathway activation?
PSI Protein Classifier: PSI Protein Classifier is a program generalizing the results of both successive and independent iterations of the PSI-BLAST program. It determines whether the proteins found by PSI-BLAST belong to known families.
DNA sequencer: A DNA sequencer is a scientific instrument used to automate the DNA sequencing process. Given a sample of DNA, a DNA sequencer is used to determine the order of the four bases: G (guanine), C (cytosine), A (adenine) and T (thymine).
Thermal cycler
Assay sensitivity: Assay sensitivity is a property of a clinical trial defined as the ability of a trial to distinguish an effective treatment from a less effective or ineffective intervention. Without assay sensitivity, a trial is not internally valid and is not capable of comparing the efficacy of two interventions.
Thomas Kolb
Temporal analysis of products: Temporal Analysis of Products (TAP), (TAP-2), (TAP-3) is an experimental technique for studying
Prescription cascade: Prescription cascade refers to the process whereby the side effects of drugs are misdiagnosed as symptoms of another problem, resulting in further prescriptions and further side effects and unanticipated drug interactions. This may lead to further misdiagnoses and further symptoms.
Matrix model:

(1/9497) A method for calculating age-weighted death proportions for comparison purposes.

OBJECTIVE: To introduce a method for calculating age-weighted death proportions (wDP) for comparison purposes. MATERIALS AND METHODS: A methodological study using secondary data from the municipality of São Paulo, Brazil (1980-1994) was carried out. First, deaths are weighted in terms of years of potential life lost before the age of 100 years. Then, in order to eliminate distortion of comparisons among proportions of years of potential life lost before the age of 100 years (pYPLL-100), the denominator is set to that of a standard age distribution of deaths for all causes. Conventional death proportions (DP), pYPLL-100, and wDP were calculated. RESULTS: Populations in which deaths from a particular cause occur at older ages exhibit lower wDP than those in which deaths occur at younger ages. The sum of all cause-specific wDP equals one only when the test population has exactly the same age distribution of deaths for all causes as that of the standard population. CONCLUSION: Age-weighted death proportions improve the information given by conventional DP, and are strongly recommended for comparison purposes.  (+info)
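A minimal sketch of the wDP construction the abstract describes: weight each death by its years of potential life lost before age 100, and divide by the same weighted total taken over a standard all-cause age distribution of deaths (the ages below are made up for illustration):

```python
def ypll_100(age_at_death):
    # Years of potential life lost before age 100
    return max(0, 100 - age_at_death)

def weighted_death_proportion(cause_ages, standard_all_cause_ages):
    """Age-weighted death proportion: cause-specific YPLL-100 over the
    YPLL-100 total of a standard all-cause age distribution of deaths."""
    numerator = sum(ypll_100(a) for a in cause_ages)
    denominator = sum(ypll_100(a) for a in standard_all_cause_ages)
    return numerator / denominator

# Deaths at younger ages yield a larger wDP, as the abstract notes
print(weighted_death_proportion([40, 60], [40, 60, 80]))
```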

(2/9497) A review of statistical methods for estimating the risk of vertical human immunodeficiency virus transmission.

BACKGROUND: Estimation of the risk of vertical transmission of human immunodeficiency virus (HIV) has been complicated by the lack of a reliable diagnostic test for paediatric HIV infection. METHODS: A literature search was conducted to identify all statistical methods that have been used to estimate HIV vertical transmission risk. Although the focus of this article is the analysis of birth cohort studies, ad hoc studies are also reviewed. CONCLUSIONS: The standard method for estimating HIV vertical transmission risk is biased and inefficient. Various alternative analytical approaches have been proposed but all involve simplifying assumptions and some are difficult to implement. However, early diagnosis/exclusion of infection is now possible because of improvements in polymerase chain reaction technology and complex estimation methods should no longer be required. The best way to analyse studies conducted in breastfeeding populations is still unclear and deserves attention in view of the many intervention studies being planned or conducted in developing countries.  (+info)

(3/9497) Statistical inference by confidence intervals: issues of interpretation and utilization.

This article examines the role of the confidence interval (CI) in statistical inference and its advantages over conventional hypothesis testing, particularly when data are applied in the context of clinical practice. A CI provides a range of population values with which a sample statistic is consistent at a given level of confidence (usually 95%). Conventional hypothesis testing serves to either reject or retain a null hypothesis. A CI, while also functioning as a hypothesis test, provides additional information on the variability of an observed sample statistic (ie, its precision) and on its probable relationship to the value of this statistic in the population from which the sample was drawn (ie, its accuracy). Thus, the CI focuses attention on the magnitude and the probability of a treatment or other effect. It thereby assists in determining the clinical usefulness and importance of, as well as the statistical significance of, findings. The CI is appropriate for both parametric and nonparametric analyses and for both individual studies and aggregated data in meta-analyses. It is recommended that, when inferential statistical analysis is performed, CIs should accompany point estimates and conventional hypothesis tests wherever possible.  (+info)
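The point made above, that a CI conveys both the magnitude and the precision of an estimate, is easy to see in code; a minimal sketch using a normal approximation (a t critical value would be more exact for small samples):

```python
import math

def mean_ci_95(xs, z=1.96):
    """Approximate 95% confidence interval for a sample mean."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                        # standard error
    return m - z * se, m + z * se

# The interval is centred on the sample mean; its width reflects precision
lo, hi = mean_ci_95([4.1, 5.2, 6.3, 4.8, 5.6])
print(lo, hi)
```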

(4/9497) Incidence and duration of hospitalizations among persons with AIDS: an event history approach.

OBJECTIVE: To analyze hospitalization patterns of persons with AIDS (PWAs) in a multi-state/multi-episode continuous time duration framework. DATA SOURCES: PWAs on Medicaid identified through a match between the state's AIDS Registry and Medicaid eligibility files; hospital admission and discharge dates identified through Medicaid claims. STUDY DESIGN: Using a Weibull event history framework, we model the hazard of transition between hospitalized and community spells, incorporating the competing risk of death in each of these states. Simulations are used to translate these parameters into readily interpretable estimates of length of stay, the probability that a hospitalization will end in death, and the probability that a nonhospitalized person will be hospitalized within 90 days. PRINCIPAL FINDINGS: In multivariate analyses, participation in a Medicaid waiver program offering case management and home care was associated with hospital stays 1.3 days shorter than for nonparticipants. African American race and Hispanic ethnicity were associated with hospital stays 1.2 days and 1.0 day longer than for non-Hispanic whites; African Americans also experienced more frequent hospital admissions. Residents of the high-HIV-prevalence area of the state had more frequent admissions and stays two days longer than those residing elsewhere in the state. Older PWAs experienced less frequent hospital admissions but longer stays, with hospitalizations of 55-year-olds lasting 8.25 days longer than those of 25-year-olds. CONCLUSIONS: Much socioeconomic and geographic variability exists both in the incidence and in the duration of hospitalization among persons with AIDS in New Jersey. Event history analysis provides a useful statistical framework for analysis of these variations, deals appropriately with data in which duration of observation varies from individual to individual, and permits the competing risk of death to be incorporated into the model. 
Transition models of this type have broad applicability in modeling the risk and duration of hospitalization in chronic illnesses.  (+info)
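The Weibull event-history framework above is driven by the Weibull hazard function; a minimal sketch (the parameters here are illustrative, not the study's estimates):

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1);
    shape > 1 gives a rising hazard over time, shape < 1 a falling one."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# shape = 1 reduces to a constant (exponential) hazard of 1/scale
print(weibull_hazard(5.0, 1.0, 2.0))  # 0.5
```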

(5/9497) Quantitative study of the variability of hepatic iron concentrations.

BACKGROUND: The hepatic iron concentration (HIC) is widely used in clinical practice and in research; however, data on the variability of HIC among biopsy sites are limited. One aim of the present study was to determine the variability of HIC within both healthy and cirrhotic livers. METHODS: Using colorimetric methods, we determined HIC in multiple large (microtome) and small (biopsy-sized) paraffin-embedded samples in 11 resected livers with end-stage cirrhosis. HIC was also measured in multiple fresh samples taken within 5 mm of each other ("local" samples) and taken at sites 3-5 cm apart ("remote" samples) from six livers with end-stage cirrhosis and two healthy autopsy livers. RESULTS: The within-organ SD of HIC was 13-1553 microg/g (CV, 3.6-55%) for microtome samples and 60-2851 microg/g (CV, 15-73%) for biopsy-sized samples. High variability of HIC was associated with mild to moderate iron overload, because the HIC SD increased with increasing mean HIC (P <0.002). Livers with mean HIC >1000 microg/g exhibited significant biological variability in HIC between sites separated by 3-5 cm (remote sites; P <0.05). The SD was larger for biopsy-sized samples than for microtome samples (P = 0.02). CONCLUSION: Ideally, multiple hepatic sites would be sampled to obtain a representative mean HIC.  (+info)
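The CVs reported above are simply the sample standard deviation divided by the mean; a minimal sketch (the HIC values below are hypothetical):

```python
import math

def coefficient_of_variation(xs):
    """CV = sample standard deviation / mean, as used for the HIC figures."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return sd / m

# Hypothetical HIC measurements (microg/g) from multiple biopsy sites
print(coefficient_of_variation([900, 1100, 1000]))  # 0.1
```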

(6/9497) A simulation study of confounding in generalized linear models for air pollution epidemiology.

Confounding between the model covariates and causal variables (which may or may not be included as model covariates) is a well-known problem in regression models used in air pollution epidemiology. This problem is usually acknowledged but hardly ever investigated, especially in the context of generalized linear models. Using synthetic data sets, the present study shows how model overfit, underfit, and misfit in the presence of correlated causal variables in a Poisson regression model affect the estimated coefficients of the covariates and their confidence levels. The study also shows how this effect changes with the ranges of the covariates and the sample size. There is qualitative agreement between these study results and the corresponding expressions in the large-sample limit for the ordinary linear models. Confounding of covariates in an overfitted model (with covariates encompassing more than just the causal variables) does not bias the estimated coefficients but reduces their significance. The effect of model underfit (with some causal variables excluded as covariates) or misfit (with covariates encompassing only noncausal variables), on the other hand, leads to not only erroneous estimated coefficients, but a misguided confidence, represented by large t-values, that the estimated coefficients are significant. The results of this study indicate that models which use only one or two air quality variables, such as particulate matter [less than and equal to] 10 microm and sulfur dioxide, are probably unreliable, and that models containing several correlated and toxic or potentially toxic air quality variables should also be investigated in order to minimize the situation of model underfit or misfit.  (+info)
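A minimal sketch of generating the kind of synthetic data this study uses: a causal covariate x1, a correlated non-causal covariate x2, and Poisson counts driven by x1 alone (the coefficients and correlation structure here are made up, not the study's):

```python
import math
import random

def rpois(lam, rng):
    """Poisson draw via Knuth's multiplication method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
data = []
for _ in range(1000):
    z = rng.gauss(0, 1)          # shared component inducing correlation
    x1 = z + rng.gauss(0, 0.5)   # causal covariate
    x2 = z + rng.gauss(0, 0.5)   # correlated but non-causal covariate
    y = rpois(math.exp(0.1 + 0.3 * x1), rng)
    data.append((x1, x2, y))
# Fitting Poisson regressions that include or exclude x2 would then
# exhibit the overfit and underfit effects the study describes.
```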

(7/9497) Wavelet transform to quantify heart rate variability and to assess its instantaneous changes.

Heart rate variability is a recognized parameter for assessing autonomous nervous system activity. Fourier transform, the most commonly used method to analyze variability, does not offer an easy assessment of its dynamics because of limitations inherent in its stationary hypothesis. Conversely, wavelet transform allows analysis of nonstationary signals. We compared the respective yields of Fourier and wavelet transforms in analyzing heart rate variability during dynamic changes in autonomous nervous system balance induced by atropine and propranolol. Fourier and wavelet transforms were applied to sequences of heart rate intervals in six subjects receiving increasing doses of atropine and propranolol. At the lowest doses of atropine administered, heart rate variability increased, followed by a progressive decrease with higher doses. With the first dose of propranolol, there was a significant increase in heart rate variability, which progressively disappeared after the last dose. Wavelet transform gave significantly better quantitative analysis of heart rate variability than did Fourier transform during autonomous nervous system adaptations induced by both agents and provided novel temporally localized information.  (+info)
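A wavelet transform decomposes an RR-interval series into time-localized scales, which is what makes it suitable for the nonstationary signals described above; a minimal one-level Haar transform as an illustration (the paper does not specify which wavelet it used; Haar is simply the simplest):

```python
import math

def haar_dwt_level(x):
    """One level of the orthonormal Haar wavelet transform:
    returns (approximation, detail) coefficients; len(x) must be even."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

# A constant RR series has zero detail (no variability at that scale)
a, d = haar_dwt_level([0.8, 0.8, 0.8, 0.8])
print(d)  # [0.0, 0.0]
```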

(8/9497) Excess of high activity monoamine oxidase A gene promoter alleles in female patients with panic disorder.

A genetic contribution to the pathogenesis of panic disorder has been demonstrated by clinical genetic studies. Molecular genetic studies have focused on candidate genes suggested by the molecular mechanisms implied in the action of drugs utilized for therapy or in challenge tests. One class of drugs effective in the treatment of panic disorder is represented by monoamine oxidase A inhibitors. Therefore, the monoamine oxidase A gene on chromosome X is a prime candidate gene. In the present study we investigated a novel repeat polymorphism in the promoter of the monoamine oxidase A gene for association with panic disorder in two independent samples (German sample, n = 80; Italian sample, n = 129). Two alleles (3 and 4 repeats) were most common and constituted >97% of the observed alleles. Functional characterization in a luciferase assay demonstrated that the longer alleles (3a, 4 and 5) were more active than allele 3. Among females of both the German and the Italian samples of panic disorder patients (combined, n = 209) the longer alleles (3a, 4 and 5) were significantly more frequent than among females of the corresponding control samples (combined, n = 190, chi2 = 10.27, df = 1, P = 0.001). Together with the observation that inhibition of monoamine oxidase A is clinically effective in the treatment of panic disorder these findings suggest that increased monoamine oxidase A activity is a risk factor for panic disorder in female patients.  (+info)
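The chi2 = 10.27 (df = 1) reported above is a standard Pearson statistic on a 2x2 allele-by-group table; a minimal sketch (the counts below are hypothetical, not the study's):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]], 1 df."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical long-allele vs short-allele counts in patients and controls
print(chi_square_2x2(10, 20, 20, 10))
```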

  • variance
  • Variance matters: the shape of a datum. (
  • However, linear regression of this type requires that the log choice data be normally distributed, of equal variance for each log reinforcer ratio, and that the x (log reinforcer ratio) measures be fixed with no variance. (
  • We argue that, while log transformed choice data may be normally distributed, log reinforcer ratios do have variance, and because these measures derive from a binomial process, log reinforcer ratio distributions will be non-normal and skewed to more extreme values. (
  • If an individual series of observations is generated from simulations that employ a given variance of noise that equals the observed variance of our data series of interest, and a given length (say, 100 points), a large number of such simulated series (say, 100,000 series) can be generated. (
  • The tolerance interval differs from a confidence interval in that the confidence interval bounds a single-valued population parameter (the mean or the variance, for example) with some confidence, while the tolerance interval bounds the range of data values that includes a specific proportion of the population. (
  • assess
  • An attempt is made to assess a set of biochemical, kinetic and anthropometric data for patients suffering from alcohol abuse (alcoholics) and healthy patients (non-alcoholics). (
  • The control chart, also known as the 'Shewhart chart' or 'process-behavior chart' is a statistical tool intended to assess the nature of variation in a process and to facilitate forecasting and management. (
  • population
  • A tolerance interval is a statistical interval within which, with some confidence level, a specified proportion of a sampled population falls. (
  • Baines arrived in India in 1870, approximately halfway through the five-year-long attempt to collect statistical population data, which was the first such exercise by the Raj administration. (
  • His obituary in the Journal of the Royal Statistical Society describes the changes as being "first the separation of caste from religion and, secondly, the substitution of the population subsisting by an occupation for that exercising it. (
  • work
  • Baines spent much of his time organising the censuses and also analysing and producing reports based on their data, which were "widely recognised as the work of a brilliant ethnographer and statistician", according to an obituary published in Nature. (
  • Results
  • These 100,000 series can then be analysed individually to calculate estimated trends in each series, and these results establish a distribution of estimated trends that are to be expected from such random data - see diagram. (
  • Many protocols include provisions for avoiding bias in the interpretation of results. (
  • approach
  • clearly such a constructed series would be trend-free, so as with the approach of using simulated data these series can be used to generate borderline trend values V and −V. In the above discussion the distribution of trends was calculated by simulation, from a large number of trials. (
  • The control charts: a statistical approach to the study of manufacturing process variation for the purpose of improving the economic effectiveness of the process. (
  • main
  • The main goal is to identify the data set structure, finding groups of similarity among the clinical parameters or among the patients. (
  • This paper presents information on the uses of statistics and those who use them, the strengths and limitations of the main data sources, and other challenges that need to be met to obtain good evidence on the migration of health workers. (
  • process
  • ISO/TR 14468:2010 assesses a measurement process where the characteristic(s) being measured is (are) in the form of attribute data (including nominal and ordinal data). (
  • error
  • They also lead to model comparisons, which assume equal normally distributed error around every data point, being incorrect. (
  • confidence
  • If such data are processed to produce a 95% confidence interval for the mean mileage of the model, it is, for example, possible to use it to project the mean or total gasoline consumption for the manufactured fleet of such autos over their first 5,000 miles of use. (
  • visual
  • Levey-Jennings chart is a graph on which quality control data are plotted to give a visual indication of whether a laboratory test is working well. The distance from the mean is measured in standard deviations (SD). (
  • sample
  • All the percentages presented in this report are based on weighted data, the unweighted sample sizes are shown in the tables. (
  • produce
  • Given a set of data and the desire to produce some kind of model of those data, there are a variety of functions that can be chosen for the fit. (
  • general
  • Many of the topics discussed in this chapter pertain to experimental data in general, but the context of their use and examples given are in the field of toxicology. (
  • Data related to the influence of general anesthesia on the normal myocardial circulation are limited. (
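The control-chart and Levey-Jennings snippets above both reduce to flagging points that fall too many SDs from the mean; a simplified sketch (real Shewhart charts typically estimate sigma from subgroup ranges rather than a pooled SD):

```python
import math

def shewhart_limits(xs, k=3.0):
    """Mean +/- k sample standard deviations, the classic control limits."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m - k * sd, m + k * sd

def out_of_control(xs, lcl, ucl):
    """Points falling outside the control limits."""
    return [x for x in xs if x < lcl or x > ucl]

# Limits from an in-control baseline, then a gross outlier is flagged
lcl, ucl = shewhart_limits([9.8, 10.1, 10.0, 9.9, 10.2])
print(out_of_control([9.9, 10.0, 12.5], lcl, ucl))  # [12.5]
```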