ROC Curve: A graphic means for assessing the ability of a screening test to discriminate between healthy and diseased persons; may also be used in other studies, e.g., distinguishing responses to faint stimuli from responses to nonstimuli.
Area Under Curve: A statistical means of summarizing information from a series of measurements on one individual. It is frequently used in clinical pharmacology, where the AUC from serum levels can be interpreted as the total uptake of whatever has been administered. As a plot of the concentration of a drug against time after a single dose of medicine, producing a standard-shaped curve, it is a means of comparing the bioavailability of the same drug made by different companies. (From Winslade, Dictionary of Clinical Research, 1992)
Sensitivity and Specificity: Binary classification measures used to assess test results. Sensitivity, or the recall rate, is the proportion of actual positives that are correctly identified. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)
Predictive Value of Tests: In screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease) is referred to as the predictive value of a positive test; the predictive value of a negative test is the probability that a person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test.
Reproducibility of Results: The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.
Biological Markers: Measurable and quantifiable biological parameters (e.g., specific enzyme concentration, specific hormone concentration, specific gene phenotype distribution in a population, presence of biological substances) which serve as indices for health- and physiology-related assessments, such as disease risk, psychiatric disorders, environmental exposure and its effects, disease diagnosis, metabolic processes, substance abuse, pregnancy, cell line development, epidemiologic studies, etc.
Algorithms: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.
Models, Statistical: Statistical formulations or analyses which, when applied to data and found to fit the data, are then used to verify the assumptions and parameters used in the analysis. Examples of statistical models are the linear model, binomial model, polynomial model, two-parameter model, etc.
Prospective Studies: Observation of a population for a sufficient number of persons over a sufficient number of years to generate incidence or mortality rates subsequent to the selection of the study group.
Observer Variation: The failure by the observer to measure or identify a phenomenon accurately, resulting in an error. Sources of this error include the observer's missing an abnormality, faulty technique resulting in incorrect test measurement, and misinterpretation of the data. Two varieties are inter-observer variation (the amount observers vary from one another when reporting on the same material) and intra-observer variation (the amount one observer varies between observations when reporting more than once on the same material).
Diagnosis, Computer-Assisted: Application of computer programs designed to assist the physician in solving a diagnostic problem.
Retrospective Studies: Studies used to test etiologic hypotheses in which inferences about an exposure to putative causal factors are derived from data relating to characteristics of persons under study or to events or experiences in their past. The essential feature is that some of the persons under study have the disease or outcome of interest and their characteristics are compared with those of unaffected persons.
Prognosis: A prediction of the probable outcome of a disease based on an individual's condition and the usual course of the disease as seen in similar situations.
Neural Networks (Computer): A computer architecture, implementable in either hardware or software, modeled after biological neural networks. Like the biological system, in which the processing capability is a result of the interconnection strengths between arrays of nonlinear processing nodes, computerized neural networks, often called perceptrons or multilayer connectionist models, consist of neuron-like units. A homogeneous group of units makes up a layer. These networks are good at pattern recognition. They are adaptive, performing tasks by example, and thus are better for decision-making than are linear learning machines or cluster analysis. They do not require explicit programming.
Logistic Models: Statistical models which describe the relationship between a qualitative dependent variable (that is, one which can take only certain discrete values, such as the presence or absence of a disease) and an independent variable. A common application is in epidemiology for estimating an individual's risk (probability of a disease) as a function of a given risk factor.
False Positive Reactions: Positive test results in subjects who do not possess the attribute for which the test is conducted; the labeling of healthy persons as diseased when screening in the detection of disease. (Last, A Dictionary of Epidemiology, 2d ed)
Radiography: Examination of any part of the body for diagnostic purposes by means of X-RAYS or GAMMA RAYS, recording the image on a sensitized surface (such as photographic film).
Data Interpretation, Statistical: Application of statistical procedures to analyze specific observed or assumed facts from a particular study.
Severity of Illness Index: Levels within a diagnostic group which are established by various measurement criteria applied to the seriousness of a patient's disorder.
Risk Assessment: The qualitative or quantitative estimation of the likelihood of adverse effects that may result from exposure to specified health hazards or from the absence of beneficial influences. (Last, Dictionary of Epidemiology, 1988)
Biometry: The use of statistical and mathematical methods to analyze biological observations and phenomena.
Image Interpretation, Computer-Assisted: Methods developed to aid in the interpretation of ultrasound, radiographic images, etc., for diagnosis of disease.
Likelihood Functions: Functions constructed from a statistical model and a set of observed data which give the probability of that data for various values of the unknown model parameters. Those parameter values that maximize the probability are the maximum likelihood estimates of the parameters.
Diagnostic Techniques, Endocrine: Methods and procedures for the diagnosis of diseases or dysfunction of the endocrine glands or demonstration of their physiological processes.
Diagnostic Techniques and Procedures: Methods, procedures, and tests performed to diagnose disease, disordered function, or disability.
Epidemiologic Methods: Research techniques that focus on study designs and data gathering methods in human and animal populations.
Diagnostic Tests, Routine: Diagnostic procedures, such as laboratory tests and x-rays, routinely performed on all individuals or specified categories of individuals in a specified situation, e.g., patients being admitted to the hospital. These include routine tests administered to neonates.
Tumor Markers, Biological: Molecular products metabolized and secreted by neoplastic tissue and characterized biochemically in cells or body fluids. They are indicators of tumor stage and grade as well as useful for monitoring responses to treatment and predicting recurrence. Many chemical groups are represented, including hormones, antigens, amino and nucleic acids, enzymes, polyamines, and specific cell membrane proteins and lipids.
Radiographic Image Enhancement: Improvement in the quality of an x-ray image by use of an intensifying screen, tube, or filter and by optimum exposure techniques. Digital processing methods are often employed.
Risk Factors: An aspect of personal behavior or lifestyle, environmental exposure, or inborn or inherited characteristic which, on the basis of epidemiologic evidence, is known to be associated with a health-related condition considered important to prevent.
Case-Control Studies: Studies which start with the identification of persons with a disease of interest and a control (comparison, referent) group without the disease. The relationship of an attribute to the disease is examined by comparing diseased and non-diseased persons with regard to the frequency or levels of the attribute in each group.
Discriminant Analysis: A statistical analytic technique used with discrete dependent variables, concerned with separating sets of observed values and allocating new values. It is sometimes used instead of regression analysis.
Computer Simulation: Computer-based representation of physical systems and phenomena such as chemical processes.
Time Factors: Elements of limited time intervals, contributing to particular results or situations.
Diagnostic Techniques, Ophthalmological: Methods and procedures for the diagnosis of diseases of the eye or of vision disorders.
Artificial Intelligence: Theory and development of COMPUTER SYSTEMS which perform tasks that normally require human intelligence. Such tasks may include speech recognition, LEARNING; VISUAL PERCEPTION; MATHEMATICAL COMPUTING; reasoning, PROBLEM SOLVING, DECISION-MAKING, and translation of language.
Labor Onset: The beginning of true OBSTETRIC LABOR, which is characterized by cyclic uterine contractions of increasing frequency, duration, and strength causing CERVICAL DILATATION to begin (LABOR STAGE, FIRST).
Reference Values: The range or frequency distribution of a measurement in a population (of organisms, organs, or things) that has not been selected for the presence of disease or abnormality.
Liver Cirrhosis: Liver disease in which the normal microcirculation, the gross vascular anatomy, and the hepatic architecture have been variably destroyed and altered, with fibrous septa surrounding regenerated or regenerating parenchymal nodules.
Decision Support Techniques: Mathematical or statistical procedures used as aids in making a decision. They are frequently used in medical decision-making.
Solitary Pulmonary Nodule: A single lung lesion characterized by a small round mass of tissue, usually less than 1 cm in diameter, that can be detected by chest radiography. A solitary pulmonary nodule can be associated with neoplasm, tuberculosis, cyst, or other anomalies in the lung, the CHEST WALL, or the PLEURA.
Radiographic Image Interpretation, Computer-Assisted: Computer systems or networks designed to provide radiographic interpretive information.
Immunoassay: A technique using antibodies for identifying or quantifying a substance. Usually the substance being studied serves as antigen both in antibody production and in measurement of antibody by the test substance.
Cohort Studies: Studies in which subsets of a defined population are identified. These groups may or may not be exposed to factors hypothesized to influence the probability of the occurrence of a particular disease or other outcome. Cohorts are defined populations which, as a whole, are followed in an attempt to determine distinguishing subgroup characteristics.
Cross-Sectional Studies: Studies in which the presence or absence of disease or other health-related variables is determined in each member of the study population, or in a representative sample, at one particular time. This contrasts with LONGITUDINAL STUDIES, which are followed over a period of time.
Image Enhancement: Improvement of the quality of a picture by various techniques, including computer processing, digital filtering, echocardiographic techniques, light and ultrastructural MICROSCOPY, fluorescence spectrometry and microscopy, scintigraphy, and in vitro image processing at the molecular level.
Multivariate Analysis: A set of techniques used when variation in several variables has to be studied simultaneously. In statistics, multivariate analysis is interpreted as any analytic method that allows simultaneous study of two or more dependent variables.
Treatment Outcome: Evaluation undertaken to assess the results or consequences of management and procedures used in combating disease in order to determine the efficacy, effectiveness, safety, and practicability of these interventions in individual cases or series.
Enzyme-Linked Immunosorbent Assay: An immunoassay utilizing an antibody labeled with an enzyme marker such as horseradish peroxidase. While either the enzyme or the antibody is bound to an immunosorbent substrate, both retain their biologic activity; the change in enzyme activity as a result of the enzyme-antibody-antigen reaction is proportional to the concentration of the antigen and can be measured spectrophotometrically or with the naked eye. Many variations of the method have been developed.
Calibration: Determination, by measurement or comparison with a standard, of the correct value of each scale reading on a meter or other measuring instrument, or determination of the settings of a control device that correspond to particular values of voltage, current, frequency, or other output.
Optic Nerve Diseases: Conditions which produce injury or dysfunction of the second cranial (optic) nerve, which is generally considered a component of the central nervous system. Damage to optic nerve fibers may occur at or near their origin in the retina, at the optic disk, or in the nerve, optic chiasm, optic tract, or lateral geniculate nuclei. Clinical manifestations may include decreased visual acuity and contrast sensitivity, impaired color vision, and an afferent pupillary defect.
Tomography, X-Ray Computed: Tomography using x-ray transmission and a computer algorithm to reconstruct the image.
Nephelometry and Turbidimetry: Chemical analysis based on the phenomenon whereby light, passing through a medium with dispersed particles of a different refractive index from that of the medium, is attenuated in intensity by scattering. In turbidimetry, the intensity of light transmitted through the medium (the unscattered light) is measured. In nephelometry, the intensity of the scattered light is measured, usually, but not necessarily, at right angles to the incident light beam.
Reference Standards: A basis of value established for the measure of quantity, weight, extent, or quality, e.g., weight standards, standard solutions, methods, techniques, and procedures used in diagnosis and therapy.
Support Vector Machines: A set of related supervised machine learning methods that analyze data and recognize patterns, used for classification and regression analysis.
Natriuretic Peptide, Brain: A PEPTIDE that is secreted by the BRAIN and the HEART ATRIA, stored mainly in cardiac ventricular MYOCARDIUM. It can cause NATRIURESIS, DIURESIS, and VASODILATION, and inhibits secretion of RENIN and ALDOSTERONE. It improves heart function. It contains 32 AMINO ACIDS.
Cervical Ripening: A change in the CERVIX UTERI with respect to its readiness to relax. The cervix normally becomes softer, more flexible, more distensible, and shorter in the final weeks of PREGNANCY. These cervical changes can also be chemically induced (LABOR, INDUCED).
Probability: The study of chance processes or the relative frequency characterizing a chance process.
Mass Screening: Organized periodic procedures performed on large groups of people for the purpose of detecting disease.
Early Diagnosis: Methods to determine in patients the nature of a disease or disorder at its early stage of progression. Generally, early diagnosis improves PROGNOSIS and TREATMENT OUTCOME.
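As a concrete illustration of the sensitivity, specificity, and predictive-value definitions above, here is a minimal sketch using made-up 2x2 screening counts (the numbers are hypothetical, not taken from any study cited here):

```python
# Hypothetical screening-test results: 100 diseased and 200 healthy subjects.
tp, fn = 90, 10    # diseased subjects: test positive / test negative
fp, tn = 30, 170   # healthy subjects:  test positive / test negative

sensitivity = tp / (tp + fn)  # proportion of diseased correctly detected -> 0.90
specificity = tn / (tn + fp)  # proportion of healthy correctly cleared   -> 0.85
ppv = tp / (tp + fp)          # predictive value of a positive test       -> 0.75
npv = tn / (tn + fn)          # predictive value of a negative test       -> ~0.944

print(sensitivity, specificity, ppv, npv)
```

Note that sensitivity and specificity are properties of the test itself, whereas the predictive values also depend on disease prevalence in the tested population, which is why the glossary entry ties them together.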

Validation of the Rockall risk scoring system in upper gastrointestinal bleeding.

BACKGROUND: Several scoring systems have been developed to predict the risk of rebleeding or death in patients with upper gastrointestinal bleeding (UGIB). These risk scoring systems have not been validated in a new patient population outside the clinical context of the original study. AIMS: To assess internal and external validity of a simple risk scoring system recently developed by Rockall and coworkers. METHODS: Calibration and discrimination were assessed as measures of validity of the scoring system. Internal validity was assessed using an independent but similar patient sample studied by Rockall and coworkers after developing the scoring system (Rockall's validation sample). External validity was assessed using patients admitted to several hospitals in Amsterdam (Vreeburg's validation sample). Calibration was evaluated by a chi-squared goodness-of-fit test, and discrimination was evaluated by calculating the area under the receiver operating characteristic (ROC) curve. RESULTS: Calibration indicated a poor fit in both validation samples for the prediction of rebleeding (p<0.0001, Vreeburg; p=0.007, Rockall), but a better fit for the prediction of mortality in both validation samples (p=0.2, Vreeburg; p=0.3, Rockall). The areas under the ROC curves were rather low in both validation samples for the prediction of rebleeding (0.61, Vreeburg; 0.70, Rockall), but higher for the prediction of mortality (0.73, Vreeburg; 0.81, Rockall). CONCLUSIONS: The risk scoring system developed by Rockall and coworkers is a clinically useful scoring system for stratifying patients with acute UGIB into high- and low-risk categories for mortality. For the prediction of rebleeding, however, the performance of this scoring system was unsatisfactory.
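The discrimination measure used in validation studies like this one, the area under the ROC curve, can be computed directly from a risk score via its rank-statistic (Mann-Whitney) interpretation: the probability that a randomly chosen case has a higher score than a randomly chosen non-case, with ties counted as one half. A minimal sketch with invented integer risk scores (illustrative only, not the Rockall data):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as P(score of a case > score of a non-case), counting ties as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented integer risk scores for illustration only.
deaths = [7, 6, 8, 5, 7]            # scores of patients who died
survivors = [2, 3, 5, 1, 4, 2, 6]   # scores of patients who survived
print(auc_mann_whitney(deaths, survivors))  # 33/35, about 0.94
```

An AUC of 0.5 corresponds to a score with no discriminating ability, and 1.0 to perfect separation of cases from non-cases.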

Computed radiography dual energy subtraction: performance evaluation when detecting low-contrast lung nodules in an anthropomorphic phantom.

A dedicated chest computed radiography (CR) system has an option of energy subtraction (ES) acquisition. Two imaging plates, rather than one, are separated by a copper filter to give a high-energy and a low-energy image. This study compares the diagnostic accuracy of conventional computed radiography to that of ES obtained with two radiographic techniques. One soft-tissue-only image was obtained at the conventional CR technique (s = 254) and the second was obtained at twice the radiation exposure (s = 131) to reduce noise. An anthropomorphic phantom with superimposed low-contrast lung nodules was imaged 53 times for each radiographic technique. Fifteen images had no nodules; 38 images had a total of 90 nodules placed on the phantom. Three chest radiologists read the three sets of images in a receiver operating characteristic (ROC) study. Significant differences in Az were found only between (1) the higher-exposure energy-subtracted images and the conventional-dose energy-subtracted images (P = .095, 90% confidence), and (2) the conventional CR and the energy-subtracted image obtained at the same technique (P = .024, 98% confidence). As a result of this study, energy-subtracted images cannot be substituted for conventional CR images when detecting low-contrast nodules, even when twice the exposure is used to obtain them.

Computerized analysis of abnormal asymmetry in digital chest radiographs: evaluation of potential utility.

The purpose of this study was to develop and test a computerized method for the fully automated analysis of abnormal asymmetry in digital posteroanterior (PA) chest radiographs. An automated lung segmentation method was used to identify the aerated lung regions in 600 chest radiographs. Minimal a priori lung morphology information was required for this gray-level thresholding-based segmentation. Consequently, segmentation was applicable to grossly abnormal cases. The relative areas of segmented right and left lung regions in each image were compared with the corresponding area distributions of normal images to determine the presence of abnormal asymmetry. Computerized diagnoses were compared with image ratings assigned by a radiologist. The ability of the automated method to distinguish normal from asymmetrically abnormal cases was evaluated by using receiver operating characteristic (ROC) analysis, which yielded an area under the ROC curve of 0.84. This automated method demonstrated promising performance in its ability to detect abnormal asymmetry in PA chest images. We believe this method could play a role in a picture archiving and communication system (PACS) environment to immediately identify abnormal cases and to function as one component of a multifaceted computer-aided diagnostic scheme.

Dose-response slope of forced oscillation and forced expiratory parameters in bronchial challenge testing.

In population studies, the provocative dose (PD) of bronchoconstrictor causing a significant decrement in lung function cannot be calculated for most subjects. Dose-response curves for carbachol were examined to determine whether this relationship can be summarized by means of a continuous index likely to be calculable for all subjects, namely the two-point dose-response slope (DRS) of mean resistance (Rm) and resistance at 10 Hz (R10) measured by the forced oscillation technique (FOT). Five doses of carbachol (320 microg each) were inhaled by 71 patients referred for investigation of asthma (n=16), chronic cough (n=15), nasal polyposis (n=8), chronic rhinitis (n=8), dyspnoea (n=8), urticaria (n=5), post-anaphylactic shock (n=4) and miscellaneous conditions (n=7). FOT resistance and forced expiratory volume in one second (FEV1) were measured in close succession. The PD of carbachol leading to a fall in FEV1 > or = 20% (PD20) or a rise in Rm or R10 > or = 47% (PD47,Rm and PD47,R10) were calculated by interpolation. DRS for FEV1 (DRSFEV1), Rm (DRSRm) and R10 (DRSR10) were obtained as the percentage change at the last dose divided by the total dose of carbachol. The sensitivity (Se) and specificity (Sp) of DRSRm, DRSR10, delta%Rm and delta%R10 in detecting spirometric bronchial hyperresponsiveness (BHR, fall in FEV1 > or = 20%) were assessed by receiver operating characteristic (ROC) curves. There were 23 (32%) "spirometric" reactors. PD20 correlated strongly with DRSFEV1 (r=-0.962; p=0.0001); PD47,Rm correlated significantly with DRSRm (r=-0.648; p=0.0001) and PD47,R10 with DRSR10 (r=-0.552; p=0.0001). DRSFEV1 correlated significantly with both DRSRm (r=0.700; p=0.0001) and DRSR10 (r=0.784; p=0.0001). The Se and Sp of the various FOT indices in correctly detecting spirometric BHR were as follows: DRSRm: Se=91.3%, Sp=81.2%; DRSR10: Se=91.3%, Sp=95.8%; delta%Rm: Se=86.9%, Sp=52.1%; and delta%R10: Se=91.3%, Sp=58.3%. 
Dose-response slopes of FOT resistance indices, especially the dose-response slope of resistance at 10 Hz, are proposed as simple quantitative indices of bronchial responsiveness that can be calculated for all subjects and may be useful in occupational epidemiology.
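Paired sensitivity/specificity figures like those reported above are typically read off an ROC analysis by scanning candidate cutoffs for the continuous index; one common rule selects the cutoff maximizing Youden's J = Se + Sp - 1. A hedged sketch with invented slope values (not this study's data; the variable names are hypothetical):

```python
def best_cutoff(pos, neg):
    """Scan candidate cutoffs; return (J, cutoff, Se, Sp) maximizing Youden's J."""
    best = None
    for c in sorted(pos + neg):
        se = sum(1 for v in pos if v >= c) / len(pos)  # reactors at/above cutoff
        sp = sum(1 for v in neg if v < c) / len(neg)   # non-reactors below cutoff
        j = se + sp - 1
        if best is None or j > best[0]:
            best = (j, c, se, sp)
    return best

# Invented dose-response slopes for illustration only.
reactors = [3.1, 4.0, 2.7, 5.2, 3.8]         # "spirometric" reactors
non_reactors = [0.9, 1.5, 2.9, 1.1, 0.6, 2.2]
j, cutoff, se, sp = best_cutoff(reactors, non_reactors)
print(cutoff, se, sp)  # 2.7 1.0 0.833...
```

Other cutoff criteria exist (e.g., requiring a minimum specificity), and the choice depends on the relative costs of false positives and false negatives.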

Relationship of glucose and insulin levels to the risk of myocardial infarction: a case-control study.

OBJECTIVE: To assess the relationship between dysglycemia and myocardial infarction in nondiabetic individuals. BACKGROUND: Nondiabetic hyperglycemia may be an important cardiac risk factor. The relationship between myocardial infarction and glucose, insulin, abdominal obesity, lipids and hypertension was therefore studied in South Asians, a group at high risk for coronary heart disease and diabetes. METHODS: Demographics, waist/hip ratio, fasting blood glucose (FBG), insulin, lipids and glucose tolerance were measured in 300 consecutive patients with a first myocardial infarction and 300 matched controls. RESULTS: Cases were more likely to have diabetes (OR 5.49; 95% CI 3.34, 9.01), impaired glucose tolerance (OR 4.08; 95% CI 2.31, 7.20) or impaired fasting glucose (OR 3.22; 95% CI 1.51, 6.85) than controls. Cases were 3.4 (95% CI 1.9, 5.8) and 6.0 (95% CI 3.3, 10.9) times more likely to have an FBG in the third and fourth quartiles (5.2-6.3 and >6.3 mmol/l); after removing subjects with diabetes, impaired glucose tolerance and impaired fasting glucose, cases were 2.7 times (95% CI 1.5-4.8) more likely to have an FBG >5.2 mmol/l. A fasting glucose of 4.9 mmol/l best distinguished cases from controls (OR 3.42; 95% CI 2.42, 4.83). Glucose, abdominal obesity, lipids, hypertension and smoking were independent multivariate risk factors for myocardial infarction. In subjects without glucose intolerance, a 1.2 mmol/l (21 mg/dl) increase in postprandial glucose was independently associated with an increase in the odds of a myocardial infarction of 1.58 (95% CI 1.18, 2.12). CONCLUSIONS: A moderately elevated glucose level is a continuous risk factor for MI in nondiabetic South Asians with either normal or impaired glucose tolerance.

13N-ammonia myocardial blood flow and uptake: relation to functional outcome of asynergic regions after revascularization.

OBJECTIVES: In this study we determined whether 13N-ammonia uptake measured late after injection provides additional insight into myocardial viability beyond its value as a myocardial blood flow tracer. BACKGROUND: Myocardial accumulation of 13N-ammonia is dependent on both regional blood flow and metabolic trapping. METHODS: Twenty-six patients with chronic coronary artery disease and left ventricular dysfunction underwent prerevascularization 13N-ammonia and 18F-deoxyglucose (FDG) positron emission tomography, and thallium single-photon emission computed tomography. Pre- and postrevascularization wall-motion abnormalities were assessed using gated cardiac magnetic resonance imaging or gated radionuclide angiography. RESULTS: Wall motion improved in 61 of 107 (57%) initially asynergic regions and remained abnormal in 46 after revascularization. Mean absolute myocardial blood flow was significantly higher in regions that improved compared to regions that did not improve after revascularization (0.63+/-0.27 vs. 0.52+/-0.25 ml/min/g, p < 0.04). Similarly, the magnitude of late 13N-ammonia uptake and FDG uptake was significantly higher in regions that improved (90+/-20% and 94+/-25%, respectively) compared to regions that did not improve after revascularization (67+/-24% and 71+/-25%, p < 0.001 for both, respectively). However, late 13N-ammonia uptake was a significantly better predictor of functional improvement after revascularization (area under the receiver operating characteristic [ROC] curve = 0.79) when compared to absolute blood flow (area under the ROC curve = 0.63, p < 0.05). In addition, there was a linear relationship between late 13N-ammonia uptake and FDG uptake (r = 0.68, p < 0.001) as well as thallium uptake (r = 0.76, p < 0.001) in all asynergic regions. CONCLUSIONS: These data suggest that beyond its value as a perfusion tracer, late 13N-ammonia uptake provides useful information regarding functional recovery after revascularization. 
The parallel relationship among 13N-ammonia, FDG, and thallium uptake supports the concept that uptake of 13N-ammonia as measured from the late images may provide important insight regarding cell membrane integrity and myocardial viability.

Functional status and quality of life in patients with heart failure undergoing coronary bypass surgery after assessment of myocardial viability.

OBJECTIVES: The aim of this study was to evaluate whether preoperative clinical and test data could be used to predict the effects of myocardial revascularization on functional status and quality of life in patients with heart failure and ischemic left ventricular (LV) dysfunction. BACKGROUND: Revascularization of viable myocardial segments has been shown to improve regional and global LV function. The effects of revascularization on exercise capacity and quality of life (QOL) are not well defined. METHODS: Sixty-three patients (51 men, age 66+/-9 years) with moderate or worse LV dysfunction (LVEF 0.28+/-0.07) and symptomatic heart failure were studied before and after coronary artery bypass surgery. All patients underwent preoperative positron emission tomography (PET) using FDG and Rb-82 before and after dipyridamole stress; the extent of viable myocardium by PET was defined by the number of segments with metabolism-perfusion mismatch or ischemia. Dobutamine echocardiography (DbE) was performed in 47 patients; viability was defined by augmentation at low dose or the development of new or worsening wall motion abnormalities. Functional class, exercise testing and a QOL score (Nottingham Health Profile) were obtained at baseline and follow-up. RESULTS: Patients had wall motion abnormalities in 83+/-18% of LV segments. A mismatch pattern was identified in 12+/-15% of LV segments, and PET evidence of viability was detected in 30+/-21% of the LV. Viability was reported in 43+/-18% of the LV by DbE. The difference between pre- and postoperative exercise capacity ranged from a reduction of 2.8 to an augmentation of 5.2 METS. The degree of improvement of exercise capacity correlated with the extent of viability by PET (r = 0.54, p = 0.0001) but not the extent of viable myocardium by DbE (r = 0.02, p = 0.92). The area under the ROC curve for PET (0.76) exceeded that for DbE (0.66). 
In a multiple linear regression, the extent of viability by PET and nitrate use were the only independent predictors of improvement of exercise capacity (model r = 0.63, p = 0.0001). Change in functional class correlated weakly with the change in exercise capacity (r = 0.25), the extent of viable myocardium by PET (r = 0.23) and the extent of viability by DbE (r = 0.31). Four components of the quality of life score (energy, pain, emotion and mobility status) improved significantly over follow-up, but no correlations could be identified between quality of life scores and the results of preoperative testing or changes in exercise capacity. CONCLUSIONS: In patients with LV dysfunction, improvement of exercise capacity correlates with the extent of viable myocardium. Quality of life improves in most patients undergoing revascularization. However, its measurement by this index does not correlate with changes in other parameters, nor is it readily predictable.

Cardiac metaiodobenzylguanidine uptake in patients with moderate chronic heart failure: relationship with peak oxygen uptake and prognosis.

OBJECTIVES: This prospective study was undertaken to correlate early and late metaiodobenzylguanidine (MIBG) cardiac uptake with cardiac hemodynamics and exercise capacity in patients with heart failure, and to compare their prognostic value with that of peak oxygen uptake (VO2). BACKGROUND: The cardiac fixation of MIBG reflects presynaptic uptake and is reduced in heart failure. Whether it is related to exercise capacity and has better prognostic value than peak VO2 is unknown. METHODS: Ninety-three patients with heart failure (ejection fraction <45%) were studied with planar MIBG imaging, cardiopulmonary exercise tests and hemodynamics (n = 44). Early (20 min) and late (4 h) MIBG acquisitions, as well as their ratio (washout, WO), were determined. Prognostic value was assessed by survival curves (Kaplan-Meier method) and uni- and multivariate Cox analyses. RESULTS: Late cardiac MIBG uptake was reduced (131+/-20%, normal values 192+/-42%) and correlated with ejection fraction (r = 0.49), cardiac index (r = 0.40) and pulmonary wedge pressure (r = -0.35). There was a significant correlation between peak VO2 and MIBG uptake (r = 0.41, p < 0.0001). With a mean follow-up of 10+/-8 months, both late MIBG uptake (p = 0.04) and peak VO2 (p < 0.0001) were predictive of death or heart transplantation, but only peak VO2 emerged by multivariate analysis. Neither early MIBG uptake nor WO yielded significant insights beyond those provided by late MIBG uptake. CONCLUSIONS: Metaiodobenzylguanidine uptake has prognostic value in patients across a wide range of heart failure severity, but peak VO2 remains the most powerful prognostic index.

  • Receiver operating characteristic (ROC) curve analysis is an important test for assessing the diagnostic accuracy (or discrimination performance) of quantitative tests throughout the whole range of their possible values, and it helps to identify the optimal cutoff value.
  • ROC curve analysis may also serve to estimate the accuracy of multivariate risk scores aimed at categorizing individuals as affected/unaffected by a given disease/condition.
  • However, one should be aware that, when applied to prognostic questions, ROC curves don't consider time to event and right censoring, and may therefore produce results that differ from those provided by classical survival analysis techniques like Kaplan-Meier or Cox regression analyses.
  • To address this issue, we applied classical ROC analysis (see Methods).
  • The optimization step is based on ROC analysis.
  • Estimates the pooled (unadjusted) Receiver Operating Characteristic (ROC) curve, the covariate-adjusted ROC (AROC) curve, and the covariate-specific/conditional ROC (cROC) curve by different methods, both Bayesian and frequentist.
  • ROC and UniODA methods are illustrated and compared for an application involving prediction of Cesarean delivery.
  • Also, it provides functions to obtain ROC-based optimal cutpoints utilizing several criteria.
  • This note discusses the difference between an ROC-defined optimal discriminant threshold, and the optimal cutpoint identified by UniODA that maximizes ESS for the sample.
  • ROC curves plot the true positive rate vs. the false positive rate for different values of a threshold.
  • Now, if we were to sweep this threshold across many values between 0 and 1, say 1000 evenly spaced trials, each value would yield one of these ROC points, and connecting those points is where the ROC curve comes from.
  • The ROC curve shows us the tradeoff in the true positive rate and false positive rate for varying values of that threshold.
  • If a curve is all the way up and to the left, you have a classifier that for some threshold perfectly labeled every point in the test data, and your AUC is 1.
  • So, in order to visualise which threshold is best suited for the classifier we plot the ROC curve.
  • The ROC curve is made by sliding the event-definition threshold in the model output, calculating certain metrics and making a graph of the results.
  • Here, a new model assessment tool is introduced, called the sliding threshold of observation for numeric evaluation (STONE) curve.
  • The STONE curve is created by sliding the event definition threshold not only for the model output but also simultaneously for the data values.
  • While the ROC curve is still a highly valuable tool for optimizing the prediction of known and pre-classified events, it is argued here that the STONE curve is better for assessing model prediction of a continuous-valued data set.
  • The shape of the ROC curve also contains additional information about how cluster overlap is distributed, and this information can be used by the biologist to choose useful data mining cut-offs that mark discontinuities and cluster substructure (see below and Figure 4).
  • A ROC curve is a plot of the false alarm rate (also known as probability of false detection or POFD) on the x-axis, versus the hit-rate (also known as probability of detection-yes or PODy) on the y-axis.
  • An ROC curve only requires two quantities: for each observation, you need the observed binary response and a predicted probability.
  • This means that a model which has some very desirable probabilities (i.e. its posterior probabilities match the true probability) has a cap on its performance, and therefore an uncalibrated model could "dominate" in terms of ROC AUC.
  • If \((FP,TP)\) is a point in ROC space then the cost-loss relationship \((c, L)\) is linear and satisfies \[ L = (1-\pi) c FP + \pi (1-c) (1 - TP) \] where \(c\) is the cost of a false positive and \(\pi\) the prior probability of the positive class.
  • Notice that ROC is an excellent tool for assessing class separation, but it tells us nothing about the accuracy of the predicted class probabilities (for instance, whether cases with a predicted 5% probability of membership in the target class really belong to the target class 5% of the time).
  • The pAUC of both empirical curves is printed in the middle of the plot, with the p-value of the difference computed by a bootstrap test on the right.
  • Empirical ROC curve of WFNS is shown in grey with three smoothing methods: binormal (blue), density (green) and normal distribution fit (red).
  • If you want to review the basic constructions of an ROC curve, you can see a previous article that constructs an empirical ROC curve from first principles.
  • Points making up the empirical ROC curve (does not apply to Format 5).
  • So, notice that you want your curve for whatever forecast you're making to be above the diagonal; otherwise, you have no skill.
  • The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test.
  • For a random classifier (baseline) that flags a fraction p of cases, TPR and FPR are both equal to p, so its ROC curve has slope 1 (the diagonal) and an AUC of 0.5.
  • In other words, Youden's J is the maximum vertical distance between the ROC curve and the diagonal.
  • Generally, random models will run up the diagonal, and the more the ROC curve bulges toward the top-left corner, the better the model separates the target class from the background class.
  • ROC curve of three predictors of peptide cleaving in the proteasome.
  • I've been looking into the relationships between losses, divergences and other measures of predictors and problems recently and came across a 2006 paper by Drummond and Holte entitled Cost Curves: An improved method for visualizing classifier performance.
  • An ROC curve graphically summarizes the tradeoff between true positives and true negatives for a rule or model that predicts a binary response variable.
  • from the specified model in the MODEL statement, from specified models in ROC statements, or from input variables which act as predicted probabilities.
  • For example, you can fit a random-intercept model by using PROC GLIMMIX or use survey weights in PROC SURVEYLOGISTIC, then use the predicted values from those models to produce an ROC curve for the comparisons.
  • This page contains JROCFIT and JLABROC4, programs for fitting receiver operating characteristic (ROC) curves using the maximum likelihood fit of a binormal model.
  • Maximum likelihood estimation of receiver operating characteristic (ROC) curves using the "proper" binormal model can be interpreted in terms of Bayesian estimation as assuming a flat joint prior distribution on the c and d_a parameters.
  • We propose a Bayesian implementation of the "proper" binormal ROC curve-fitting model with a prior distribution that is marginally flat on AUC and conditionally flat over c.
  • The red curve on the ROC curve diagram below is the same model as the example for the Gains chart: the Y axis measures the rate (as a percentage) of correctly predicted customers with a positive response.
  • The first plot displays the ROC curve for the final model, while the second plot displays the ROC curve.
  • What are the possible drawbacks of using ROC curve to judge whether to use the model or not?
  • In this sense, the ROC AUC answers the question of how well the model discriminates between the two classes.
  • But ROC AUC would treat both events as if they have the same weight -- obviously any reasonable model should be able to distinguish between these two types of error.
  • Many commentators have noticed empirically that a test of the two ROC areas often produces a non-significant result when a corresponding Wald test from the underlying regression model is significant.
  • The more 'up and to the left' the ROC curve of a model is, the better the model.
  • In the second curve, you would choose the second class as the positive class.
  • Actually, I integrated the code into my result function while testing the model and generated my (x, y) points; other packages like Orange or Weka create ROC curves, but they are not as flexible as your code.
  • You may see some variance here in the plot since the sample is small, but the ROC curve will be a straight line for a random model.
  • With a ROC curve, you're trying to find a good model that optimizes the trade-off between the False Positive Rate (FPR) and True Positive Rate (TPR).
  • By following these simple guidelines, interpretation of ROC curves will be less difficult, and they can then be interpreted more reliably when writing, reviewing, or analyzing scientific papers.
  • 2009), which you can download from the Stata Journal by typing, in Stata, findit roccurve and installing the latest version of the package.
  • The PROC LOGISTIC documentation provides formulas used for constructing an ROC curve.
  • Although PROC LOGISTIC creates many tables, I've used the ODS SELECT statement to suppress all output except for the ROC curve.
  • In the 1950s, psychologists started using ROC analysis when studying the relationship between psychological experience and physical stimuli.
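Several of the excerpts above describe how an ROC curve is made: slide an event-definition threshold over the model output and, at each value, plot the hit rate (PODy, the true positive rate) against the false alarm rate (POFD, the false positive rate). A minimal first-principles sketch in plain Python; the labels and scores are invented illustration data, not taken from any study quoted above:

```python
# Invented toy data: observed binary labels and predicted probabilities.
y_true  = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.3, 0.9, 0.5]

def roc_points(y_true, y_score, n_thresholds=1000):
    """Slide a threshold from 1 down to 0; at each value, classify
    score >= t as positive and record (false alarm rate, hit rate)."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    points = []
    for i in range(n_thresholds + 1):
        t = 1.0 - i / n_thresholds
        tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= t)
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

def auc_trapezoid(points):
    """Area under the curve by the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

points = roc_points(y_true, y_score)
print(auc_trapezoid(points))  # about 0.92 for this toy data
```

The curve starts at (0, 0) (threshold above every score: nothing flagged) and ends at (1, 1) (everything flagged), which is why a random model traces the diagonal and scores an AUC of 0.5.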
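Several excerpts mention ROC-based optimal cutpoints and the J statistic, the maximum vertical distance between the ROC curve and the diagonal. That is Youden's J = TPR - FPR, and maximizing it over candidate thresholds is one common cutoff criterion (packages cited above offer several others). A small sketch, again on invented data:

```python
# Invented toy data, as in the construction sketch above.
y_true  = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.3, 0.9, 0.5]

def youden_cutpoint(y_true, y_score):
    """Return the threshold maximizing Youden's J = TPR - FPR, i.e. the
    maximum vertical distance between the ROC curve and the diagonal."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    best_t, best_j = None, float("-inf")
    for t in sorted(set(y_score)):  # each observed score is a candidate cut
        tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= t)
        j = tp / pos - fp / neg
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

t, j = youden_cutpoint(y_true, y_score)
print(t, j)  # 0.6 and 0.8 for this toy data
```

Note the UniODA excerpt above makes exactly the point that an ROC/Youden threshold need not coincide with the cutpoint that maximizes ESS for a given sample; they are different optimality criteria.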
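The cost-loss relation quoted above, L = (1 - pi) * c * FP + pi * (1 - c) * (1 - TP), can be checked numerically. Everything below is illustrative; the numbers are made up:

```python
# Numeric check of the cost-loss relation quoted in the list above:
#     L = (1 - pi) * c * FP + pi * (1 - c) * (1 - TP)
# c is the (normalized) cost of a false positive, pi the prior probability
# of the positive class, and (FP, TP) a point in ROC space.
def expected_loss(fp, tp, c, pi):
    return (1 - pi) * c * fp + pi * (1 - c) * (1 - tp)

# A perfect classifier, (FP, TP) = (0, 1), incurs zero loss for any c and pi;
# points nearer the top-left corner of ROC space give smaller loss.
print(expected_loss(0.0, 1.0, c=0.3, pi=0.2))  # 0.0
print(expected_loss(0.2, 0.8, c=0.5, pi=0.5))  # about 0.1
```

Because L is linear in (FP, TP), the best operating point for given c and pi lies where a straight line of fixed slope touches the ROC curve, which is why cost considerations (Drummond and Holte's cost curves, cited above) single out different thresholds than pure discrimination measures.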
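Finally, one excerpt notes that ROC analysis says nothing about the accuracy of the predicted class probabilities: AUC depends only on how the scores rank cases. A quick demonstration on invented data, computing AUC as the Mann-Whitney probability that a randomly chosen positive outranks a randomly chosen negative:

```python
# Invented toy data; ROC AUC measures ranking/separation only.
y_true  = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.3, 0.9, 0.5]

def rank_auc(y_true, y_score):
    """AUC as P(score of random positive > score of random negative),
    counting ties as one half."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Cubing the scores ruins any calibration but preserves their order,
# so the AUC is exactly unchanged:
print(rank_auc(y_true, y_score))                     # about 0.92
print(rank_auc(y_true, [s ** 3 for s in y_score]))   # identical value
```

This is the mechanism behind the "uncalibrated model could dominate in terms of ROC AUC" remark above: any strictly monotone transform of the scores leaves the ROC curve, and hence the AUC, untouched.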