Bayesian adaptive estimation of psychometric slope and threshold. (17/6254)

We introduce a new Bayesian adaptive method for estimating both the threshold and the slope of the psychometric function. The method updates posterior probabilities in the two-dimensional parameter space of psychometric functions and makes predictions based on the expected mean threshold and slope values. On each trial it sets the stimulus intensity that maximizes the expected information to be gained by completion of that trial. The method was evaluated in computer simulations and in a psychophysical experiment using the two-alternative forced-choice (2AFC) paradigm. Estimating the threshold to within 2 dB (23%) precision requires fewer than 30 trials for a typical 2AFC detection task; estimating the slope to the same precision takes about 300 trials.
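
The abstract describes an entropy-based adaptive procedure over a two-dimensional threshold-slope grid. The following is a minimal sketch of that general idea, not the authors' implementation: the Weibull-style 2AFC psychometric function, the grid ranges, the guess and lapse rates, and the simulated observer are all illustrative assumptions.

```python
# Minimal sketch of Bayesian adaptive stimulus placement over a
# (threshold, slope) grid; illustrative only, not the published method.
import numpy as np

# Parameter grid: thresholds (dB) and slopes (assumed ranges)
thresholds = np.linspace(-10, 10, 41)
slopes = np.linspace(0.5, 8.0, 16)
T, S = np.meshgrid(thresholds, slopes, indexing="ij")
prior = np.ones_like(T) / T.size          # flat prior over (threshold, slope)

# Candidate stimulus intensities (dB)
stimuli = np.linspace(-10, 10, 41)

def p_correct(x, t, s, guess=0.5, lapse=0.01):
    """Probability of a correct 2AFC response at intensity x (Weibull-style)."""
    f = 1.0 - np.exp(-10 ** (s * (x - t) / 20.0))
    return guess + (1.0 - guess - lapse) * f

def entropy(p):
    p = np.clip(p, 1e-12, None)
    return -np.sum(p * np.log(p))

def next_stimulus(posterior):
    """Pick the intensity that minimizes expected posterior entropy,
    i.e. maximizes the expected information gained from the trial."""
    best_x, best_h = None, np.inf
    for x in stimuli:
        pc = p_correct(x, T, S)                      # P(correct | params)
        p_c = np.sum(posterior * pc)                 # predictive P(correct)
        post_c = posterior * pc / p_c                # posterior if correct
        post_i = posterior * (1 - pc) / (1 - p_c)    # posterior if incorrect
        h = p_c * entropy(post_c) + (1 - p_c) * entropy(post_i)
        if h < best_h:
            best_x, best_h = x, h
    return best_x

def update(posterior, x, correct):
    pc = p_correct(x, T, S)
    like = pc if correct else (1 - pc)
    posterior = posterior * like
    return posterior / posterior.sum()

# One simulated trial against a hypothetical observer
rng = np.random.default_rng(0)
true_t, true_s = 2.0, 3.0
x = next_stimulus(prior)
resp = rng.random() < p_correct(x, true_t, true_s)
posterior = update(prior, x, resp)
print("stimulus:", x, "threshold estimate:", np.sum(posterior * T))
```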

Automated diagnosis of data-model conflicts using metadata. (18/6254)

The authors describe a methodology for helping computational biologists diagnose discrepancies they encounter between experimental data and the predictions of scientific models. The authors call these discrepancies data-model conflicts. They have built a prototype system to help scientists resolve these conflicts in a more systematic, evidence-based manner. In computational biology, data-model conflicts are the result of complex computations in which data and models are transformed and evaluated. Increasingly, the data, models, and tools employed in these computations come from diverse and distributed resources, contributing to a widening gap between the scientist and the original context in which these resources were produced. This contextual rift can contribute to the misuse of scientific data or tools and amplifies the problem of diagnosing data-model conflicts. The authors' hypothesis is that systematic collection of metadata about a computational process can help bridge the contextual rift and provide information for supporting automated diagnosis of these conflicts. The methodology involves three major steps. First, the authors decompose the data-model evaluation process into abstract functional components. Next, they use this process decomposition to enumerate the possible causes of the data-model conflict and direct the acquisition of diagnostically relevant metadata. Finally, they use evidence generated statically and dynamically from the collected metadata to identify the most likely causes of the given conflict. They describe how these methods are implemented in a knowledge-based system called GRENDEL and show how GRENDEL can be used to help diagnose conflicts between experimental data and computationally built structural models of the 30S ribosomal subunit.
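
The three-step methodology amounts to enumerating candidate causes and ranking them by evidence drawn from collected metadata. The sketch below illustrates that general pattern only; it is not GRENDEL, and every field name, rule, and metadata key is a hypothetical illustration.

```python
# Minimal sketch of diagnosing a data-model conflict by scoring enumerated
# candidate causes with evidence rules over collected metadata (illustrative).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Metadata = Dict[str, str]

@dataclass
class CandidateCause:
    name: str
    evidence_rules: List[Callable[[Metadata], float]] = field(default_factory=list)

    def score(self, metadata: Metadata) -> float:
        """Sum of evidence contributions from each rule (higher = more likely)."""
        return sum(rule(metadata) for rule in self.evidence_rules)

# Hypothetical evidence rules over metadata about the computation
def data_older_than_model(md: Metadata) -> float:
    # Toy lexical comparison of ISO-style release dates
    return 1.0 if md.get("data_release", "") < md.get("model_release", "") else 0.0

def tool_version_mismatch(md: Metadata) -> float:
    return 1.0 if md.get("tool_version") != md.get("tool_version_expected") else 0.0

causes = [
    CandidateCause("outdated experimental data", [data_older_than_model]),
    CandidateCause("wrong tool version", [tool_version_mismatch]),
]

metadata = {
    "data_release": "1998-03",
    "model_release": "1999-01",
    "tool_version": "2.1",
    "tool_version_expected": "2.3",
}

ranked = sorted(causes, key=lambda c: c.score(metadata), reverse=True)
for cause in ranked:
    print(f"{cause.name}: evidence score {cause.score(metadata):.1f}")
```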

Ad hoc classification of radiology reports. (19/6254)

OBJECTIVE: The task of ad hoc classification is to automatically place a large number of text documents into nonstandard categories that are determined by a user. The authors examine the use of statistical information retrieval techniques for ad hoc classification of dictated mammography reports. DESIGN: The authors' approach is the automated generation of a classification algorithm based on positive and negative evidence extracted from relevance-judged documents. Test documents are sorted into three conceptual bins: membership in a user-defined class, exclusion from the user-defined class, and uncertain. Documentation of absent findings through the use of negation and conjunction, a hallmark of interpretive test results, is managed by expansion and tokenization of these phrases. MEASUREMENTS: Classifier performance is evaluated using a single measure, the F measure, which provides a weighted combination of the recall and precision of document sorting into true positive and true negative bins. RESULTS: Single terms are the most effective text feature in the classification profile, with some improvement provided by the addition of pairs of unordered terms to the profile. Excessive iterations of automated classifier enhancement degrade performance because of overtraining. Performance is best when the proportions of relevant and irrelevant documents in the training collection are close to equal. Special handling of negation phrases improves performance when the number of terms in the classification profile is limited. CONCLUSIONS: The ad hoc classifier system is a promising approach for the classification of large collections of medical documents. The negation-handling component, NegExpander, can distinguish positive from negative evidence when negative evidence plays an important role in the classification.
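
The F measure referred to under MEASUREMENTS is, in its usual form, a weighted harmonic mean of precision and recall. The sketch below assumes that standard definition; the beta weighting and the example counts are illustrative, not taken from the paper.

```python
# Standard F measure as a weighted harmonic mean of precision and recall.
def f_measure(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """beta > 1 weights recall more heavily; beta = 1 gives the familiar F1."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Example: 80 reports correctly placed in the class, 10 wrongly included, 20 missed.
print(f_measure(tp=80, fp=10, fn=20))   # ~0.842
```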

Pharmacokinetics of isepamicin during continuous venovenous hemodiafiltration. (20/6254)

The objective of this study was to analyze the pharmacokinetics of isepamicin during continuous venovenous hemodiafiltration. Six patients received 15 mg of isepamicin per kg of body weight. The mean peak isepamicin concentration in serum was 62.88 +/- 18.20 mg/liter 0.5 h after the infusion. The elimination half-life was 7.91 +/- 0.83 h. The mean total body clearance was 1.75 +/- 0.28 liters/h, and the dialysate outlet (DO) clearance was 2.76 +/- 0.59 liters/h. The mean volume of distribution was 19.83 +/- 2.95 liters. The elimination half-life, DO clearance, and volume of distribution were almost constant. In this group of patients, the initial dosage of 15 mg/kg appeared to be adequate, but the dosage interval should be determined by monitoring residual isepamicin concentrations in plasma.
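
As a quick consistency check, the textbook one-compartment relations k_e = ln(2) / t_half and CL = k_e * V_d, applied to the reported mean half-life and volume of distribution, reproduce a total body clearance close to the reported 1.75 liters/h. These relations are standard pharmacokinetics, not derivations from the study itself.

```python
# Worked example using standard one-compartment pharmacokinetic relations
# and the study's reported mean values.
import math

t_half = 7.91      # elimination half-life, h (reported mean)
v_d = 19.83        # volume of distribution, liters (reported mean)

k_e = math.log(2) / t_half          # first-order elimination rate constant, 1/h
cl_total = k_e * v_d                # total body clearance, liters/h

print(f"k_e = {k_e:.4f} 1/h, CL = {cl_total:.2f} liters/h")   # CL ~ 1.74 liters/h
```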

Automatic identification of pneumonia related concepts on chest x-ray reports. (21/6254)

A medical language processing system called SymText, two other automated methods, and a lay person were compared against an internal medicine resident for their ability to identify pneumonia-related concepts in chest x-ray reports. Sensitivity (recall), specificity, and positive predictive value (precision) are reported with respect to an independent panel of physicians. Overall, the performance of SymText was similar to that of the physician and superior to the other methods. The automatic encoding of pneumonia concepts will support clinical research, decision making, computerized clinical protocols, and quality assurance in a radiology department.
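
The measures named above follow the usual 2x2 contingency-table definitions, taking the physician panel as the reference standard. A minimal sketch, with made-up counts purely for illustration:

```python
# Standard definitions of sensitivity (recall), specificity, and
# positive predictive value (precision) from a 2x2 contingency table.
def evaluation_measures(tp: int, fp: int, tn: int, fn: int):
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # recall
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0           # precision
    return sensitivity, specificity, ppv

# Hypothetical counts for illustration only
print(evaluation_measures(tp=90, fp=5, tn=100, fn=10))   # (0.9, ~0.952, ~0.947)
```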

Analysis of biomedical text for chemical names: a comparison of three methods. (22/6254)

At the National Library of Medicine (NLM), a variety of biomedical vocabularies are found in data pertinent to its mission. In addition to standard medical terminology, there are specialized vocabularies, including that of chemical nomenclature. General-purpose language tools, including the lexically based ones used by the Unified Medical Language System (UMLS) to manipulate and normalize text, do not work well on chemical nomenclature. To improve NLM's capabilities in chemical text processing, two approaches to the problem of recognizing chemical nomenclature were explored. The first approach was lexical and consisted of analyzing text for the presence of a fixed set of chemical segments. It was extended with general chemical patterns and with terms from NLM's indexing vocabulary, MeSH, and the NLM SPECIALIST lexicon. The second approach applied Bayesian classification to n-grams of text via two different methods. The single lexical method and the two statistical methods were tested against data from the 1999 UMLS Metathesaurus. One of the statistical methods had an overall classification accuracy of 97%.
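
The second approach is described only at a high level. The following is a minimal sketch of Bayesian (naive Bayes) classification over character n-grams for deciding whether a token looks like a chemical name; the trigram size, Laplace smoothing, and toy training lists are assumptions, not anything used at NLM.

```python
# Minimal naive Bayes classifier over character trigrams (illustrative).
from collections import Counter
import math

def ngrams(token: str, n: int = 3):
    token = f"^{token.lower()}$"              # mark word boundaries
    return [token[i:i + n] for i in range(len(token) - n + 1)]

def train(tokens):
    counts = Counter()
    for t in tokens:
        counts.update(ngrams(t))
    return counts

# Toy training lists for illustration only
chem = train(["acetaminophen", "methylprednisolone", "hydrochlorothiazide"])
nonchem = train(["patient", "history", "examination", "hospital"])

vocab = set(chem) | set(nonchem)

def log_likelihood(token, counts, vocab_size):
    total = sum(counts.values())
    return sum(
        math.log((counts[g] + 1) / (total + vocab_size))   # Laplace smoothing
        for g in ngrams(token)
    )

def classify(token, prior_chem=0.5):
    score_chem = math.log(prior_chem) + log_likelihood(token, chem, len(vocab))
    score_non = math.log(1 - prior_chem) + log_likelihood(token, nonchem, len(vocab))
    return "chemical" if score_chem > score_non else "not chemical"

print(classify("prednisolone"), classify("patients"))
```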

An integrated decision support system for diagnosing and managing patients with community-acquired pneumonia. (23/6254)

Decision support systems that integrate guidelines have become popular applications for reducing practice variation and delivering cost-effective care. However, adverse characteristics of decision support systems, such as additional and time-consuming data entry or the need to manually identify eligible patients, result in a "behavioral bottleneck" that prevents decision support systems from becoming part of the clinical routine. This paper describes the design and implementation of an integrated decision support system that explores a novel approach to bypassing the behavioral bottleneck. The real-time decision support system does not require health care providers to enter additional data and consists of a diagnostic component and a management component.

Comparing expert systems for identifying chest x-ray reports that support pneumonia. (24/6254)

We compare the performance of four computerized methods in identifying chest x-ray reports that support a diagnosis of acute bacterial pneumonia. Two of the computerized techniques are constructed from expert knowledge, and two learn rules and structure from data. The two machine learning systems perform as well as the expert-constructed systems. All of the computerized techniques perform better than a baseline keyword search and a lay person, and as well as a physician. We conclude that machine learning can be used to identify chest x-ray reports that support pneumonia.