Automated outbreak detection: a quantitative retrospective analysis. (1/288)

An automated early warning system has been developed and used for detecting clusters of human infection with enteric pathogens. The method used requires no specific disease modelling, and has the potential for extension to other epidemiological applications. A compound smoothing technique is used to determine baseline 'normal' incidence of disease from past data, and a warning threshold for current data is produced by combining a statistically determined increment from the baseline with a fixed minimum threshold. A retrospective study of salmonella infections over 3 years has been conducted. Over this period, the automated system achieved > 90% sensitivity, with a positive predictive value consistently > 50%, demonstrating the effectiveness of the combination of statistical and heuristic methods for cluster detection. We suggest that quantitative measurements are of considerable utility in evaluating the performance of such systems.  (+info)
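The abstract describes the detector only in outline. A minimal sketch of the threshold logic it names, assuming exponentially smoothed weekly counts and purely illustrative constants (the actual system's compound smoothing and threshold values are not given here), might look like:

```python
# Hedged sketch of a baseline-plus-threshold outbreak detector on weekly
# case counts. The smoothing constant, z-multiplier and fixed minimum are
# illustrative assumptions, not the values used in the original system.

def warning_threshold(history, z=2.0, minimum=5, alpha=0.3):
    """Combine a statistical increment above a smoothed baseline
    with a fixed minimum threshold, as the abstract describes."""
    # Exponentially smoothed baseline of past weekly counts
    # (stand-in for the paper's compound smoothing technique).
    baseline = history[0]
    for count in history[1:]:
        baseline = alpha * count + (1 - alpha) * baseline
    # Poisson-style standard deviation around the baseline.
    increment = z * baseline ** 0.5
    # Never alert below the fixed minimum, whatever the baseline says.
    return max(baseline + increment, minimum)

def is_outbreak(current_count, history):
    return current_count > warning_threshold(history)
```

The fixed minimum is the heuristic component: it suppresses alerts for rare serotypes whose baseline is near zero, where any case would otherwise exceed the statistical increment.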

Evaluating computerized health information systems: hardware, software and humanware: experiences from the Northern Province, South Africa. (2/288)

Despite enormous investment world-wide in computerized health information systems their overall benefits and costs have rarely been fully assessed. A major new initiative in South Africa provides the opportunity to evaluate the introduction of information technology from a global perspective and assess its impact on public health. The Northern Province is implementing a comprehensive integrated hospital information system (HIS) in all of its 42 hospitals. These include two mental health institutions, eight regional hospitals (two acting as a tertiary complex with teaching responsibilities) and 32 district hospitals. The overall goal of the HIS is to improve the efficiency and effectiveness of health (and welfare) services through the creation and use of information, for clinical, administrative and monitoring purposes. This multi-site implementation is being undertaken as a single project at a cost of R130 million (which represents 2.5 per cent of the health and welfare budget on an annual basis). The implementation process commenced on 1 September 1998 with the introduction of the system into Mankweng Hospital as the pilot site and is to be completed in the year 2001. An evaluation programme has been designed to maximize the likelihood of success of the implementation phase (formative evaluation) as well as providing an overall assessment of its benefits and costs (summative evaluation). The evaluation was designed as a form of health technology assessment; the system will have to prove its worth (in terms of cost-effectiveness) relative to other interventions. This is more extensive than the traditional form of technical assessment of hardware and software functionality, and moves into assessing the day-to-day utility of the system, the clinical and managerial environment in which it is situated (humanware), and ultimately its effects on the quality of patient care and public health. 
In keeping with new South African legislation, the evaluation process sought to involve as many stakeholders as possible while remaining a methodologically rigorous study within realistic resource limits. The design chosen for the summative assessment was a randomized controlled trial (RCT) in which 24 district hospitals will receive the HIS either early or late. This is the first attempt anywhere in the world to carry out an RCT evaluation of a multi-site implementation of an HIS. Within this design the evaluation will use a range of qualitative and quantitative techniques over varying time scales, each addressing specific aims of the evaluation programme. In addition, it will attempt to provide an overview of the general impact on people and organizations of introducing high-technology solutions into a relatively unprepared environment. The study should help to stimulate an evaluation culture in the health and welfare services of the Northern Province, as well as build the capacity to undertake such evaluations in the future.  (+info)

An institution-based process to ensure clinical software quality. (3/288)

Clinical software can have a major impact on the delivery of care. It is imperative that clinical software undergo regular quality review, to evaluate the clinical correctness of the specification, the technical correctness of the software, problems that have arisen, and maintenance of the software as conditions change. We have developed a process using existing hospital review groups to perform clinical review, and using a project specification form and analysis of likely problem areas to effect technical review.  (+info)

Assurance: the power behind PCASSO security. (4/288)

The need for security protection in Internet-based healthcare applications is generally acknowledged. Most healthcare applications that use the Internet have at least implemented some kind of encryption. Most applications also enforce user authentication and access control policies, and many audit user actions. However, most fall short on providing strong assurances that the security mechanisms are behaving as expected and that they cannot be subverted. While no system can claim to be totally "bulletproof," PCASSO provides assurance of correct operation through formal, disciplined design and development methodologies, as well as through functional and penetration testing. Through its security mechanisms, backed by strong system assurances, PCASSO is demonstrating "safe" use of public data networks for health care.  (+info)

Risk-adjusting acute myocardial infarction mortality: are APR-DRGs the right tool? (5/288)

OBJECTIVE: To determine if a widely used proprietary risk-adjustment system, APR-DRGs, misadjusts for severity of illness and misclassifies provider performance. DATA SOURCES: (1) Discharge abstracts for 116,174 noninstitutionalized adults with acute myocardial infarction (AMI) admitted to nonfederal California hospitals in 1991-1993; (2) inpatient medical records for a stratified probability sample of 974 patients with AMIs admitted to 30 California hospitals between July 31, 1990 and May 31, 1991. STUDY DESIGN: Using the 1991-1993 data set, we evaluated the predictive performance of APR-DRGs Version 12. Using the 1990/1991 validation sample, we assessed the effect of assigning APR-DRGs based on different sources of ICD-9-CM data. DATA COLLECTION/EXTRACTION METHODS: Trained, blinded coders reabstracted all ICD-9-CM diagnoses and procedures, and established the timing of each diagnosis. APR-DRG Risk of Mortality and Severity of Illness classes were assigned based on (1) all hospital-reported diagnoses, (2) all reabstracted diagnoses, and (3) reabstracted diagnoses present at admission. The outcome variables were 30-day mortality in the 1991-1993 data set and 30-day inpatient mortality in the 1990/1991 validation sample. PRINCIPAL FINDINGS: The APR-DRG Risk of Mortality class was a strong predictor of death (c = .831-.847), but was further enhanced by adding age and sex. Reabstracting diagnoses improved the apparent performance of APR-DRGs (c = .93 versus c = .87), while using only the diagnoses present at admission decreased apparent performance (c = .74). Reabstracting diagnoses had less effect on hospitals' expected mortality rates (r = .83-.85) than using diagnoses present at admission instead of all reabstracted diagnoses (r = .72-.77). There was fair agreement in classifying hospital performance based on these three sets of diagnostic data (K = 0.35-0.38). 
CONCLUSIONS: The APR-DRG Risk of Mortality system is a powerful risk-adjustment tool, largely because it includes all relevant diagnoses, regardless of timing. Although some late diagnoses may not be preventable, APR-DRGs appear suitable only if one assumes that none is preventable.  (+info)

FramePlus: aligning DNA to protein sequences. (6/288)

MOTIVATION: Automated annotation of Expressed Sequence Tags (ESTs) is becoming increasingly important as EST databases continue to grow rapidly. A common approach to annotation is to align the gene fragments against well-documented databases of protein sequences. The sensitivity of the alignment algorithm is key to the success of such methods. RESULTS: This paper introduces a new algorithm, FramePlus, for DNA-protein sequence alignment. The SCOP database was used to develop a general framework for testing the sensitivity of such alignment algorithms when searching large databases. Using this framework, the performance of FramePlus was found to be somewhat better than that of other algorithms in the presence of moderate and high rates of frameshift errors, and comparable to that of Translated Search in the absence of sequencing errors. AVAILABILITY: The source code for FramePlus and the testing datasets are freely available at ftp.compugen.co.il/pub/research. CONTACT: [email protected]  (+info)
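For context, the 'Translated Search' baseline the abstract compares against can be sketched as follows: translate the DNA query in all six reading frames and align each translation against the protein database. This is a simplified stand-in, not FramePlus itself, which additionally tolerates frameshifts during alignment, something a frame-rigid translation cannot do:

```python
# Minimal six-frame translation, the first step of a Translated Search.
# Uses the standard genetic code (NCBI translation table 1); '*' marks stops.

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def revcomp(dna):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(dna))

def translate(dna):
    # Translate complete codons only; 1-2 trailing bases are ignored.
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

def six_frame_translations(dna):
    # Three frame offsets on each strand give the six reading frames.
    return [translate(strand[offset:])
            for strand in (dna, revcomp(dna))
            for offset in range(3)]
```

A single frameshift sequencing error scrambles every downstream residue in all of these translations, which is why a frameshift-aware aligner like FramePlus gains sensitivity on error-prone ESTs.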

Finding prokaryotic genes by the 'frame-by-frame' algorithm: targeting gene starts and overlapping genes. (7/288)

MOTIVATION: Tightly packed prokaryotic genes frequently overlap with each other. This feature, rarely seen in eukaryotic DNA, makes detection of translation initiation sites, and therefore exact prediction of prokaryotic genes, notoriously difficult. Improving the accuracy of precise gene prediction in prokaryotic genomic DNA remains an important open problem. RESULTS: A software program implementing a new algorithm that uses a uniform Hidden Markov Model for prokaryotic gene prediction was developed. The algorithm analyzes a given DNA sequence in each of the six possible global reading frames independently. Twelve complete prokaryotic genomes were analyzed with the new tool. Both the accuracy of gene finding (predicting the locations of protein-coding ORFs) and the accuracy of precise gene prediction (detecting the whole gene, including the translation initiation codon) were assessed by comparison with existing annotation. In terms of gene finding, the program performs at least as well as previously developed tools such as GeneMark and GLIMMER. In terms of precise gene prediction, the new program was shown to be more accurate, by several percentage points, than earlier tools such as GeneMark.hmm, ECOPARSE and ORPHEUS. The results of testing the program also indicated a possible systematic bias in start codon annotation in several early sequenced prokaryotic genomes. AVAILABILITY: The new gene-finding program can be accessed through the Web site: http://dixie.biology.gatech.edu/GeneMark/fbf.cgi CONTACT: [email protected]  (+info)
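To see why start prediction is the hard part, consider a single reading frame: one stop-to-stop ORF typically contains several candidate start codons (ATG, GTG or TTG in prokaryotes), and the true initiation site must be chosen among them. The sketch below merely enumerates the candidates the paper's HMM would have to score; it is a drastic simplification of the frame-by-frame algorithm, not an implementation of it:

```python
# Enumerate candidate gene starts in one reading frame: each prokaryotic
# start codon is paired with the next in-frame stop codon. A real gene
# finder (e.g. the HMM described in the abstract) must then choose which
# candidate, if any, is the true translation initiation site.

STARTS = {"ATG", "GTG", "TTG"}
STOPS = {"TAA", "TAG", "TGA"}

def candidate_starts(dna, frame=0):
    """Return (start_index, stop_index) pairs for every candidate gene
    in one reading frame of the forward strand."""
    codons = [(i, dna[i:i + 3]) for i in range(frame, len(dna) - 2, 3)]
    candidates = []
    for i, codon in codons:
        if codon in STARTS:
            for j, c2 in codons:
                if j > i and c2 in STOPS:
                    candidates.append((i, j))
                    break
    return candidates
```

In the example below, both the ATG at position 0 and the GTG at position 6 share the stop at position 12, so annotation pipelines that simply pick the longest ORF systematically prefer the upstream start, one plausible source of the annotation bias the abstract reports.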

Evaluation of gene prediction software using a genomic data set: application to Arabidopsis thaliana sequences. (8/288)

MOTIVATION: The annotation of the Arabidopsis thaliana genome remains a problem in terms of time and quality. To improve the annotation process, we want to choose the most appropriate tools to use inside a computer-assisted annotation platform. We therefore need to evaluate prediction programs on Arabidopsis sequences containing multiple genes. RESULTS: We have developed AraSet, a data set of contigs of validated genes, enabling the evaluation of multi-gene models for the Arabidopsis genome. Besides conventional metrics to evaluate gene prediction at the site and exon levels, new measures were introduced for prediction at the protein sequence level as well as for the evaluation of gene models. This evaluation method is of general interest and could apply to any new gene prediction software and to any eukaryotic genome. The GeneMark.hmm program appears to be the most accurate software at all three levels for the Arabidopsis genomic sequences. Gene modeling could be further improved by combining prediction software. AVAILABILITY: The AraSet sequence set, the Perl programs and complementary results and notes are available at http://sphinx.rug.ac.be:8080/biocomp/napav/. CONTACT: [email protected]  (+info)
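The conventional exon-level metrics mentioned above are typically defined as follows: a predicted exon counts as correct only if both of its boundaries match an annotated exon exactly. A minimal sketch of that convention (the abstract's own protein-level and gene-model measures are not reproduced here):

```python
# Hedged sketch of conventional exon-level evaluation of a gene prediction:
# sensitivity = fraction of annotated exons predicted with both boundaries
# exact; specificity = fraction of predicted exons that are exact matches.
# Exons are represented as (start, end) coordinate tuples.

def exon_level_metrics(predicted, annotated):
    """Return (sensitivity, specificity) for exact exon matches."""
    correct = len(set(predicted) & set(annotated))
    sensitivity = correct / len(annotated)
    specificity = correct / len(predicted)
    return sensitivity, specificity
```

The strictness of the exact-boundary criterion is what separates exon-level scores from site-level ones: a prediction off by a single base at one splice site scores zero for that exon.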