Preclinical safety evaluation of human gene therapy products.

Human gene therapy products include naked DNA and both viral and non-viral vectors containing nucleic acids. Experience with the preclinical toxicity studies needed to evaluate the safety of these products is limited, and the requirements have been outlined in several recently released guidelines. Requirements for the preclinical safety evaluation of human gene therapy products are both specific and non-specific. All key preclinical studies should be performed in compliance with Good Laboratory Practices. The non-specific requirements are common to all pharmaceutical products. The critical specific issues to be addressed are: the safety of the vector and the toxicity of the expressed protein(s), the two components of a gene therapy product; the quality of the test article; the selection of animal species; and verification that the administration method successfully delivers the gene of interest, with its vector, to the target site(s). The treatment schedule should mimic the intended human therapeutic design. The host's immune response against the gene therapy product must be evaluated to detect possible adverse effects and immune neutralization by antibodies. Evaluation of the biodistribution of the gene of interest is also essential and can be performed with molecular biology techniques such as PCR. Specific containment is required for the safe handling of viral vectors.
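
To make the biodistribution point concrete, the sketch below shows one way qPCR results might be converted into vector copies per microgram of tissue DNA via a standard curve. The tissues, Ct values, and curve are hypothetical and only illustrate the kind of calculation involved; they are not taken from the abstract or from any guideline.

```python
# Illustrative sketch (not from the article): quantifying vector biodistribution
# from qPCR data. Copy numbers, tissue names, and the standard curve are
# hypothetical; real studies would follow validated assay protocols.

from math import log10

# Hypothetical standard curve: known vector copies per reaction -> observed Ct.
standard_curve = [(1e2, 33.1), (1e3, 29.8), (1e4, 26.4), (1e5, 23.0), (1e6, 19.7)]

def fit_standard_curve(points):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    xs = [log10(c) for c, _ in points]
    ys = [ct for _, ct in points]
    n = len(points)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate vector copies in a reaction."""
    return 10 ** ((ct - intercept) / slope)

slope, intercept = fit_standard_curve(standard_curve)

# Hypothetical tissue panel: observed Ct and micrograms of genomic DNA assayed.
samples = {"injection site": (21.5, 0.1), "liver": (28.2, 0.1),
           "gonads": (36.0, 0.1), "brain": (37.5, 0.1)}

for tissue, (ct, ug_dna) in samples.items():
    copies = copies_from_ct(ct, slope, intercept) / ug_dna
    print(f"{tissue:15s} ~{copies:,.0f} vector copies per microgram DNA")
```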

Accuracy of application of USDA beef quality and yield grades using the traditional system and the proposed seven-grade yield grade system.

Beef carcasses (n = 5,542) were evaluated by three USDA on-line graders during 8-h shifts at a major beef-processing facility over a 2-wk period, and their calls were compared with computed expert USDA quality (QG) and yield grades (YG) to evaluate the accuracy of applying USDA QG and YG under the traditional five-grade YG system and the proposed seven-grade YG system (which segregates YG 2 and 3 into YG 2A, 2B, 3A, and 3B). Quality grade distribution of the carcasses was 1.1% Prime, 50.0% Choice, 43.8% Select, and 5.1% No-Roll. Accuracy of applying QG was not affected (P>.05) by changing from the five-grade system (91.5%) to the seven-grade system, whether graders determined QG only (94.3%) or both QG and YG (95.0%). Calculated expert YG successfully segregated carcasses into their respective YG, but on-line graders could not differentiate between YG 4 and 5 under the seven-grade system. Application of YG under the five-grade system was more accurate (P<.05) than under either seven-grade scenario. On-line graders tended to undergrade carcasses as the numerical YG increased. Total accuracy of applying YG decreased by 19.4 to 21.8% when switching from the five-grade to the seven-grade system. Segmenting USDA YG 2 and 3 into YG 2A, 2B, 3A, and 3B therefore reduced the ability of on-line graders to apply the YG accurately.
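
The accuracy figures above are percent agreement between on-line grader calls and the computed expert grade. The sketch below illustrates that calculation for invented carcass records under a five-grade and a seven-grade yield grade scale; the data are hypothetical and the statistics reported in the abstract are not reproduced here.

```python
# Hypothetical sketch of the accuracy calculation described above: percent
# agreement between on-line grader calls and the computed "expert" grade.
# Carcass records and grade labels here are invented for illustration.

from collections import defaultdict

def accuracy_by_grade(records):
    """records: list of (expert_grade, grader_grade) pairs.
    Returns overall and per-expert-grade percent agreement."""
    hits, totals = defaultdict(int), defaultdict(int)
    for expert, graded in records:
        totals[expert] += 1
        if graded == expert:
            hits[expert] += 1
    per_grade = {g: 100.0 * hits[g] / totals[g] for g in totals}
    overall = 100.0 * sum(hits.values()) / sum(totals.values())
    return overall, per_grade

# Five-grade yield grade system (YG 1-5), invented sample of carcasses.
five_grade = [("2", "2"), ("3", "3"), ("3", "2"), ("4", "4"), ("4", "3"), ("2", "2")]
# Proposed seven-grade system splitting YG 2 and 3 into 2A/2B and 3A/3B.
seven_grade = [("2A", "2A"), ("2B", "2A"), ("3A", "3A"),
               ("3B", "3A"), ("4", "3B"), ("2A", "2A")]

for name, data in [("five-grade", five_grade), ("seven-grade", seven_grade)]:
    overall, per_grade = accuracy_by_grade(data)
    print(f"{name}: overall {overall:.1f}%  by grade {per_grade}")
```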

Validation of measures of food insecurity and hunger.

The most recent survey effort to determine the extent of food insecurity and hunger in the United States, the Food Security Supplement, included a series of questions to assess this complex phenomenon. The primary measure developed from the Food Security Supplement was based on measurement concepts, methods, and items from two previously developed measures. This paper presents the evidence that questionnaire-based measures, in particular the national food security measure, provide valid measurement of food insecurity and hunger for population and individual uses. The paper discusses basic ideas about measurement and criteria for establishing the validity of measures, and then uses these criteria to structure an examination of the available research on the validity of food security measures. The results show that the construction of the national food security measure is well grounded in our understanding of food insecurity and hunger; that its performance is consistent with that understanding; that it is precise within usual performance standards, dependable, and accurate at both group and individual levels within reasonable performance standards; and that its accuracy is attributable to that well-grounded understanding. These results provide strong evidence that the Food Security Supplement provides valid measurement of food insecurity and hunger for population and individual uses. Further validation research is required for population subgroups not yet studied, to establish validity for monitoring changes in prevalence over time, and to develop and validate robust, contextually sensitive measures in a variety of countries that reflect how people experience and think about food insecurity and hunger.
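
As one concrete illustration of checking whether a set of questionnaire items measures a single construct with adequate precision, the sketch below computes an internal-consistency statistic (Cronbach's alpha) for invented item responses. This is not the scaling procedure used for the national food security measure; it only shows the general flavour of such reliability checks.

```python
# Illustrative sketch: one common index of measurement precision for a
# questionnaire scale is Cronbach's alpha. The item responses below are
# invented; this is not the scaling procedure used for the national food
# security measure, just a minimal example of internal-consistency checking.

def cronbach_alpha(item_scores):
    """item_scores: list of per-item score lists, all of equal length
    (one entry per respondent). Returns Cronbach's alpha."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Invented 0/1 responses to four hunger-related items from six households.
items = [
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```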

An assessment of the operation of an external quality assessment (EQA) scheme in histopathology in the South Thames (West) region: 1995-1998.

AIMS: To describe the design and organisation of a voluntary regional external quality assessment (EQA) scheme in histopathology, and to record the results obtained over a three year period. METHODS: A protocol is presented in which circulation of EQA slides alternated with teaching sessions. Procedures for the choice of suitable cases, evaluation of submitted diagnoses, and feedback of results to participants are described. The use of teaching sessions, complementary to the slide circulations and dealing with current diagnostic problems, is also outlined. RESULTS: Participation rates in the nine slide circulations varied between 66% and 89% (mean 85%). Overall scores were predictably high, but 4% of returns, from 10 pathologists, were unsatisfactory. These low scores were typically isolated or intermittent, and none of the participants fulfilled the agreed criteria for chronic poor performance. CONCLUSIONS: The scheme has been well supported and overall performance has been satisfactory. The design was sufficiently discriminatory to reveal a few low scores, which are analysed in detail. Prompt feedback of results to participants, with identification of all "incomplete" and "wrong" diagnoses, is essential. Involvement of local histopathologists in designing, running, and monitoring such schemes is important.
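
The abstract refers to agreed criteria for chronic poor performance without specifying them. The sketch below shows, with invented scores and an assumed rule (scoring below a cut-off in a minimum number of circulations), how such flagging and the participation rate might be computed; both the rule and the data are hypothetical.

```python
# Illustrative sketch (details invented): flagging persistently low EQA scores.
# Here a participant is flagged as a possible chronic poor performer if they
# score below a cut-off in a minimum number of circulations; the scheme's
# actual agreed criteria are not specified in the abstract.

def flag_chronic_poor_performance(scores, cutoff=0.8, min_low_rounds=3):
    """scores: per-circulation scores (fraction correct) for one participant,
    with None marking circulations where no slides were returned."""
    low_rounds = sum(1 for s in scores if s is not None and s < cutoff)
    return low_rounds >= min_low_rounds

participants = {
    "pathologist 1": [0.95, 0.90, None, 0.92, 0.88],
    "pathologist 2": [0.75, 0.93, 0.70, 0.91, 0.72],
}
for name, scores in participants.items():
    returned = [s for s in scores if s is not None]
    participation = 100.0 * len(returned) / len(scores)
    flagged = flag_chronic_poor_performance(scores)
    print(f"{name}: participation {participation:.0f}%, "
          f"chronic poor performer: {flagged}")
```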

European interlaboratory comparison of breath 13CO2 analysis.

The BIOMED I programme Stable Isotopes in Gastroenterology and Nutrition (SIGN) has focused on the evaluation and standardisation of stable isotope breath tests using 13C labelled substrates. The programme dealt with comparison of 13C substrates, test meals, test conditions, analysis techniques, and calculation procedures. Analytical techniques applied for 13CO2 analysis were evaluated by taking an inventory of instrumentation, calibration protocols, and analysis procedures. Two ring tests were initiated, measuring 13C abundances of carbonate materials. Evaluation of the data showed that seven different models of isotope ratio mass spectrometer (IRMS) were used by the participants, applying both the dual inlet and the continuous flow configuration. Eight different brands of certified 13C reference material were used, with 13C abundances varying from delta 13C(PDB) -37.2 to +2.0 per thousand. CO2 was liberated from the certified material by three techniques, and the working standards used varied from -47.4 to +0.4 per thousand in their delta 13C(PDB) value. The standard deviations (SDs) found across all measurements by all participants were 0.25 and 0.50 per thousand for the two carbonates used in the ring tests. The within-laboratory variation for single participants ranged from 0.02 per thousand (dual inlet systems) to 0.14 per thousand (continuous flow systems). The measurement of the difference between the two carbonates showed an SD of 0.33 per thousand calculated over all participants. The internal precision of IRMS, as indicated by the specifications of the different instrument suppliers, is < 0.3 per thousand for continuous flow systems. In this respect it can be concluded that all participants are working well within the instrument specifications, even including sample preparation. The increased overall interlaboratory variation is therefore likely to be due to non-instrumental conditions; consistent differences in sample handling leading to isotope fractionation are a possible cause. Breath analysis does not require sample preparation, so interlaboratory variation should be smaller than that observed for the carbonate samples and within the range quoted as the internal precision of continuous flow instruments. It is therefore concluded that the purely analytical interlaboratory variation is acceptable despite the many differences in instrumentation and analytical protocols. Coordinated metabolic studies, in which different European laboratories perform the 13CO2 analysis, appear possible. Evaluation of the compatibility of the analytical systems nevertheless remains advisable.
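
For readers unfamiliar with the delta notation, the sketch below spells out the delta 13C(PDB) formula and the interlaboratory standard deviation calculation on invented laboratory results; the reference ratio is the commonly cited PDB value and is included only to make the formula concrete.

```python
# Sketch of the delta notation used above and of the interlaboratory spread.
# The sample ratios and laboratory results are invented; the PDB 13C/12C
# ratio is the commonly cited value, used here only to make the formula concrete.

from statistics import mean, stdev

R_PDB = 0.0112372  # commonly cited 13C/12C reference ratio for the PDB scale

def delta_13c(r_sample, r_reference=R_PDB):
    """delta 13C in per thousand: ((R_sample / R_reference) - 1) * 1000."""
    return (r_sample / r_reference - 1.0) * 1000.0

# Invented per-laboratory delta 13C results (per thousand) for one carbonate.
lab_results = [-24.85, -24.60, -25.10, -24.95, -24.70, -25.30, -24.55]

print(f"delta 13C for R = 0.01095: {delta_13c(0.01095):.2f} per thousand")
print(f"interlaboratory mean {mean(lab_results):.2f} per thousand, "
      f"SD {stdev(lab_results):.2f} per thousand")
```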

A policy of quality control assessment helps to reduce the risk of intraoperative stroke during carotid endarterectomy.

OBJECTIVES: A pilot study in our unit suggested that a combination of transcranial Doppler (TCD) monitoring plus completion angioscopy reduced the incidence of intraoperative stroke (i.e. patients recovering from anaesthesia with a new deficit) during carotid endarterectomy (CEA). The aim of the current study was to determine whether routine implementation of this policy was feasible and associated with a continued reduction in the rate of intraoperative stroke (IOS). MATERIALS AND METHODS: Prospective study of 252 consecutive patients undergoing carotid endarterectomy between March 1995 and December 1996. RESULTS: Continuous TCD monitoring was possible in 229 patients (91%), while 238 patients (94%) underwent angioscopic examination. Overall, angioscopy identified an intimal flap requiring correction in six patients (2.5%), whilst intraluminal thrombus was removed in a further six patients (2.5%). No patient in this series recovered from anaesthesia with an IOS, but the rate of postoperative stroke was 2.8%. CONCLUSIONS: Our policy of TCD monitoring plus angioscopy has contributed towards a sustained reduction in the risk of IOS following CEA, but it requires access to reliable equipment and technical support. A policy of intraoperative quality control assessment may not, however, alter the rate of postoperative stroke.

Effect of different lots of Mueller-Hinton agar on the interpretation of the gentamicin susceptibility of Pseudomonas aeruginosa.

Population distributions and quality control data for strains of Pseudomonas aeruginosa tested for gentamicin susceptibility on six lots of Mueller-Hinton agar were analyzed. The lots of agar were used in three University of Washington hospitals from April 1975 through October 1977. The analyses indicated that the performance of the P. aeruginosa populations in each hospital closely followed that of the quality control strain, P. aeruginosa ATCC 27853, when tested on each lot of Mueller-Hinton medium. The variability of zone diameters observed with the P. aeruginosa populations and the quality control strain indicated that a fixed indeterminate range (13 to 16 mm) of gentamicin susceptibility was not applicable to these organisms as it was to the Enterobacteriaceae. Variability in gentamicin susceptibility results was demonstrated in both minimal inhibitory concentration and disk diffusion tests when eight selected P. aeruginosa strains and the quality control strain were tested on each lot of medium. This variation in susceptibility to gentamicin was not related to the total Ca(2+), Mg(2+), or Zn(2+) content of each lot of medium. The data demonstrated that a moving indeterminate range of gentamicin susceptibility, 3 to 6 mm below the mean zone diameter of the quality control strain, was a suitable criterion for strains tested on a single lot of medium. These results illustrate the importance of defining stringent performance standards for media used in susceptibility testing of P. aeruginosa with gentamicin and other aminoglycoside antibiotics.
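
The moving indeterminate range described above can be expressed as a simple classification rule relative to the quality control strain's mean zone diameter on the same lot of medium. The sketch below implements that rule with invented zone diameters; it is illustrative only and not a clinical interpretive standard.

```python
# Minimal sketch of the "moving indeterminate range" described above: zone
# diameters falling 3-6 mm below the mean zone of the quality control strain
# on the same lot of medium are called indeterminate. The zone sizes are
# invented and the cut-offs follow the abstract's description only.

from statistics import mean

def classify(zone_mm, qc_zones_mm, lower=6, upper=3):
    """Classify a test strain's zone relative to the QC strain's mean zone
    on the same medium lot."""
    qc_mean = mean(qc_zones_mm)
    if zone_mm >= qc_mean - upper:          # within 3 mm of the QC mean
        return "susceptible"
    if zone_mm >= qc_mean - lower:          # 3-6 mm below the QC mean
        return "indeterminate"
    return "resistant"

# Invented QC-strain zones (P. aeruginosa ATCC 27853) on one agar lot.
qc_zones = [20, 21, 19, 20, 20]
for test_zone in [19, 16, 12]:
    print(f"zone {test_zone} mm -> {classify(test_zone, qc_zones)}")
```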

A new method of developing expert consensus practice guidelines.

To improve the quality of medical care while reducing costs, it is necessary to standardize best practices at the most crucial clinical decision points. Because many pertinent questions encountered in everyday practice are not well answered by the available research, expert consensus is a valuable bridge between clinical research and clinical practice. Previous methods of developing expert consensus have been limited by their relative lack of quantification, specificity, representativeness, and implementation. This article describes a new method of developing, documenting, and disseminating expert consensus guidelines that addresses these concerns. The method has already been applied to four disorders in psychiatry and could be equally useful for other medical conditions. Leading clinical researchers studying a given disorder complete a survey soliciting their opinions on its most important disease management questions that are not well covered by definitive research. The survey response rates among the experts for the four psychiatric disorders have each exceeded 85%. The views of the clinical researchers are validated by separately surveying a large group of practicing clinicians to ensure that the guideline recommendations are widely generalizable. All of the recommendations in the guideline are derived from, and referenced to, the experts' survey responses, using criteria established a priori for defining first-, second-, and third-line choices. Analysis of survey results suggests that this method of quantifying expert responses achieves a high level of reliability and reproducibility. The survey method is probably the best available means of standardizing practice for decision points not well covered by research.
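
The a priori criteria for first-, second-, and third-line choices are not detailed in the abstract. The sketch below shows, with a hypothetical 9-point rating scale and invented thresholds and survey responses, how such criteria might translate expert ratings into ranked recommendations.

```python
# Hedged sketch of how a priori criteria might translate expert survey ratings
# into first-, second-, and third-line recommendations. The 9-point scale,
# thresholds, and ratings below are hypothetical illustrations, not the
# published criteria of any consensus guideline series.

from statistics import mean

def rank_option(ratings, first_cut=6.5, second_cut=3.5):
    """ratings: individual expert scores on a hypothetical 1-9 scale."""
    avg = mean(ratings)
    if avg >= first_cut:
        return "first-line"
    if avg >= second_cut:
        return "second-line"
    return "third-line"

# Invented survey responses for three treatment options for one clinical question.
survey = {
    "option A": [9, 8, 7, 9, 8, 7, 9],
    "option B": [6, 5, 7, 4, 6, 5, 6],
    "option C": [2, 3, 1, 2, 4, 2, 3],
}
for option, ratings in survey.items():
    print(f"{option}: mean {mean(ratings):.1f} -> {rank_option(ratings)}")
```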