Five pediatric head and brain mathematical models for use in internal dosimetry. (33/4146)

Mathematical models of the head and brain currently used in pediatric neuroimaging dosimetry lack the anatomic detail needed to provide the necessary physical data for suborgan brain dosimetry. To overcome this limitation, the Medical Internal Radiation Dose (MIRD) Committee of the Society of Nuclear Medicine recently adopted a detailed dosimetric model of the head and brain for the adult. METHODS: New head and brain models have been developed for a newborn and for 1-, 5-, 10- and 15-y-olds for use in internal dosimetry. These models are based on the MIRD adult head and brain model and on published head and brain dimensions. They contain the same eight brain subregions and the same head regions as the adult model. These new models were coupled with the Monte Carlo transport code EGS4, and absorbed fractions of energy were calculated for 14 sources of monoenergetic photons and electrons in the energy range of 10 keV-4 MeV. These absorbed fractions were then used along with radionuclide decay data to generate S values for all ages for 99mTc, considering 12 source and 15 target regions. RESULTS: Explicit transport of positrons was also considered, with the annihilation-photon component of the absorbed fraction of energy treated separately in the calculation of S values for positron-emitting radionuclides. No statistically significant differences were found when S values were calculated for positron-emitting radionuclides under explicit consideration of the annihilation event compared with the traditional assumption of a uniform distribution of 0.511-MeV photons. CONCLUSION: The need for electron transport within the suborgan brain regions of these pediatric phantoms was reflected in the relatively fast decrease of the self-absorbed fraction with increasing particle energy within many of the brain subregions. This series of five dosimetric head and brain models will allow more precise dosimetry of radiopharmaceuticals in pediatric nuclear medicine brain procedures.
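The S values mentioned above follow the standard MIRD formalism: for each emission type, the mean energy per decay is weighted by the Monte Carlo absorbed fraction for a given source-target pair and divided by the target mass. The sketch below illustrates only that bookkeeping; the tabulated absorbed fractions and 99mTc emission data of the paper are not reproduced, and the `absorbed_fraction` callable and example numbers are placeholders.

```python
def s_value(emissions, absorbed_fraction, target_mass_kg):
    """Illustrative MIRD-style S value (Gy per decay) for one source-target pair.

    emissions          -- list of (mean_energy_MeV, yield_per_decay) tuples
    absorbed_fraction  -- callable phi(E) for this source->target pair,
                          e.g. interpolated from Monte Carlo tables
    target_mass_kg     -- mass of the target region
    """
    MEV_TO_J = 1.602e-13
    energy_deposited_J = sum(E * y * absorbed_fraction(E) * MEV_TO_J
                             for E, y in emissions)
    return energy_deposited_J / target_mass_kg

# Placeholder example: one 0.1405-MeV photon per decay with yield 0.89 and a flat phi of 0.02.
print(s_value([(0.1405, 0.89)], lambda E: 0.02, 0.05))
```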

Monte Carlo simulation of the heterotypic aggregation kinetics of platelets and neutrophils. (34/4146)

The heterotypic aggregation of cell mixtures or colloidal particles such as proteins occurs in a variety of settings such as thrombosis, immunology, cell separations, and diagnostics. For such systems, using the set of population balance equations (PBEs) to predict dynamic aggregate size and composition distributions is not feasible. The stochastic algorithm of Gillespie for chemical reactions (J. Comput. Phys. 22:403-434) was reformulated to simulate the kinetic behavior of aggregating systems. The resulting Monte Carlo (MC) algorithm permits exact calculation of the decay rates of monomers and of the temporally evolving distribution of aggregate sizes and compositions. Moreover, it permits calculation of all moments of these distributions. Using this method, we explored the heterotypic aggregation of fully activated platelets and neutrophils in a linear shear flow with shear rate G = 335 s^-1. At plasma concentrations, the half-lives of homotypically aggregating platelet and neutrophil singlets were 8.5 and 2.4 s, respectively. However, for heterotypic aggregation, the half-lives for platelets and neutrophils decreased to 2.0 and 0.11 s, respectively, demonstrating that flowing neutrophils accelerate the capture of platelets and the growth of aggregates. The required number of calculations per time step of the MC algorithm was typically a small fraction of Omega^(1/2), where Omega is the initial number of particles in the system, making this the fastest MC method available. The speed of the algorithm makes feasible the deconvolution of kernels for general biological heterotypic aggregation processes.
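In a Gillespie-style simulation of aggregation, one draws an exponential waiting time from the total coalescence rate and then selects which pair of clusters merges. The fragment below is a minimal sketch of that event loop for irreversible homotypic aggregation with a constant rate kernel; the shear-flow collision kernel, heterotypic composition tracking, and cell concentrations used in the paper are not reproduced, and the rate constant `k` is a placeholder.

```python
import random

def gillespie_aggregation(n0=1000, k=1e-3, t_end=10.0, seed=0):
    """Minimal Gillespie-style simulation of irreversible homotypic aggregation.

    Clusters are tracked as a list of sizes; any distinct pair coalesces with a
    constant rate kernel k (an illustrative choice, not the paper's kernel).
    """
    rng = random.Random(seed)
    clusters = [1] * n0
    t = 0.0
    while t < t_end and len(clusters) > 1:
        n = len(clusters)
        total_rate = k * n * (n - 1) / 2     # sum of rates over all distinct pairs
        t += rng.expovariate(total_rate)     # exponential waiting time to next event
        i, j = rng.sample(range(n), 2)       # constant kernel: any pair equally likely
        clusters[i] += clusters[j]           # merge the two clusters
        clusters.pop(j)
    return clusters                          # cluster-size distribution at time t
```

Singlet half-lives of the kind reported above can be estimated from such runs by recording how many clusters of size 1 remain as a function of time.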

Statistical limitations in functional neuroimaging. II. Signal detection and statistical inference. (35/4146)

The field of functional neuroimaging (FNI) methodology has developed into a mature but evolving area of knowledge, and its applications have been extensive. A general problem in the analysis of FNI data is finding a signal embedded in noise; this is sometimes called signal detection. Signal detection theory focuses on optimizing the conditions for separating signal from noise. When methods from probability theory and mathematical statistics are applied directly in this procedure, it is also called statistical inference. In this paper we briefly discuss some aspects of signal detection theory relevant to FNI and, in addition, some common approaches to statistical inference used in FNI. Low-pass filtering in relation to functional-anatomical variability and some effects of filtering on signal detection of interest to FNI are discussed. Some general aspects of hypothesis testing and statistical inference are also discussed, including the need to characterize the signal in the data when the null hypothesis is rejected, the problem of multiple comparisons that is central to FNI data analysis, omnibus tests, and some issues related to statistical power in the context of FNI. In turn, random field, scale-space, nonparametric, and Monte Carlo approaches are reviewed, representing the most common approaches to statistical inference used in FNI. Complementary to these issues, an overview and discussion of non-inferential descriptive methods, common statistical models, and the problem of model selection is given in a companion paper. In general, model selection is an important prelude to subsequent statistical inference. The emphasis in both papers is on the assumptions and inherent limitations of the methods presented. Most of the methods described here serve their purposes well when the inherent assumptions and limitations are taken into account. Significant differences in results between methods are most apparent in extreme parameter ranges, for example at low effective degrees of freedom or small spatial autocorrelation. In such situations, or when assumptions and approximations are seriously violated, it is of central importance to choose the most suitable method in order to obtain valid results.
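One common Monte Carlo/nonparametric approach to the multiple-comparisons problem mentioned above is to build the null distribution of the maximum statistic across all voxels by permuting condition labels and then threshold at its upper quantile to control familywise error. The snippet below is a generic sketch of that idea for a two-group voxelwise mean difference; the function name, the mean-difference statistic, and the data layout are illustrative assumptions, not the specific procedures reviewed in the paper.

```python
import numpy as np

def max_stat_threshold(data_a, data_b, n_perm=1000, alpha=0.05, seed=0):
    """Permutation (Monte Carlo) estimate of a familywise-error threshold via
    the maximum-statistic distribution.

    data_a, data_b -- arrays of shape (n_subjects, n_voxels) for two conditions.
    Returns the observed voxelwise mean difference and the FWE threshold.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([data_a, data_b], axis=0)
    n_a = data_a.shape[0]
    observed = data_a.mean(axis=0) - data_b.mean(axis=0)
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        perm = rng.permutation(pooled.shape[0])        # relabel subjects at random
        a, b = pooled[perm[:n_a]], pooled[perm[n_a:]]
        max_null[p] = np.abs(a.mean(axis=0) - b.mean(axis=0)).max()
    threshold = np.quantile(max_null, 1 - alpha)
    return observed, threshold   # voxels with |observed| > threshold survive FWE alpha
```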

Heightened sensitivity of a lattice of membrane receptors. (36/4146)

Receptor proteins in both eukaryotic and prokaryotic cells have been found to form two-dimensional clusters in the plasma membrane. In this study, we examine the proposition that such clusters might show coordinated responses because of the spread of conformational states from one receptor to its neighbors. A Monte Carlo simulation was developed in which receptors flipped probabilistically between an active and an inactive state. Conformational energies depended on (i) ligand binding, (ii) a chemical modification of the receptor conferring adaptation, and (iii) the activity of neighboring receptors. Rate constants were based on data from known biological receptors, especially the bacterial Tar receptor, and on theoretical constraints derived from an analogous Ising model. The simulated system showed a greatly enhanced sensitivity to external signals compared with a corresponding set of uncoupled receptors and was operational over a much wider range of ambient concentrations. These and other properties should make a lattice of conformationally coupled receptors ideally suited to act as a "nose" by which a cell can detect and respond to extracellular stimuli.
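The Ising analogy invoked above can be made concrete with a standard Metropolis Monte Carlo loop on a lattice of two-state units, where a coupling constant stands in for the influence of neighboring receptor activity and a local field lumps together the ligand-binding and adaptation terms. The code below is only that generic equilibrium Ising sketch; the study's actual rate constants, energy terms, and kinetic update scheme are not reproduced, and all parameter values are placeholders.

```python
import numpy as np

def simulate_receptor_lattice(n=30, J=1.0, field=0.0, steps=200_000,
                              temperature=1.0, seed=0):
    """Ising-like Monte Carlo sketch of a conformationally coupled receptor lattice.

    State +1 = active, -1 = inactive. J couples each receptor to its four
    neighbors (periodic boundaries); `field` is a placeholder for the combined
    ligand-binding and adaptation contributions.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))
    beta = 1.0 / temperature
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        nb = s[(i + 1) % n, j] + s[(i - 1) % n, j] + s[i, (j + 1) % n] + s[i, (j - 1) % n]
        dE = 2 * s[i, j] * (J * nb + field)      # energy change if s[i, j] flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1                        # Metropolis acceptance
    return s.mean()                              # mean activity of the lattice
```

Sweeping `field` (a stand-in for ligand concentration) for coupled versus uncoupled (J = 0) lattices reproduces the qualitative point of the abstract: coupling steepens and extends the response curve.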

Economic analysis of step-wise treatment of gastro-oesophageal reflux disease. (37/4146)

BACKGROUND: To expose patients with gastro-oesophageal reflux disease (GERD) to the least amount of medication and to reduce health expenditures, it is recommended that treatment be started with a small dose of an antisecretory or prokinetic medication. If patients fail to respond, the dose is increased in several consecutive steps, or the initial regimen is changed to a more potent medication, until the patients become asymptomatic. Although such a step-wise treatment strategy is widely recommended, its impact on health expenditures has not been evaluated. METHODS: The economic analysis compares the medication costs of competing medical treatment strategies, using two different sets of cost data. Medication costs are estimated from the average wholesale prices (AWP) and from the lowest discount prices charged to governmental health institutions. A decision tree is used to model the step-wise treatment of GERD. In a Monte Carlo simulation, all transition probabilities built into the model are varied over a wide range. A threshold analysis evaluates the relationship between the cost of an individual medication and its therapeutic success rate. RESULTS: In a governmental health care system, a step-wise strategy saves on average $916 per patient every 5 years (range: $443-$1628) in comparison with a strategy using only the most potent medication. In a cost environment relying on AWP, the average savings amount to $256 (range: -$206 to +$1561). The smaller the cost difference between two consecutive treatment steps, the longer patients need to be followed to reap the benefit of that difference. However, even a small cost difference can turn into tangible cost savings if a large enough fraction of GERD patients responds to the initial step of a less potent but also less expensive medication. CONCLUSIONS: The economic analysis suggests that step-wise use of increasingly potent and increasingly expensive medications to treat GERD would result in appreciable cost savings.
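The core trade-off described above, namely that step-wise treatment pays off when enough patients respond to the cheaper first step, can be illustrated with a toy two-step decision tree evaluated by Monte Carlo sampling. All probabilities and costs in the sketch are hypothetical placeholders, not the paper's data, and the real model contains more treatment steps and time-dependent costs.

```python
import random

def stepwise_vs_potent_cost(p_respond_step1, cost_step1, cost_step2,
                            n_trials=10_000, seed=0):
    """Toy Monte Carlo over a two-step decision tree.

    Step-wise arm: start every patient on the cheaper drug (cost_step1) and
    escalate non-responders to the potent drug (cost_step2).
    Comparator arm: give the potent drug to everyone.
    """
    rng = random.Random(seed)
    stepwise_total = 0.0
    for _ in range(n_trials):
        if rng.random() < p_respond_step1:
            stepwise_total += cost_step1                 # responder stays on cheap drug
        else:
            stepwise_total += cost_step1 + cost_step2    # escalated to potent drug
    stepwise_mean = stepwise_total / n_trials
    potent_only_mean = cost_step2
    return stepwise_mean, potent_only_mean

# Hypothetical example: 60% respond to a $200 first step versus a $900 potent drug.
print(stepwise_vs_potent_cost(0.6, 200.0, 900.0))
```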

Method revealing latent periodicity of the nucleotide sequences modified for a case of small samples. (38/4146)

A previously reported method for revealing latent periodicity in nucleotide sequences has been substantially modified for the case of small samples by applying a Monte Carlo method. This improved method has been used to search for latent periodicity in nucleotide sequences from the EMBL data bank. Latent periodicity has been demonstrated in the nucleotide sequences of some genes. The results imply that the periodicity of gene structure is projected onto the periodicity of the primary amino acid sequence and, further, onto the spatial conformation of the protein. Even where the periodic structure of a gene sequence has been eroded, it is still retained in the primary and/or spatial structure of the corresponding protein. Furthermore, in a few cases the study of gene periodicity suggests a possible evolutionary origin by multiple duplications of gene fragments.
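A Monte Carlo significance test for periodicity in a short sequence can be sketched as follows: compute a periodicity statistic on the observed sequence and compare it with the same statistic on many random shuffles that preserve base composition. The statistic used below (a chi-square-like deviation of per-phase base counts) and the shuffling null model are generic illustrations, not the specific latent-periodicity measure of the original method.

```python
import random
from collections import Counter

def period_score(seq, period):
    """Deviation of per-phase nucleotide counts from the uniform expectation
    (an illustrative periodicity statistic)."""
    score = 0.0
    for phase in range(period):
        column = seq[phase::period]
        expected = len(column) / 4.0
        counts = Counter(column)
        score += sum((counts.get(b, 0) - expected) ** 2 / expected for b in "ACGT")
    return score

def monte_carlo_p_value(seq, period, n_shuffles=1000, seed=0):
    """Monte Carlo p-value: shuffle the sequence to destroy order while
    preserving base composition, then count shuffles scoring at least as high."""
    rng = random.Random(seed)
    observed = period_score(seq, period)
    letters = list(seq)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(letters)
        if period_score("".join(letters), period) >= observed:
            hits += 1
    return (hits + 1) / (n_shuffles + 1)
```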

Locating regions of differential variability in DNA and protein sequences. (39/4146)

In comparisons of DNA and protein sequences between species, between paralogues, or among individuals within a species or population, there is often some indication that different regions of the sequence are divergent or polymorphic to different degrees, indicating differential constraint or diversifying selection operating in different regions of the sequence. The problem is to test statistically whether the observed regional differences in the density of variant sites represent real differences and then to estimate as accurately as possible the location of the differential regions. A method is given for testing and locating regions of differential variation. The method consists of calculating G(x_k) = k/n - x_k/N, where x_k is the position of the kth variant site along the sequence, n is the total number of variant sites, and N is the total sequence length. The estimated region is the longest stretch of adjacent sequence over which G(x_k) is monotonically increasing (a hot spot) or decreasing (a cold spot). Critical values of this length for tests of significance are given, a sequential method is developed for locating multiple differential regions, and the power of the method against various alternatives is explored. The method locates the endpoints of hot spots and cold spots of variation with high accuracy.
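The statistic G(x_k) = k/n - x_k/N given above is straightforward to compute; the sketch below evaluates it at each variant site and returns the longest monotonically increasing and decreasing runs as candidate hot and cold spots. Boundary conventions (1-based k, strict monotonicity) are illustrative assumptions, and significance still requires the critical values published with the method.

```python
def hot_and_cold_spots(variant_positions, seq_length):
    """Compute G(x_k) = k/n - x_k/N and return the longest monotone runs.

    variant_positions -- positions of variant sites along the sequence
    seq_length        -- total sequence length N
    Returns (run length in steps, start index, end index) for hot and cold spots.
    """
    n, N = len(variant_positions), seq_length
    g = [(k + 1) / n - x / N for k, x in enumerate(sorted(variant_positions))]

    def longest_run(increasing):
        best = (0, 0, 0)                            # (length, start, end) over indices of g
        start = 0
        for k in range(1, len(g)):
            ok = g[k] > g[k - 1] if increasing else g[k] < g[k - 1]
            if not ok:
                start = k                            # monotone run broken; restart here
            if k - start > best[0]:
                best = (k - start, start, k)
        return best

    return {"hot": longest_run(True), "cold": longest_run(False)}
```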

Linkage disequilibrium test implies a large effective population number for HIV in vivo. (40/4146)

The effective size of the HIV population in vivo, although critically important for predicting the appearance of drug-resistant variants, is currently unknown. To address this issue, we have developed a simple virus population model within which the relative importance of stochastic factors and purifying selection for genetic evolution differs over at least three broad intervals of the effective population size, with approximate boundaries given by the inverse selection coefficient and the inverse mutation rate per base per cycle. Random drift and selection dominate the smallest (stochastic) and largest (deterministic) population intervals, respectively. In the intermediate (selection-drift) interval, random drift controls weakly diverse populations, whereas strongly diverse populations are controlled by selection. To estimate the effective size of the HIV population in vivo, we tested 200 pro sequences isolated from 11 HIV-infected patients for the presence of a linkage disequilibrium effect, which can exist only in small populations. This analysis demonstrated a steady-state virus population of 10^5 infected cells or more, which is either in or at the border of the deterministic regime with respect to the evolution of separate bases.
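The pairwise linkage disequilibrium that such a test builds on can be written D = p_AB - p_A * p_B for a pair of biallelic sites, where p_A and p_B are single-site allele frequencies and p_AB is the two-site haplotype frequency. The sketch below computes this quantity for every biallelic site pair in a set of aligned haplotype sequences; it is a generic LD computation, not the specific test statistic or null model used in the paper, and the choice of reference allele is arbitrary.

```python
from itertools import combinations

def pairwise_D(haplotypes):
    """Pairwise linkage disequilibrium D = p_AB - p_A * p_B for biallelic site pairs.

    haplotypes -- list of equal-length aligned sequences (one per sampled genome).
    Returns {(site_i, site_j): D}.
    """
    n = len(haplotypes)
    L = len(haplotypes[0])
    results = {}
    for i, j in combinations(range(L), 2):
        a_alleles = {h[i] for h in haplotypes}
        b_alleles = {h[j] for h in haplotypes}
        if len(a_alleles) != 2 or len(b_alleles) != 2:
            continue                                 # restrict to biallelic site pairs
        a_ref, b_ref = min(a_alleles), min(b_alleles)  # arbitrary reference alleles
        p_a = sum(h[i] == a_ref for h in haplotypes) / n
        p_b = sum(h[j] == b_ref for h in haplotypes) / n
        p_ab = sum(h[i] == a_ref and h[j] == b_ref for h in haplotypes) / n
        results[(i, j)] = p_ab - p_a * p_b
    return results
```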