Algorithms: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.
Software: Sequential operating programs and data which instruct the functioning of a digital computer.
Pattern Recognition, Automated: In INFORMATION RETRIEVAL, machine-sensing or identification of visible patterns (shapes, forms, and configurations). (Harrod's Librarians' Glossary, 7th ed)
Computer Simulation: Computer-based representation of physical systems and phenomena such as chemical processes.
Computational Biology: A field of biology concerned with the development of techniques for the collection and manipulation of biological data, and the use of such data to make biological discoveries or predictions. This field encompasses all computational methods and theories for solving biological problems including manipulation of models and datasets.
Reproducibility of Results: The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.
Artificial Intelligence: Theory and development of COMPUTER SYSTEMS which perform tasks that normally require human intelligence. Such tasks may include speech recognition, LEARNING; VISUAL PERCEPTION; MATHEMATICAL COMPUTING; reasoning, PROBLEM SOLVING, DECISION-MAKING, and translation of language.
Models, Statistical: Statistical formulations or analyses which, when applied to data and found to fit the data, are then used to verify the assumptions and parameters used in the analysis. Examples of statistical models are the linear model, binomial model, polynomial model, two-parameter model, etc.
Sensitivity and Specificity: Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)
Cluster Analysis: A set of statistical methods used to group variables or observations into strongly inter-related subgroups. In epidemiology, it may be used to analyze a closely grouped series of events or cases of disease or other health-related phenomenon with well-defined distribution patterns in relation to time or place or both.
Image Processing, Computer-Assisted: A technique of inputting two-dimensional images into a computer and then enhancing or analyzing the imagery into a form that is more useful to the human observer.
Sequence Analysis, Protein: A process that includes the determination of AMINO ACID SEQUENCE of a protein (or peptide, oligopeptide or peptide fragment) and the information analysis of the sequence.
Sequence Alignment: The arrangement of two or more amino acid or base sequences from an organism or organisms in such a way as to align areas of the sequences sharing common properties. The degree of relatedness or homology between the sequences is predicted computationally or statistically based on weights assigned to the elements aligned between the sequences. This in turn can serve as a potential indicator of the genetic relatedness between the organisms.
Image Interpretation, Computer-Assisted: Methods developed to aid in the interpretation of ultrasound, radiographic images, etc., for diagnosis of disease.
Phantoms, Imaging: Devices or objects in various imaging techniques used to visualize or enhance visualization by simulating conditions encountered in the procedure. Phantoms are used very often in procedures employing or measuring x-irradiation or radioactive material to evaluate performance. Phantoms often have properties similar to human tissue. Water demonstrates absorbing properties similar to normal tissue, hence water-filled phantoms are used to map radiation levels. Phantoms are used also as teaching aids to simulate real conditions with x-ray or ultrasonic machines. (From Iturralde, Dictionary and Handbook of Nuclear Medicine and Clinical Imaging, 1990)
Models, Genetic: Theoretical representations that simulate the behavior or activity of genetic processes or phenomena. They include the use of mathematical equations, computers, and other electronic equipment.
Signal Processing, Computer-Assisted: Computer-assisted processing of electric, ultrasonic, or electronic signals to interpret function and activity.
Software Validation: The act of testing the software for compliance with a standard.
Imaging, Three-Dimensional: The process of generating three-dimensional images by electronic, photographic, or other methods. For example, three-dimensional images can be generated by assembling multiple tomographic images with the aid of a computer, while photographic 3-D images (HOLOGRAPHY) can be made by exposing film to the interference pattern created when two laser light sources shine on an object.
Sequence Analysis, DNA: A multistage process that includes cloning, physical mapping, subcloning, determination of the DNA SEQUENCE, and information analysis.
Image Enhancement: Improvement of the quality of a picture by various techniques, including computer processing, digital filtering, echocardiographic techniques, light and ultrastructural MICROSCOPY, fluorescence spectrometry and microscopy, scintigraphy, and in vitro image processing at the molecular level.
Markov Chains: A stochastic process such that the conditional probability distribution for a state at any future instant, given the present state, is unaffected by any additional knowledge of the past history of the system.
Proteins: Linear POLYPEPTIDES that are synthesized on RIBOSOMES and may be further modified, crosslinked, cleaved, or assembled into complex proteins with several subunits. The specific sequence of AMINO ACIDS determines the shape the polypeptide will take, during PROTEIN FOLDING, and the function of the protein.
Databases, Protein: Databases containing information about PROTEINS such as AMINO ACID SEQUENCE; PROTEIN CONFORMATION; and other properties.
Bayes Theorem: A theorem in probability theory named for Thomas Bayes (1702-1761). In epidemiology, it is used to obtain the probability of disease in a group of people with some characteristic on the basis of the overall rate of that disease and of the likelihood of that characteristic in healthy and diseased individuals. The most familiar application is in clinical decision analysis where it is used for estimating the probability of a particular diagnosis given the appearance of some symptoms or test result.
Gene Expression Profiling: The determination of the pattern of genes expressed at the level of GENETIC TRANSCRIPTION, under specific circumstances or in a specific cell.
Monte Carlo Method: In statistics, a technique for numerically approximating the solution of a mathematical problem by studying the distribution of some random variable, often generated by a computer. The name alludes to the randomness characteristic of the games of chance played at the gambling casinos in Monte Carlo. (From Random House Unabridged Dictionary, 2d ed, 1993)
Computer Graphics: The process of pictorial communication, between humans and computers, in which the computer input and output have the form of charts, drawings, or other appropriate pictorial representation.
Automation: Controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human organs of observation, effort, and decision. (From Webster's Collegiate Dictionary, 1993)
Databases, Factual: Extensive collections, reputedly complete, of facts and data garnered from material of a specialized subject area and made available for analysis and application. The collection can be automated by various contemporary methods for retrieval. The concept should be differentiated from DATABASES, BIBLIOGRAPHIC which is restricted to collections of bibliographic references.
Oligonucleotide Array Sequence Analysis: Hybridization of a nucleic acid sample to a very large set of OLIGONUCLEOTIDE PROBES, which have been attached individually in columns and rows to a solid support, to determine a BASE SEQUENCE, or to detect variations in a gene sequence, GENE EXPRESSION, or for GENE MAPPING.
Neural Networks (Computer): A computer architecture, implementable in either hardware or software, modeled after biological neural networks. Like the biological system in which the processing capability is a result of the interconnection strengths between arrays of nonlinear processing nodes, computerized neural networks, often called perceptrons or multilayer connectionist models, consist of neuron-like units. A homogeneous group of units makes up a layer. These networks are good at pattern recognition. They are adaptive, performing tasks by example, and thus are better for decision-making than are linear learning machines or cluster analysis. They do not require explicit programming.
Numerical Analysis, Computer-Assisted: Computer-assisted study of methods for obtaining useful quantitative solutions to problems that have been expressed mathematically.
Models, Theoretical: Theoretical representations that simulate the behavior or activity of systems, processes, or phenomena. They include the use of mathematical equations, computers, and other electronic equipment.
User-Computer Interface: The portion of an interactive computer program that issues messages to and receives commands from a user.
Data Compression: Information application based on a variety of coding methods to minimize the amount of data to be stored, retrieved, or transmitted. Data compression can be applied to various forms of data, such as images and signals. It is used to reduce costs and increase efficiency in the maintenance of large volumes of data.
Fuzzy Logic: Approximate, quantitative reasoning that is concerned with the linguistic ambiguity which exists in natural or synthetic language. At its core are variables such as good, bad, and young as well as modifiers such as more, less, and very. These ordinary terms represent fuzzy sets in a particular problem. Fuzzy logic plays a key role in many medical expert systems.
Artifacts: Any visible result of a procedure which is caused by the procedure itself and not by the entity being analyzed. Common examples include histological structures introduced by tissue processing, radiographic images of structures that are not naturally present in living tissue, and products of chemical reactions that occur during analysis.
Diagnosis, Computer-Assisted: Application of computer programs designed to assist the physician in solving a diagnostic problem.
Databases, Genetic: Databases devoted to knowledge about specific genes and gene products.
Data Interpretation, Statistical: Application of statistical procedures to analyze specific observed or assumed facts from a particular study.
Models, Biological: Theoretical representations that simulate the behavior or activity of biological processes or diseases. For disease models in living animals, DISEASE MODELS, ANIMAL is available. Biological models include the use of mathematical equations, computers, and other electronic equipment.
Normal Distribution: Continuous frequency distribution of infinite range. Its properties are as follows: 1, continuous, symmetrical distribution with both tails extending to infinity; 2, arithmetic mean, mode, and median identical; and 3, shape completely determined by the mean and standard deviation.
Information Storage and Retrieval: Organized activities related to the storage, location, search, and retrieval of information.
Likelihood Functions: Functions constructed from a statistical model and a set of observed data which give the probability of that data for various values of the unknown model parameters. Those parameter values that maximize the probability are the maximum likelihood estimates of the parameters.
Radiographic Image Interpretation, Computer-Assisted: Computer systems or networks designed to provide radiographic interpretive information.
Genomics: The systematic study of the complete DNA sequences (GENOME) of organisms.
Internet: A loose confederation of computer communication networks around the world. The networks that make up the Internet are connected through several backbone networks. The Internet grew out of the US Government ARPAnet project and was designed to facilitate information exchange.
Decision Trees: A graphic device used in decision analysis, in which a series of decision options are represented as branches (hierarchical).
Radiographic Image Enhancement: Improvement in the quality of an x-ray image by use of an intensifying screen, tube, or filter and by optimum exposure techniques. Digital processing methods are often employed.
Subtraction Technique: Combination or superimposition of two images for demonstrating differences between them (e.g., radiograph with contrast vs. one without, radionuclide images using different radionuclides, radiograph vs. radionuclide image) and in the preparation of audiovisual materials (e.g., offsetting identical images, coloring of vessels in angiograms).
Programming Languages: Specific languages used to prepare computer programs.
Wavelet Analysis: Signal and data processing method that uses decomposition of wavelets to approximate, estimate, or compress signals with finite time and frequency domains. It represents a signal or data in terms of a fast decaying wavelet series from the original prototype wavelet, called the mother wavelet. This mathematical algorithm has been adopted widely in biomedical disciplines for data and signal processing in noise removal and audio/image compression (e.g., EEG and MRI).
Computing Methodologies: Computer-assisted analysis and processing of problems in a particular area.
Signal-To-Noise Ratio: The comparison of the quantity of meaningful data to the irrelevant or incorrect data.
Data Mining: Use of sophisticated analysis tools to sort through, organize, examine, and combine large sets of information.
Protein Interaction Mapping: Methods for determining interaction between PROTEINS.
Models, Molecular: Models used experimentally or theoretically to study molecular shape, electronic properties, or interactions; includes analogous molecules, computer-generated graphics, and mechanical structures.
Wireless Technology: Techniques using energy such as radio frequency, infrared light, laser light, visible light, or acoustic energy to transfer information without the use of wires, over both short and long distances.
Support Vector Machines: Learning algorithms which are a set of related supervised computer learning methods that analyze data and recognize patterns, and are used for classification and regression analysis.
Automatic Data Processing: Data processing largely performed by automatic means.
Software Design: Specifications and instructions applied to the software.
Sequence Analysis, RNA: A multistage process that includes cloning, physical mapping, subcloning, sequencing, and information analysis of an RNA SEQUENCE.
Computers
Molecular Sequence Data: Descriptions of specific amino acid, carbohydrate, or nucleotide sequences which have appeared in the published literature and/or are deposited in and maintained by databanks such as GENBANK, European Molecular Biology Laboratory (EMBL), National Biomedical Research Foundation (NBRF), or other sequence repositories.
Stochastic Processes: Processes that incorporate some element of randomness, used particularly to refer to a time series of random variables.
Genome: The genetic complement of an organism, including all of its GENES, as represented in its DNA, or in some cases, its RNA.
Gene Regulatory Networks: Interacting DNA-encoded regulatory subsystems in the GENOME that coordinate input from activator and repressor TRANSCRIPTION FACTORS during development, cell differentiation, or in response to environmental cues. The networks function to ultimately specify expression of particular sets of GENES for specific conditions, times, or locations.
ROC Curve: A graphic means for assessing the ability of a screening test to discriminate between healthy and diseased persons; may also be used in other studies, e.g., distinguishing stimuli responses as to a faint stimuli or nonstimuli.
Equipment Design: Methods of creating machines and devices.
Models, Chemical: Theoretical representations that simulate the behavior or activity of chemical processes or phenomena; includes the use of mathematical equations, computers, and other electronic equipment.
Probability: The study of chance processes or the relative frequency characterizing a chance process.
Predictive Value of Tests: In screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test.
Chromosome Mapping: Any method used for determining the location of and relative distances between genes on a chromosome.
Phylogeny: The relationships of groups of organisms as reflected by their genetic makeup.
Magnetic Resonance Imaging: Non-invasive method of demonstrating internal anatomy based on the principle that atomic nuclei in a strong magnetic field absorb pulses of radiofrequency energy and emit them as radiowaves which can be reconstructed into computerized images. The concept includes proton spin tomographic techniques.
Time Factors: Elements of limited time intervals, contributing to particular results or situations.
Base Sequence: The sequence of PURINES and PYRIMIDINES in nucleic acids and polynucleotides. It is also called nucleotide sequence.
Discriminant Analysis: A statistical analytic technique used with discrete dependent variables, concerned with separating sets of observed values and allocating new values. It is sometimes used instead of regression analysis.
Cone-Beam Computed Tomography: Computed tomography modalities which use a cone or pyramid-shaped beam of radiation.
Tomography, X-Ray Computed: Tomography using x-ray transmission and a computer algorithm to reconstruct the image.
Least-Squares Analysis: A principle of estimation in which the estimates of a set of parameters in a statistical model are those quantities minimizing the sum of squared differences between the observed values of a dependent variable and the values predicted by the model.
Nonlinear Dynamics: The study of systems which respond disproportionately (nonlinearly) to initial conditions or perturbing stimuli. Nonlinear systems may exhibit "chaos" which is classically characterized as sensitive dependence on initial conditions. Chaotic systems, while distinguished from more ordered periodic systems, are not random. When their behavior over time is appropriately displayed (in "phase space"), constraints are evident which are described by "strange attractors". Phase space representations of chaotic systems, or strange attractors, usually reveal fractal (FRACTALS) self-similarity across time scales. Natural, including biological, systems often display nonlinear dynamics and chaos.
Programming, Linear: A technique of operations research for solving certain kinds of problems involving many variables where a best value or set of best values is to be found. It is most likely to be feasible when the quantity to be optimized, sometimes called the objective function, can be stated as a mathematical expression in terms of the various activities within the system, and when this expression is simply proportional to the measure of the activities, i.e., is linear, and when all the restrictions are also linear. It is different from computer programming, although problems using linear programming techniques may be programmed on a computer.
Equipment Failure Analysis: The evaluation of incidents involving the loss of function of a device. These evaluations are used for a variety of purposes such as to determine the failure rates, the causes of failures, costs of failures, and the reliability and maintainability of devices.
Genome, Human: The complete genetic complement contained in the DNA of a set of CHROMOSOMES in a HUMAN. The length of the human genome is about 3 billion base pairs.
Proteomics: The systematic study of the complete complement of proteins (PROTEOME) of organisms.
Databases, Nucleic Acid: Databases containing information about NUCLEIC ACIDS such as BASE SEQUENCE; SNPS; NUCLEIC ACID CONFORMATION; and other properties. Information about the DNA fragments kept in a GENE LIBRARY or GENOMIC LIBRARY is often maintained in DNA databases.
Principal Component Analysis: Mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components.
Polymorphism, Single Nucleotide: A single nucleotide variation in a genetic sequence that occurs at appreciable frequency in the population.
Amino Acid Sequence: The order of amino acids as they occur in a polypeptide chain. This is referred to as the primary structure of proteins. It is of fundamental importance in determining PROTEIN CONFORMATION.
Computer Communication Networks: A system containing any combination of computers, computer terminals, printers, audio or visual display devices, or telephones interconnected by telecommunications equipment or cables: used to transmit or receive information. (Random House Unabridged Dictionary, 2d ed)
Natural Language Processing: Computer processing of a language with rules that reflect and describe current usage rather than prescribed usage.
Tomography: Imaging methods that result in sharp images of objects located on a chosen plane and blurred images located above or below the plane.
Proteome: The protein complement of an organism coded for by its genome.
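
The definitions of Sensitivity and Specificity, Bayes Theorem, and Predictive Value of Tests above fit together computationally. A minimal sketch of that link, with made-up example numbers:

```python
# Illustrative sketch (not part of the glossary): Bayes' theorem links
# sensitivity and specificity to the predictive value of a positive test.
# All numbers below are made-up example values.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test with 90% sensitivity and 95% specificity applied to a
# population with 2% disease prevalence:
ppv = positive_predictive_value(0.90, 0.95, 0.02)
print(f"PPV = {ppv:.3f}")  # ~0.269: most positives are false positives
```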

An effective approach for analyzing "prefinished" genomic sequence data.

Ongoing efforts to sequence the human genome are already generating large amounts of data, with substantial increases anticipated over the next few years. In most cases, a shotgun sequencing strategy is being used, which rapidly yields most of the primary sequence in incompletely assembled sequence contigs ("prefinished" sequence) and more slowly produces the final, completely assembled sequence ("finished" sequence). Thus, in general, prefinished sequence is produced in excess of finished sequence, and this trend is certain to continue and even accelerate over the next few years. Even at a prefinished stage, genomic sequence represents a rich source of important biological information that is of great interest to many investigators. However, analyzing such data is a challenging and daunting task, both because of its sheer volume and because it can change on a day-by-day basis. To facilitate the discovery and characterization of genes and other important elements within prefinished sequence, we have developed an analytical strategy and system that uses readily available software tools in new combinations. Implementation of this strategy for the analysis of prefinished sequence data from human chromosome 7 has demonstrated that this is a convenient, inexpensive, and extensible solution to the problem of analyzing the large amounts of preliminary data being produced by large-scale sequencing efforts. Our approach is accessible to any investigator who wishes to assimilate additional information about particular sequence data en route to developing richer annotations of a finished sequence.

A computational screen for methylation guide snoRNAs in yeast.

Small nucleolar RNAs (snoRNAs) are required for ribose 2'-O-methylation of eukaryotic ribosomal RNA. Many of the genes for this snoRNA family have remained unidentified in Saccharomyces cerevisiae, despite the availability of a complete genome sequence. Probabilistic modeling methods akin to those used in speech recognition and computational linguistics were used to computationally screen the yeast genome and identify 22 methylation guide snoRNAs, snR50 to snR71. Gene disruptions and other experimental characterization confirmed their methylation guide function. In total, 51 of the 55 ribose methylated sites in yeast ribosomal RNA were assigned to 41 different guide snoRNAs.

Referenceless interleaved echo-planar imaging.

Interleaved echo-planar imaging (EPI) is an ultrafast imaging technique important for applications that require high time resolution or short total acquisition times. Unfortunately, EPI is prone to significant ghosting artifacts, resulting primarily from system time delays that cause data matrix misregistration. In this work, it is shown mathematically and experimentally that system time delays are orientation dependent, resulting from anisotropic physical gradient delays. This analysis characterizes the behavior of time delays in oblique coordinates, and a new ghosting artifact caused by anisotropic delays is described. "Compensation blips" are proposed for time delay correction. These blips are shown to remove the effects of anisotropic gradient delays, eliminating the need for repeated reference scans and postprocessing corrections. Examples of phantom and in vivo images are shown.

An evaluation of elongation factor 1 alpha as a phylogenetic marker for eukaryotes.

Elongation factor 1 alpha (EF-1 alpha) is a highly conserved ubiquitous protein involved in translation that has been suggested to have desirable properties for phylogenetic inference. To examine the utility of EF-1 alpha as a phylogenetic marker for eukaryotes, we studied three properties of EF-1 alpha trees: congruency with other phylogenetic markers, the impact of species sampling, and the degree of substitutional saturation occurring between taxa. Our analyses indicate that the EF-1 alpha tree is congruent with some other molecular phylogenies in identifying both the deepest branches and some recent relationships in the eukaryotic line of descent. However, the topology of the intermediate portion of the EF-1 alpha tree, occupied by most of the protist lineages, differs for different phylogenetic methods, and bootstrap values for branches are low. Most problematic in this region is the failure of all phylogenetic methods to resolve the monophyly of two higher-order protistan taxa, the Ciliophora and the Alveolata. JACKMONO analyses indicated that the impact of species sampling on bootstrap support for most internal nodes of the eukaryotic EF-1 alpha tree is extreme. Furthermore, a comparison of observed versus inferred numbers of substitutions indicates that multiple overlapping substitutions have occurred, especially on the branch separating the Eukaryota from the Archaebacteria, suggesting that the rooting of the eukaryotic tree on the diplomonad lineage should be treated with caution. Overall, these results suggest that the phylogenies obtained from EF-1 alpha are congruent with other molecular phylogenies in recovering the monophyly of groups such as the Metazoa, Fungi, Magnoliophyta, and Euglenozoa. However, the interrelationships between these and other protist lineages are not well resolved. This lack of resolution may result from the combined effects of poor taxonomic sampling, relatively few informative positions, large numbers of overlapping substitutions that obscure phylogenetic signal, and lineage-specific rate increases in the EF-1 alpha data set. It is also consistent with the nearly simultaneous diversification of major eukaryotic lineages implied by the "big-bang" hypothesis of eukaryote evolution.

Hierarchical cluster analysis applied to workers' exposures in fiberglass insulation manufacturing.

The objectives of this study were to explore the application of cluster analysis to the characterization of multiple exposures in industrial hygiene practice and to compare exposure groupings based on the result from cluster analysis with that based on non-measurement-based approaches commonly used in epidemiology. Cluster analysis was performed for 37 workers simultaneously exposed to three agents (endotoxin, phenolic compounds and formaldehyde) in fiberglass insulation manufacturing. Different clustering algorithms, including complete-linkage (or farthest-neighbor), single-linkage (or nearest-neighbor), group-average and model-based clustering approaches, were used to construct the tree structures from which clusters can be formed. Differences were observed between the exposure clusters constructed by these different clustering algorithms. When contrasting the exposure classification based on tree structures with that based on non-measurement-based information, the results indicate that the exposure clusters identified from the tree structures had little in common with the classification results from either the traditional exposure zone or the work group classification approach. In terms of defining homogeneous exposure groups or from the standpoint of health risk, some toxicological normalization in the components of the exposure vector appears to be required in order to form meaningful exposure groupings from cluster analysis. Finally, it remains important to see if the lack of correspondence between exposure groups based on epidemiological classification and measurement data is a peculiarity of the data or a more general problem in multivariate exposure analysis.
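
A minimal sketch of the linkage methods named in this abstract, using synthetic data in place of the study's exposure measurements (37 workers by 3 agents); the cluster count of 4 is an arbitrary illustration, not the study's choice:

```python
# Hierarchical clustering with the linkage criteria mentioned above.
# Requires numpy and scipy; the data here are randomly generated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
exposures = rng.lognormal(size=(37, 3))   # endotoxin, phenolics, formaldehyde (toy)
X = zscore(exposures, axis=0)             # put the agents on comparable scales

for method in ("complete", "single", "average"):  # farthest-/nearest-neighbor, group-average
    tree = linkage(X, method=method)
    labels = fcluster(tree, t=4, criterion="maxclust")  # cut the tree into 4 clusters
    print(method, np.bincount(labels)[1:])              # cluster sizes differ by method
```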

A new filtering algorithm for medical magnetic resonance and computer tomography images.

Inner views of tubular structures based on computer tomography (CT) and magnetic resonance (MR) data sets may be created by virtual endoscopy. After a preliminary segmentation procedure for selecting the organ to be represented, the virtual endoscopy is a new postprocessing technique using surface or volume rendering of the data sets. In the case of surface rendering, the segmentation is based on a grey level thresholding technique. To avoid artifacts owing to the noise created in the imaging process, and to restore spurious resolution degradations, a robust Wiener filter was applied. This filter working in Fourier space approximates the noise spectrum by a simple function that is proportional to the square root of the signal amplitude. Thus, only points with tiny amplitudes consisting mostly of noise are suppressed. Further artifacts are avoided by the correct selection of the threshold range. Afterwards, the lumen and the inner walls of the tubular structures are well represented and allow one to distinguish between harmless fluctuations and medically significant structures.
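
A rough sketch of the Fourier-space filtering idea described here. The noise model (proportional to the square root of the signal amplitude) is taken from the abstract; everything else, including the gain formula and scale parameter, is a simplifying assumption, not the authors' filter:

```python
# Wiener-style attenuation in Fourier space: points with tiny spectral
# amplitude (mostly noise) are suppressed, strong components pass through.
import numpy as np

def wiener_like_filter(image: np.ndarray, noise_scale: float = 1.0) -> np.ndarray:
    F = np.fft.fft2(image)
    amplitude = np.abs(F)
    noise_power = noise_scale * np.sqrt(amplitude)      # assumed: N ~ sqrt(|S|)
    gain = amplitude**2 / (amplitude**2 + noise_power**2 + 1e-12)
    return np.real(np.fft.ifft2(gain * F))

noisy = np.random.default_rng(1).normal(size=(64, 64))  # stand-in for a CT/MR slice
smoothed = wiener_like_filter(noisy, noise_scale=2.0)
```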

Efficacy of ampicillin plus ceftriaxone in treatment of experimental endocarditis due to Enterococcus faecalis strains highly resistant to aminoglycosides.

The purpose of this work was to evaluate the in vitro possibilities of ampicillin-ceftriaxone combinations for 10 Enterococcus faecalis strains with high-level resistance to aminoglycosides (HLRAg) and to assess the efficacy of ampicillin plus ceftriaxone, both administered with humanlike pharmacokinetics, for the treatment of experimental endocarditis due to HLRAg E. faecalis. A reduction of 1 to 4 dilutions in MICs of ampicillin was obtained when ampicillin was combined with a fixed subinhibitory ceftriaxone concentration of 4 micrograms/ml. This potentiating effect was also observed by the double disk method with all 10 strains. Time-kill studies performed with 1 and 2 micrograms of ampicillin alone per ml or in combination with 5, 10, 20, 40, and 60 micrograms of ceftriaxone per ml showed a > or = 2 log10 reduction in CFU per milliliter with respect to ampicillin alone and to the initial inoculum for all 10 E. faecalis strains studied. This effect was obtained for seven strains with the combination of 2 micrograms of ampicillin per ml plus 10 micrograms of ceftriaxone per ml and for six strains with 5 micrograms of ceftriaxone per ml. Animals with catheter-induced endocarditis were infected intravenously with 10(8) CFU of E. faecalis V48 or 10(5) CFU of E. faecalis V45 and were treated for 3 days with humanlike pharmacokinetics of 2 g of ampicillin every 4 h, alone or combined with 2 g of ceftriaxone every 12 h. The levels in serum and the pharmacokinetic parameters of the humanlike pharmacokinetics of ampicillin or ceftriaxone in rabbits were similar to those found in humans treated with 2 g of ampicillin or ceftriaxone intravenously. Results of the therapy for experimental endocarditis caused by E. faecalis V48 or V45 showed that the residual bacterial titers in aortic valve vegetations were significantly lower in the animals treated with the combinations of ampicillin plus ceftriaxone than in those treated with ampicillin alone (P < 0.001). The combination of ampicillin and ceftriaxone showed in vitro and in vivo synergism against HLRAg E. faecalis.

The muscle chloride channel ClC-1 has a double-barreled appearance that is differentially affected in dominant and recessive myotonia.

Single-channel recordings of the currents mediated by the muscle Cl- channel, ClC-1, expressed in Xenopus oocytes, provide the first direct evidence that this channel has two equidistant open conductance levels like the Torpedo ClC-0 prototype. As for the case of ClC-0, the probabilities and dwell times of the closed and conducting states are consistent with the presence of two independently gated pathways with approximately 1.2 pS conductance enabled in parallel via a common gate. However, the voltage dependence of the common gate is different and the kinetics are much faster than for ClC-0. Estimates of single-channel parameters from the analysis of macroscopic current fluctuations agree with those from single-channel recordings. Fluctuation analysis was used to characterize changes in the apparent double-gate behavior of the ClC-1 mutations I290M and I556N causing, respectively, a dominant and a recessive form of myotonia. We find that both mutations reduce about equally the open probability of single protopores and that mutation I290M yields a stronger reduction of the common gate open probability than mutation I556N. Our results suggest that the mammalian ClC-homologues have the same structure and mechanism proposed for the Torpedo channel ClC-0. Differential effects on the two gates that appear to modulate the activation of ClC-1 channels may be important determinants for the different patterns of inheritance of dominant and recessive ClC-1 mutations.

*Java virtual machine

... the garbage-collection algorithm used, and any internal optimization of the Java virtual machine instructions (their ...

*Gene expression profiling in cancer

A hierarchical clustering algorithm was used to group cell lines based on the similarity by which the pattern of gene ... The hierarchical clustering algorithm identified a subset of tumors that would have been labeled DLBCLs by traditional ...

*Structural alignment software

Minami, S.; Sawada K.; Chikenji G. (Jan 2013). "MICAN : a protein structure alignment algorithm that can handle Multiple-chains ... Janez Konc; Dušanka Janežič (2010). "ProBiS algorithm for detection of structurally similar protein binding sites by local ... Wang, Sheng; Jian Peng; Jinbo Xu (Sep 2011). "Alignment of distantly related protein structures: algorithm, bound and ... Algorithms for Molecular Biology. 7 (4): 4. doi:10.1186/1748-7188-7-4. PMC 3298807 . PMID 22336468. ...

*Gene expression profiling

Apart from selecting a clustering algorithm, the user usually has to choose an appropriate proximity measure (distance or ...

*Algorithms Unlocked

... is a book by Thomas H. Cormen about the basic principles and applications of computer algorithms. The book ... "Algorithms Unlocked". MIT Press. Retrieved April 30, 2015. MIT Press: Algorithms Unlocked. ... consists of ten chapters, and deals with the topics of searching, sorting, basic graph algorithms, string processing, the ...

*Algorithms (journal)

ACM Transactions on Algorithms Algorithmica "About Algorithms". Algorithms. MDPI. 2013. Retrieved 2013-07-18. Iwama, Kazuo ( ... Algorithms is a peer-reviewed open access mathematics journal concerning design, analysis, and experiments on algorithms. The ... 2008), "Editor's Foreword", Algorithms, 1 (1): 1, doi:10.3390/a1010001 . Official website. ...

*XDAIS algorithms

For instance, all XDAIS compliant algorithms must implement an Algorithm Interface, called IALG. For those algorithms utilizing ... XDAIS or eXpressDsp Algorithm Interoperability Standard is a standard for algorithm development by Texas Instruments for the ... Problems are often caused in algorithms by hard-coding access to system resources that are used by other algorithms. XDAIS ... The XDAIS standard addresses the issues of algorithm resource allocation and consumption on a DSP. Algorithms that comply with ...

*Navigational algorithms

For n ≥ 2 observations DeWit/USNO Nautical Almanac/Compac Data, Least squares algorithm for n LOPs Kaplan algorithm, USNO. For ... Navigational algorithms is a source of information whose purpose is to make available the scientific part of the art of ... running fixes The algorithms implemented are: For n = 2 observations An analytical solution of the two star sight problem of ... for navigational algorithms in other domains. An analytical solution of the two star sight problem of celestial navigation. ...

*Standard algorithms

Students' alternative algorithms are often just as correct, efficient, and generalizable as the standard algorithms, and ... In elementary arithmetic, a standard algorithm or method is a specific method of computation which is conventionally taught for ... something that is usually lost in the memorization of standard algorithms). The development of sophisticated calculators has ...

*Quantum optimization algorithms

... and an algorithm for learning the fit parameters. Because the quantum algorithm is mainly based on the HHL algorithm, it ... Among other quantum algorithms, there are quantum optimization algorithms which might suggest improvement in solving ... The best classical algorithm known runs polynomial time in the worst case. The quantum algorithm provides a quadratic ... The quantum least-squares fitting algorithm makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for ...

*Convex hull algorithms

A much simpler algorithm was developed by Chan in 1996, and is called Chan's algorithm. Known convex hull algorithms are listed ... Such algorithms are called output-sensitive algorithms. They may be asymptotically more efficient than Θ(n log n) algorithms in ... A number of algorithms are known for the three-dimensional case, as well as for arbitrary dimensions. Chan's algorithm is used ... This algorithm is also applicable to the three dimensional case. Monotone chain aka Andrew's algorithm- O(n log n) Published in ...
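
As an illustration of one of the algorithms listed above, a standard implementation of the monotone chain (Andrew's) algorithm; the O(n log n) cost comes from the initial sort:

```python
# Monotone chain convex hull: returns hull vertices in counter-clockwise order.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a - o) x (b - o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared by both chains

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```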

*Kleitman-Wang algorithms

These constructions are based on recursive algorithms. Kleitman and Wang gave these algorithms in 1973. The algorithm is based ... The Kleitman-Wang algorithms are two different algorithms in graph theory solving the digraph realization problem, i.e. the ... The algorithm is based on the following theorem. Let S = ((a_1, b_1), …, (a_n, b_n)) ... In each step of the algorithm one constructs the arcs of a digraph with vertices v_1, …, v_n ...

*Sudoku solving algorithms

The algorithm (and therefore the program code) is simpler than other algorithms, especially compared to strong algorithms that ... An algorithm combining a constraint-model-based algorithm with backtracking would have the advantage of fast solving time, and ... Algorithms designed for graph colouring are also known to perform well with Sudokus. It is also possible to express a Sudoku as ... Notice that the algorithm may discard all the previously tested values if it finds the existing set does not fulfil the ...
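
A compact backtracking solver of the simple, brute-force kind described above (a sketch, not any particular published solver):

```python
# grid is a 9x9 list of lists with 0 for empty cells; solves in place.
def solve(grid, pos=0):
    if pos == 81:
        return True
    r, c = divmod(pos, 9)
    if grid[r][c] != 0:
        return solve(grid, pos + 1)
    for v in range(1, 10):
        # value must not already appear in the row, column, or 3x3 box
        if all(v != grid[r][j] for j in range(9)) and \
           all(v != grid[i][c] for i in range(9)) and \
           all(v != grid[3*(r//3)+i][3*(c//3)+j]
               for i in range(3) for j in range(3)):
            grid[r][c] = v
            if solve(grid, pos + 1):
                return True
            grid[r][c] = 0      # discard the tested value and backtrack
    return False
```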

*Timeline of algorithms

C4.5 algorithm, a descendent of ID3 decision tree algorithm, was developed by Ross Quinlan 1993 - Apriori algorithm developed ... Kruskal's algorithm developed by Joseph Kruskal 1957 - Prim's algorithm developed by Robert Prim 1957 - Bellman-Ford algorithm ... 1 algorithm developed by John Pollard 1975 - Genetic algorithms popularized by John Holland 1975 - Pollard's rho algorithm ... It adds a soft-margin idea to the 1992 algorithm by Boser, Nguyon, Vapnik, and is the algorithm that people usually refer to ...

*Communication-avoiding algorithms

... are designed with the following objectives: Reorganize algorithms to reduce communication ... Let A, B and C be square matrices of order n x n. The following naive algorithm implements C = C + A * B: for i = 1 to n for j ... The Blocked (Tiled) Matrix Multiplication algorithm reduces this dominant term. Consider A,B,C to be n/b-by-n/b matrices of b- ... Communication-Avoiding Algorithms minimize movement of data within a memory hierarchy for improving its running-time and energy ...
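
A sketch of the blocked (tiled) multiplication contrasted above with the naive triple loop: the arithmetic is identical, but iterating over b-by-b tiles lets each tile be reused while it is resident in fast memory. The block size b is a tuning assumption:

```python
import numpy as np

def blocked_matmul(A, B, b=64):
    """C = A * B over b x b tiles; assumes square n x n inputs."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # one tile of C accumulates products of tiles of A and B
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

rng = np.random.default_rng(0)
A, B = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
assert np.allclose(blocked_matmul(A, B), A @ B)   # same result as naive/built-in
```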

*Deadlock prevention algorithms

... algorithms, which track all cycles that cause deadlocks (including temporary deadlocks); and heuristics algorithms which don't ... A deadlock prevention algorithm organizes resource usage by each process to ensure that at least one process is always able to ... In computer science, deadlock prevention algorithms are used in concurrent programming when multiple processes must acquire ...
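
One concrete way to guarantee that at least one process can always obtain everything it needs is the classic banker's-style safety check (strictly a deadlock-avoidance technique, shown here only as an illustration; all numbers are made up):

```python
# available: free units per resource type; max_need/alloc: per-process matrices.
def is_safe(available, max_need, alloc):
    work = list(available)
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, alloc)]
    finished = [False] * len(alloc)
    progress = True
    while progress:
        progress = False
        for p in range(len(alloc)):
            if not finished[p] and all(n <= w for n, w in zip(need[p], work)):
                # p can run to completion, then releases its allocation
                work = [w + a for w, a in zip(work, alloc[p])]
                finished[p] = True
                progress = True
    return all(finished)   # True iff some completion order exists for every process

# Three processes, two resource types (example values):
print(is_safe([3, 3], [[7, 3], [3, 2], [6, 0]], [[0, 1], [2, 0], [3, 0]]))  # True
```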

*Introduction to Algorithms

... is a book by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. The first ... "Introduction to Algorithms-CiteSeerX citation query". CiteSeerX. The College of Information Sciences and Technology at Penn ... Each chapter focuses on an algorithm, and discusses its design techniques and areas of application. Instead of using a specific ... I Foundations 1 The Role of Algorithms in Computing 2 Getting Started 3 Growth of Functions 4 Divide-and-Conquer 5 ...

*Analysis of algorithms

... determined from the worst case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm ... Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an ... Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can ... In computer science, the analysis of algorithms is the determination of the computational complexity of algorithms, that is the ...

*Secure Hash Algorithms

The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and ... All SHA-family algorithms, the FIPS-approved security functions, are subject to official validation at the CMVP ... This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Cryptographic weaknesses ... "The MD5 Message-Digest Algorithm". Retrieved 2016-04-18. In the unlikely event that b is greater than 2^64, then only the low- ...
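
For reference, the SHA family is available off the shelf, for example in Python's standard hashlib module:

```python
import hashlib

msg = b"abc"
print(hashlib.sha1(msg).hexdigest())     # SHA-1 (legacy; broken for collisions)
print(hashlib.sha256(msg).hexdigest())   # SHA-2 family
print(hashlib.sha3_256(msg).hexdigest()) # SHA-3 family (Python 3.6+)
```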

*Dynamic problem (algorithms)

Incremental algorithms, or online algorithms, are algorithms in which only additions of elements are allowed, possibly starting ... Decremental algorithms are algorithms in which only deletions of elements are allowed, starting with an initialization of a ... "Dynamic graph algorithms". In CRC Handbook of Algorithms and Theory of Computation, Chapter 22. CRC Press, 1997.. ... If both additions and deletions are allowed, the algorithm is sometimes called fully dynamic. Static problem For a set of N ...
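
A textbook example of an incremental (addition-only) dynamic structure is union-find, which maintains graph connectivity under edge insertions in near-constant amortized time per operation:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):        # insert edge (a, b)
        self.parent[self.find(a)] = self.find(b)

    def connected(self, a, b):    # connectivity query
        return self.find(a) == self.find(b)

uf = UnionFind(5)
uf.union(0, 1); uf.union(3, 4)
print(uf.connected(0, 1), uf.connected(1, 3))  # True False
```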

*Numerical Algorithms Group

Code and algorithms for the library were contributed to the project by experts in the project, and elsewhere (for example, some ... The Numerical Algorithms Group: From 0-40 in a flurry of achievements 40 Years of NAG scrapbook NAG Numerical Routines NAG ... The Numerical Algorithms Group (NAG) is a software company which provides methods for the solution of mathematical and ... NAG was founded by Brian Ford and others in 1970 as the Nottingham Algorithms Group, a collaborative venture between the ...

*Schema (genetic algorithms)

A schema is a template in computer science used in the field of genetic algorithms that identifies a subset of strings with ... In evolutionary computing such as genetic algorithms and genetic programming, propagation refers to the inheritance of ...
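
A small illustration of the idea: with '*' as a wildcard, a schema identifies the subset of bit strings that match it.

```python
def matches(schema: str, individual: str) -> bool:
    """True if every non-wildcard position of the schema agrees."""
    return all(s in ("*", c) for s, c in zip(schema, individual))

schema = "1*0*"
population = ["1101", "1000", "0100", "1001"]
print([ind for ind in population if matches(schema, ind)])
# ['1101', '1000', '1001']  -- the subset of strings the schema identifies
```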

*Analysis of parallel algorithms

This article discusses the analysis of parallel algorithms. Like in the analysis of "ordinary", sequential, algorithms, one is ... An algorithm that exhibits linear speedup is said to be scalable. Efficiency is the speedup per processor, Sp ∕ p. Parallelism ... Analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors is available ... Minimizing the span is important in designing parallel algorithms, because the span determines the shortest possible execution ...
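
The quantities defined above, computed for hypothetical timings (speedup Sp = T1 / Tp, efficiency = Sp / p):

```python
def speedup(t1: float, tp: float) -> float:
    return t1 / tp

t1, timings = 100.0, {2: 52.0, 4: 27.0, 8: 15.0}   # made-up run times
for p, tp in timings.items():
    sp = speedup(t1, tp)
    print(f"p={p}: speedup={sp:.2f}, efficiency={sp / p:.2f}")
# linear speedup would give efficiency 1.0 at every p
```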

*ACM Transactions on Algorithms

... (TALG) is a quarterly peer-reviewed scientific journal covering the field of algorithms. It was ... Algorithmica Algorithms (journal) Gabow, Hal. "Journal of Algorithms Resignation". Department of Computer Science, University ... "ACM Transactions on Algorithms". 2013 Journal Citation Reports. Web of Science (Science ed.). Thomson Reuters. 2014. "ACM ... Apart from regular submissions, the journal also invites selected papers from the ACM-SIAM Symposium on Discrete Algorithms ( ...

*European Symposium on Algorithms

The European Symposium on Algorithms (ESA) is an international conference covering the field of algorithms. It has been held ... Since 2001, ESA is co-located with other algorithms conferences and workshops in a combined meeting called ALGO. This is the ... The intended scope was all research in algorithms, theoretical as well as applied, carried out in the fields of computer ... WABI, the Workshop on Algorithms in Bioinformatics, was part of ALGO in 2001-2006 and 2008. WAOA, the Workshop on Approximation ...
Feature selection is a useful tool for identifying which features, or attributes, of a dataset cause or explain the phenomena that the dataset describes, and for improving the efficiency and accuracy of learning algorithms for discovering such phenomena. Consequently, feature selection has been studied intensively in machine learning research. However, while feature selection algorithms that exhibit excellent accuracy have been developed, they are seldom used for analysis of high-dimensional data because high-dimensional data usually include too many instances and features, which make traditional feature selection algorithms inefficient. To eliminate this limitation, we tried to improve the run-time performance of two of the most accurate feature selection algorithms known in the literature. The result is two accurate and fast algorithms, namely sCwc and sLcc. Multiple experiments with real social media datasets have demonstrated that our algorithms improve the performance of their original algorithms.
TAMU01A23 TAMU01A24 TAMU01B19 TAMU01B24 TAMU01C24 TAMU01D14 TAMU01D17 TAMU01G19 TAMU01K11 TAMU01K23 TAMU01L14 TAMU01M08 TAMU02A06 TAMU02A09 TAMU02B04 TAMU02C12 TAMU02C19 TAMU02D13 TAMU02D21 TAMU02G01 TAMU02K03 TAMU02L21 TAMU02M17 TAMU02M19 TAMU02N13 TAMU02N19 TAMU02P07 TAMU03A01 TAMU03A07 TAMU03B06 TAMU03D01 TAMU03D04 TAMU03D14 TAMU03E08 TAMU03E24 TAMU03F15 TAMU03G12 TAMU03I06 TAMU03I10 TAMU03I19 TAMU03K15 TAMU03K24 TAMU03L11 TAMU03M07 TAMU03M08 TAMU03M12 TAMU03N18 TAMU03N20 TAMU03N24 TAMU03P22 TAMU04A20 TAMU04C13 TAMU04E12 TAMU04E18 TAMU04F06 TAMU04F17 TAMU04G01 TAMU04G23 TAMU04G24 TAMU04H24 TAMU04I08 TAMU04J06 TAMU04M09 TAMU04M16 TAMU04N08 TAMU04N11 TAMU04O11 TAMU04O15 TAMU04O20 TAMU04P09 TAMU05A16 TAMU05C18 TAMU05C21 TAMU05D19 TAMU05E07 TAMU05F04 TAMU05F05 TAMU05F08 TAMU05G19 TAMU05G21 TAMU05H08 TAMU05L01 TAMU05L24 TAMU05M02 TAMU05N06 TAMU05N19 TAMU05N24 TAMU05O02 TAMU05O12 TAMU05O19 TAMU05O21 TAMU06D16 TAMU06K02 TAMU06K13 TAMU06K19 TAMU06L04 TAMU06L07 TAMU06L10 TAMU06M20 TAMU06P06 TAMU06P12 ...
Analysis of genomes evolving by inversions leads to a general combinatorial problem of Sorting by Reversals (MIN-SBR), the problem of sorting a permutation by a minimum number of reversals. Following a series of preliminary results, Hannenhalli and Pevzner developed the first exact polynomial time algorithm for the problem of sorting signed permutations by reversals, and a polynomial time algorithm for a special case of unsigned permutations. The best known approximation algorithm for MIN-SBR, due to Christie, gives a performance ratio of 1.5. In this paper, by exploiting the polynomial time algorithm for sorting signed permutations and by developing a new approximation algorithm for maximum cycle decomposition of breakpoint graphs, we design a new 1.375-approximation algorithm for the MIN-SBR problem.
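
A toy illustration of the objects this abstract discusses (not the 1.375-approximation itself): a reversal flips a contiguous segment of a permutation, and breakpoints, positions where adjacent elements are not consecutive, are a standard ingredient in lower bounds for the reversal distance.

```python
def reverse_segment(perm, i, j):
    """Apply the reversal that flips perm[i..j] (inclusive)."""
    return perm[:i] + perm[i:j+1][::-1] + perm[j+1:]

def breakpoints(perm):
    """Count adjacencies |a - b| != 1 in the permutation framed by 0 and n+1."""
    ext = [0] + perm + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

p = [3, 1, 2, 4]
print(breakpoints(p))            # 3
print(reverse_segment(p, 0, 2))  # [2, 1, 3, 4]
```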
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the ...
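
A sketch of the information-theoretic building block named above, computed on discretized expression vectors; sklearn's mutual_info_score is one off-the-shelf MI estimator, and the toy data below are assumptions, not the paper's estimator or data:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
gene_a = rng.integers(0, 3, size=200)                  # expression discretized to 3 levels
gene_b = (gene_a + rng.integers(0, 2, size=200)) % 3   # noisy function of gene_a
gene_c = rng.integers(0, 3, size=200)                  # independent gene

print(mutual_info_score(gene_a, gene_b))  # relatively high: a "regulates" b (toy)
print(mutual_info_score(gene_a, gene_c))  # near zero: unrelated genes
```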
Efficient Risk Profiling Using Bayesian Networks and Particle Swarm Optimization Algorithm: 10.4018/978-1-4666-9458-3.ch004: This chapter introduces the usage of the particle swarm optimization algorithm and explains the methodology, as a tool for discovering customer profiles based on previously ...
Particle Swarm Optimization Algorithm as a Tool for Profiling from Predictive Data Mining Models: 10.4018/978-1-5225-0788-8.ch033: This chapter introduces the methodology of particle swarm optimization algorithm usage as a tool for finding customer profiles based on a previously developed
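
To make the algorithm these two chapter summaries refer to concrete, a minimal particle swarm optimization sketch; the inertia and acceleration constants are common textbook choices, not the chapters' settings, and the objective is the simple sphere function:

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

print(pso(lambda p: float(np.sum(p**2))))   # converges near the origin
```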
This volume emphasises theoretical results and algorithms of combinatorial optimization with provably good performance, in contrast to heuristics. It documents the relevant knowledge on combinatorial optimization and records those problems and algorithms of this discipline. Korte, Bernhard is the author of Combinatorial Optimization: Theory and Algorithms, published 2005 under ISBN 9783540256847 and ISBN 3540256849.
DNA computing is a new computing paradigm which uses bio-molecules as information storage media and biochemical tools as information processing operators. It has shown many successful and promising results for various applications. Since DNA reactions are probabilistic, they can produce different results for the same situation, which can be regarded as errors in the computation. To overcome these drawbacks, much work has focused on designing error-minimized DNA sequences to improve the reliability of DNA computing. In this research, Population-based Ant Colony Optimization (P-ACO) is proposed to solve the DNA sequence optimization problem. The P-ACO approach is a meta-heuristic algorithm that uses a number of ants to construct solutions based on the pheromone in their colony. The DNA sequence design problem is modelled by four nodes, representing the four DNA bases (A, T, C, and G). The results from the proposed algorithm are compared with other sequence design methods, which are Genetic Algorithm (GA), ...
The gray code optimization (GCO) algorithm is a deterministic global optimization algorithm based on integer representation. It utilizes the adjacency property of Gray code representation. By controlling the number of bits flipped, it searches through the space efficiently. A further development of the GCO algorithm is conducted in this research to avoid getting stuck in local minima. To further improve the performance, and to take advantage of cheaper but more powerful CPUs, a parallel computation paradigm using MPI is implemented. Analysis of the mechanism of the GCO algorithm indicated that it can be modeled by a mixture of Gaussians. This led to a new stochastic evolutionary global optimization algorithm based on mixtures of Gaussians and real numbers. The EM algorithm is used to acquire the parameters of each Gaussian component. With a mathematical model in hand, a lot of theoretical questions, such as convergence property, convergence rate, and the benefits of using the mixture model could be investigated.
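
The adjacency property GCO exploits is that consecutive integers differ in exactly one bit of their Gray codes. The standard conversions:

```python
def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:        # fold the bits back down (inverse of the XOR above)
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(i, format(binary_to_gray(i), "03b"))
# 0 000, 1 001, 2 011, 3 010, 4 110, ... each step flips a single bit
```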
For machine learning algorithms, what you do is split the data up into training, testing, and validation sets. But as I mentioned, this is more of a proof of concept, to show how to apply genetic algorithms to find trading strategies. Most of the time when someone talks about a trading algorithm, they are talking about predictive algorithms; there is a whole class of them. Algorithm-based stock trading is shrouded in mystery at financial firms. In this paper, we are concerned with the problem of efficiently trading a large position on the market place. Algorithms will evaluate suppliers and define how our cars operate. HiFREQ is a powerful algorithmic engine for high frequency trading that gives traders the ability to employ HFT strategies for EQ, FUT, OPT and FX trading. QuantConnect provides a free algorithm backtesting tool and financial data so engineers can design algorithmic trading strategies. Artificial intelligence, machine learning and high frequency trading. Unfortunately, the ...
In this paper, we study the tagSNP selection problem on multiple populations using the pairwise r2 linkage disequilibrium criterion. We propose a novel combinatorial optimization model for the tagSNP selection problem, called the minimum common tagSNP selection (MCTS) problem, and present efficient solutions for MCTS. Our approach consists of three main steps including (i) partitioning the SNP markers into small disjoint components, (ii) applying some data reduction rules to simplify the problem, and (iii) applying either a fast greedy algorithm or a Lagrangian relaxation algorithm to solve the remaining (general) MCTS. These algorithms also provide lower bounds on tagging (i.e. the minimum number of tagSNPs needed). The lower bounds allow us to evaluate how far our solution is from the optimum. To the best of our knowledge, it is the first time tagging lower bounds are discussed in the literature. We assess the performance of our algorithms on real HapMap data for genome-wide tagging. The ...
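
A sketch of the greedy strategy in the spirit of step (iii): at each round, pick the candidate tagSNP that covers the most still-uncovered SNPs. In the real problem "covers" would mean pairwise r2 above a threshold; the sets below are toy data.

```python
def greedy_cover(universe, candidates):
    """candidates: dict mapping each tagSNP to the set of SNPs it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            break                        # remaining SNPs are not taggable
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

snps = {"s1", "s2", "s3", "s4", "s5"}
tags = {"t1": {"s1", "s2", "s3"}, "t2": {"s3", "s4"}, "t3": {"s4", "s5"}}
print(greedy_cover(snps, tags))          # ['t1', 't3']
```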
Unsupervised image segmentation is an important component in many image understanding algorithms and practical vision systems. However, evaluation of segmentation algorithms thus far has been largely subjective, leaving a system designer to judge the effectiveness of a technique based only on intuition and results in the form of a few example segmented images. This is largely due to image segmentation being an ill-defined problem: there is no unique ground-truth segmentation of an image against which the output of an algorithm may be compared. This paper demonstrates how a recently proposed measure of similarity, the Normalized Probabilistic Rand (NPR) index, can be used to perform a quantitative comparison between image segmentation algorithms using a hand-labeled set of ground-truth segmentations. We show that the measure allows principled comparisons between segmentations created by different algorithms, as well as segmentations on different images. We outline a procedure for algorithm ...
Course Description: In this course students will learn about parallel algorithms. The emphasis will be on algorithms that can be used on shared-memory parallel machines such as multicore architectures. The course will include both a theoretical component and a programming component. Topics to be covered include: modeling the cost of parallel algorithms, lower bounds, and parallel algorithms for sorting, graphs, computational geometry, and string operations. The programming component will include data-parallelism, threads, futures, scheduling, synchronization types, transactional memory, and message passing. Course Requirements: There will be bi-weekly assignments, two exams (midterm and final), and a final project. Each student will be required to scribe one lecture. Your grade will be partitioned into: 10% scribe notes, 40% assignments, 20% project, 15% midterm, 15% final. Policies: For homeworks, unless stated otherwise, you can look up material on the web and in books, but you cannot ...
The article presents a general view of a class of decomposition algorithms for training Support Vector Machines (SVM) that are motivated by the method of feasible directions. The first such algorithm for the pattern recognition SVM was proposed by Joachims in 1999. Its extension to the regression SVM – the maximal inconsistency algorithm – was recently presented by the author. A detailed account of both algorithms is given, complemented by a theoretical investigation of the relationship between the two. It is proved that the two algorithms are equivalent for the pattern recognition SVM, and the feasible-direction interpretation of the maximal inconsistency algorithm is given for the regression SVM. The experimental results demonstrate an order-of-magnitude decrease in training time in comparison with training without decomposition and, most importantly, provide experimental evidence of the linear
This paper introduces a second-order differentiability smoothing technique for the classical l1 exact penalty function for constrained optimization problems (COP). Error estimates among the optimal objective values of the nonsmooth penalty problem, the smoothed penalty problem and the original optimization problem are obtained. Based on the smoothed problem, an algorithm for solving COP is proposed, and some preliminary numerical results indicate that the algorithm is quite promising.
Title: Genetic Algorithms with Permutation Coding for Multiple Sequence Alignment. VOLUME: 7 ISSUE: 2. Author(s): Mohamed Tahar Ben Othman and Gamil Abdel-Azim. Affiliation: Qassim University, College of Computer, Saudi Arabia. Keywords: Genetic algorithms, combinatorial optimization, sequence alignment, DNA, computational molecular biology, permutation coding. Abstract: Multiple sequence alignment (MSA) is one of the most seriously researched topics in bioinformatics. It is known to be an NP-complete problem and is considered one of the most important and daunting tasks in computational biology. Accordingly, a large number of heuristic algorithms have been proposed to find optimal alignments, among them genetic algorithms (GA). The GA has two major weaknesses: it is time consuming and can become trapped in local minima. One of the significant aspects of the GA process for MSA is maximizing the similarities between sequences by adding and shuffling the gaps of ...
The Parallel Algorithms Project conducts dedicated research addressing the solution of problems in applied mathematics by proposing advanced numerical algorithms for massively parallel computing platforms. The project especially considers problems known to be out of reach of standard current numerical methods due to, e.g., the large-scale nature or the nonlinearity of the problem, the stochastic nature of the data, or the practical constraint of obtaining reliable numerical results in a limited amount of computing time. This research is mostly performed in collaboration with other teams at CERFACS and the shareholders of CERFACS, as outlined in this report. This research roadmap is quite ambitious, and the major research topics have evolved over the past years. The main current focus concerns both the design of algorithms for the solution of sparse linear systems coming from the discretization of partial differential equations and the ...
A multiscale design and multiobjective optimization procedure is developed to design a new type of graded cellular hip implant. We assume that the prosthesis design domain is occupied by a unit cell representing the building block of the implant. An optimization strategy seeks the best geometric parameters of the unit cell to minimize bone resorption and interface failure, two conflicting objective functions. Using the asymptotic homogenization method, the microstructure of the implant is replaced by a homogeneous medium with an effective constitutive tensor. This tensor is used to construct the stiffness matrix for the finite element modeling (FEM) solver that calculates the value of each objective function at each iteration. As an example, a 2D finite element model of a left implanted femur is developed. The relative density of the lattice material is the variable of the multiobjective optimization, which is solved through the non-dominated sorting genetic algorithm II (NSGA-II). The set of ...
However, there is no reason that you should be limited to one algorithm in your solutions. Experienced analysts will sometimes use one algorithm to determine the most effective inputs (that is, variables), and then apply a different algorithm to predict a specific outcome based on that data. SQL Server data mining lets you build multiple models on a single mining structure, so within a single data mining solution you might use a clustering algorithm, a decision trees model, and a naïve Bayes model to get different views on your data. You might also use multiple algorithms within a single solution to perform separate tasks: for example, you could use regression to obtain financial forecasts, and use a neural network algorithm to perform an analysis of factors that influence sales. ...
Follicular patterned lesions of the thyroid are problematic and interpretation is often subjective. While thyroid experts are comfortable with their own criteria and thresholds, those encountering these lesions sporadically have a degree of uncertainty with a proportion of cases. The purpose of this review is to highlight the importance of proper diligent sampling of an encapsulated thyroid lesion (in totality in many cases), examination for capsular and vascular invasion, and finally the assessment of nuclear changes that are pathognomonic of papillary thyroid carcinoma (PTC). Based on these established criteria, an algorithmic approach is suggested using known, accepted terminology. The importance of unequivocal, clear-cut nuclear features of PTC as opposed to inconclusive features is stressed. If the nuclear features in an encapsulated, non-invasive follicular patterned lesion fall short of those encountered in classical PTC, but nonetheless are still worrying or concerning, the term ...
We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with n nodes and m edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time ~O(m^{3/4} n^{3/2}), which gives the first improvement over Megiddo's ~O(n^3) algorithm [JACM83] for sparse graphs. (We use the notation ~O(.) to hide factors that are polylogarithmic in n.) We further demonstrate how to obtain both an algorithm with running time n^3 / 2^{Omega(sqrt(log n))} on general graphs and an algorithm with running time ~O(n) on constant-treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest ...
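For orientation, the classical reduction that algorithms in this line build on is easy to state: a cycle with cost-to-time ratio below lambda exists iff reweighting each edge as cost - lambda*time creates a negative cycle, so the optimal ratio can be bracketed by binary search. The sketch below implements that baseline scheme (essentially Lawler's parametric method, not the paper's faster algorithm) with a Bellman-Ford negative-cycle test.

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford from a virtual source; n passes without convergence
    imply a negative cycle."""
    dist = [0.0] * n
    for _ in range(n):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return False
    return True

def min_ratio_cycle(n, edges, lo=0.0, hi=100.0, iters=60):
    for _ in range(iters):
        mid = (lo + hi) / 2
        reweighted = [(u, v, c - mid * t) for u, v, c, t in edges]
        if has_negative_cycle(n, reweighted):
            hi = mid               # a cycle with ratio < mid exists
        else:
            lo = mid
    return hi

# Edges as (u, v, cost, time); the 3-cycle has ratio (2+2+2)/(1+1+4) = 1.0.
edges = [(0, 1, 2, 1), (1, 2, 2, 1), (2, 0, 2, 4)]
print(round(min_ratio_cycle(3, edges), 6))
```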
Basic concepts. Definition and specification of algorithms. Computational complexity and asymptotic estimates of running time. Sorting algorithms and divide-and-conquer algorithms. Graphs and networks. Basic graph theory definitions. Algorithms for the reachability problem in a graph. Spanning trees. Algorithms for finding a minimum-cost spanning tree in a graph. Shortest paths. Algorithms for finding one or more shortest paths in a graph with nonnegative or general arc lengths (but no negative-length circuits). Network flow algorithms. Flows in capacitated networks, algorithms to find the maximum flow in a network, and max-flow min-cut theorems. Matchings. Weighted and unweighted matchings in bipartite graphs, algorithms to find a maximum weight/cardinality matching, the Koenig-Egervary theorem and its relationship with the vertex cover problem. Computational complexity theory. The P and NP classes. Polynomial reductions. NP-completeness and NP-hardness. Exponential-time algorithms. Implicit ...
Tandem mass spectrometry (MS/MS) has become an important experimental method for high-throughput proteomics-based biological discovery. The most common usage of MS/MS in biological applications is peptide sequencing. In this thesis, we focus on algorithms for MS/MS peptide identification and spectral alignment. We carry out two studies: (1) We have developed a de novo sequencing algorithm called MSNovo that integrates a new probabilistic scoring function with a mass-array-based dynamic programming algorithm. MSNovo works on various MS data generated from both LCQ and LTQ mass spectrometers and interprets singly, doubly and triply charged ions. In tests, MSNovo performed better than previous algorithms on several datasets. (2) We have developed spectrum-peptide and spectrum-spectrum alignment algorithms, collectively called MSPEP. MSPEP identifies post-translational modifications through the spectrum-peptide alignment algorithm and reveals the relationship among unknown peptides through the spectrum-spectrum ...
Current face recognition algorithms use hand-crafted features or extract features by deep learning. This paper presents a face recognition algorithm based on improved deep networks that can automatically extract the discriminative features of the target more accurately. First, the algorithm uses ZCA (Zero-mean Component Analysis) whitening to preprocess the input images in order to reduce the correlation between features and the complexity of training the networks. Then, it combines convolution, pooling and a stacked sparse autoencoder to obtain a deep-network feature extractor. The convolution kernels are learned through a separate unsupervised learning model. The improved deep networks yield an automatic deep feature extractor through preliminary training and fine-tuning. Finally, a softmax regression model is used to classify the extracted features. The algorithm is tested on several commonly used face databases, and the results indicate that its performance is better than the traditional methods and
This paper describes a parallel genetic algorithm developed for the solution of the set partitioning problem, a difficult combinatorial optimization problem used by many airlines as a mathematical model for flight crew scheduling. The genetic algorithm is based on an island model where multiple independent subpopulations each run a steady-state genetic algorithm on their own subpopulation and, occasionally, fit strings migrate between the subpopulations. Tests on forty real-world set partitioning problems were carried out on up to 128 nodes of an IBM SP1 parallel computer. We found that performance, as measured by the quality of the solution found and the iteration on which it was found, improved as additional subpopulations were added to the computation. With larger numbers of subpopulations the genetic algorithm was regularly able to find the optimal solution to problems having up to a few thousand integer variables. In two cases, high-quality integer feasible solutions were found for problems with 36,
This paper focuses on iterative parameter estimation algorithms for dual-frequency signal models disturbed by stochastic noise. The key difficulty to overcome is that the signal model is a highly nonlinear function of the frequencies. A gradient-based iterative (GI) algorithm is presented based on gradient search. To improve the estimation accuracy of the GI algorithm, a Newton iterative algorithm and a moving-data-window gradient-based iterative algorithm are proposed based on the moving data window technique. Comparative simulation results illustrate the effectiveness of the proposed approaches for estimating the parameters of signal models.
Improving the Performance of the RISE Algorithm - Ideally, a multi-strategy learning algorithm performs better than its component approaches. RISE is a multi-strategy algorithm that combines rule induction and instance-based learning. It achieves higher accuracy than some state-of-the-art learning algorithms, but for large data sets it has a very high average running time. This work presents the analysis and experimental evaluation of SUNRISE, a new multi-strategy learning algorithm based on RISE. The SUNRISE algorithm was developed to be faster than RISE with similar accuracy. Comparing the results of the experimental evaluation of the two algorithms, we verified that the new algorithm achieves accuracy comparable to that of the RISE algorithm in a lower average running time.
This paper proposes two parallel algorithms called an even region parallel algorithm (ERPA) and an even strip parallel algorithm (ESPA) respectively for ex
NIPS 2013 Workshop on Greedy Algorithms, Frank-Wolfe and Friends - A modern perspective Keywords: Frank-Wolfe Algorithm, greedy algorithms, first-order optimization, convex optimization, signal processing, machine learning
This paper presents an implementation of three genetic algorithm models for solving a reliability optimization problem for a redundant system with several failure modes: a sequential model, a modified global parallel genetic algorithm model, and a newly proposed parallel genetic algorithm model we call the Trigger Model (TM). Reducing implementation processing time is the basic motivation for parallelizing genetic algorithms. In this work, parallel virtual machine (PVM), a portable message-passing programming system designed to link separate host machines into a single, manageable virtual computing resource, is used in a distributed heterogeneous environment. The TM model clearly performed better than the other two models. ...
Some simple algorithms commonly used in computer science are linear search and bubble sort. Insertion sort algorithms are also often used by computer...
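For reference, here are the three textbook routines the snippet names, written out in Python:

```python
def linear_search(items, target):
    """Scan left to right; return the index of target, or -1."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def bubble_sort(items):
    a = list(items)
    for n in range(len(a), 1, -1):        # largest element bubbles to the end
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

def insertion_sort(items):
    a = list(items)
    for i in range(1, len(a)):            # grow a sorted prefix
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

data = [5, 2, 9, 1, 5]
print(linear_search(data, 9), bubble_sort(data), insertion_sort(data))
```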
In recent years, extensive work on genetic algorithms has been reported, covering various applications. Genetic algorithms (GAs) have received significant interest from researchers and have been applied to various optimization problems. They offer many advantages, such as global search characteristics, and this has led to the idea of using this method for modelling dynamic non-linear systems. In this paper, a methodology for model structure selection based on a genetic algorithm was developed and applied to non-linear discrete-time dynamic systems. First, the effect of different combinations of GA operators on the performance of the developed model is studied. A proposed algorithm called modified GA, or MGA, is presented, and a comparison between a simple GA and the modified GA is carried out. The performance of the proposed algorithm is also compared to the model developed using the orthogonal least squares (OLS) algorithm. The adequacy of the developed models is tested using ...
The LDDMM Validation section provides input data, processing and visualization examples for LDDMM to ensure correctness of the resultant data. These examples are useful tests when LDDMM is run on new environments or platforms. Example images show the atlas volume in red. On the left, the original target is in grey. On the right, the deformed atlas is pictured. A sample LDDMM command is posted with each example ...
Recently, research on quantum-inspired evolutionary algorithms (QEA) has attracted some attention in the area of evolutionary computation. QEA use a probabilistic representation, called the Q-bit, to encode individuals in the population. Unlike standard evolutionary algorithms, each Q-bit individual is a probability model, which can represent multiple solutions. Since probability models store global statistical information about good solutions found previously in the search, QEA have good potential to deal with hard optimization problems with many local optima. So far, not much work has been done on evolutionary multi-objective (EMO) algorithms with probabilistic representations. In this paper, we investigate the performance of two state-of-the-art EMO algorithms - MOEA/D and NSGA-II - with probabilistic representation based on pheromone trails, on the multi-objective travelling salesman problem. Our experimental results show that MOEA/D and NSGA-II with probabilistic representation are very ...
Preface to the Second Edition. Preface to the First Edition. List of Examples. 1. General Introduction. 1.1 Introduction. 1.2 Maximum Likelihood Estimation. 1.3 Newton-Type Methods. 1.4 Introductory Examples. 1.5 Formulation of the EM Algorithm. 1.6 EM Algorithm for MAP and MPL Estimation. 1.7 Brief Summary of the Properties of the EM Algorithm. 1.8 History of the EM Algorithm. 1.9 Overview of the Book. 1.10 Notations. 2. Examples of the EM Algorithm. 2.1 Introduction. 2.2 Multivariate Data with Missing Values. 2.3 Least Squares with Missing Data. 2.4 Example 2.4: Multinomial with Complex Cell Structure. 2.5 Example 2.5: Analysis of PET and SPECT Data. 2.6 Example 2.6: Multivariate t-Distribution (Known D.F.). 2.7 Finite Normal Mixtures. 2.8 Example 2.9: Grouped and Truncated Data. 2.9 Example 2.10: A Hidden Markov AR(1) Model. 3. Basic Theory of the EM Algorithm. 3.1 Introduction. 3.2 Monotonicity of a Generalized EM Algorithm. 3.3 Monotonicity of a Generalized EM ...
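As a concrete companion to the book's finite-normal-mixtures material, here is a minimal one-dimensional, two-component illustration of the E- and M-steps (an illustrative sketch, not code from the book):

```python
import math, random

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(5, 1) for _ in range(200)])

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

pi, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    resp = [[pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)] for x in data]
    resp = [[r / sum(row) for r in row] for row in resp]
    # M-step: re-estimate weights, means, variances from responsibilities.
    for k in range(2):
        nk = sum(row[k] for row in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(row[k] * x for row, x in zip(resp, data)) / nk
        var[k] = sum(row[k] * (x - mu[k]) ** 2 for row, x in zip(resp, data)) / nk
print([round(m, 2) for m in mu])  # should end up close to the true means 0 and 5
```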
PubMed Central Canada (PMC Canada) provides free access to a stable and permanent online digital archive of full-text, peer-reviewed health and life sciences research publications. It builds on PubMed Central (PMC), the U.S. National Institutes of Health (NIH) free digital archive of biomedical and life sciences journal literature and is a member of the broader PMC International (PMCI) network of e-repositories.
A system providing for user intervention in a medical control arrangement may comprise a first user intervention mechanism responsive to user selection thereof to produce a first user intervention signal, a second user intervention mechanism responsive to user selection thereof to produce a second user intervention signal, and a processor executing a drug delivery algorithm forming part of the medical control arrangement. The processor may be responsive to the first user intervention signal to include an intervention therapy value in the execution of the drug delivery algorithm, and responsive to the second user intervention signal to exclude the intervention therapy value from the execution of the drug delivery algorithm. The medical control arrangement may be a diabetes control arrangement, the drug delivery algorithm may be an insulin delivery algorithm, and the intervention therapy value may be, for example, an intervention insulin quantity or an intervention carbohydrate quantity.
@inproceedings{GaMi87, Author="Hillel Gazit and Gary L. Miller", title="A Parallel Algorithm for Finding a Separator in Planar Graphs", booktitle=FOCS28, year="1987", pages="238--248", organization="IEEE", address="Los Angeles", month="October", misc="Submitted 6-1-87.", bib2html_rescat = {Parallel Algorithms, Graph Algorithms, Graph Separators, Planar Graph Algorithms}, thanks="NSF DCR-8514961 ...
By Chang, Hsu-Hwa; Chen, Yan-Kwang; Chen, Mu-Chen. Parameter design is the most important phase in the development of new products and processes, especially with regard to dynamic systems. Statistics-based approaches are usually employed to address dynamic parameter design problems; however, these approaches have some limitations when applied to dynamic systems with continuous control factors. This study proposes a novel three-phase approach for resolving dynamic parameter design problems as well as static characteristic problems, which combines continuous ant colony optimisation (CACO) with neural networks. The proposed approach trains a neural network model to construct the relationship function among the response, inputs and parameters of a dynamic system, which is then used to predict the responses of the system. Three performance functions are developed to evaluate the fitness of the predicted responses. The best parameter settings can be obtained by performing a CACO algorithm according ...
We develop a Recursive L1-Regularized Least Squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an Expectation-Maximization type algorithm. We prove the convergence of the SPARLS algorithm to a near-optimal estimate in a stationary environment and present analytical results for the steady state error. Simulation studies in the context of channel estimation, employing multi-path wireless channels, show that the SPARLS algorithm has significant improvement over the conventional widely-used Recursive Least Squares (RLS) algorithm in terms of mean squared error (MSE). Moreover, these simulation studies suggest that the SPARLS algorithm (with slight modifications) can operate with lower computational requirements than the RLS algorithm, when applied to tap
Multiple sequence alignment plays an important role in molecular sequence analysis. An alignment is the arrangement of two (pairwise alignment) or more (multiple alignment) sequences of residues (nucleotides or amino acids) that maximizes the similarities between them. Algorithmically, the problem consists of opening and extending gaps in the sequences to maximize an objective function (measurement of similarity). A simple genetic algorithm was developed and implemented in the software MSA-GA. Genetic algorithms, a class of evolutionary algorithms, are well suited for problems of this nature since residues and gaps are discrete units. An evolutionary algorithm cannot compete in terms of speed with progressive alignment methods but it has the advantage of being able to correct for initially misaligned sequences; which is not possible with the progressive method. This was shown using the BaliBase benchmark, where Clustal-W alignments were used to seed the initial population in MSA-GA, improving outcome.
Microarray gene expression data generally suffer from missing values due to a variety of experimental reasons. Since missing data points can adversely affect downstream analysis, many algorithms have been proposed to impute them. In this survey, we provide a comprehensive review of existing missing-value imputation algorithms, focusing on their underlying algorithmic techniques and how they utilize local or global information from within the data, or domain knowledge, during imputation. In addition, we describe how imputation results can be validated, the different ways to assess the performance of imputation algorithms, and some possible future research directions. It is hoped that this review will give readers a good understanding of the current developments in this field and inspire them to come up with the next generation of imputation algorithms ...
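As a baseline for the "local information" family such a survey covers, the sketch below imputes each missing entry with the mean of its own row (gene); a KNN imputer would instead average the k most similar rows. This is an illustrative toy, not any published imputation algorithm.

```python
def row_mean_impute(matrix):
    """Replace each None with the mean of the observed values in its row."""
    filled = []
    for row in matrix:
        present = [x for x in row if x is not None]
        mean = sum(present) / len(present) if present else 0.0
        filled.append([mean if x is None else x for x in row])
    return filled

expr = [[2.0, None, 2.4],
        [0.5, 0.7, None],
        [None, 3.1, 2.9]]
print(row_mean_impute(expr))
```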
This paper presents a series of experiments demonstrating the capacity of single-walled carbon-nanotube (SWCNT)/liquid crystal (LC) mixtures to be trained by evolutionary algorithms to act as classifiers on linear and nonlinear binary datasets. The training process is formulated as an optimisation problem with hardware in the loop. The liquid SWCNT/LC samples used here are un-configured and with nonlinear current-voltage relationship, thus presenting a potential for being evolved. The nature of the problem means that derivative-free stochastic search algorithms are required. Results presented here are based on differential evolution (DE) and particle swarm optimisation (PSO). Further investigations using DE, suggest that a SWCNT/LC material is capable of being reconfigured for different binary classification problems, corroborating previous research. In addition, it is able to retain a physical memory of each of the solutions to the problems it has been trained to solve. ...
This article presents a method for segmenting and classifying edges using minimum description length (MDL) approximation with automatically generated break points. A scheme is proposed where junction candidates are first detected in a multiscale preprocessing step, which generates junction candidates with associated regions of interest. These junction features are matched to edges based on spatial coincidence. For each matched pair, a tentative break point is introduced at the edge point closest to the junction. Finally, these feature combinations serve as input for an MDL approximation method which tests the validity of the break point hypotheses and classifies the resulting edge segments as either "straight" or "curved." Experiments on real-world image data demonstrate the viability of the approach. ...
For any given optimization problem, it is a good idea to compare several of the available algorithms that are applicable to that problem; in general, one often finds that the "best" algorithm strongly depends upon the problem at hand. However, comparing algorithms requires a little care, because the function-value/parameter tolerance tests are not all implemented in exactly the same way for different algorithms. So, for example, the same fractional 10−4 tolerance on the function value might produce a much more accurate minimum in one algorithm compared to another, and matching them might require some experimentation with the tolerances. Instead, a fairer and more reliable way to compare two different algorithms is to run one until the function value is converged to some value fA, and then run the second algorithm with the minf_max termination test set to minf_max=fA. That is, ask how long it takes for the two algorithms to reach the same function value. Better yet, run some algorithm for a ...
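The protocol reads naturally in code. The sketch below uses two toy one-dimensional minimizers and an evaluation counter rather than any particular library's API (the minf_max-style stopping test is emulated by the `target` argument): run algorithm A to convergence, record fA, then run algorithm B until it reaches fA, and compare evaluation counts.

```python
def counted(f):
    """Wrap f so we can count how many times each algorithm evaluates it."""
    def g(x):
        g.calls += 1
        return f(x)
    g.calls = 0
    return g

def golden_section(f, lo, hi, tol=1e-3):
    """Algorithm A: run to its own convergence tolerance, return final f."""
    phi = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if f(a) < f(b):
            hi = b
        else:
            lo = a
    return f((lo + hi) / 2)

def grid_until(f, lo, hi, target, steps=10**6):
    """Algorithm B: stop as soon as the function value reaches `target`."""
    best = float("inf")
    for i in range(steps):
        best = min(best, f(lo + (hi - lo) * i / steps))
        if best <= target:
            break
    return best

f1 = counted(lambda x: (x - 1.3) ** 2)
fA = golden_section(f1, -10, 10)          # run A to convergence
f2 = counted(lambda x: (x - 1.3) ** 2)
grid_until(f2, -10, 10, fA)               # run B until it matches fA
print(f"A: {f1.calls} evals, B: {f2.calls} evals to reach f = {fA:.2e}")
```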
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can
This report approaches the question of multi-objective optimization for optimum shape design in aerodynamics. The employed optimizer is a semi-stochastic method, more precisely a Genetic Algorithm (GA). GAs are very robust optimization algorithms particularly well suited for problems in which (1) the initialization is not intuitive, (2) the parameters to be optimized are not all of the same type (boolean, integer, real, functional), (3) the cost functional may present several local minima, (4) several criteria should be accounted for simultaneously (multiphysics, efficiency, cost, quality, ...). In a multi-objective optimization problem, there is no unique optimal solution but a whole set of potential solutions, since in general no solution is optimal w.r.t. all criteria simultaneously; instead, one identifies a set of non-dominated solutions, referred to as the Pareto optimal front. After making these concepts precise, genetic algorithms are implemented and first tested on academic examples; then a
... (GA) are a computational paradigm inspired by the mechanics of natural evolution, including survival of the fittest, reproduction, and mutation. Surprisingly, these mechanics can be used to solve (i.e. compute) a wide range of practical problems, including numeric problems. Concrete examples illustrate how to encode a problem for solution as a genetic algorithm, and help explain why genetic algorithms work. Genetic algorithms are a popular line of current research, and there are many references describing both the theory of genetic algorithms and their use in practical problem solving ...
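A minimal worked encoding in the spirit of that description (the objective, bit length, and operator rates are illustrative choices, not from the original text): maximize f(x) = x·sin(x) on [0, 15] with x encoded as a 16-bit string, tournament selection, one-point crossover, and bit-flip mutation.

```python
import math, random

random.seed(1)
BITS, LO, HI = 16, 0.0, 15.0

def decode(bits):
    # Map a 16-bit string to a real number in [LO, HI].
    return LO + int("".join(map(str, bits)), 2) / (2**BITS - 1) * (HI - LO)

def fitness(bits):
    x = decode(bits)
    return x * math.sin(x)

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(40)]
for gen in range(60):
    def pick():
        # Tournament selection: the fitter of two random individuals wins.
        a, b = random.sample(pop, 2)
        return a if fitness(a) > fitness(b) else b
    nxt = []
    while len(nxt) < len(pop):
        p1, p2 = pick(), pick()
        cut = random.randrange(1, BITS)   # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.05:        # bit-flip mutation
            i = random.randrange(BITS)
            child[i] ^= 1
        nxt.append(child)
    pop = nxt
best = max(pop, key=fitness)
print(round(decode(best), 3), round(fitness(best), 3))
```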
SECOND CALL FOR PAPERS ====================== Journal of Combinatorial Optimization Special Issue on Computational Molecular Biology Guest Editors: Ying Xu, Satoru Miyano, Tom Head. Submission Deadline: August 15, 1998. The past ten years have witnessed the rapid development of a new discipline, computational molecular biology. Combinatorial optimization and algorithms have played a significant role in advancing this new discipline. The partnership between mathematics, in particular combinatorial optimization and algorithms, and molecular biology has greatly enriched both fields, leading to new ways of thinking and greater challenges to meet. The scope of this Special Issue includes all aspects of combinatorial optimization and algorithms in computational molecular biology. Original papers are solicited that describe research on combinatorial methods for problems arising from the following areas (nonexhaustive) of molecular biology: -- DNA sequencing -- DNA mapping -- recognition of genes and ...
This is the fourth course in the computer science sequence, building upon the concepts and skills acquired in the first three. Whereas CSC 221 and CSC 222 focused on the design of simple algorithms and CSC 321 focused on basic data structures, this course considers both facets of problem solving and their interrelationships. In order to solve complex problems efficiently, it is necessary to design algorithms and data structures together since the data structure is motivated by and affects the algorithm that accesses it. As the name of the course suggests, special attention will be paid to analyzing the efficiency of specific algorithms, and how the appropriate data structure can affect efficiency. Specific topics covered in this course will include: advanced data structures (e.g., trees, graphs and hash tables), common algorithms and their efficiency (e.g., binary search, heapsort, graph traversal, and big-Oh analysis), and problem-solving approaches (e.g., divide-and-conquer, backtracking, and ...
Finding the minimum energy amino acid side-chain conformation is a fundamental problem in both homology modeling and protein design. To address this issue, numerous computational algorithms have been proposed. However, there have been few quantitative comparisons between methods and there is very little general understanding of the types of problems that are appropriate for each algorithm. Here, we study four common search techniques: Monte Carlo (MC) and Monte Carlo plus quench (MCQ); genetic algorithms (GA); self-consistent mean field (SCMF); and dead-end elimination (DEE). Both SCMF and DEE are deterministic, and if DEE converges, it is guaranteed that its solution is the global minimum energy conformation (GMEC). This provides a means to compare the accuracy of SCMF and the stochastic methods. For the side-chain placement calculations, we find that DEE rapidly converges to the GMEC in all the test cases. The other algorithms converge on significantly incorrect solutions; the average fraction ...
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures such as arrays, linked lists, trees, and networks Addresses advanced data structures such as heaps, 2-3 trees, B-trees Addresses general problem-solving techniques such as branch and bound, divide and conquer, recursion, backtracking, heuristics, and more Reviews sorting and searching, network algorithms, and numerical algorithms Includes general problem-solving techniques such as brute force and exhaustive search, divide and ...
@techreport{GaMi87tr, author="Hillel Gazit and Gary L. Miller", title="A Parallel Algorithm for Finding a Separator in Planar Graphs", institution="University of Southern California", year="1987", address="Los Angeles", number="CRI 87-54", bib2html_rescat = {Parallel Algorithms, Graph Separators, Planar Graph Algorithms}, thanks="NSF DCR-8514961 ...
Synonyms for diagnostic algorithm in Free Thesaurus. Antonyms for diagnostic algorithm. 2 synonyms for algorithm: algorithmic program, algorithmic rule. What are synonyms for diagnostic algorithm?
This book consists of methodological contributions on different scenarios of experimental analysis. The first part overviews the main issues in the experimental analysis of algorithms, and discusses the experimental cycle of algorithm development; the second part treats the characterization by means of statistical distributions of algorithm performance in terms of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies ...
Hi, I have been working with ARToolkit for a few years now and I think it is a terrific tool. However, I still don't understand the core of ARToolkit - namely the image analysis. I hope that it will be possible to modify the algorithm so that the user can cover a part of the outer black square and still be able to recognize the marker. It should be possible to do, but I don't understand in detail what goes on in the algorithm, since there are no comments in the code and the C code seems to have been written to be super efficient rather than easy to read (I hope I do not offend anyone by assuming this). Does anyone have any detailed information about the image analysis algorithm? Alternatively, we might have to write a new algorithm from scratch. Do you know any books or other sources that give a good explanation of how an image ...
This paper proposes a Grammatical Evolution framework for the automatic design of evolutionary algorithms. We define a grammar that can combine components regularly appearing in existing evolutionary algorithms, aiming to achieve novel and fully functional optimization methods. The Royal Road Functions problem is used to assess the capacity of the framework to evolve algorithms. Results show that the computational system is able to evolve simple evolutionary algorithms that can effectively solve Royal Road instances. Moreover, some unusual design solutions, competitive with standard approaches, are also proposed by the grammatical evolution framework ...
Greedy motif searching was developed by Gerald Hertz and Gary Stormo in 1989; CONSENSUS is the tool based on this greedy algorithm. It is faster than the brute-force and simple motif search algorithms, and it is an approximation algorithm with an unknown approximation ratio.
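A simplified sketch of the greedy idea (real CONSENSUS keeps several candidate matrices and scores by information content; this toy keeps a single motif set and scores by column agreement): seed with the best pair of l-mers from the first two sequences, then greedily add, for each remaining sequence, the l-mer that best fits the motif built so far.

```python
from itertools import product

def lmers(s, l):
    return [s[i:i + l] for i in range(len(s) - l + 1)]

def score(motifs):
    # Sum over columns of the count of the most common base.
    return sum(max(col.count(b) for b in "ACGT") for col in zip(*motifs))

def greedy_motif(seqs, l):
    # Seed: best-scoring pair of l-mers from the first two sequences.
    best = max((list(p) for p in product(lmers(seqs[0], l), lmers(seqs[1], l))),
               key=score)
    # Greedy extension: one sequence at a time.
    for s in seqs[2:]:
        best.append(max(lmers(s, l), key=lambda w: score(best + [w])))
    return best

seqs = ["TTACCTTAAC", "GATGTCTGTC", "ACGGCGTTAG", "CCCTAACGAG"]
print(greedy_motif(seqs, 4))
```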
In this paper we present design optimization studies of multi-element airfoils utilizing evolutionary algorithms. The shape optimization process is carried out within a comprehensive framework based on high-fidelity CFD. The framework comprises a genetic-algorithm-based design optimization procedure coupled to the hybrid unstructured CRUNCH CFD® code and a grid generator. The genetic-algorithm-based optimization procedure is very robust and searches the complex design landscape in an efficient and parallel manner. Furthermore, it easily handles complexities in constraints and objectives and is disinclined to get trapped in local extrema. The fitness evaluations are carried out through a RANS-based hybrid unstructured solver. The hybrid unstructured methodology provides flexibility in incorporating large changes in design, and mesh regeneration is carried out in an automated manner through a scripting process within the grid generator GRIDGEN. The design ...
Mechanisms for adapting models, filters, decisions, regulators, and so on to changing properties of a system or a signal are of fundamental importance in many modern signal processing and control algorithms. This contribution describes a basic foundation for developing and analyzing such algorithms. Special attention is paid to the rationale behind the different algorithms, thus distinguishing between "optimal" algorithms and "ad hoc" algorithms. We also outline the basic approaches to performance analysis of adaptive algorithms. ...
We consider the problem of learning an unknown large-margin halfspace in the context of parallel computation, giving both positive and negative results. As our main positive result, we give a parallel algorithm for learning a large-margin halfspace, based on an algorithm of Nesterov's that performs gradient descent with a momentum term. We show that this algorithm can learn an unknown $\gamma$-margin halfspace over $n$ dimensions using $n \cdot \text{poly}(1/\gamma)$ processors and running in time $\tilde{O}(1/\gamma)+O(\log n)$. In contrast, naive parallel algorithms that learn a $\gamma$-margin halfspace in time that depends polylogarithmically on $n$ have an inverse quadratic running time dependence on the margin parameter $\gamma$. Our negative result deals with boosting, which is a standard approach to learning large-margin halfspaces. We prove that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be ...
The talk will cover different methods for mainly 2D maze generation and solving algorithms. We will discuss the differences in space and time complexity, possible implementation difficulties arising in each algorithm, and the results (whether the algorithm finds any path or the shortest path). We will also briefly touch upon which of the algorithms are human-usable. The algorithms mentioned will include Kruskal's, Hunt-and-Kill, Sidewinder, etc. for maze generation and dead-end filling, the A* algorithm, etc. for maze solving. In addition, we will see what changes can be made in order to have a maze that is more attractive to the human eye. Lastly, we will look at how, with the help of matrices, it is possible to expand these algorithms into higher dimensions (3D or higher). This talk is part of the Churchill CompSci Talks series. ...
Detecting communities and labeling nodes is a ubiquitous problem in the study of networks. Recently, we developed scalable Belief Propagation algorithms that update probability distributions of node labels until they reach a fixed point. In addition to being of practical use, these algorithms can be studied analytically, revealing phase transitions in the ability of any algorithm to solve this problem. Specifically, there is a detectability transition in the stochastic block model, below which no algorithm can label nodes better than chance. This transition was subsequently established rigorously by Mossel, Neeman, and Sly, and by Massoulié. I'll explain this transition and give an accessible introduction to Belief Propagation and the analogy with free energy and the cavity method of statistical physics. We'll see that the consensus of many good solutions is a better labeling than the best solution; something that is true for many real-world optimization problems. While many algorithms ...
CiteSeerX - Scientific documents that cite the following paper: Modern continuous optimization algorithms for tuning real and integer algorithm parameters,
Consider algorithms for sequentially placing a coin each day either in a heads or a tails configuration depending on how coins were placed on past days. For instance, the rule might say that if you placed heads yesterday, today you place tails, and if yesterday you placed tails, this time you place heads. The algorithm might depend on the date, too: maybe on Wednesdays you place heads if and only if you placed tails last Wednesday, but on all other days of the week you place heads. Here's an interesting question about a coin-placing algorithm: Is it mathematically coherent to suppose that the algorithm had been running from eternity? For some algorithms, the answer is positive. Both of the algorithms I described above have that property. But not all algorithms are like that. For instance, here's an algorithm based on a comment by Ian: if infinitely many heads have been placed, place tails; otherwise, place heads. This algorithm could not have been running from eternity. [Proof: For suppose it ...
Programmatic Marketing Platform Employs Data Science to Drive Business Goals with Unprecedented Choice and Flexibility. Boston - May 22, 2013 - DataXu today introduced the industry's first Algorithm Marketplace, a major new addition to its enterprise programmatic marketing platform that leverages data science to increase the efficiency and effectiveness of digital advertising. The Marketplace is a library of algorithms created over time from DataXu's experience solving clients' complex marketing problems. For the first time, dozens of algorithms are available to users in one place, so brands and their agencies get complete transparency and control of their advertising investment strategies. Marketers can also continue to innovate and collaborate with DataXu to develop custom algorithms that address their business's unique opportunities and challenges. "The Algorithm Marketplace helps brands drive sales through data science," said Mike Baker, CEO of DataXu. "We've worked with our clients to solve ...
Brief Course Description: This course introduces basic tools and techniques for the design and analysis of computer algorithms. Topics include asymptotic notations and analysis, greedy algorithms, divide and conquer, dynamic programming, linear programming, network flows, NP-completeness, approximation algorithms, and randomized algorithms. For each topic, besides in-depth coverage, one or more representative problems and their algorithms shall be discussed. In addition to the design and analysis of algorithms, students are expected to gain substantial discrete-mathematics problem-solving skills essential for computer engineers. ...
EEG data are high-dimensional and require considerable computational power to distinguish different classes. Dimension reduction is commonly used to reduce the necessary training time of the classifiers, at the cost of some accuracy. Dimension reduction is usually performed on either the feature or the electrode space. In this study, a new dimension reduction method that reduces the number of electrodes and features using variations of Particle Swarm Optimization (PSO) is used. The variation is in terms of parameter adjustment and adding a mutation operator to the PSO. The results are assessed based on the dimension reduction percentage, the potential of the selected electrodes, and the degree of performance lost. An Extreme Learning Machine (ELM) is used as the primary classifier to evaluate the sets of electrodes and features selected by PSO. Two alternative classifiers, polynomial SVM and perceptron, are used for further evaluation of the reduced-dimension data. The results indicate
We discuss the approach to the analysis of learning algorithms that we have taken in our laboratory and summarize the results we have obtained in the last few years. We have worked on refining and generalizing the PAC learning model introduced by Valiant. Measures of performance for learning algorithms that we have examined include computational complexity, sample complexity, probability of misclassification (learning curves), and worst-case total number of misclassifications or hypothesis updates. We have looked for theoretically optimal bounds on these performance measures, and for learning algorithms that achieve these bounds. Learning problems we have examined include those for decision trees, neural networks, finite automata, conjunctive concepts on structural domains, and various classes of Boolean functions. We also worked on clustering data represented as sequences over a finite alphabet. Many of the new learning algorithms that we have developed have been tested empirically as well.
In this research, we bridge algorithm and system design environments, creating a unified design flow that facilitates algorithm and system co-design. It enables algorithm realizations over heterogeneous platforms, while still tuning the algorithm according to platform needs. Our design flow starts with algorithm design in Simulink, out of which a System Level Design Language (SLDL)-based specification is synthesized. This specification is then used for design space exploration across heterogeneous target platforms and abstraction levels and, after identifying a suitable platform, is synthesized to HW/SW implementations. The flow realizes a unified development cycle across algorithm modeling and system-level design, with quick responses to design decisions at the algorithm, specification and system exploration levels. It empowers the designer to combine analysis results across environments and apply cross-layer optimizations, which yields an overall optimized design through rapid design iterations. ...
A nonparametric deconvolution algorithm for recovering the photon time-of-flight distribution (TOFD) from time-resolved (TR) measurements is described. The algorithm combines wavelet denoising and a two-stage deconvolution method based on generalized singular value decomposition and Tikhonov regularization. The efficacy of the algorithm was tested on simulated and experimental TR data and the results show that it can recover the photon TOFD with high fidelity. Combined with the microscopic Beer-Lambert law, the algorithm enables accurate quantification of absorption changes from arbitrary time-of-flight windows, thereby optimizing the depth sensitivity provided by TR measurements.
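The Tikhonov-regularized stage can be sketched compactly via SVD filter factors (the wavelet-denoising and GSVD details of the paper are omitted; the instrument response and noise level below are made-up test data):

```python
import numpy as np

def tikhonov_deconvolve(A, y, lam):
    # Minimize ||A x - y||^2 + lam^2 ||x||^2 via SVD filter factors.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam**2)            # damped inverse singular values
    return Vt.T @ (filt * (U.T @ y))

n = 100
t = np.arange(n)
irf = np.exp(-0.5 * ((t - 10) / 3.0) ** 2)        # made-up instrument response
A = np.array([[irf[(i - j) % n] for j in range(n)] for i in range(n)])
x_true = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)     # made-up time-of-flight dist.
y = A @ x_true + 0.01 * np.random.default_rng(0).normal(size=n)
x_hat = tikhonov_deconvolve(A, y, lam=0.1)
print(float(np.max(np.abs(x_hat - x_true))))      # worst-case recovery error
```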
Disclosed is a technique for classifying tissue based on image data. A plurality of tissue parameters are extracted from image data (e.g., magnetic resonance image data) to be classified. The parameters are preprocessed, and the tissue is classified using a classification algorithm and the preprocessed parameters. In one embodiment, the parameters are preprocessed by discretization of the parameters. The classification algorithm may use a decision model for the classification of the tissue, and the decision model may be generated by performing a machine learning algorithm using preprocessed tissue parameters in a training set of data. In one embodiment, the machine learning algorithm generates a Bayesian network. The image data used may be magnetic resonance image data that was obtained before and after the intravenous administration of lymphotropic superparamagnetic nanoparticles.
Authors: Pratiwi, Lustiana; Choo, Yun-Huoy; Muda, Azah Kamilah; Muda, Noor Azilah. Article Type: Research Article. Abstract: Ant Swarm Optimization refers to the hybridization of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms to enhance optimization performance. It is used in rough reducts calculation for identifying optimally significant attribute sets. This paper proposes a hybrid ant swarm optimization algorithm that uses immunity to discover better fitness values in optimizing rough reduct sets. Integrating PSO with ACO enhances the ability of PSO to update its local search with quality solutions as the number of generations increases. Unlike the conventional PSO/ACO algorithm, the proposed immune ant swarm algorithm aims to preserve the global search convergence of PSO when reaching the optimum, especially in high-dimensional optimization with a small population size. By combining PSO with ACO algorithms and embedding immune ...
This paper presents a parallel external-memory algorithm for performing a breadth-first traversal of an implicit graph on a cluster of workstations. The algorithm is a parallel version of the sorting-based external-memory frontier breadth-first traversal with delayed duplicate detection algorithm. The algorithm distributes the workload according to intervals that are computed at runtime via a sampling-based process. We present an experimental evaluation of the algorithm in which we compare its performance to that of its sequential counterpart on the implicit graphs of two classic planning problems. The speedups attained by the algorithm over its sequential counterpart are consistently near linear and frequently above linear. Analysis reveals that the algorithm is proficient at distributing the workload and that increasing the number of samples obtained by the sampling-based process improves workload distribution.
The first linear-time suffix tree algorithm was developed by Weiner in 1973. A more space efficient algorithm was produced by McCreight in 1976, and Ukkonen produced an "on-line" variant of it in 1995. The key to search speed in a suffix tree is that there is a path from the root for each suffix of the text. This means that at most n comparisons are needed to find a pattern of length n. Lloyd Allison has a detailed introduction to suffix trees, which includes a javascript suffix tree demonstration and a discussion of suffix tree applications. His example uses the string mississippi, which can be decomposed into 12 suffixes (Fig 1). A suffix is a substring that includes the final character of the string, for instance the suffix ippi can be found starting at position 8.. A suffix tree can be either implicit (Fig 2a) or explicit (Fig 2b). Suffixes in an implicit suffix tree can end at an interior node -- making them prefixes of another suffix. For example, in the implicit suffix tree for ...
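The decomposition the example describes is easy to reproduce. The sketch below lists every suffix of "mississippi$" (the terminal "$" is what brings the count to 12) and simulates suffix-based search with a sorted suffix list, i.e. a suffix array, rather than a tree:

```python
from bisect import bisect_left

text = "mississippi$"
suffixes = sorted((text[i:], i) for i in range(len(text)))
for s, i in suffixes:                     # all 12 suffixes, in sorted order
    print(i + 1, s)

def find(pattern):
    """Return one starting position (1-based) of pattern, or -1."""
    k = bisect_left(suffixes, (pattern,))
    if k < len(suffixes) and suffixes[k][0].startswith(pattern):
        return suffixes[k][1] + 1
    return -1

print(find("ippi"))   # 8, matching the example in the text
```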
It cannot be for download multiobjective heuristic search: an introduction to intelligent search methods for of treatment, for around Libertad it coordinates for at least six animals frequently of the pycnidia. After two or three women it is exposed not nt and a possible download multiobjective heuristic search: an introduction to is kidney. The download multiobjective heuristic search: an introduction to intelligent search methods for multicriteria optimization fibers placebo-controlled to the journalist of the activity. Atlantic than it was alone. The most Mediterranean nutritional renditions agreed flexible studies common with the DOWNLOAD PROMOTING LEARNING FOR BILINGUAL PUPILS 3-11: OPENING DOORS kept for growing types in object potassium. systematic, not 27 download applied rasch measurement: a book of exemplars: papers in honour of john p. keeves of the risks immuno-suppressed for their oxidation against members of A. They were that fertility drowned a cerebrospinal lens in using acid ...
Mathematical scripting languages are commonly used to develop new tomographic reconstruction algorithms. For large experimental datasets, high performance parallel (GPU) implementations are essential, requiring a re-implementation of the algorithm using a language that is closer to the computing hardware. In this paper, we introduce a new MATLAB interface to the ASTRA toolbox, a high performance toolbox for building tomographic reconstruction algorithms. By exposing the ASTRA linear tomography operators through a standard MATLAB matrix syntax, existing and new reconstruction algorithms implemented in MATLAB can now be applied directly to large experimental datasets. This is achieved by using the Spot toolbox, which wraps external code for linear operations into MATLAB objects that can be used as matrices. We provide a series of examples that demonstrate how this Spot operator can be used in combination with existing algorithms implemented in MATLAB and how it can be used for rapid development of ...
Optimization of a grinding process is a multiobjective optimization problem, yet it has traditionally been solved as a single-objective problem. In this paper, general, useful and practical multidisciplinary optimization methods are discussed for the optimal design of surface grinding processes. The methods allow the designer to explicitly consider and control multiple design objectives as an integrated part of the optimization process, and to easily choose and set up preferences for the objectives in order to increase productivity and the quality of the workpiece surface. The methods discussed in this paper are the Pareto efficient set and multi-attribute utility theory based optimization approaches. An algorithm for finding the Pareto set is proposed and an example is presented. A new formulation for multi-objective optimization of the grinding process is developed. The results of the formulation represent the tradeoffs the designers are willing to make between workpiece surface roughness, tool life, grinding ratio, and ...
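For illustration, extracting a Pareto-efficient set can be sketched directly from the definition of dominance. This is a generic minimal sketch, not the paper's algorithm; it assumes every objective is to be minimized (objectives to be maximized, such as tool life or grinding ratio, can be negated first), and the example numbers are hypothetical.

def pareto_set(points):
    # A point is Pareto-efficient if no other point is at least as good in
    # every objective and strictly better in at least one.
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two hypothetical objectives, both to be minimized:
designs = [(0.8, 12.0), (0.5, 15.0), (0.9, 11.0), (0.5, 14.0)]
print(pareto_set(designs))  # [(0.8, 12.0), (0.9, 11.0), (0.5, 14.0)]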
To improve the accuracy and stability of some predictions in time series models, a new algorithm has been added to the Microsoft Time Series algorithm. Based on the well-known ARIMA algorithm, the new algorithm provides better long-term predictions than the ARTxp algorithm that Analysis Services has been using. (ARTxp is an auto-regressive tree algorithm that is optimized for either a single time slice or short-term predictions.) By default, the new implementation of the Microsoft Time Series algorithm uses the ARTxp algorithm to train one version of the model and the ARIMA algorithm to train another version. The algorithm then weights the results of these two models to provide the prediction characteristics that you prefer. If you do not want to use this default implementation, you can specify that the Microsoft Time Series algorithm use only the ARTxp or the ARIMA algorithm. In SQL Server 2008 Enterprise, you can specify a custom weighting of the algorithms to provide the best prediction over ...
This much-needed book on the design of algorithms and data structures for text processing emphasizes both theoretical foundations and practical applications. It is intended to serve both as a textbook for courses on algorithm design, especially those related to text processing, and as a reference for computer science professionals. The work takes a unique approach, one that goes more deeply into its topic than other more general books. It contains both classical algorithms and recent results of research on the subject. The book is the first text to contain a collection of a wide range of text algorithms, many of them quite new and appearing here for the first time. Other algorithms, while known by reputation, have never been published in the journal literature. Two such important algorithms are those of Karp, Miller and Rosenberg, and that of Weiner. Here they are presented together for the first time. The core of the book is the material on suffix trees and subword graphs, applications of these ...
@InProceedings{beyer_et_al:DSP:2006:498,
  author    = {Hans-Georg Beyer and Thomas Jansen and Colin Reeves and Michael D. Vose},
  title     = {04081 Abstracts Collection -- Theory of Evolutionary Algorithms},
  booktitle = {Theory of Evolutionary Algorithms},
  year      = {2006},
  editor    = {Hans-Georg Beyer and Thomas Jansen and Colin Reeves and Michael D. Vose},
  number    = {04081},
  series    = {Dagstuhl Seminar Proceedings},
  ISSN      = {1862-4405},
  publisher = {Internationales Begegnungs- und Forschungszentrum f{\"u}r Informatik (IBFI), Schloss Dagstuhl, Germany},
  address   = {Dagstuhl, Germany},
  URL       = {http://drops.dagstuhl.de/opus/volltexte/2006/498},
  annote    = {Keywords: Evolutionary algorithms, genetic programming, co-evolution, run time analysis, landscape analysis, Markov chains ...}
}
Entropy weighting used in some soft subspace clustering algorithms is sensitive to the scaling parameter. In this paper, we propose a novel soft subspace clustering algorithm that uses log-transformed distances in the objective function. The proposed algorithm allows users to choose a value of the scaling parameter easily because the entropy weighting in the proposed algorithm is less sensitive to the scaling parameter. In addition, the proposed algorithm is less sensitive to noise because a point far away from its cluster center is given a small weight in the cluster center calculation. Experiments on both synthetic and real datasets demonstrate the performance of the proposed algorithm.
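The entropy-weighting step being discussed can be sketched as follows. This is a schematic of the generic update used in soft subspace clustering, not the paper's exact objective: per-dimension weights decay exponentially with the within-cluster dispersion, scaled by the parameter gamma, and the log transform below only mimics the paper's modification in spirit.

import numpy as np

def dimension_weights(X, center, gamma=1.0, log_transform=True):
    # Dispersion of the cluster's points along each dimension.
    D = ((X - center) ** 2).sum(axis=0)
    if log_transform:
        D = np.log1p(D)  # schematic stand-in for the paper's log-transformed distances
    w = np.exp(-D / gamma)
    return w / w.sum()  # entropy weighting normalizes the weights to sum to 1

X = np.random.rand(50, 4)
print(dimension_weights(X, X.mean(axis=0)))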
Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points in Delaunay triangulations for mesh generation algorithms or simply computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on the Computational Geometry Algorithms Library (CGAL, http://www.cgal.org/). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention.
Description: This course discusses the theory, history, mathematics, and applications of evolutionary optimization algorithms, most of which are based on biological processes. Some of the algorithms that may be covered include genetic algorithms, evolutionary programming, evolutionary strategies, genetic programming, particle swarm optimization, ant colony optimization, biogeography-based optimization, estimation of distribution algorithms, and differential evolution. Students will write computer-based simulations of optimization algorithms using MATLAB. After taking this course the student will be able to apply population-based algorithms using MATLAB (or some other high-level programming language) to realistic engineering problems. This course will make the student aware of the current state of the art in the field, and will prepare the student to conduct independent research in the field.
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm ...
CiteSeerX - Document Details (Isaac Councill, Lee Giles, Pradeep Teregowda): nearest-neighbors search, randomized kd-trees, hierarchical k-means tree, clustering. For many computer vision problems, the most time-consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems that are faster than linear search. Approximate algorithms are known to provide large speedups with only minor loss in accuracy, but many such algorithms have been published with only minimal guidance on selecting an algorithm and its parameters for any given problem. In this paper, we describe a system that answers the question, ...
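As a minimal illustration of the matching step, the sketch below uses an exact kd-tree from SciPy rather than the approximate structures the paper studies; in very high dimensions exact trees degrade toward linear search, which is precisely why randomized kd-trees and hierarchical k-means trees are used instead. The data here is random and purely illustrative.

import numpy as np
from scipy.spatial import cKDTree

descriptors = np.random.rand(10_000, 128)  # e.g. SIFT-like feature vectors
queries = np.random.rand(5, 128)

tree = cKDTree(descriptors)
dist, idx = tree.query(queries, k=2)  # two nearest matches per query vector
print(idx)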
Due to the scale and complexity of a NASA mission, NASA scientists and engineers are often confronted with difficult computational problems. Solving these problems effectively can mean the difference between success and failure for large-scale missions. Certain problems are hard because of their sheer scale, and not because of the inherent complexity of the problem. Currently, these types of problems are effectively attacked using supercomputing resources. However, there is another class of problems that are hard not because they are large, but because they are mind-bogglingly complex. These are called combinatorial optimization problems. A combinatorial optimization problem is one where an enormous number of possibilities need to be considered, and the best of these selected for the problem solution. Due to the nature of these problems, the number of possibilities that could be the one being sought can grow exponentially with the size of the problem. These are problems which are hard to solve, in a way ...
In a typical routing problem, we are given a graph G, and a collection (s_1,t_1),…,(s_k,t_k) of pairs of vertices, called demand pairs, that we would like to route. In order to route a demand pair (s_i,t_i), we need to choose a path connecting s_i to t_i in G. Our goal is usually to route as many of the demand pairs as possible, while keeping the congestion of the routing - the maximum load on any vertex or edge of G - as small as possible. This general framework gives rise to a number of basic and widely studied graph routing problems, which have led to the development of a rich toolkit of algorithmic techniques, as well as structural graph-theoretic results. In this talk we will describe some of the recent developments in approximation algorithms for graph routing problems, and highlight some connections between this area and graph theory ...
SPUD feature extraction algorithm implementation as appearing in: "Data-mining of time-domain features from neural extracellular field data" (2007 - in press), S Neymotin, DJ Uhlrich, KA Manning, WW Lytton.

NEURON {
  SUFFIX nothing
  : BVBASE is bit vector base number (typically 0 or -1)
  GLOBAL SPUD_INSTALLED, SHM_SPUD, NOV_SPUD, DEBUG_SPUD
}
PARAMETER {
  SPUD_INSTALLED=0
  DEBUG_SPUD=0
  SHM_SPUD=4   : used in spud() for measuring sharpness
  NOV_SPUD=1   : used in spud() to eliminate overlap of spikes
  CREEP_SPUD=0 : used in spud() to allow left/right creep to local minima
}
VERBATIM
#include <stdlib.h>
#include <math.h>
#include <values.h> // contains MAXLONG
#include <sys/time.h>
extern double* hoc_pgetarg();
extern double hoc_call_func(Symbol*, int narg);
extern FILE* hoc_obj_file_arg(int narg);
extern Object** hoc_objgetarg();
extern void vector_resize();
extern int vector_instance_px();
extern void* vector_arg();
extern double* vector_vec();
extern double hoc_epsilon;
extern double chkarg();
extern void ...
This paper examines the correlation between the number of computer cores and the performance of parallel genetic algorithms. The objective is to determine a linear polynomial complementary equation that represents the relation between the number of parallel processes and the optimum solutions found. This relation is modeled as an optimization function f(x) capable of producing many simulation results, and the performance of f(x) outperforms the genetic algorithm alone. A comparison between the genetic algorithm and the optimization function is carried out, and the optimization function also provides a model to speed up the genetic algorithm. The optimization function is a complementary transformation that maps a given TSP to a linear form without changing the roots of the polynomials.
With the growing number of massive datasets in applications such as machine learning and numerical linear algebra, classical algorithms for processing such datasets are often no longer feasible. In this course we will cover algorithmic techniques, models, and lower bounds for handling such data. A common theme is the use of randomized methods, such as sketching and sampling, to provide dimensionality reduction. In the context of optimization problems, this leads to faster algorithms, and we will see examples of this in the form of least squares regression and low rank approximation of matrices and tensors, as well as robust variants of these problems. In the context of distributed algorithms, dimensionality reduction leads to communication-efficient protocols, while in the context of data stream algorithms, it leads to memory-efficient algorithms. We will study some of the above problems in such models, such as low rank approximation, but also consider a variety of classical streaming problems ...
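As a small taste of the sketching idea applied to least squares regression, the following sketch-and-solve example compresses a tall system with a dense Gaussian sketch and solves the reduced problem; practical implementations use structured sketches such as CountSketch or subsampled Hadamard transforms to avoid the dense multiply. The dimensions here are arbitrary toy values.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20_000, 20, 400  # n equations, d unknowns, sketch size k << n

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

S = rng.standard_normal((k, n)) / np.sqrt(k)  # Gaussian sketching matrix
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))  # small with high probability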
This article proposes an analytical approach to algorithms that stresses operations of folding. The aim of this approach is to broaden the common analytical focus on algorithms as biased and opaque black boxes, and to instead highlight the many relations that algorithms are interwoven with. Our proposed approach thus highlights how algorithms fold heterogeneous things: data, methods and objects with multiple ethical and political effects. We exemplify the utility of our approach by proposing three specific operations of folding: proximation, universalisation, and normalisation. The article develops these three operations through four empirical vignettes, drawn from different settings that deal with algorithms in relation to AIDS, Zika and stock markets. In proposing this analytical approach, we wish to highlight the many different attachments and relations that algorithms enfold. The approach thus aims to produce accounts that highlight how algorithms dynamically combine and reconfigure different ...
TY  - GEN
T1  - Modelling in Manufacturing Industry: Parameters Selection Using Regression Analysis
AU  - Bon, Abdul Talib
AU  - Ogier, Jean Marc
AU  - Razali, Ahmad Mahir
PY  - 2007
N2  - The manufacturing industries are forced to meet the demand of the end users in many different aspects, especially to reduce the number of defects. Manufacturers have therefore adopted many strategies in order to achieve zero-defect end products. This research is an early attempt to present a proper method for manufacturers to achieve their goal, starting from parameter selection and then optimization to control the belt line moulding production process. We apply regression analysis for parameter selection and then use the best variables selected to optimize, in this case to minimize, defects in the belt line moulding process. The findings from this study will serve as useful evidence for the applicability of the proposed methodology.
In this paper we present two implementations of event-driven algorithms for simulating molecular dynamics using the OMNeT++ Simulation Framework and its Future Event Set (FES) implementation. The first uses a cell-linked list algorithm. The second extends the cell-linked list algorithm by incorporating a Verlet neighbor list algorithm. We also present results and compare both algorithms over a set of different scenarios. Finally, we discuss the advantages of using the OMNeT++ Simulation Framework and the implemented algorithms for simulating cell-signaling communications ...
Motivations. Building the Burrows-Wheeler transform (BWT) and computing the Lempel-Ziv parsing (LZ77) of huge collections of genomes is becoming an important task in bioinformatic analyses, as these datasets often need to be compressed and indexed prior to analysis. Given that the sizes of such datasets often exceed the RAM capacity of common machines, standard algorithms cannot be used to solve this problem, as they require a working space at least linear in the input size. One way to solve this problem is to exploit the intrinsic compressibility of such datasets: two genomes from the same species share most of their information (often more than 99%), so families of genomes can be considerably compressed. A solution to the above problem could therefore be that of designing algorithms working in compressed working space, i.e. algorithms that stream the input from disk and require in RAM a space that is proportional to the size of the compressed text. Methods. In this work we present algorithms and ...
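For contrast with the compressed-space algorithms the paper develops, the textbook construction of the BWT sorts all rotations of the input, which is quadratic in space and therefore exactly what cannot be done at genome scale. A minimal sketch:

def bwt(text, sentinel="$"):
    # Naive Burrows-Wheeler transform via sorted rotations; illustrative only.
    text += sentinel
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # annb$aa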
... ICGA93 17-22 July, 1993 University of Illinois at Urbana-Champaign PRELIMINARY ANNOUNCEMENT The Fifth International Conference on Genetic Algorithms (ICGA-93), will be held July 17-22, 1993 at the University of Illinois at Urbana-Champaign. This meeting brings together an international community of researchers and practitioners from academia and industry interested in algorithms suggested by the processes of natural evolution. Topics of interest will include the design, analysis, and application of genetic algorithms in optimization and machine learning. Machine learning architectures of interest include classifier systems and connectionist schemes that use genetic algorithms. Papers discussing how genetic algorithms are related to evolving system modeling (e.g., modeling of nervous system evolution, computational ethology, artificial life, economics, etc.) are also encouraged. A formal call for papers for ICGA-93 will be released in the ...
Over the last few decades, Markov chain Monte Carlo has become a very popular class of algorithms for sampling from probability distributions by constructing a Markov chain. A special case of Markov chain Monte Carlo is the Gibbs sampling algorithm. This algorithm can be used in such a way that it takes into account the prior distribution and the likelihood function, carrying a randomly generated variable through the calculation and the simulation. In this thesis, we use the Ising model as the prior for binary images. Assuming the pixels in binary images are polluted by random noise, we build a Bayesian model for the posterior distribution of the true image data. The posterior distribution enables us to generate the denoised image by designing a Gibbs sampling algorithm.
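A minimal sketch of such a sampler, following the standard Ising-prior construction (the thesis' exact parameterization may differ): pixels take values in {-1, +1}, beta is the Ising coupling strength, and observations are assumed to be the true pixels corrupted by Gaussian noise of scale sigma.

import numpy as np

def gibbs_denoise(y, beta=1.0, sigma=0.5, sweeps=20, seed=0):
    rng = np.random.default_rng(seed)
    x = np.where(y > 0, 1, -1)
    H, W = y.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < H and 0 <= b < W)
                # Log-odds of x[i, j] = +1 given its neighbors and observation:
                # 2*beta*nb from the Ising prior, 2*y/sigma^2 from the likelihood.
                p = 1.0 / (1.0 + np.exp(-(2 * beta * nb + 2 * y[i, j] / sigma**2)))
                x[i, j] = 1 if rng.random() < p else -1
    return x

truth = np.ones((16, 16)); truth[:, :8] = -1
y = truth + 0.5 * np.random.default_rng(1).standard_normal(truth.shape)
print(np.mean(gibbs_denoise(y) == truth))  # fraction of pixels recovered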
An image information compressing method for densely compressing image information, in particular dynamic image information, a compressed image information recording medium for recording compressed image information, and a compressed image information reproducing apparatus capable of reproducing compressed image information at high speed in a short time are provided. Each image frame constituting the dynamic image information is divided into key frames and movement compensation frames. The key frame is divided into blocks so that an image pattern of each block is vector-quantized by using an algorithm based on Kohonen's self-organizing feature map. The movement compensation frame is processed such that a movement vector for each block is determined and a movement vector pattern constituting a large block is vector-quantized by using the same algorithm. The compressed image information recording medium includes an index recording region for recording the ...
Title: A Research on Bioinformatics Prediction of Protein Subcellular Localization. VOLUME: 4 ISSUE: 3. Author(s): Gang Fang, Guirong Tao and Shemin Zhang. Affiliation: Department of Life Science, Xi'an University of Arts and Science, Xi'an 710065, China. Keywords: Bioinformatics, prediction, protein subcellular localization, localizome, proteomics, database. Abstract: Protein subcellular localization is one of the key characteristics for understanding a protein's biological function. Proteins are transported to specific organelles and suborganelles after they are synthesized. They take part in cell activity and function efficiently when correctly localized. Inaccurate subcellular localization has a great impact on cellular function. Prediction of protein subcellular localization is one of the important areas in protein function research, and it has become a hot issue in bioinformatics. In this review paper, the recent progress of bioinformatics research on protein subcellular localization and its prospects ...
Markov Chain Monte Carlo simulation has received considerable attention within the past decade as reportedly one of the most powerful techniques for the first-passage probability estimation of dynamic systems. A very popular method in this direction, capable of estimating the probability of rare events at low computational cost, is subset simulation (SS). The idea of the method is to break a rare event into a sequence of more probable events which are easy to estimate using conditional simulation techniques. Recently, two algorithms have been proposed to increase the efficiency of the method by modifying the conditional sampler. In this paper, the applicability of the original SS is compared to the recently introduced modifications of the method on a wind turbine model. The model incorporates a PID pitch controller which aims at keeping the rotational speed of the wind turbine rotor equal to its nominal value. Finally, Monte Carlo simulations are performed which allow assessment of ...
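For scale, the crude Monte Carlo baseline that subset simulation improves upon looks like this; with failure probability p, crude Monte Carlo needs on the order of 100/p samples for a decent estimate, which is what makes rare events expensive. The limit state here is a toy stand-in, not the wind turbine model.

import numpy as np

rng = np.random.default_rng(1)
threshold, N = 3.0, 1_000_000
response = rng.standard_normal(N)    # toy structural response
p_f = np.mean(response > threshold)  # first-passage / exceedance estimate
print(p_f)                           # near 1 - Phi(3) ~ 1.35e-3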
OVERNIGHT CLOSED-LOOP INSULIN DELIVERY WITH MODEL PREDICTIVE CONTROL AND GLUCOSE MEASUREMENT ERROR MODEL - A closed-loop system for insulin infusion overnight uses a model predictive control algorithm ("MPC"). Used with the MPC is a glucose measurement error model which was derived from actual glucose sensor error data. That sensor error data included both a sensor artifacts component, including dropouts, and a persistent error component, including calibration error, all of which was obtained experimentally from living subjects. The MPC algorithm advised on insulin infusion every fifteen minutes. Sensor glucose input to the MPC was obtained by combining model-calculated, noise-free interstitial glucose with experimentally-derived transient and persistent sensor artifacts associated with the FreeStyle Navigator® Continuous Glucose Monitor System ("FSN"). The incidence of severe and significant hypoglycemia reduced 2300- and 200-fold, respectively, during simulated overnight closed-loop control ...

Cache-oblivious algorithm - Wikipedia

In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a CPU ... An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense ...). Optimal cache-oblivious algorithms are known for the Cooley-Tukey FFT algorithm, matrix multiplication, sorting, matrix transposition, and several other problems. ...
more info: https://en.wikipedia.org/wiki/Cache-oblivious_algorithm
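The flavor of such algorithms is captured by recursive matrix transposition: keep halving the longer dimension until the block is small, and the recursion automatically produces blocks that fit every level of the cache hierarchy without knowing any cache parameters. A minimal sketch (the base-case cutoff of 16 is an arbitrary illustrative choice):

def transpose(A, B, r0, r1, c0, c1):
    # Write the transpose of A[r0:r1, c0:c1] into B[c0:c1, r0:r1].
    if (r1 - r0) + (c1 - c0) <= 16:
        for i in range(r0, r1):
            for j in range(c0, c1):
                B[j][i] = A[i][j]
    elif r1 - r0 >= c1 - c0:  # split the longer dimension and recurse
        mid = (r0 + r1) // 2
        transpose(A, B, r0, mid, c0, c1)
        transpose(A, B, mid, r1, c0, c1)
    else:
        mid = (c0 + c1) // 2
        transpose(A, B, r0, r1, c0, mid)
        transpose(A, B, r0, r1, mid, c1)

A = [[i * 4 + j for j in range(4)] for i in range(3)]
B = [[0] * 3 for _ in range(4)]
transpose(A, B, 0, 3, 0, 4)
print(B)  # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]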

How well do facial recognition algorithms cope with a million strangers? | UW News

In general, algorithms that "learned" how to find correct matches out of larger image datasets outperformed those that only had ... All of the algorithms suffered in accuracy when confronted with more distractions, but some fared much better than others. ... But the SIAT MMLab algorithm developed by a research team from China, which learned on a smaller number of images, bucked that ...
more info: http://www.washington.edu/news/2016/06/23/how-well-do-facial-recognition-algorithms-cope-with-a-million-strangers/

Namespaces Algorithms

Appendix B: Namespaces Algorithms. Editors: Arnaud Le Hors, IBM; Elena Litani, IBM. ... This appendix contains several namespace algorithms, such as the namespace normalization algorithm that fixes namespace information ... the algorithms conform to [XML Namespaces]; otherwise, if [XML 1.1] is in use, algorithms conform to [XML Namespaces ... The algorithm will then continue and consider the element child2, and will no longer find a namespace declaration mapping the ...
more info: http://www.w3.org/TR/DOM-Level-3-Core/namespaces-algorithms.html

Algorithms

Python implementations of various algorithms, more Python algorithm implementations, and still more Python algorithms. ... Nov. 2: Dijkstra's algorithm (Chapter 14). Nov. 4: Minimum spanning trees (Chapter 15). Week 7: Midterm; dynamic programming. ... 2: Approximation algorithms (Chapter 18). Final exam: Dec. 5 (Monday), 4:00 - 6:00 (per schedule). Other Course-Related ... 28: Streaming algorithms (not in text; see Graham Cormode's slides on finding frequent items and the Wikipedia article on ...
more info: https://www.ics.uci.edu/~eppstein/161/
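Dijkstra's algorithm, listed in the schedule above, is compact enough to sketch with a binary heap (graph given as adjacency lists with non-negative edge weights); the toy graph is illustrative:

import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}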

Advanced Algorithms

What are algorithms, Algorithms as technology, Evolution of Algorithms, Design of Algorithm, Need of Correctness of Algorithm, ... Design and Analysis of Algorithms (DAA) Unit I: Fundamentals (09 Hours). The Role of Algorithms in Computing ... Embedded Algorithms: Embedded system scheduling (power-optimized scheduling algorithm), sorting algorithms for embedded systems ... Unit VI: Multi-threaded and Distributed Algorithms (09 Hours). Multi-threaded Algorithms - Introduction, Performance measures ...
more info: https://sites.google.com/site/advancedalgorithmsaa/

Algorithms | SpringerLink

... algorithms are also quite common topics in interviews. There are many interview questions about search and sort algorithms. Backtracking, dynamic programming, and greedy algorithms ... All of these algorithms will be discussed in this chapter. Keywords: Binary Search, Edit Distance, Sort Algorithm, Edit Operation ... This process is experimental and the keywords may be updated as the learning algorithm improves. ...
more info: https://link.springer.com/chapter/10.1007/978-1-4302-4762-3_4

Algorithms Software - SourceForge.net

Algorithms Software. Free, secure and fast downloads from the largest Open Source applications and software directory ... Hot topics in Algorithms Software: cif jhprimeminer-t18v3 genetic algorithm viscoelastic cuda codigos fonte java software ... PowerCiph Data Encryption Algorithm: The PowerCiph Data Encryption Algorithm is a versatile, yet simplistic, encryption ... pgapack, the parallel genetic algorithm library, is a powerful genetic algorithm library by D. Levine, Mathematics and Computer ...
more info: https://sourceforge.net/directory/development/algorithms/language%3Ac/?page=9

Algorithms Software - SourceForge.net

Algorithms Software. Free, secure and fast downloads from the largest Open Source applications and software directory ... Can be used in testing various robotic algorithms, and already used for comparison of path planning algorithms like RRT, ... Hot topics in Algorithms Software: nesting dxf nesting software nesting 2d nesting software dwg to dxf converter nesting dwg ... An easy to extend, highly graphical, easy to use 2D robot simulator specialized for path planning algorithms. ...
more info: https://sourceforge.net/directory/development/algorithms/natlanguage%3Aturkish/?sort=popular

Beginning Algorithms [Book]

Beginning Algorithms. A good understanding of algorithms, and the knowledge of when to apply them, is crucial to producing ... The Boyer-Moore Algorithm: 16.4.1 Creating the Tests; 16.4.1.1 How It Works ... This book is for anyone who develops applications, or is just beginning to do so, and is looking to understand algorithms and ...
more info: https://www.oreilly.com/library/view/beginning-algorithms/9780764596742/

Pyramid Algorithms [Book]

Pyramid Algorithms presents a unique approach to understanding, analyzing, and computing the most common polynomial and spline curve and surface schemes used in computer-aided geometric design, employing a dynamic programming method based on recursive pyramids. ... Chapter 7: B-Spline Approximation and the de Boor Algorithm; 7.1 The de Boor Algorithm ... Chapter 8: Pyramid Algorithms for Multisided Bezier Patches; 8.1 Barycentric Coordinates for Convex Polygons ...
more info: https://www.oreilly.com/library/view/pyramid-algorithms/9781558603547/

Algorithms | InformIT

With their many years of experience in teaching algorithms courses, Richard Johnsonbaugh and Marcus Schaefer include applications of algorithms, examples, end-of-section exercises, end-of-chapter exercises, solutions to selected exercises, and notes to help the reader understand and master algorithms. ... Algorithms is written for an introductory upper-level undergraduate or graduate course in algorithms. ... 6. Greedy Algorithms. 7. Dynamic Programming. 8. Text Searching. 9. Computational Algebra. 10. P and NP. 11. Coping with NP- ...
more info: http://www.informit.com/store/algorithms-9780023606922

Sudoku solving algorithms - Wikipedia

The algorithm (and therefore the program code) is simpler than other algorithms, especially compared to strong algorithms that ... An algorithm combining a constraint-model-based algorithm with backtracking would have the advantage of fast solving time, and ... In his paper "Sudoku as a Constraint Problem," Helmut Simonis describes many reasoning algorithms based on constraints which ... Algorithms designed for graph colouring are also known to perform well with Sudokus. It is also possible to express a ...
more info: https://en.wikipedia.org/wiki/Sudoku_solving_algorithms
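The brute-force backtracking approach the article describes fits in a few lines: place the lowest legal digit in the first empty cell, recurse, and undo on failure. A minimal sketch (grid as a 9x9 list of lists, with 0 for empty):

def valid(grid, r, c, d):
    if any(grid[r][j] == d for j in range(9)): return False
    if any(grid[i][c] == d for i in range(9)): return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[i][j] != d for i in range(br, br + 3) for j in range(bc, bc + 3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no empty cell remains: solved

This is the solver that the adversarial puzzle mentioned in the notes below is designed to slow down: a first row whose solution is 987654321 forces the digit-by-digit search through a maximal number of wrong guesses.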

Conceptual Algorithms

Obviously - you'll still need some of the algorithms for analyzing the object graph and figuring out what might be a memory leak or not, and tools like MAT have these of course. ...
more info: https://www.infoq.com/presentations/preston-werner-conceptual-algorithms

Algorithms (ALG)

The research: The design and analysis of algorithms and data structures forms one of the core areas within computer science. ... The subarea within algorithms research studying the visualization of graphs is called graph drawing, and it is one of the focus areas of our group. ... Algorithms for GIS and automated cartography: Spatial data play a central role in geographic information systems (GIS) and ... The Algorithms chair (ALG) performs fundamental research in this area, focusing on algorithmic problems for spatial data. Such ...
more info: https://www.tue.nl/en/our-university/departments/mathematics-and-computer-science/research/research-programs-computer-science/section-algorithms-and-visualization-av/algorithms-alg/

Algorithms

... matrix algorithms, computational geometry, median filter algorithms. Handbook Main Page. Ownership, Maintenance, and ... The textbook by Cormen, Leiserson, and Rivest is by far the most useful and comprehensive reference on standard algorithms. ... For new techniques involving randomized algorithms, see Motwani, Rajeev and Prabhakar Raghavan (1995) Randomized Algorithms, ... When the analysis of an algorithm is not straightforward, you may need some high-powered tricks. For these, see Sedgewick, ...
more info: https://www.cs.hmc.edu/~fleck/computer-vision-handbook/algorithms.html

Approximation Algorithms | SpringerLink

In this chapter we introduce the important concept of approximation algorithms. So far we have dealt mostly with polynomially ... Here approximation algorithms must be mentioned in the first place. Keywords: Approximation Algorithm, Chromatic Number, Vertex ... Slavík, P. [1997]: A tight analysis of the greedy algorithm for set cover. Journal of Algorithms 25 (1997), 237-254 ... Korte B., Vygen J. (2012) Approximation Algorithms. In: Combinatorial Optimization. Algorithms and Combinatorics, vol 21. ...
more info: https://link.springer.com/chapter/10.1007/978-3-642-24488-9_16
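The Slavík reference above concerns the classic greedy set-cover algorithm, which repeatedly picks the set covering the most still-uncovered elements and achieves an ln(n)-factor approximation. A minimal sketch with a toy instance:

def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Pick the set covering the most still-uncovered elements.
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}]
print(greedy_set_cover({1, 2, 3, 4, 5}, sets))  # [{1, 2, 3}, {3, 4, 5}]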

Algorithms Meetup | Meetup

Interested in sharpening your problem-solving skills and learning more about algorithms? Come to Women Who Code's bi-weekly algorithms meetup! This week we will be hosted by Megaphone (Panoply rebrande ... We typically implement and discuss algorithms in the meetup - laptops are recommended, but not necessary. Please RSVP at least ...
more info: https://www.meetup.com/Women-Who-Code-DC/events/259532572/

CP60 Parallel Algorithms

Parallel Algorithms. A Parallel Revised Simplex Algorithm using an Edge Weight Based Pricing Strategy, J.A. Julian Hall and K.I. ... A Parallel Interior Random Vector Algorithm for Multistage Stochastic Linear Programs, Ron Levkovitz, Technion-Israel Institute ...
more info: https://www.siam.org/meetings/archives/op96/cp60.htm

Algorithms and Combinatorics

Combinatorial mathematics has substantially influenced recent trends and developments in the theory of algorithms and its applications. Conversely, research on algorithms and their complexity has established new perspectives in discrete mathematics. This new ...
more info: https://www.springer.com/series/13

CiteSeerX - Planning Algorithms

This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered ...
more info: http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.7086&rank=3

Programming Parallel Algorithms

Programming Parallel Algorithms. Guy E. Blelloch, Computer Science Department, Carnegie Mellon University. ... Examples of Parallel Algorithms: Primes, Sparse Matrix Multiplication, Planar Convex-Hull, Three Other Algorithms, Summary ... Other algorithms: an online tutorial; some animations of parallel algorithms (requires X windows); a page of resources on ... A brief overview of the current state in parallel algorithms. Includes pointers to good books on parallel algorithms. ...
more info: http://www.cs.cmu.edu/~scandal/cacm.html

Genetic Algorithms

You are to write a genetic algorithm program to find solutions to the following problems: Find the ... What population size did you choose? How well does the genetic algorithm perform? How many generations does it take to find a ...
more info: https://www.cs.rochester.edu/~nelson/courses/csc_173/assignments/05.html

Sorting Algorithms

Graphical illustrations of a heap of sort algorithms. Just how much faster is QuickSort, anyway? ...
more info: https://www.merlot.org/merlot/viewMaterial.htm?id=75083

Genetic Algorithms

Genetic algorithms (GA) are a computational paradigm inspired by the mechanics of natural evolution, including survival of the fittest, reproduction, and mutation. Concrete examples illustrate how to encode a problem for solution as a genetic algorithm, and help explain why genetic algorithms work. Genetic algorithms are a popular line of current research, and there are many references describing both the theory of genetic algorithms and their use in practical problem solving. ...
more info: https://www.cs.rochester.edu/users/faculty/nelson/courses/csc_173/genetic-algs/
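The ingredients named in this description -- fitness-based survival, reproduction, and mutation -- can be assembled into a bare-bones genetic algorithm in a few lines. The toy fitness function below (maximize the number of 1-bits, the classic OneMax problem) and all parameter values are illustrative choices, not part of the course material.

import random

def genetic_algorithm(n_bits=30, pop_size=50, generations=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = sum  # OneMax: count the 1-bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # survival of the fittest
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit independently with small probability.
            children.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = survivors + children
    return max(pop, key=fitness)

print(sum(genetic_algorithm()))  # typically reaches or nears the optimum of 30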

Algorithms

Part III provides basic conceptual information to help you understand the algorithms supported by Oracle Data Mining. ... Also, if you have a general understanding of the workings of an algorithm, you will be better prepared to optimize its use with ... In cases where more than one algorithm is available for a given mining function, the information in these chapters should help you make the most appropriate choice. ...
more info: https://docs.oracle.com/cd/E11882_01/datamine.112/e16808/part3.htm
  • Genetic algorithms (GA) are a computational paradigm inspired by the mechanics of natural evolution, including survival of the fittest, reproduction, and mutation. (rochester.edu)
  • Concrete examples illustrate how to encode a problem for solution as a genetic algorithm, and help explain why genetic algorithms work. (rochester.edu)
  • Genetic algorithms are a popular line of current research, and there are many references describing both the theory of genetic algorithms and their use in practical problem solving. (rochester.edu)
  • [[Code displayed, presumably from an IDE]] def getSolutionCosts(navigationCode): fuelStopCost = 15 extraComputationCost = 8 [[There is a giant arrow pointing to the next line]] thisAlgorithmBecomingSkynetCost = 999999999 waterCrossingCost = 45 Narration: Genetic algorithms tip: *Always* include this in your fitness function. (xkcd.com)
  • The idea (and name) for cache-oblivious algorithms was conceived by Charles E. Leiserson as early as 1996 and first published by Harald Prokop in his master's thesis at the Massachusetts Institute of Technology in 1999. (wikipedia.org)
  • 1987, Frigo 1996 for matrix multiplication and LU decomposition, and Todd Veldhuizen 1996 for matrix algorithms in the Blitz++ library. (wikipedia.org)
  • Sedgewick, Robert and Philippe Flajolet (1996) An Introduction to the Analysis of Algorithms, Addison-Wesley, Reading MA. (hmc.edu)
  • Hochbaum, D.S. [1996]: Approximation Algorithms for NP -Hard Problems. (springer.com)
  • Becker, A., and Geiger, D. [1996]: Optimization of Pearl's method of conditioning and greedy-like approximation algorithms for the vertex feedback set problem. (springer.com)
  • The book consists of ten chapters, and deals with the topics of searching, sorting, basic graph algorithms, string processing, the fundamentals of cryptography and data compression, and an introduction to the theory of computation. (wikipedia.org)
  • Amortized Analysis - Binary, Binomial and Fibonacci heaps, Dijkstra's Shortest path algorithm, Splay Trees, Time-Space trade-off, Introduction to Tractable and Non-tractable Problems, Introduction to Randomized and Approximate algorithms, Embedded Algorithms: Embedded system scheduling (power optimized scheduling algorithm), sorting algorithm for embedded systems. (google.com)
  • Covers distributed algorithms, a topic recommended by the ACM (2001 report) for an undergraduate curriculum. (informit.com)
  • Vazirani, V.V. [2001]: Approximation Algorithms. (springer.com)
  • Cache-oblivious algorithms are typically analyzed using an idealized model of the cache, sometimes called the cache-oblivious model . (wikipedia.org)
  • One programmer reported that such an algorithm may typically require as few as 15,000 cycles, or as many as 900,000 cycles to solve a Sudoku, each cycle being the change in position of a "pointer" as it moves through the cells of a Sudoku. (wikipedia.org)
  • We typically implement and discuss algorithms in the meetup - laptops are recommended, but not necessary. (meetup.com)
  • Editors play a vital role in sifting out the volume and leaving us with the important content, but those editors are increasingly being replaced by algorithms on sites like Facebook and Google and pretty much most of the other big sites you use on the web. (thenextweb.com)
  • Recent results -Such as Pearson's polynomial-time algorithm for the coin-changing problem and parameterized complexity. (informit.com)
  • The Algorithms chair (ALG) performs fundamental research in this area, focusing on algorithmic problems for spatial data. (tue.nl)
  • By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. (slideshare.net)
  • Optimal cache-oblivious algorithms are known for the Cooley-Tukey FFT algorithm , matrix multiplication , sorting , matrix transposition , and several other problems. (wikipedia.org)
  • Lower bounds integrated into sections that discuss problems -e.g. after presentation of several sorting algorithms, text discusses lower bound for comparison-based sorting. (informit.com)
  • Unlike the latter however, optimisation algorithms do not necessarily require problems to be logic-solvable, giving them the potential to solve a wider range of problems. (wikipedia.org)
  • The Role of Algorithms in Computing - What are algorithms, Algorithms as technology, Evolution of Algorithms, Design of Algorithm, Need of Correctness of Algorithm, Confirming correctness of Algorithm - sample examples, Iterative algorithm design issues. (google.com)
  • Pyramid Algorithms presents a unique approach to understanding, analyzing, and computing the most common polynomial and spline curve and surface schemes used in computer-aided geometric design, employing a dynamic programming method based on recursive pyramids. (oreilly.com)
  • The design and analysis of algorithms and data structures forms one of the core areas within computer science. (tue.nl)
  • Algorithms is a peer-reviewed open access mathematics journal concerning design, analysis, and experiments on algorithms. (wikipedia.org)
  • Obviously - you'll still need some of the algorithms for analyzing the object graph and figuring out what might be a memory leak or not, and tools like MAT have these of course. (infoq.com)
  • The subarea within algorithms research studying the visualization of graphs is called graph drawing, and it is one of the focus areas of our group. (tue.nl)
  • pgapack, the parallel genetic algorithm library, is a powerful genetic algorithm library by D. Levine, Mathematics and Computer Science Division, Argonne National Laboratory. (sourceforge.net)
  • Combinatorial mathematics has substantially influenced recent trends and developments in the theory of algorithms and its applications. (springer.com)
  • Conversely, research on algorithms and their complexity has established new perspectives in discrete mathematics. (springer.com)
  • Provides a robust Companion Website that supplements the text by providing algorithm simulation software, PowerPoint ® slides, late breaking news about algorithms, references about the book's topics, computer programs, and more. (informit.com)
  • In cases where more than one algorithm is available for a given mining function, this information in these chapters should help you make the most appropriate choice. (oracle.com)
  • In addition to data structures, algorithms are also quite common topics in interviews. (springer.com)
  • This is the only book to impart all this essential information-from the basics of algorithms, data structures, and performance characteristics to the specific algorithms used in development and programming tasks. (oreilly.com)
  • In the end, you'll be prepared to build the algorithms and data structures most commonly encountered in day-to-day software development. (oreilly.com)
  • This book is for anyone who develops applications, or is just beginning to do so, and is looking to understand algorithms and data structures. (oreilly.com)
  • Informatics is an emerging discipline that has been defined as the study, invention, and implementation of structures and algorithms to improve communication, understanding and management of medical information. (dmoztools.net)
  • Provides students with expanded explanations of particular topics and additional information on algorithms. (informit.com)
  • Provides students with comprehensive chapter on topics with significant importance in algorithms. (informit.com)
  • When the analysis of an algorithm is not straightforward, you may need some high-powered tricks. (hmc.edu)
  • We need to test facial recognition on a planetary scale to enable practical applications - testing on a larger scale lets you discover the flaws and successes of recognition algorithms," said Ira Kemelmacher-Shlizerman , a UW assistant professor of computer science and the project's principal investigator. (washington.edu)
  • Library Of Randomized Algorithms: Randomization is a powerful idea that has applications in science and engineering. (sourceforge.net)
  • More applications than other algorithms texts. (informit.com)
  • With their many years of experience in teaching algorithms courses, Richard Johnsonbaugh and Marcus Schaefer include applications of algorithms, examples, end-of-section exercises, end-of-chapter exercises, solutions to selected exercises, and notes to help the reader understand and master algorithms. (informit.com)
  • Includes more than 300 worked examples, which provide motivation, clarify concepts, and show how to develop algorithms, demonstrate applications of the theory, and elucidate proofs. (informit.com)
  • It uses Artificial Neural Networks to enhance the results of standard algorithms. (sourceforge.net)
  • Some of these editorial parameters are extracted using standard algorithms (such as the Flesch-Kincaid readability test ), others use our in-house language processing technology, and others still are built on experimental machine learning algorithms. (bbc.co.uk)
  • Cache-oblivious algorithms are contrasted with explicit blocking , as in loop nest optimization , which explicitly breaks a problem into blocks that are optimally sized for a given cache. (wikipedia.org)
  • Bar-Yehuda, R., and Even, S. : A linear-time approximation algorithm for the weighted vertex cover problem. (springer.com)
  • Interested in sharpening your problem solving skills and learning more about algorithms? (meetup.com)
  • This video of a talk at TED though challenges that whole theory though and makes us all think again about algorithms and how sites like Facebook and Google choose to serve us up content. (thenextweb.com)
  • Even though most people don't even know that they are seeing content based on algorithms, it's widely believed that they are a good thing because they make content more relevant and cut down on the amount of time you waste consuming information that you don't need to. (thenextweb.com)
  • But the SIAT MMLab algorithm developed by a research team from China , which learned on a smaller number of images, bucked that trend by outperforming many others. (washington.edu)
  • Our research in this area focuses on algorithms with provable guarantees on their I/O- and caching behavior. (tue.nl)
  • Shows students how algorithms work to elucidate proofs. (informit.com)
  • A Sudoku designed to work against the brute force algorithm. (wikipedia.org)
  • Assuming the solver works from top to bottom (as in the animation), a puzzle with few clues (17), no clues in the top row, and a solution "987654321" for the first row would work in opposition to the algorithm. (wikipedia.org)
  • All of these algorithms will be discussed in this chapter. (springer.com)
  • In this chapter we introduce the important concept of approximation algorithms. (springer.com)
  • A comprehensive library of algorithms in multiple languages, each having a detailed proof of correctness. (sourceforge.net)
  • A good understanding of algorithms, and the knowledge of when to apply them, is crucial to producing software that not only works correctly, but also performs efficiently. (oreilly.com)
  • The algorithm designer takes the general specifications from the software designer and converts them into precise descriptions of what the programmer must implement. (slideshare.net)