TY - GEN. T1 - A proposal of "neuron mask" in neural network algorithm for combinatorial optimization problems. AU - Takenaka, Y.. AU - Funabiki, N.. AU - Nishikawa, S.. PY - 1997/12/1. Y1 - 1997/12/1. N2 - A constraint resolution scheme for the Hopfield neural network named "neuron mask" is presented for a class of combinatorial optimization problems. The neuron mask always satisfies the constraint of selecting a solution candidate from each group, so as to force the state of the neural network into the solution space. This paper presents the definition of the neuron mask and its introduction into the neural network through the N-queens problem. The performance is verified by simulations in three computation modes, where the neuron mask improves the performance of the neural network.
This is the eleventh post in an article series about MIT's lecture course Introduction to Algorithms. In this post I will review lecture sixteen, which introduces the concept of Greedy Algorithms, reviews graphs, and applies the greedy Prim's algorithm to the Minimum Spanning Tree (MST) problem. The previous lecture introduced dynamic programming, which was used for finding solutions to optimization problems. In such problems there can be many possible solutions; each solution has a value, and we want to find a solution with the optimal (minimum or maximum) value. Greedy algorithms are another set of methods for finding optimal solutions. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. Greedy algorithms do not always yield optimal solutions, but for many problems they do. In this lecture it is shown that a greedy algorithm gives an optimal ...
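To make the greedy choice concrete, here is a minimal sketch of Prim's algorithm (my own Python and adjacency-list encoding, not code from the lecture): at every step it adds the lightest edge that crosses the cut between the tree built so far and the remaining vertices.

```python
import heapq

def prim_mst(graph):
    """graph: dict mapping vertex -> list of (weight, neighbor) pairs.
    Returns (total_weight, list_of_mst_edges)."""
    start = next(iter(graph))
    visited = {start}
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    total, edges = 0, []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)       # greedy: lightest crossing edge
        if v in visited:
            continue                        # edge no longer crosses the cut
        visited.add(v)
        total += w
        edges.append((u, v, w))
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return total, edges

g = {
    'a': [(1, 'b'), (4, 'c')],
    'b': [(1, 'a'), (2, 'c'), (6, 'd')],
    'c': [(4, 'a'), (2, 'b'), (3, 'd')],
    'd': [(6, 'b'), (3, 'c')],
}
print(prim_mst(g))  # -> (6, [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 3)])
```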
The graph coloring problem provides a practical way of modeling many real-world problems, including time scheduling, frequency assignment, register allocation, etc. Finding the minimum number of colors for an arbitrary graph is NP-hard. This paper applies decision fusion to combinatorial optimization algorithms (a genetic algorithm (GA) and a simulated annealing algorithm (SA)) and sequential/greedy algorithms (the highest order algorithm (HO) and the sequential algorithm (SQ)) to find an optimal solution for the graph coloring problem. Considering factors such as execution time and the availability of processing resources, a new technique is introduced for selecting the algorithm that yields the optimal solution. Decision fusion rules facilitate predictions on the future behavior of the executing algorithms based on their performance so far, at any given point while the problem is being solved. The results support that prediction during problem solving is an efficient approach ...
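For reference, a minimal sketch of the sequential/greedy idea behind the SQ and HO algorithms mentioned above (my own encoding, not the paper's implementation): each vertex receives the smallest color not used by its neighbors, and visiting vertices in order of decreasing degree gives the highest-order variant.

```python
def greedy_coloring(adj, order=None):
    """adj: dict vertex -> set of neighbors.
    Assign each vertex the smallest color unused by its neighbors."""
    order = order or list(adj)
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# 'Highest order' variant: visit high-degree vertices first.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
print(greedy_coloring(adj, order))  # -> {2: 0, 0: 1, 1: 2, 3: 1}
```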
Feature selection is a useful tool for identifying which features, or attributes, of a dataset cause or explain the phenomena that the dataset describes, and for improving the efficiency and accuracy of learning algorithms for discovering such phenomena. Consequently, feature selection has been studied intensively in machine learning research. However, while feature selection algorithms that exhibit excellent accuracy have been developed, they are seldom used for analysis of high-dimensional data, because high-dimensional data usually include too many instances and features, which makes traditional feature selection algorithms inefficient. To eliminate this limitation, we tried to improve the run-time performance of two of the most accurate feature selection algorithms known in the literature. The result is two accurate and fast algorithms, namely sCwc and sLcc. Multiple experiments with real social media datasets have demonstrated that our algorithms improve the performance of the original algorithms ...
TAMU01A23 TAMU01A24 TAMU01B19 TAMU01B24 TAMU01C24 TAMU01D14 TAMU01D17 TAMU01G19 TAMU01K11 TAMU01K23 TAMU01L14 TAMU01M08 TAMU02A06 TAMU02A09 TAMU02B04 TAMU02C12 TAMU02C19 TAMU02D13 TAMU02D21 TAMU02G01 TAMU02K03 TAMU02L21 TAMU02M17 TAMU02M19 TAMU02N13 TAMU02N19 TAMU02P07 TAMU03A01 TAMU03A07 TAMU03B06 TAMU03D01 TAMU03D04 TAMU03D14 TAMU03E08 TAMU03E24 TAMU03F15 TAMU03G12 TAMU03I06 TAMU03I10 TAMU03I19 TAMU03K15 TAMU03K24 TAMU03L11 TAMU03M07 TAMU03M08 TAMU03M12 TAMU03N18 TAMU03N20 TAMU03N24 TAMU03P22 TAMU04A20 TAMU04C13 TAMU04E12 TAMU04E18 TAMU04F06 TAMU04F17 TAMU04G01 TAMU04G23 TAMU04G24 TAMU04H24 TAMU04I08 TAMU04J06 TAMU04M09 TAMU04M16 TAMU04N08 TAMU04N11 TAMU04O11 TAMU04O15 TAMU04O20 TAMU04P09 TAMU05A16 TAMU05C18 TAMU05C21 TAMU05D19 TAMU05E07 TAMU05F04 TAMU05F05 TAMU05F08 TAMU05G19 TAMU05G21 TAMU05H08 TAMU05L01 TAMU05L24 TAMU05M02 TAMU05N06 TAMU05N19 TAMU05N24 TAMU05O02 TAMU05O12 TAMU05O19 TAMU05O21 TAMU06D16 TAMU06K02 TAMU06K13 TAMU06K19 TAMU06L04 TAMU06L07 TAMU06L10 TAMU06M20 TAMU06P06 TAMU06P12 ...
Analysis of genomes evolving by inversions leads to a general combinatorial problem of Sorting by Reversals, MIN-SBR, the problem of sorting a permutation by a minimum number of reversals. Following a series of preliminary results, Hannenhalli and Pevzner developed the first exact polynomial-time algorithm for the problem of sorting signed permutations by reversals, and a polynomial-time algorithm for a special case of unsigned permutations. The best known approximation algorithm for MIN-SBR, due to Christie, gives a performance ratio of 1.5. In this paper, by exploiting the polynomial-time algorithm for sorting signed permutations and by developing a new approximation algorithm for maximum cycle decomposition of breakpoint graphs, we design a new 1.375-approximation algorithm for the MIN-SBR problem.
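To make the problem concrete, here is a naive sketch of sorting by reversals (emphatically not the 1.375-approximation from the paper): placing one element per step shows that n - 1 reversals always suffice, while MIN-SBR asks for the minimum number.

```python
def naive_sort_by_reversals(perm):
    """Sort a permutation of 0..n-1 by reversals, placing one element
    per step. Only a naive upper bound (at most n-1 reversals), not an
    approximation algorithm for MIN-SBR."""
    perm = list(perm)
    reversals = []
    for i in range(len(perm)):
        j = perm.index(i)                    # where value i currently sits
        if j != i:
            perm[i:j + 1] = reversed(perm[i:j + 1])
            reversals.append((i, j))
    return reversals

print(naive_sort_by_reversals([2, 0, 3, 1]))  # -> [(0, 1), (1, 3), (2, 3)]
```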
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information-theoretic quantities MI and CMI determine the ...
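Such information-theory models start from a plug-in estimate of MI between discretized expression profiles; here is a minimal sketch (the binning choice and names are mine, not from the paper):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of MI (in bits) between two discretized
    expression profiles x and y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + 0.5 * rng.normal(size=500)    # correlated pair -> high MI
z = rng.normal(size=500)              # independent pair -> MI near 0
print(mutual_information(x, y), mutual_information(x, z))
```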
Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios. As evolutionary algorithms possess several characteristics due to which they are well suited to this type of problem, evolution-based methods have been used for multiobjective optimization for more than a decade. Meanwhile, evolutionary multiobjective optimization has become established as a separate subdiscipline combining the fields of evolutionary computation and classical multiple criteria decision making. In this paper, the basic principles of evolutionary multiobjective optimization are discussed from an algorithm design perspective. The focus is on the major issues such as fitness assignment, diversity preservation, and elitism in general rather than on particular algorithms. Different techniques to implement these strongly related concepts will be discussed, and further important aspects such as constraint handling and ...
Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences, and also an integrated approach where promoter sequences of coregulated genes ...
Efficient Risk Profiling Using Bayesian Networks and Particle Swarm Optimization Algorithm: 10.4018/978-1-4666-9458-3.ch004: This chapter introduces the usage of the particle swarm optimization algorithm and explains the methodology, as a tool for discovering customer profiles based on previously ...
Particle Swarm Optimization Algorithm as a Tool for Profiling from Predictive Data Mining Models: 10.4018/978-1-5225-0788-8.ch033: This chapter introduces the methodology of particle swarm optimization algorithm usage as a tool for finding customer profiles based on a previously developed
This volume emphasises theoretical results and algorithms of combinatorial optimization with provably good performance, in contrast to heuristics. It documents the relevant knowledge on combinatorial optimization and records the problems and algorithms of this discipline. Korte, Bernhard is the author of Combinatorial Optimization: Theory and Algorithms, published 2005 under ISBN 9783540256847 and ISBN 3540256849.
DNA computing is a new computing paradigm which uses bio-molecules as information storage media and biochemical tools as information processing operators. It has shown many successful and promising results for various applications. Since DNA reactions are probabilistic, they can produce different results for the same situation, which can be regarded as errors in the computation. To overcome these drawbacks, much work has focused on designing error-minimized DNA sequences to improve the reliability of DNA computing. In this research, Population-based Ant Colony Optimization (P-ACO) is proposed to solve the DNA sequence optimization problem. P-ACO is a meta-heuristic algorithm that uses a number of ants to obtain solutions based on the pheromone in their colony. The DNA sequence design problem is modelled by four nodes, representing the four DNA bases (A, T, C, and G). The results from the proposed algorithm are compared with other sequence design methods, such as the Genetic Algorithm (GA), ...
In machine learning, the weighted majority algorithm (WMA) is a meta-learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithm, classifier, or even real human expert.[1][2] The algorithm assumes that we have no prior knowledge about the accuracy of the algorithms in the pool, but there are sufficient reasons to believe that one or more will perform well. Assume that the problem is a binary decision problem. To construct the compound algorithm, a positive weight is given to each of the algorithms in the pool. The compound algorithm then collects weighted votes from all the algorithms in the pool, and gives the prediction that has the higher vote. If the compound algorithm makes a mistake, the algorithms in the pool that contributed to the wrong prediction will be discounted by a certain ratio β, where 0 < β < 1. It can be shown that the upper bounds on the number of mistakes made in a given sequence of predictions from ...
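A minimal sketch of the scheme exactly as described above (the toy expert pool and all names are mine; experts that voted with a wrong compound prediction are discounted by β):

```python
def weighted_majority(experts, examples, beta=0.5):
    """Weighted majority vote over a pool of binary predictors.
    experts: list of callables mapping x -> 0 or 1.
    examples: sequence of (x, label) pairs, labels in {0, 1}."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, label in examples:
        votes = [e(x) for e in experts]
        vote_1 = sum(w for w, v in zip(weights, votes) if v == 1)
        vote_0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if vote_1 >= vote_0 else 0
        if prediction != label:
            mistakes += 1
            for i, v in enumerate(votes):
                if v == prediction:          # contributed to the error
                    weights[i] *= beta
    return weights, mistakes

# Toy pool: one perfect expert, one anti-expert, one constant guesser.
experts = [lambda x: x % 2, lambda x: 1 - x % 2, lambda x: 1]
data = [(x, x % 2) for x in range(20)]
print(weighted_majority(experts, data))      # -> ([1.0, 0.25, 0.25], 2)
```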
In mathematics, the greedy algorithm for Egyptian fractions is a greedy algorithm, first described by Fibonacci, for transforming rational numbers into Egyptian fractions. An Egyptian fraction is a representation of an irreducible fraction as a sum of distinct unit fractions, as e.g. 5/6 = 1/2 + 1/3. As the name indicates, these representations have been used as long ago as ancient Egypt, but the first published systematic method for constructing such expansions is described in the Liber Abaci (1202) of Leonardo of Pisa (Fibonacci). It is called a greedy algorithm because at each step the algorithm chooses greedily the largest possible unit fraction that can be used in any representation of the remaining fraction. Fibonacci actually lists several different methods for constructing Egyptian fraction representations (Sigler 2002, chapter II.7). He includes the greedy method as a last resort for situations when several simpler methods fail; see Egyptian fraction for a more detailed listing of these ...
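A short sketch of Fibonacci's greedy expansion using Python's exact rationals (the function name is mine):

```python
from fractions import Fraction
from math import ceil

def egyptian(frac):
    """Fibonacci's greedy expansion: repeatedly take the largest
    unit fraction 1/ceil(1/r) not exceeding the remainder r."""
    terms = []
    r = Fraction(frac)
    while r > 0:
        d = ceil(1 / r)              # smallest denominator with 1/d <= r
        terms.append(Fraction(1, d))
        r -= Fraction(1, d)
    return terms

print(egyptian(Fraction(5, 6)))   # [Fraction(1, 2), Fraction(1, 3)]
print(egyptian(Fraction(7, 15)))  # [Fraction(1, 3), Fraction(1, 8), Fraction(1, 120)]
```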
TY - JOUR. T1 - A Parallel Algorithm for Allocation of Spare Cells on Memory Chips. AU - Funabiki, Nobuo. AU - Takefuji, Yoshiyasu. PY - 1991/8. Y1 - 1991/8. N2 - In manufacturing memory chips, Redundant Random Access Memory (RRAM) technology has been widely used because it not only provides repair of faulty cells but also enhances the production yield. RRAM has several rows and columns of spare memory cells which are used to replace the faulty cells. The goal of our algorithm is to find a spare allocation which repairs all the faulty cells in the given faulty-cell map. The parallel algorithm requires 2n processing elements for the n x n faulty-cell map problem. The algorithm is verified by many simulation runs. Under the simulation the algorithm finds one of the near-optimum solutions in a nearly constant time with O(n) processors. The simulation results show the consistency of our algorithm. The algorithm can be easily extended for solving rectangular or other shapes of fault-map problems.
Sudoku puzzles are an excellent testbed for evolutionary algorithms. The puzzles are accessible enough to be enjoyed by people. However, the more complex puzzles require thousands of iterations before a solution is found by an evolutionary algorithm. If we were attempting to compare evolutionary algorithms, we could count their iterations to solution as an indicator of relative efficiency. However, all evolutionary algorithms include a process of random mutation for solution candidates. I will show that by improving the random mutation behaviours I was able to solve problems with minimal evolutionary optimisation. Experiments demonstrated the random mutation was at times more effective at solving the harder problems than the evolutionary algorithms. This implies that the quality of random mutation may have a significant impact on the performance of evolutionary algorithms with sudoku puzzles. Additionally, this random mutation may ...
A simple learning algorithm for Hidden Markov Models (HMMs) is presented together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most likely path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
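For context, a minimal sketch of the scaled forward recursion that computes the log-likelihood such gradient algorithms optimize (array shapes and names are my own, not the paper's notation):

```python
import numpy as np

def hmm_loglik(pi, A, B, obs):
    """Scaled forward recursion: log P(obs) under an HMM with initial
    distribution pi (S,), transition matrix A (S, S) with A[i, j] =
    P(j | i), and emission matrix B (S, V)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()              # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(hmm_loglik(pi, A, B, [0, 1, 0, 0]))
```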
TY - JOUR. T1 - Multiobjective process planning and scheduling using improved vector evaluated genetic algorithm with archive. AU - Zhang, Wenqiang. AU - Fujimura, Shigeru. PY - 2012/5. Y1 - 2012/5. N2 - Multiobjective process planning and scheduling (PPS) is an important practical but very intractable combinatorial optimization problem in manufacturing systems. Many researchers have used multiobjective evolutionary algorithms (moEAs) to solve such problems; however, these approaches could not achieve satisfactory results in both efficacy (quality, i.e., convergence and distribution) and efficiency (speed). As classical moEAs, the nondominated sorting genetic algorithm II (NSGA-II) and SPEA2 can achieve good efficacy but need much CPU time. The vector evaluated genetic algorithm (VEGA) also cannot be applied owing to its poor efficacy. This paper proposes an improved VEGA with archive (iVEGA-A) to deal with multiobjective PPS problems, with consideration being given to the minimization of both makespan ...
In this thesis we focus on subexponential algorithms for NP-hard graph problems: exact and parameterized algorithms that have a truly subexponential running time behavior. For input instances of size n we study exact algorithms with running time 2^{O(√n)} and parameterized algorithms with running time 2^{O(√k)} · n^{O(1)} with parameter k, respectively. We study a class of problems for which we design such algorithms for three different types of graph classes: planar graphs, graphs of bounded genus, and H-minor-free graphs. We distinguish between unconnected and connected problems, and discuss how to conceive parameterized and exact algorithms for such problems. We improve upon existing dynamic programming techniques used in algorithms solving those problems. We compare tree-decomposition and branch-decomposition based dynamic programming algorithms and show how to unify both algorithms into one single algorithm. Then we give a dynamic programming technique that reduces much of the computation involved ...
The discovery of single-nucleotide polymorphisms (SNPs) has important implications in a variety of genetic studies on human diseases and biological functions. One valuable approach proposed for SNP discovery is based on base-specific cleavage and mass spectrometry. However, it is still very challenging to achieve the full potential of this SNP discovery approach. In this study, we formulate two new combinatorial optimization problems. While both problems are aimed at reconstructing the sample sequence that would attain the minimum number of SNPs, they search over different candidate sequence spaces. The first problem, denoted as , limits its search to sequences whose in silico predicted mass spectra have all their signals contained in the measured mass spectra. In contrast, the second problem, denoted as
The phase retrieval problem is of paramount importance in various areas of applied physics and engineering. The state of the art for solving this problem in two dimensions relies heavily on the pioneering work of Gerchberg, Saxton, and Fienup. Despite the widespread use of the algorithms proposed by these three researchers, current mathematical theory cannot explain their remarkable success. Nevertheless, great insight can be gained into the behavior, the shortcomings, and the performance of these algorithms from their possible counterparts in convex optimization theory. An important step in this direction was made two decades ago when the error reduction algorithm was identified as a nonconvex alternating projection algorithm. Our purpose is to formulate the phase retrieval problem with mathematical care and to establish new connections between well-established numerical phase retrieval schemes and classical convex optimization methods. Specifically, it is shown that Fienup's basic input-output ...
For machine learning algorithms, what you do is split the data up into training, testing, and validation sets. But as I mentioned, this is more of a proof of concept, to show how to apply genetic algorithms to find trading strategies. Most of the time when someone talks about a trading algorithm, they are talking about predictive algorithms; there is a whole class of them. Algorithm-based stock trading is shrouded in mystery at financial firms. In this paper, we are concerned with the problem of efficiently trading a large position on the market place. Algorithms will evaluate suppliers, define how our cars operate. HiFREQ is a powerful algorithmic engine for high frequency trading that gives traders the ability to employ HFT strategies for EQ, FUT, OPT and FX trading. QuantConnect provides a free algorithm backtesting tool and financial data so engineers can design algorithmic trading strategies. Artificial intelligence, machine learning and high frequency trading. Unfortunately, the ...
Evolutionary algorithms are general purpose optimizers because they do not require any assumptions about the landscape of the fitness function. They are used in a wide range of problems in diverse fields and have proven to be a highly effective numerical analysis method. However, in the last decade, research on evolutionary algorithms has fallen off sharply[Citation Needed], and they have not lived up to their initial promise. Although they are a reasonable search technique in a wide variety of problems, they are not the best search technique in almost any field. Algorithms such as simulated annealing and fast integer programming solvers have largely superseded evolutionary algorithms in modern use. Evolutionary algorithms can be seen as an experimental test of Darwin's theory of evolution, and their eventual failure can be seen as a refutation of that theory[Citation Needed].
Unsupervised image segmentation is an important component in many image understanding algorithms and practical vision systems. However, evaluation of segmentation algorithms thus far has been largely subjective, leaving a system designer to judge the effectiveness of a technique based only on intuition and results in the form of a few example segmented images. This is largely due to image segmentation being an ill-defined problem: there is no unique ground-truth segmentation of an image against which the output of an algorithm may be compared. This paper demonstrates how a recently proposed measure of similarity, the normalized probabilistic rand (NPR) index, can be used to perform a quantitative comparison between image segmentation algorithms using a hand-labeled set of ground-truth segmentations. We show that the measure allows principled comparisons between segmentations created by different algorithms, as well as segmentations on different images. We outline a procedure for algorithm ...
Course Description: In this course students will learn about parallel algorithms. The emphasis will be on algorithms that can be used on shared-memory parallel machines such as multicore architectures. The course will include both a theoretical component and a programming component. Topics to be covered include: modeling the cost of parallel algorithms, lower-bounds, and parallel algorithms for sorting, graphs, computational geometry, and string operations. The programming language component will include data-parallelism, threads, futures, scheduling, synchronization types, transactional memory, and message passing. Course Requirements: There will be bi-weekly assignments, two exams (midterm and final), and a final project. Each student will be required to scribe one lecture. Your grade will be partitioned into: 10% scribe notes, 40% assignments, 20% project, 15% midterm, 15% final. Policies: For homeworks, unless stated otherwise, you can look up material on the web and books, but you cannot ...
The article presents a general view of a class of decomposition algorithms for training Support Vector Machines (SVM) which are motivated by the method of feasible directions. The first such algorithm for the pattern recognition SVM was proposed by Joachims in 1999. Its extension to the regression SVM – the maximal inconsistency algorithm – was recently presented by the author. A detailed account of both algorithms is carried out, complemented by a theoretical investigation of the relationship between the two algorithms. It is proved that the two algorithms are equivalent for the pattern recognition SVM, and the feasible direction interpretation of the maximal inconsistency algorithm is given for the regression SVM. The experimental results demonstrate an order of magnitude decrease of training time in comparison with training without decomposition, and, most importantly, provide experimental evidence of the linear ...
This paper introduces a second-order differentiability smoothing technique for the classical l1 exact penalty function for constrained optimization problems (COP). Error estimations among the optimal objective values of the nonsmooth penalty problem, the smoothed penalty problem, and the original optimization problem are obtained. Based on the smoothed problem, an algorithm for solving COP is proposed, and some preliminary numerical results indicate that the algorithm is quite promising. Copyright Springer Science+Business Media, LLC 2013
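For orientation, the classical l1 exact penalty referred to above has the form shown below. The piecewise-quadratic smoothing of max{0, t} is a common textbook illustration and is only first-order smooth; since the paper constructs a second-order differentiable smoothing, its actual function necessarily differs from this sketch.

```latex
% Constrained problem: min f(x) subject to g_i(x) <= 0, i = 1, ..., m.
% Classical l1 exact penalty, and one standard smoothing of max{0, t}
% (illustrative only; the paper's second-order smoothing differs).
\[
  F_\rho(x) = f(x) + \rho \sum_{i=1}^{m} \max\{0,\, g_i(x)\},
  \qquad
  \max\{0, t\} \approx p_\varepsilon(t) =
  \begin{cases}
    0, & t \le 0,\\
    t^2 / (2\varepsilon), & 0 < t \le \varepsilon,\\
    t - \varepsilon/2, & t > \varepsilon.
  \end{cases}
\]
```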
The Parallel Algorithms Project conducts dedicated research to address the solution of problems in applied mathematics by proposing advanced numerical algorithms to be used on massively parallel computing platforms. The Parallel Algorithms Project especially considers problems known to be out of reach of standard current numerical methods due to, e.g., the large-scale nature or the nonlinearity of the problem, the stochastic nature of the data, or the practical constraint to obtain reliable numerical results in a limited amount of computing time. This research is mostly performed in collaboration with other teams at CERFACS and the shareholders of CERFACS, as outlined in this report. This research roadmap is known to be quite ambitious, and we note that the major research topics have evolved over the past years. The main current focus concerns both the design of algorithms for the solution of sparse linear systems coming from the discretization of partial differential equations and the ...
A global optimization approach for the factor analysis of wireline logging data sets is presented. Oilfield well logs are processed together to give an estimate of factor logs by using an adaptive genetic algorithm. Nonlinear relations between the first factor and essential petrophysical parameters of shaly-sand reservoirs are revealed, which are used to predict the values of shale volume and permeability directly from the factor scores. Independent values of the relevant petrophysical properties are given by inverse modeling and well-known deterministic methods. Case studies including the evaluation of hydrocarbon formations demonstrate the feasibility of the improved algorithm of factor analysis. Comparative numerical analysis made between the genetic-algorithm-based factor analysis procedure and the independent well log analysis methods shows consistent results. By factor analysis, an independent in-situ estimate of shale content and permeability is given, which may improve the reservoir model and ...
This paper describes optimal location and sizing of a static var compensator (SVC) based on Particle Swarm Optimization for minimization of transmission losses, considering a cost function. Particle Swarm Optimization (PSO) is a population-based stochastic search approach with the potential to solve such a problem. For this study, the static var compensator (SVC) is chosen as the compensation device. Validation through implementation on the IEEE 30-bus system indicates that PSO is feasible for the task. The simulation results are compared with those obtained from the Evolutionary Programming (EP) technique in an attempt to highlight its merit.
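A minimal global-best PSO sketch for orientation (generic, not the paper's SVC-placement formulation; the sphere function stands in for the loss-plus-cost objective, and the parameter values are common defaults rather than values from the paper):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best PSO minimizing `objective` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                              # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Sphere function as a stand-in objective; the best point tends to 0.
print(pso(lambda z: float((z ** 2).sum()), dim=3))
```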
In this paper, we present a new strongly polynomial time algorithm for the minimum cost flow problem, based on a refinement of the Edmonds-Karp scaling technique. Our algorithm solves the uncapacitated minimum cost flow problem as a sequence of O(n log n) shortest path problems on networks with n nodes and m arcs and runs in O(n log n (m + n log n)) time. Using a standard transformation, this approach yields an O(m log n (m + n log n)) algorithm for the capacitated minimum cost flow problem. This algorithm improves the best previous strongly polynomial time algorithm, due to Z. Galil and E. Tardos, by a factor of n^2/m. Our algorithm for the capacitated minimum cost flow problem is even more efficient if the number of arcs with finite upper bounds, say n, is much less than m. In this case, the running time of the algorithm is O((m + n)log n(m + n log n)).
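For contrast with the scaling approach, here is a compact teaching sketch of the successive-shortest-path method for min-cost flow (far slower than the algorithms discussed above; all names and the graph encoding are mine):

```python
def min_cost_flow(n, edges, s, t, target):
    """Successive shortest augmenting paths with Bellman-Ford.
    edges: list of (u, v, capacity, cost); nodes are 0..n-1."""
    INF = float("inf")
    to, cap, cost = [], [], []
    graph = [[] for _ in range(n)]
    for u, v, c, w in edges:                 # store edge and reverse edge
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    flow = total_cost = 0
    while flow < target:
        dist = [INF] * n
        dist[s] = 0
        parent = [-1] * n                    # residual edge entering each node
        for _ in range(n - 1):               # Bellman-Ford relaxations
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        parent[to[e]] = e
        if dist[t] == INF:                   # no augmenting path left
            break
        push, v = target - flow, t           # bottleneck capacity on the path
        while v != s:
            push = min(push, cap[parent[v]])
            v = to[parent[v] ^ 1]            # e ^ 1 is the paired reverse edge
        v = t
        while v != s:
            cap[parent[v]] -= push
            cap[parent[v] ^ 1] += push
            v = to[parent[v] ^ 1]
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost

# Cheap narrow route (cost 2/unit) vs. pricey wide route (cost 4/unit).
edges = [(0, 1, 2, 1), (1, 3, 2, 1), (0, 2, 2, 2), (2, 3, 2, 2)]
print(min_cost_flow(4, edges, 0, 3, 3))      # -> (3, 8)
```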
A multiscale design and multiobjective optimization procedure is developed to design a new type of graded cellular hip implant. We assume that the prosthesis design domain is occupied by a unit cell representing the building block of the implant. An optimization strategy seeks the best geometric parameters of the unit cell to minimize bone resorption and interface failure, two conflicting objective functions. Using the asymptotic homogenization method, the microstructure of the implant is replaced by a homogeneous medium with an effective constitutive tensor. This tensor is used to construct the stiffness matrix for the finite element modeling (FEM) solver that calculates the value of each objective function at each iteration. As an example, a 2D finite element model of a left implanted femur is developed. The relative density of the lattice material is the variable of the multiobjective optimization, which is solved through the non-dominated sorting genetic algorithm II (NSGA-II). The set of ...
Title: Digital Image Processing: An Algorithmic Approach with MATLAB (Hardcover). ISBN: 1420079506. Authors: Uvais Qidwai, C.H. Chen. Publisher: Chapman and Hall/CRC. Publication date: 2009-11-01. Categories: Matlab, digital image processing (Digital-image), data structures and algorithms (Algorithms-data-structures)
However, there is no reason that you should be limited to one algorithm in your solutions. Experienced analysts will sometimes use one algorithm to determine the most effective inputs (that is, variables), and then apply a different algorithm to predict a specific outcome based on that data. SQL Server data mining lets you build multiple models on a single mining structure, so within a single data mining solution you might use a clustering algorithm, a decision trees model, and a naïve Bayes model to get different views on your data. You might also use multiple algorithms within a single solution to perform separate tasks: for example, you could use regression to obtain financial forecasts, and use a neural network algorithm to perform an analysis of factors that influence sales.
We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein's identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.
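A minimal 1-D sketch of the particle update this abstract describes (a fixed RBF kernel bandwidth is used instead of the paper's median heuristic, and all names are mine):

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0):
    """One Stein variational gradient descent update for 1-D particles:
    phi(x_i) = mean_j[ k(x_j, x_i) grad_logp(x_j)   (drive toward p)
                       + d/dx_j k(x_j, x_i) ]        (repulsion term)"""
    diff = x[:, None] - x[None, :]              # diff[i, j] = x_i - x_j
    k = np.exp(-diff ** 2 / (2 * h ** 2))       # RBF kernel (symmetric)
    attract = k @ grad_logp(x)
    repulse = (diff / h ** 2 * k).sum(axis=1)
    return (attract + repulse) / len(x)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 100)                   # initial particles
for _ in range(2000):
    x += 0.2 * svgd_step(x, lambda z: -z)       # target N(0,1): grad log p = -z
print(x.mean(), x.std())                        # roughly 0 and 1
```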
In this paper, we consider the problem of blindly calibrating a mobile sensor network, i.e., determining the gain and the offset of each sensor, from heterogeneous observations on a defined spatial area over time. For that purpose, we previously proposed a blind sensor calibration method based on Weighted Informed Nonnegative Matrix Factorization with missing entries. It required a minimum number of rendezvous, i.e., data sensed by different sensors at almost the same time and place, which might be difficult to satisfy in practice. In this paper we relax the rendezvous requirement by using a sparse decomposition of the signal of interest with respect to a known dictionary. The calibration can thus be performed if the sensors share some common support in the dictionary, and it provides consistent performance even if no sensors are in exact rendezvous.
Follicular patterned lesions of the thyroid are problematic and interpretation is often subjective. While thyroid experts are comfortable with their own criteria and thresholds, those encountering these lesions sporadically have a degree of uncertainty with a proportion of cases. The purpose of this review is to highlight the importance of proper diligent sampling of an encapsulated thyroid lesion (in totality in many cases), examination for capsular and vascular invasion, and finally the assessment of nuclear changes that are pathognomonic of papillary thyroid carcinoma (PTC). Based on these established criteria, an algorithmic approach is suggested using known, accepted terminology. The importance of unequivocal, clear-cut nuclear features of PTC as opposed to inconclusive features is stressed. If the nuclear features in an encapsulated, non-invasive follicular patterned lesion fall short of those encountered in classical PTC, but nonetheless are still worrying or concerning, the term ...
View the published article "Trauma to Lisfranc's Joint: An Algorithmic Approach", published in Lower Extremity Review by Amol Saxena DPM, Palo Alto, CA. Dr. Saxena specializes in sports medicine and surgery of the foot and ankle.
We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with n nodes and m edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use-case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time ~O(m^{3/4} n^{3/2}), which gives the first improvement over Megiddo's ~O(n^3) algorithm [JACM83] for sparse graphs. (We use the notation ~O(.) to hide factors that are polylogarithmic in n.) We further demonstrate how to obtain both an algorithm with running time n^3 / 2^{Omega(sqrt(log n))} on general graphs and an algorithm with running time ~O(n) on constant-treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest ...
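As sequential background for the negative-cycle-detection component, here is a minimal Bellman-Ford sketch (the classical O(nm) baseline, my own code, not the parallel algorithm developed in the paper):

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection: after n - 1 rounds of
    relaxation, any further improvement exposes a negative cycle.
    edges: list of (u, v, weight); nodes are 0..n-1."""
    dist = [0.0] * n                 # virtual source reaching every node
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in edges)

edges = [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 0.5)]
print(has_negative_cycle(3, edges))   # True: cycle weight is -0.5
```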
Algorithm portfolios are known to offer robust performance, efficiently overcoming the weakness of every single algorithm on some particular problem instances. Two complementary approaches to getting the best out of an algorithm portfolio are to achieve algorithm selection (AS), and to define a scheduler, sequentially launching a few algorithms on a limited computational budget each. The presented Algorithm Selector And Prescheduler system relies on the joint optimization of a pre-scheduler and a per-instance AS, selecting an algorithm well-suited to the problem instance at hand. ASAP has been thoroughly evaluated against the state of the art during the ICON challenge for algorithm selection, receiving an honourable mention. Its evaluation on several combinatorial optimization benchmarks exposes surprisingly good results of the simple heuristics used; some extensions thereof are presented and discussed in the paper.
Basic concepts. Definition and specification of algorithms. Computational complexity and asymptotic estimates of running time. Sorting algorithms and divide and conquer algorithms. Graphs and networks. Basic graph theory definitions. Algorithms for the reachability problem in a graph. Spanning trees. Algorithms for finding a minimum-cost spanning tree in a graph. Shortest paths. Algorithms for finding one or more shortest paths in a graph with nonnegative or general arc lengths but no negative-length circuits. Network flow algorithms. Flows in capacitated networks, algorithms to find the maximum flow in a network, and max-flow min-cut theorems. Matchings. Weighted and unweighted matchings in bipartite graphs, algorithms to find a maximum weight/cardinality matching, the Koenig-Egervary theorem and its relationship with the vertex cover problem. Computational complexity theory. The P and NP classes. Polynomial reductions. NP-completeness and NP-hardness. Exponential-time algorithms. Implicit ...
Current face recognition algorithms use hand-crafted features or extract features by deep learning. This paper presents a face recognition algorithm based on improved deep networks that can automatically extract the discriminative features of the target more accurately. Firstly, this algorithm uses ZCA (Zero-mean Component Analysis) whitening to preprocess the input images in order to reduce the correlation between features and the complexity of the training networks. Then, it organically combines convolution, pooling and a stacked sparse autoencoder to get a deep-network feature extractor. The convolution kernels are achieved through a separate unsupervised learning model. The improved deep networks get an automatic deep feature extractor through preliminary training and fine-tuning. Finally, the softmax regression model is used to classify the extracted features. This algorithm is tested on several commonly used face databases. It is indicated that the performance is better than that of traditional methods and ...
In the paper we present some guidelines for the application of nonparametric statistical tests and post-hoc procedures devised to perform multiple comparisons of machine learning algorithms. We emphasize that it is necessary to distinguish between pairwise and multiple comparison tests. We show that the pairwise Wilcoxon test, when employed for multiple comparisons, will lead to overoptimistic conclusions. We carry out an intensive normality examination employing ten different tests, showing that the output of machine learning algorithms for regression problems does not satisfy normality requirements. We conduct experiments on nonparametric statistical tests and post-hoc procedures designed for multiple 1 × N and N × N comparisons with six different neural regression algorithms over 29 benchmark regression data sets. Our investigation proves the usefulness and strength of multiple comparison statistical procedures to analyse and select machine learning algorithms ...
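As an illustration of the recommended workflow (a Friedman test for the multiple 1 × N comparison, followed by corrected pairwise tests only if it rejects), here is a sketch with fabricated scores; it is not the paper's experimental setup:

```python
import numpy as np
from scipy import stats

# Hypothetical scores of three regression algorithms on ten data sets
# (rows = data sets, columns = algorithms); values are fabricated.
rng = np.random.default_rng(42)
base = rng.normal(1.0, 0.1, size=(10, 1))
scores = np.hstack([base, base + 0.05, base + 0.02])
scores += rng.normal(0, 0.02, size=scores.shape)

# Friedman test: nonparametric multiple comparison over data sets.
stat, p = stats.friedmanchisquare(*scores.T)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Follow up with pairwise tests plus a multiple-comparison correction
# (e.g. Holm), rather than raw pairwise Wilcoxon tests alone.
if p < 0.05:
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        _, pw = stats.wilcoxon(scores[:, i], scores[:, j])
        print(f"algorithms {i} vs {j}: Wilcoxon p = {pw:.4f}")
```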
This paper describes a parallel genetic algorithm developed for the solution of the set partitioning problem, a difficult combinatorial optimization problem used by many airlines as a mathematical model for flight crew scheduling. The genetic algorithm is based on an island model where multiple independent subpopulations each run a steady-state genetic algorithm on their own subpopulation and, occasionally, fit strings migrate between the subpopulations. Tests on forty real-world set partitioning problems were carried out on up to 128 nodes of an IBM SP1 parallel computer. We found that performance, as measured by the quality of the solution found and the iteration on which it was found, improved as additional subpopulations were added to the computation. With larger numbers of subpopulations the genetic algorithm was regularly able to find the optimal solution to problems having up to a few thousand integer variables. In two cases, high-quality integer feasible solutions were found for problems with 36, ...
This paper focuses on iterative parameter estimation algorithms for dual-frequency signal models that are disturbed by stochastic noise. The key to this work is overcoming the difficulty that the signal model is a highly nonlinear function of the frequencies. A gradient-based iterative (GI) algorithm is presented based on gradient search. In order to improve the estimation accuracy of the GI algorithm, a Newton iterative algorithm and a moving-data-window gradient-based iterative algorithm are proposed based on the moving data window technique. Comparative simulation results are provided to illustrate the effectiveness of the proposed approaches for estimating the parameters of signal models.
Improving the Performance of the RISE Algorithm - Ideally, a multi-strategy learning algorithm performs better than its component approaches. RISE is a multi-strategy algorithm that combines rule induction and instance-based learning. It achieves higher accuracy than some state-of-the-art learning algorithms, but for large data sets it has a very high average running time. This work presents the analysis and experimental evaluation of SUNRISE, a new multi-strategy learning algorithm based on RISE. The SUNRISE algorithm was developed to be faster than RISE with similar accuracy. Comparing the results of the experimental evaluation of the two algorithms, it could be verified that the new algorithm achieves comparable accuracy to that of the RISE algorithm but in a lower average running time.