Human epithelial (HEp-2) cell specimens are obtained from indirect immunofluorescence (IIF) imaging for the diagnosis and management of autoimmune diseases. Analysis of HEp-2 cells is important, and in this work we consider automatic cell segmentation and classification using spatial and texture pattern features and random forest classifiers. In this paper, we summarize our efforts on the classification and segmentation tasks proposed in the ICPR 2016 contest. For the cell-level staining pattern classification (Task 1), we utilized texture features such as rotational-invariant co-occurrence (RIC) versions of the well-known local binary pattern (LBP), median binary pattern (MBP), joint adaptive median binary pattern (JAMBP), and motif co-occurrence matrix (MCM), along with other optimized features. We report classification results utilizing different classifiers such as the k-nearest neighbors (kNN), support vector machine (SVM), and random forest (RF). We obtained the best mean class accuracy of 94.29% for ...
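As a concrete illustration of the simplest of the three classifiers mentioned, a plain-Python kNN vote over precomputed texture feature vectors could be sketched as follows. The feature values and class names below are invented for illustration; the paper's actual features are the RIC-LBP-family descriptors.

```python
import math
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors
    (Euclidean distance). A minimal sketch, not the paper's implementation."""
    dists = sorted(
        (math.dist(f, query), lbl) for f, lbl in zip(train_feats, train_labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

In practice the feature vectors would be the RIC-LBP/MBP/JAMBP histograms, and distances other than Euclidean (e.g., chi-squared for histograms) are common.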
Learning Radiology: Recognizing the Basics 4th Revised edition - William Herring - ISBN: 9780323567299. The leading introductory radiology text for medical students and others who are required to read and interpret common radiologic images, Learning Radiology, 4th Edition, stresses an easy-to-follow pattern recognition approach that teaches how to differentiate normal and abnormal images. Dr. ...
An economizer controller system having a plug and play recognition approach with an automatic user interface population mechanism. A check may be made for sensors connected to the controller. The control type of the sensors may be determined. The menu structure may be repopulated based on the control type. The user interface may then be updated. This approach may be repeated as needed.
Accurate tumor segmentation is an essential and crucial step for computer-aided brain tumor diagnosis and surgical planning. Subjective segmentations are w
The increasing amounts of microscopy data generated in cell biology require the development of automated tools for the quantitative analysis of images. Clumps of cells are difficult to segment due to the frequent lack of clear boundaries between cells and are often ignored, but communication between cells is an intrinsic part of the response of cells to their environment. In addition, cells often show a large variation in their responses, even within a clump, and an accurate segmentation is therefore vital to prevent the unwanted averaging of measurements over multiple cells. Here we present a method for segmenting clumps of cells by using a multi-scale ridge filter to enhance unclear boundaries. A multi-phase level set method incorporating a region competition term is used to identify a boundary for each cell based on the ridge filter response.
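The ridge-enhancement step can be sketched as follows. This is not the authors' implementation, just a minimal NumPy illustration of the idea: smooth the image at several scales, take the magnitude of the Hessian's negative-curvature eigenvalue as the per-scale ridge response, and keep the maximum over scales. The multi-phase level set stage is omitted.

```python
import numpy as np

def gaussian_kernel(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing (columns, then rows)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def ridge_response(img, sigmas=(1.0, 2.0, 4.0)):
    """Multi-scale ridge measure: the more negative Hessian eigenvalue signals a
    bright ridge; responses are scale-normalized and maximized over scales."""
    responses = []
    for s in sigmas:
        g = smooth(img, s)
        gy, gx = np.gradient(g)
        gyy, gyx = np.gradient(gy)
        gxy, gxx = np.gradient(gx)
        off = 0.5 * (gyx + gxy)            # symmetrize the off-diagonal term
        tr = gxx + gyy
        det = gxx * gyy - off ** 2
        half = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
        l2 = tr / 2 - half                 # smaller (more negative) eigenvalue
        responses.append(np.maximum(-l2, 0.0) * s ** 2)
    return np.max(responses, axis=0)
```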
Recently, image-based, high-throughput RNA interference (RNAi) experiments have been increasingly carried out to facilitate the understanding of gene functions i
In recent years we have witnessed a progressively growing number of application areas for pattern recognition, mainly devoted to exploiting cutting-edge scientific methodologies to solve problems of relevant interest to civil society. This trend is generating new communities, aggregations of scientists whose common aim is the development of systems more or less ready to be turned into real working prototypes. In the field of medical image analysis this trend has been even more evident than in others, as the availability of assisted-diagnosis tools would allow the medical community to increase its productivity while improving the quality and precision of the diagnostic act. Among all, rather novel interests are concentrating on indirect immunofluorescence (IIF) images, i.e., images obtained by making biological tissue interact with special sources of light so as to generate fluorescent image responses; these have proven especially well suited for ...
CiteSeerX record of documents citing: "Efficient and robust feature extraction by maximum margin criterion".
TL;DR: HelixIO is a slick platform for telling you what is in your FASTQ sample - very useful for microbiologists! It will be interesting to see how it stacks up against KmerID on a larger number of samples. There is an interesting ycombinator thread. HelixIO have recently launched a public beta of their intriguing bioinformatics platform for fast, portable and scalable sequence analysis.…
The current application concerns a new approach for disease recognition of vine leaves based on Local Binary Patterns (LBPs). The LBP approach was applied to color digital pictures with a natural complex background that contained infected leaves. The pictures were captured with a smartphone camera from vine plants. A 32-bin histogram was calculated from the LBP characteristic features computed on the Hue plane. Moreover, four One-Class Support Vector Machines (OCSVMs) were trained with a training set of 8 pictures from each class (healthy, Powdery Mildew, Black Rot, and Downy Mildew). The trained OCSVMs were tested with 100 infected vine leaf pictures for each disease and were capable of generalizing correctly when presented with vine leaves infected by the same disease. The recognition percentage reached 97%, 95% and 93% for each disease respectively, while healthy plants were recognized with an accuracy of 100%.
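A 32-bin LBP histogram of the kind described can be sketched in plain Python as follows. Note that the binning here (pooling the 256 raw 8-neighbour codes uniformly into 32 bins) is an assumption for illustration; the paper does not specify its exact mapping.

```python
def lbp_code(img, y, x):
    """8-neighbour LBP code at (y, x): bit i is set when neighbour i >= centre."""
    c = img[y][x]
    neighbours = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
                  img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

def lbp_histogram(img, bins=32):
    """Normalised histogram of LBP codes over interior pixels, with the 256
    possible codes pooled uniformly into `bins` bins (illustrative choice)."""
    hist = [0] * bins
    count = 0
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x) * bins // 256] += 1
            count += 1
    return [v / count for v in hist]
```

On a Hue plane extracted from an HSV conversion of the photo, such a histogram becomes the fixed-length signature fed to the OCSVMs.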
Statistical Pattern Recognition Techniques for Early Diagnosis of Diabetic Neuropathy by Posturographic Data: 10.4018/978-1-4666-1803-9.ch002: The goal of this chapter is to describe the use of statistical pattern recognition techniques in order to build a classification model for the early diagnosis
Current face recognition algorithms use hand-crafted features or extract features by deep learning. This paper presents a face recognition algorithm based on improved deep networks that can automatically extract the discriminative features of the target more accurately. Firstly, this algorithm uses ZCA (Zero-mean Component Analysis) whitening to preprocess the input images in order to reduce the correlation between features and the complexity of the training networks. Then, it organically combines convolution, pooling and a stacked sparse autoencoder to obtain a deep network feature extractor. The convolution kernels are achieved through a separate unsupervised learning model. The improved deep networks obtain an automatic deep feature extractor through preliminary training and fine-tuning. Finally, the softmax regression model is used to classify the extracted features. This algorithm is tested on several commonly used face databases. It is indicated that the performance is better than that of the traditional methods and ...
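The ZCA whitening preprocessing step can be sketched in NumPy. This is the generic textbook formulation, not the paper's code; `eps` is a small regularizer (an assumed value) to avoid dividing by near-zero eigenvalues.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: decorrelates features while staying close to the original
    pixel space (unlike plain PCA whitening, which rotates the data).
    X: (n_samples, n_features). Returns the whitened data and the transform."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    U, S, _ = np.linalg.svd(cov)                       # cov = U diag(S) U^T
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T      # symmetric whitening matrix
    return Xc @ W, W
```

After whitening, the covariance of the transformed data is (approximately) the identity, which is what reduces the correlation between input features before training.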
The standard non-negative matrix factorization focuses on batch learning, assuming that fixed global latent parameters completely describe the observations. Many online extensions assume rigid constraints and smooth continuity in observations. However, more complex time series processes can have multivariate distributions that switch between a finite number of states or regimes. In this paper we propose a regime-switching model for non-negative matrix factorization and present a method of forecasting in this lower-dimensional regime-dependent space. The time-dependent observations are partitioned into regimes to enhance the interpretability of the factors inherent in non-negative matrix factorization. We use weighted non-negative matrix factorization to handle missing values and to avoid needless contamination of the observed structure. Finally, we propose a method of forecasting from the regime components via a threshold autoregressive model and projecting the forecasts back to the original target space. ...
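As background, the basic batch NMF that the paper extends can be sketched with the classic Lee-Seung multiplicative updates; the regime-switching, weighting, and forecasting machinery of the paper is not shown here.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Factor a non-negative matrix V (n x m) as W @ H with W (n x rank) and
    H (rank x m) non-negative, via multiplicative updates for the Frobenius
    objective. Illustrative sketch; no convergence checks or weighting."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-4
    H = rng.random((rank, m)) + 1e-4
    for _ in range(n_iter):
        # updates keep factors non-negative because every term is non-negative
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)
    return W, H
```

The weighted variant mentioned in the abstract replaces V and W @ H in the updates with element-wise masked versions so that missing entries contribute nothing.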
Finucane C, Fan CW, Hade D, Byrne L, Boyle G, Kenny R, Cunningham C, Identifying Blood Pressure Response Subtypes Following Orthostasis Using Pattern Recognition Techniques, TCD Medical School Postgraduate Research Day, Dublin, Ireland, 2008 ...
TY - JOUR. T1 - High-Resolution Encoder-Decoder Networks for Low-Contrast Medical Image Segmentation. AU - Zhou, Sihang. AU - Nie, Dong. AU - Adeli, Ehsan. AU - Yin, Jianping. AU - Lian, Jun. AU - Shen, Dinggang. PY - 2020/1/1. Y1 - 2020/1/1. N2 - Automatic image segmentation is an essential step for many medical image analysis applications, including computer-aided radiation therapy, disease diagnosis, and treatment effect evaluation. One of the major challenges for this task is the blurry nature of medical images (e.g., CT, MR, and microscopic images), which can often result in low-contrast and vanishing boundaries. With the recent advances in convolutional neural networks, vast improvements have been made for image segmentation, mainly based on the skip-connection-linked encoder-decoder deep architectures. However, in many applications (with adjacent targets in blurry images), these models often fail to accurately locate complex boundaries and properly segment tiny isolated parts. In this ...
Unsupervised image segmentation is an important component in many image understanding algorithms and practical vision systems. However, evaluation of segmentation algorithms thus far has been largely subjective, leaving a system designer to judge the effectiveness of a technique based only on intuition and results in the form of a few example segmented images. This is largely because image segmentation is an ill-defined problem: there is no unique ground-truth segmentation of an image against which the output of an algorithm may be compared. This paper demonstrates how a recently proposed measure of similarity, the normalized probabilistic rand (NPR) index, can be used to perform a quantitative comparison between image segmentation algorithms using a hand-labeled set of ground-truth segmentations. We show that the measure allows principled comparisons between segmentations created by different algorithms, as well as segmentations on different images. We outline a procedure for algorithm ...
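For intuition, the plain Rand index that underlies the NPR index counts the pixel pairs on which two labelings agree about being grouped together or apart; it is invariant to label permutation. A brute-force sketch (fine for small label vectors; the NPR's probabilistic normalization over multiple ground truths is not shown):

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs on which two segmentations agree: both labelings
    put the pair in the same segment, or both put it in different segments."""
    pairs = list(combinations(range(len(seg_a)), 2))
    agree = sum(
        (seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j]) for i, j in pairs
    )
    return agree / len(pairs)
```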
Dr Jesuchristopher Joseph obtained his PhD in 2008 from Anna University, Chennai, India in the field of medical image processing. He joined as a postdoctoral fellow in the MRC/UCT Medical Imaging Research Unit, University of Cape Town, South Africa, and developed a novel 3D surface deformation-based shape analysis method to assess the shape variations of subcortical structures in Fetal Alcohol Spectrum children. He also developed various applications for medical image segmentation and quantification in the area of medical microscopic image processing at Optra Systems Pvt. Ltd., India. He has over six years of experience in the fields of medical image processing and analysis. The projects he has worked on so far include: estimation of nuclear protein expression in prostate cancer tissues; estimation of cytoplasm expression in colorectal tumour samples using IHC-MARK; automated segmentation of lumen in prostate cancer using a pattern recognition approach; computerised analysis of structural and ...
CiteSeerX query result: "A 2pJ/Pixel/Direction MIMO Processing Based CMOS Image Sensor for Omnidirectional Local Binary Pattern Extraction and Edge Detection".
Honggang Yu, "An efficient face recognition algorithm using the improved convolutional neural network", Discrete & Continuous Dynamical Systems - S, 12 (2019), 901-914. This paper was retracted by decision of the Editors in Chief of the journal Discrete & Continuous Dynamical Systems - S.
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain’s anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and
University of Washington computer scientists and engineers have launched the MegaFace Challenge, the world's first competition aimed at evaluating and improving the performance of face recognition algorithms at the million-person scale.
Fibonacci and Lucas cubes are induced subgraphs of hypercubes obtained by excluding certain binary strings from the vertex set. They appear as models for interconnection networks, as well as in chemistry. We derive a characterization of Lucas cubes that is based on a peripheral expansion of a unique convex subgraph of an appropriate Fibonacci cube. This serves as the foundation for a recognition algorithm of Lucas cubes that runs in linear time.
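The excluded strings are easy to state in code: Fibonacci cube vertices are the binary strings with no two consecutive 1s, and Lucas cube vertices additionally forbid a 1 in both the first and last position (no "11" circularly). A brute-force membership sketch follows; the paper's contribution, a linear-time recognition algorithm for Lucas cubes as graphs, is of course not this.

```python
from itertools import product

def is_fibonacci_vertex(s):
    """Fibonacci cube vertices: binary strings with no two consecutive 1s."""
    return "11" not in s

def is_lucas_vertex(s):
    """Lucas cube vertices: Fibonacci strings that also avoid '11' circularly,
    i.e., the first and last bit are not both 1."""
    return "11" not in s and not (s.startswith("1") and s.endswith("1"))

def vertices(n, pred):
    """All length-n binary strings satisfying the predicate."""
    return ["".join(b) for b in product("01", repeat=n) if pred("".join(b))]
```

Counting these vertex sets recovers the Fibonacci and Lucas numbers, which is where the cubes get their names.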
It is a challenging task to analyze medical images because there are very minute variations and large data sets to consider. It is quite difficult to develop an automated recognition system that can process a large amount of patient information and provide a correct estimation. The conventional method in medicine for brain MR image classification and tumor detection is human inspection. Fuzzy logic techniques are more accurate but fully depend on expert knowledge, which may not always be available. Here we extract features using PCA and then train using the ANFIS tool. The performance of the ANFIS classifier was evaluated in terms of training performance and classification accuracy. The results confirmed that the proposed ANFIS classifier, with accuracy greater than 90 percent, has potential in detecting tumors. This paper describes the proposed strategy for medical image classification of patients' brain MRI scans.
BioMed Research International is a peer-reviewed, Open Access journal that publishes original research articles, review articles, and clinical studies covering a wide range of subjects in life sciences and medicine. The journal is divided into 55 subject areas.
This paper addresses the problem of automatic classification of Spectral Domain OCT (SD-OCT) data for automatic identification of patients with Diabetic Macular Edema (DME) versus normal subjects. Our method is based on Local Binary Patterns (LBP) features to describe the texture of Optical Coherence Tomography (OCT) images, and we compare different LBP feature extraction approaches to compute a single signature for the whole OCT volume. Experimental results with two datasets of respectively 32 and 30 OCT volumes show that, regardless of using low or high level representations, features derived from LBP texture have highly discriminative power. Moreover, the experiments show that the proposed method achieves better classification performance than other recently published works.
Different texture descriptors are proposed for the automatic classification of skin lesions from dermoscopic images. They are based on color texture analysis obtained from (1) color mathematical morphology (MM) and Kohonen self-organizing maps (SOMs) or (2) local binary patterns (LBPs), computed with the use of local adaptive neighborhoods of the image. Neither of these two approaches needs a previous segmentation process. In the first proposed descriptor, the adaptive neighborhoods are used as structuring elements to carry out adaptive MM operations which are further combined by using Kohonen SOM; this has been compared with a nonadaptive version. In the second one, the adaptive neighborhoods enable geometrical feature maps to be defined, from which LBP histograms are computed. This has also been compared with a classical LBP approach. A receiver operating characteristics analysis of the experimental results shows that the adaptive neighborhood-based LBP approach yields the best results. It ...
X-ray absorption spectromicroscopy provides rich information on the chemical organization of materials down to the nanoscale. However, interpretation of this information in studies of "natural" materials such as biological or environmental science specimens can be complicated by the rich mixtures of spectroscopically complicated materials present. We describe here the shortcomings that sometimes arise in previously employed approaches such as cluster analysis, and we present a new approach based on non-negative matrix approximation (NNMA) analysis with both sparseness and cluster-similarity regularizations. In a preliminary study of the large-scale biochemical organization of human spermatozoa, NNMA analysis delivers results that nicely show the major features of spermatozoa with no physically erroneous negative weightings or thicknesses in the calculated image. ...
BraTS has always been focusing on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. BraTS 2019 utilizes multi-institutional pre-operative MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS19 also focuses on the prediction of patient overall survival, via integrative analyses of radiomic features and machine learning algorithms. Finally, BraTS19 intends to experimentally evaluate the uncertainty in tumor segmentations. ...
A Method of Segmentation and Tracking of a Moving Object in Moving Camera Circumstances Using Active Contour Models and Optical Flow (title translated from Korean) ...
This paper presents a novel unsupervised algorithm to detect salient regions and to segment out foreground objects from background. In contrast to previous unidirectional saliency-based object segmentation methods, in which only the detected saliency map is used to guide the object segmentation, our algorithm mutually exploits detection/segmentation cues from each other. To achieve this goal, an initial saliency map is generated by the proposed segmentation driven low-rank matrix recovery model. Such a saliency map is exploited to initialize object segmentation model, which is formulated as energy minimization of Markov random field. Mutually, the quality of saliency map is further improved by the segmentation result, and serves as a new guidance for the object segmentation. The optimal saliency map and the final segmentation are achieved by jointly optimizing the defined objective functions. Extensive evaluations on MSRA-B and PASCAL-1500 datasets demonstrate that the proposed algorithm achieves the
Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors' system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT ...
Shadows in high resolution imagery create significant problems for urban land cover classification and environmental application. We first investigated whether shadows were intrinsically different and hypothetically possible to separate from each other with ground spectral measurements. Both pixel-based and object-oriented methods were used to evaluate the effects of shadow detection on QuickBird image classification and spectroradiometric restoration. In each method, shadows were detected and separated either with or without histogram thresholding, and subsequently corrected with a k-nearest neighbor algorithm and a linear correlation correction. The results showed that shadows had distinct spectroradiometric characteristics, thus, could be detected with an optimal brightness threshold and further differentiated with a scene-based near infrared ratio. The pixel-based methods generally recognized more shadow areas and with statistically higher accuracy than the object-oriented methods. The effects of
The proposed PhD project will develop and study pattern recognition methods and machine learning techniques for context inference based on eye movement analysis. Potential applications are in activity and health monitoring, location-awareness, assisted living, and cognition-aware user interfaces. The research will be experimental, using portable eye tracking equipment and wearable sensor systems, and will involve user studies and data collection in daily life settings. In addition to experimental skills, the work will require to develop a thorough understanding of pattern recognition, machine learning and statistical signal processing techniques suitable for inferring various aspects of context from eye movements ...
1. Introduction Automatic diagnosis systems for detecting health illnesses that employ image processing have been widely accepted in recent years, because they offer support for decisions made by specialists. These methods reduce the subjectivity associated with the traditional diagnosis. In addition, these systems are capable of enhancing important details in the images based on color transformations that simplify the classification stage and provide the clarity needed to identify object structures more easily. Feature extraction as a fundamental processing stage is very important, considering its strong influence on a successful diagnosis. For instance, the standard approach to analyzing dermoscopic images usually has three stages: (i) image segmentation, (ii) feature extraction and feature selection, (iii) lesion classification [1]. The first step is one of the most important because the accuracy of the next steps strongly depends on the image segmentation performance. In the literature, ...
Throughout history there have been numerous technological advances in the medical field. Some so minute that we don't even know about them, others that have changed the entire medical field in the blink of an eye. The applications of higher dimensional vector spaces have helped doctors diagnose breast cancer and heart disease. Vector spaces help doctors figure out if a lump found in a woman's breast is benign or malignant. Diagnosing heart disease in someone has also started using the application of vector spaces. ...
The publication "Image segmentation techniques" is placed in the Top 10000 of the best publications in CiteWeb. Also, in the category Computer Science it is included in the Top 1000. Additionally, the publication "Image segmentation techniques" is placed in the Top 1000 among other scientific works published in 1985 ...
In this study, we present PaCeQuant, a novel ImageJ-based tool for automatic segmentation of leaf epidermal PCs and simultaneous quantification of PC shape characteristics. The fully automatic segmentation of individual cells by PaCeQuant is a major advance because currently all measurements of PCs require manual segmentation. Manual segmentation is very time consuming and prone to bias introduced by the subjectivity of sample choice and contour labeling (Vanhaeren et al., 2015; Wu et al., 2016). PaCeQuant efficiently detects cell outlines in confocal input images using a combination of contrast and boundary enhancement, analysis of skeletons in binary images and watershed-based gap closing (Fig. 1). We validated the accuracy of the automatic segmentation implemented in PaCeQuant by comparison to results from manually segmented cells (Fig. 3; Supplemental Fig. S2; Supplemental Table S2). In a few cases, PaCeQuant locally determined cell contours with low accuracy, mostly at regions of lower ...
CiteSeerX - Matrices that can be factored into a product of two simpler matrices can serve as a useful and often natural model in the analysis of tabulated or high-dimensional data. Models based on matrix factorization (Factor Analysis, PCA) have been extensively used in statistical analysis and machine learning for over a century, with many new formulations and models suggested in recent
Influence of the Training Library Composition on a Patch-based Label Fusion Method: Application to Hippocampus Segmentation on the ADNI Dataset.
Cellular Image Classification (2016). Published by Springer International Publishing AG. ISBN 9783319476285.
... (JoIPPRP) is a print and e-journal focused on the rapid publication of fundamental research papers in all areas of image processing and pattern recognition. The Journal of Image Processing & Pattern Recognition Progress has a broad scope, including advances in fundamental image processing, pattern recognition, and statistical and mathematical techniques relevant to the areas it covers. eISSN: 2394-1995. Focus & Scope: ...
In this paper we present a robust parsing algorithm based on the link grammar formalism for parsing natural languages. Our algorithm is a natural extension of the original dynamic programming recognition algorithm which recursively counts the number of linkages between two words in the input sentence. The modified algorithm uses the notion of a null link in order to allow a connection between any pair of adjacent words, regardless of their dictionary definitions. The algorithm proceeds by making three dynamic programming passes. In the first pass, the input is parsed using the original algorithm which enforces the constraints on links to ensure grammaticality. In the second pass, the total cost of each substring of words is computed, where cost is determined by the number of null links necessary to parse the substring. The final pass counts the total number of parses with minimal cost. All of the original pruning techniques have natural counterparts in the robust algorithm. When used together ...
Robust active contour segmentation with an efficient global optimizer.
The intrinsic image decomposition aims to retrieve "intrinsic" properties of an image, such as shading and reflectance. To make it possible to quantitatively compare different approaches to this problem in realistic settings, we present a ground-truth dataset of intrinsic image decompositions for a variety of real-world objects. For each object, we separate an image of it into three components: Lambertian shading, reflectance, and specularities ...
I am currently working on a medical imaging project. Just wondering how to measure the shape of a sphere. For example, how to give a measurement that an object is more like a sphere than another? I know some algorithms can give the roundness in 2D, but a measure of sphereness in 3D would be more helpful. Also, I am very interested in other shape analysis. The shape of the object could be a very interesting feature for further pattern recognition. Thanks very much!
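One common 3D answer, assuming you can estimate the object's volume and surface area from your segmentation, is Wadell sphericity: the surface area of the sphere with the same volume, divided by the object's actual surface area. It is exactly 1 for a perfect sphere and smaller for everything else.

```python
import math

def sphericity(volume, surface_area):
    """Wadell sphericity = pi^(1/3) * (6V)^(2/3) / A.
    Equals 1.0 for a perfect sphere, < 1.0 for any other shape."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area
```

Note that a naive voxel-face surface area overestimates A, so in practice the area is usually taken from a mesh (e.g., marching cubes) rather than counted voxel faces.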
CiteSeerX - In recent NIST evaluations on sentence boundary detection, a single error metric was used to describe performance. Additional metrics, however, are available for such tasks, in which a word stream is partitioned into subunits. This paper compares alternative evaluation metrics, including the NIST error rate, classification error rate per word boundary, precision and recall, ROC curves, DET curves, precision-recall curves, and area under the curves, and discusses advantages and disadvantages of each. Unlike many studies in machine learning, we use real data for a real task. We find benefit from using curves in addition to a single metric. Furthermore, we find that data skew has an impact on metrics, and that differences among different system outputs are more visible in precision-recall curves. Results are expected to help us better understand evaluation metrics that should be generalizable to similar language processing tasks.
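Precision and recall over boundary positions, the basis of the precision-recall curves discussed, can be sketched as follows (treating predicted and reference boundaries as sets of word-boundary indices; sweeping a detector threshold would trace out the curve):

```python
def precision_recall(predicted, reference):
    """Precision = correct predictions / all predictions;
    recall = correct predictions / all reference boundaries."""
    predicted, reference = set(predicted), set(reference)
    tp = len(predicted & reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall
```

The paper's point about data skew shows up here directly: with few true boundaries among many word positions, precision-recall separates systems that a single overall error rate blurs together.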
Cardiac parameters such as end-systolic volume, ejection fraction and myocardial mass are essential to the diagnosis and treatment of cardiovascular disease (CVD). Traditionally, these parameters are calculated based on manual myocardial segmentation by a trained technician. Fast, accurate, and automatic segmentation would provide researchers with an increased subject pool, an enhanced understanding of CVD, and may lead to the development of new therapies. In this paper we propose an automated algorithm for myocardial segmentation. This method utilizes speckle reducing anisotropic diffusion to assist the automated contour initialization. Speckle tracking segmentation (STS) is then applied throughout the cardiac cycle to track the myocardial borders. This approach, compared to standard active contour techniques, reduces the RMSE to ground truth by an order of magnitude ...
PDF (150 dpi version, 2.2 MB). Spatial-pooling properties deduced from the detectability of FM and Quasi-AM gratings: A reanalysis. Graham, N. and Rogowitz, B. E. (1976). Vision Research, 16, 1021-1026. Abstract. In a thought-provoking paper, Stromeyer and Klein (1975) study several models of spatial pattern detection. The class of models they consider (Campbell and Robson, 1968; Thomas, 1970; Kulikowski and King-Smith, 1973) postulates the existence, at some stage in the visual system, of many different sizes of receptive field centered on every retinal point. Within this multiple channels context the questions they address directly are: (1) What is the bandwidth, or range of spatial frequencies, to which these receptive fields respond? and (2) What kind of pooling, if any, exists among receptive fields located at different spatial positions across the visual field? As Granger (1973) pointed out in a somewhat different context, bandwidth estimates in many situations depend on the spatial pooling ...
SPUD feature extraction algorithm implementation as appearing in: "Data-mining of time-domain features from neural extracellular field data" (2007 - in press), S Neymotin, DJ Uhlrich, KA Manning, WW Lytton

NEURON {
  SUFFIX nothing
  : BVBASE is bit vector base number (typically 0 or -1)
  GLOBAL SPUD_INSTALLED, SHM_SPUD, NOV_SPUD, DEBUG_SPUD
}

PARAMETER {
  SPUD_INSTALLED = 0
  DEBUG_SPUD = 0
  SHM_SPUD = 4   : used in spud() for measuring sharpness
  NOV_SPUD = 1   : used in spud() to eliminate overlap of spikes
  CREEP_SPUD = 0 : used in spud() to allow left/right creep to local minima
}

VERBATIM
#include <stdlib.h>
#include <math.h>
#include <values.h> // contains MAXLONG
#include <sys/time.h>
extern double* hoc_pgetarg();
extern double hoc_call_func(Symbol*, int narg);
extern FILE* hoc_obj_file_arg(int narg);
extern Object** hoc_objgetarg();
extern void vector_resize();
extern int vector_instance_px();
extern void* vector_arg();
extern double* vector_vec();
extern double hoc_epsilon;
extern double chkarg();
extern void ...