  • coefficients
  • The network is designed to use the local tilts of the wave-front measured by a Shack Hartmann Wave-front Sensor (SHWFS) as inputs and estimate the turbulence in terms of Zernike coefficients. (mdpi.com)
  • A new approach for determining the coefficients of complex-valued autoregressive (CAR) and complex-valued autoregressive moving average (CARMA) models using a complex-valued neural network (CVNN) technique is discussed in this paper. (hindawi.com)
  • The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. (hindawi.com)
  • The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. (hindawi.com)
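A rough illustration of the idea in the excerpts above, not the paper's two-layered CVNN: when the activation functions are linear, estimating CAR coefficients with a complex-valued network reduces to a complex least-squares fit of lagged samples. The AR order, the coefficients `a_true`, and the use of `np.linalg.lstsq` below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the cited paper's method): recover complex-valued AR (CAR)
# coefficients with a single complex-valued linear layer fitted by least squares.
rng = np.random.default_rng(0)

# True CAR(2) coefficients, chosen so the process is stable.
a_true = np.array([0.5 + 0.3j, -0.2 + 0.1j])

# Simulate the complex-valued AR process x[t] = a1*x[t-1] + a2*x[t-2] + noise.
T = 2000
x = np.zeros(T, dtype=complex)
noise = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) * 0.1
for t in range(2, T):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + noise[t]

# With linear activations, the "network" weights are just the least-squares
# solution that maps the lagged samples onto the next sample.
X = np.column_stack([x[1:-1], x[:-2]])   # inputs: x[t-1], x[t-2]
y = x[2:]                                # target: x[t]
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print("true CAR coefficients:     ", a_true)
print("estimated CAR coefficients:", a_hat)
```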
  • adaptive
  • As it passes through the atmosphere, a series of turbulent layers modify the light's wave-front in such a way that Adaptive Optics reconstruction techniques are needed to improve the image quality. (mdpi.com)
  • There are two major tomographic AO techniques currently under development: Multi Conjugate Adaptive Optics (MCAO) [2,3] and Multi Object Adaptive Optics (MOAO) [4,5]. (mdpi.com)
  • An adaptive multi-tiered framework that can be utilised to design a context-aware cyber-physical system for smart data acquisition and processing, while minimising the amount of necessary human intervention, is proposed and applied. (springer.com)
  • CodeProject
  • The article is not intended to provide the entire theory of neural networks, which can be found easily on the great range of different resources all over the Internet, and on CodeProject as well. (codeproject.com)
  • framework
  • In the proposed framework, a neural network learns the relationship between image data and the discrimination performance of colors in an image, and a color conversion rule is trained as part of the neural network. (springer.com)
  • Leveraging deep neural network on the TensorFlow framework, we developed a computational tool, integrated Mental-disorder GEnome Score (iMEGES), for analyzing whole genome/exome sequencing data on personal genomes. (springer.com)
  • data
  • The ANN used is a Multi-Layer Perceptron (MLP) trained on simulated data with a single turbulent layer whose altitude changes. (mdpi.com)
  • Measuring the similarity of the whole output set to the whole target set, and adjusting the parameters on this global measure rather than on the similarity of individual pairs of vectors, provides enhanced training rates for neural networks whose data throughput rate can be higher than the rate at which the parameters can be adjusted. (google.com.au)
  • CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. (pubmedcentralcanada.ca)
  • In this case, we use the approach of ICA for data preprocessing, which may reduce the total time for training the neural networks and, hence, the time the network needs to process data and images. (hindawi.com)
  • In the preprocessing stage, the data at the input of the neural network can be used with a constant threshold value to distinguish static from dynamic activities, as was done in [4]. (hindawi.com)
  • The removal of nonessential components of the data can lead to smaller sizes of the neural networks, and, respectively, to lower requirements for the input data. (hindawi.com)
  • Such networks can be fed the data from the graphic analysis of the input picture and trained to output characters in one or another form. (codeproject.com)
  • The main problem was the representation of the input data for the neural network. (codeproject.com)
  • We divided our data into 4 sets of 300 training and 100 test vectors and trained a neural net to estimate the gaze direction given the stance vector. (pubmedcentralcanada.ca)
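The mdpi.com excerpts in this group describe training an MLP on simulated Shack-Hartmann slope measurements to predict Zernike coefficients. The sketch below is only a stand-in for that setup: the random linear slope-to-Zernike relation, the dimensions, the 300/100 train/test split, and the use of scikit-learn's MLPRegressor are all illustrative assumptions, not the paper's simulation or network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Sketch only: an MLP mapping SHWFS slope measurements to Zernike coefficients.
# The "simulated" data is a random linear relation plus noise, not an
# atmospheric simulation; all sizes are placeholders.
rng = np.random.default_rng(1)
n_samples, n_slopes, n_zernike = 400, 72, 10   # e.g. 6x6 subapertures x 2 slopes

A = rng.standard_normal((n_slopes, n_zernike))          # hypothetical forward relation
slopes = rng.standard_normal((n_samples, n_slopes))
zernike = slopes @ A + 0.05 * rng.standard_normal((n_samples, n_zernike))

X_train, X_test, y_train, y_test = train_test_split(
    slopes, zernike, train_size=300, test_size=100, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("test R^2:", mlp.score(X_test, y_test))
```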
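The hindawi.com excerpts above argue that ICA preprocessing can shrink the input and, with it, the network and its training time. A minimal sketch of that idea follows, assuming toy data and an arbitrary component count; it uses scikit-learn's FastICA and MLPClassifier rather than the cited study's pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Sketch of the preprocessing idea: compress redundant raw inputs with ICA so a
# smaller neural network suffices. Data and component count are placeholders.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 64))          # raw, partly redundant features
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # toy binary labels

model = make_pipeline(
    FastICA(n_components=8, random_state=0),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```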
  • recognition
  • It's easy now to modify this code to add as many layers as you want (some state-of-the-art image recognition models are a hundred or more layers of convolutions, max pooling, dropout, etc.). (medium.com)
  • Matlab Project 1: GMMs and RBF Networks for Speech Pattern Recognition. (informit.com)
  • Before testing the recognition ability you must train the network (or you can load a file image of a trained net). (codeproject.com)
  • A layer-wise technique of DCNN has been employed that helped to achieve the highest recognition accuracy and also get a faster convergence rate. (mdpi.com)
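To illustrate the point about simply adding more layers, here is a generic tf.keras sketch (not the article's code, which targets a different API): extra convolution, max-pooling, and dropout blocks are appended in a loop, so depth is just a parameter. The shapes and filter counts are arbitrary.

```python
import tensorflow as tf

# Generic sketch: stacking more convolution / max-pooling / dropout blocks
# is just adding layers. Input shape and filter counts are placeholders.
def build_model(num_blocks=3, num_classes=10):
    model = tf.keras.Sequential([tf.keras.Input(shape=(28, 28, 1))])
    for i in range(num_blocks):
        model.add(tf.keras.layers.Conv2D(32 * (i + 1), 3, padding="same", activation="relu"))
        model.add(tf.keras.layers.MaxPooling2D(2))
        model.add(tf.keras.layers.Dropout(0.25))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    return model

model = build_model()
model.summary()
```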
  • parameters
  • The studied models contain 5 thousand to 160 million parameters and vary in the number of layers. (pubmedcentralcanada.ca)
  • We consider dynamic configuration of wireless sensor networks, where certain functions can be automatically assigned to nodes at any time during network operation, based on parameters such as remaining energy and topology changes. (archive.org)
  • vectors
  • A set of input vectors (I1 to In) is input to the network (2) at an input port (4). (google.com.au)
  • The corresponding set of output vectors (O'1 to O'n) provided by the network (2) is compared to a target set of output vectors (O1 to On) by an error logger (12), which provides to the controller (10) a measure of similarity of the two sets. (google.com.au)
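The google.com.au excerpts here and in the "data" group describe scoring the whole output set against the whole target set with one global measure rather than pairing up individual vectors. The sketch below shows one crude way to do that, comparing the means and covariances of the two sets; the function name and the particular measure are hypothetical, not the patent's.

```python
import numpy as np

# Illustration only (not the patent's measure): compare the set of network
# outputs O'1..O'n with the target set O1..On as whole collections, so no
# pairing of individual output/target vectors is required.
def set_similarity(outputs, targets):
    """Crude set-level score: squared differences of set means and covariances."""
    mu_o, mu_t = outputs.mean(axis=0), targets.mean(axis=0)
    cov_o = np.cov(outputs, rowvar=False)
    cov_t = np.cov(targets, rowvar=False)
    return float(np.sum((mu_o - mu_t) ** 2) + np.sum((cov_o - cov_t) ** 2))

rng = np.random.default_rng(3)
targets = rng.standard_normal((100, 5))                    # target vectors O1..On
outputs = targets + 0.1 * rng.standard_normal((100, 5))    # network outputs O'1..O'n
print("set-level error:", set_similarity(outputs, targets))
```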
  • model
  • In this part, let's go deeper and try multi-layer fully connected neural networks, write a custom model to plug into Scikit Flow, and top it off by trying out convolutional networks. (medium.com)
  • This model is very similar to the previous one, but we changed the activation function from a rectified linear unit to a hyperbolic tangent (rectified linear unit and hyperbolic tangent are most popular activation functions for neural networks). (medium.com)
  • We've created a conv_model function that, given tensors X and y, runs a 2D convolutional layer with the simplest possible max pooling: just the maximum. (medium.com)
  • One such network with supervised learning rule is the Multi-Layer Perceptron (MLP) model. (codeproject.com)
  • The very nature of this particular model is that it will force the output toward one of the nearby trained values when it is fed an input variation it was not trained for, thus solving the proximity issue. (codeproject.com)
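The article's conv_model is written against Scikit Flow; below is a hypothetical re-sketch of the same shape in tf.keras: one 2D convolution, the simplest possible pooling (just the maximum over each feature map), and a dense read-out, with tanh as the activation as discussed above. Input shape, filter count, and class count are assumptions.

```python
import tensorflow as tf

# Hypothetical re-sketch (tf.keras, not the article's Scikit Flow code) of a
# conv_model: one 2D convolution, "max pooling - just maximum" over each
# feature map, then a dense classifier; tanh replaces the usual ReLU.
def conv_model(num_classes=10):
    inputs = tf.keras.Input(shape=(28, 28, 1))
    x = tf.keras.layers.Conv2D(12, kernel_size=3, activation="tanh")(inputs)
    x = tf.keras.layers.GlobalMaxPooling2D()(x)    # the simplest pooling: one max per map
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = conv_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```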
  • distortions
  • As it passes through the atmosphere, the light suffers distortions caused by a series of optically turbulent layers present at different altitudes and with different relative strengths. (mdpi.com)
  • outputs
  • Specifically, some network models compare their output with a set of desired outputs and compute an error that is used to adjust their weights. (codeproject.com)
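A minimal single-layer example of that idea, assuming arbitrary data and learning rate: compare the output with the desired output, form an error, and nudge the weights to reduce it (the classic delta rule). This is an illustration of the principle, not any particular model from the excerpt.

```python
import numpy as np

# Delta-rule sketch: error = desired output - actual output drives the
# weight update. Data, learning rate, and epoch count are arbitrary.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3))
w_true = np.array([1.5, -2.0, 0.5])
desired = X @ w_true                     # desired outputs

w = np.zeros(3)
lr = 0.05
for epoch in range(100):
    output = X @ w                       # current network output
    error = desired - output             # error signal
    w += lr * X.T @ error / len(X)       # adjust weights using the error
print("learned weights:", w)
```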
  • design
  • Instead of combining several neural network entities into a single class and making a mess, which leads to losing flexibility and clarity in the code and design, all entities were split into distinct classes, making them easier to understand and reuse. (codeproject.com)
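The CodeProject library itself is not reproduced here; the sketch below only illustrates the design principle in Python, with hypothetical class names: keep the activation function, a layer, and the network as separate, reusable classes rather than one monolithic class.

```python
import numpy as np

# Design sketch only: separate, composable classes instead of one big class.
class Sigmoid:
    def __call__(self, x):
        return 1.0 / (1.0 + np.exp(-x))

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        self.weights = np.random.randn(n_inputs, n_neurons) * 0.1
        self.bias = np.zeros(n_neurons)
        self.activation = activation

    def forward(self, x):
        return self.activation(x @ self.weights + self.bias)

class Network:
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

net = Network([Layer(4, 8, Sigmoid()), Layer(8, 2, Sigmoid())])
print(net.forward(np.ones(4)))
```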
  • favorable
  • The results of layer-wise-trained DCNN are favorable in comparison with those achieved by a shallow technique of handcrafted features and standard DCNN. (mdpi.com)
  • input
  • This is easily achieved for the hidden layer as it has direct links to the actual input layer. (codeproject.com)
  • The technical approach followed in processing input images, detecting graphic symbols, analyzing and mapping the symbols and training the network for a set of desired Unicode characters corresponding to the input images are discussed in the subsequent sections. (codeproject.com)
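One possible way (not necessarily the article's mapping) to turn a detected character glyph into a fixed-length vector for the input layer is to binarise the glyph, downsample it onto a small grid, and flatten the grid; the function name and grid size below are illustrative.

```python
import numpy as np

# Hypothetical glyph-to-input mapping: threshold, downsample to a 10x10 grid,
# and flatten to a 100-element vector the network's input layer can consume.
def glyph_to_input_vector(glyph, grid=(10, 10)):
    glyph = np.asarray(glyph, dtype=float)
    binary = (glyph > glyph.mean()).astype(float)      # crude threshold
    rows = np.array_split(binary, grid[0], axis=0)
    cells = [np.array_split(r, grid[1], axis=1) for r in rows]
    downsampled = np.array([[c.mean() for c in row] for row in cells])
    return downsampled.flatten()

fake_glyph = np.random.rand(32, 24)                    # stand-in for a segmented character
vec = glyph_to_input_vector(fake_glyph)
print(vec.shape)                                       # (100,)
```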
  • consists
  • A wireless sensor network consists of a number of energy-constrained sensors that are densely deployed over a large geographical area. (archive.org)