TY - JOUR
T1 - Quantifying Individual Brain Connectivity with Functional Principal Component Analysis for Networks
AU - Petersen, Alexander
AU - Zhao, Jianyang
AU - Carmichael, Owen
AU - Müller, Hans Georg
PY - 2016/9/1
Y1 - 2016/9/1
N2 - In typical functional connectivity studies, connections between voxels or regions in the brain are represented as edges in a network. Networks for different subjects are constructed at a given graph density and are summarized by some network measure such as path length. Examining these summary measures for many density values yields samples of connectivity curves, one for each individual. This has led to the adoption of basic tools of functional data analysis, most commonly to compare control and disease groups through the average curves in each group. Such group differences, however, neglect the variability in the sample of connectivity curves. In this article, the use of functional principal component analysis (FPCA) is demonstrated to enrich ...
Principal component analysis determines the direction of maximum variance of the data for a given feature set. True or false?
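The statement above is true for centered data: the first principal component is, by construction, the direction along which the projected data have maximal variance. A minimal numpy sketch (synthetic data; all names illustrative) verifies this by checking that the variance of the projection onto the leading eigenvector of the sample covariance matrix equals the largest eigenvalue.

```python
# Minimal check on synthetic, anisotropic data (all names illustrative)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])  # stretched cloud
Xc = X - X.mean(axis=0)                                             # center the data

cov = np.cov(Xc, rowvar=False)                                      # sample covariance
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, np.argmax(eigvals)]                                # first principal component

proj_var = np.var(Xc @ pc1, ddof=1)                                 # variance along PC1
print(np.allclose(proj_var, eigvals.max()))                         # True: maximal-variance direction
```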
One of the fundamental problems in time course gene expression data analysis is to identify genes associated with a biological process or a particular stimulus of interest, like a treatment or virus infection. Most of the existing methods for this problem are designed for data with longitudinal replicates. But in reality, many time course gene experiments have no replicates or only have a small number of independent replicates. We focus on the case without replicates and propose a new method for identifying differentially expressed genes by incorporating the functional principal component analysis (FPCA) into a hypothesis testing framework. The data-driven eigenfunctions allow a flexible and parsimonious representation of time course gene expression trajectories, leaving more degrees of freedom for the inference compared to that using a prespecified basis. Moreover, the information of all genes is borrowed for individual gene inferences. The proposed approach turns out to be more powerful in identifying ...
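Since several of the abstracts collected here rely on FPCA, a minimal sketch of the core computation may help: for densely observed curves on a common grid, the data-driven eigenfunctions are the leading eigenvectors of the sample covariance, and each curve's scores are its projections onto them. This is a bare-bones illustration on synthetic curves; the smoothing and hypothesis-testing machinery of the paper is not reproduced, and all names are illustrative.

```python
# Bare-bones FPCA on densely observed synthetic curves (illustrative only)
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                               # common time grid
n = 200
true_scores = rng.normal(size=(n, 2)) * np.array([2.0, 0.7])
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
curves = true_scores @ basis + 0.1 * rng.normal(size=(n, t.size))

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
cov = centered.T @ centered / (n - 1)                   # sample covariance on the grid

eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]      # descending order
eigenfunctions = eigvecs[:, :2]                         # data-driven eigenfunctions
fpc_scores = centered @ eigenfunctions                  # per-curve FPCA scores
print(eigvals[:3])                                      # two dominant modes, then noise
```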
A non-iterative spatial phase-shifting algorithm based on principal component analysis (PCA) is proposed to directly extract the phase from only a single spatial carrier interferogram. Firstly, we compose a set of phase-shifted fringe patterns from the original spatial carrier interferogram by shifting their starting position by one pixel. Secondly, two uncorrelated quadrature signals that correspond to the first and second principal components are extracted from the phase-shifted interferograms by the PCA algorithm. Then, the modulating phase is calculated from the arctangent function of the two quadrature signals. Meanwhile, the main factors that may influence the performance of the proposed method are analyzed and discussed, such as the level of random noise, the carrier-frequency values and the carrier-frequency angle of the fringe pattern. Numerical simulations and experiments are given to demonstrate the performance of the proposed method, and the results show that the proposed method is fast, ...
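A hedged sketch of the demodulation idea described above, on a simulated fringe pattern: pseudo phase-shifted copies are formed by shifting the starting column one pixel at a time, PCA supplies two quadrature signals, and the wrapped phase follows from their arctangent. The carrier value and all variable names are illustrative; note that the recovered phase still contains the linear carrier term, which the published method removes afterwards.

```python
# Simulated spatial-carrier interferogram and PCA-based quadrature extraction
import numpy as np

ny, nx = 128, 256
X, Y = np.meshgrid(np.arange(nx), np.arange(ny))
phase_true = 2.0 * np.sin(2 * np.pi * X / nx) * np.cos(2 * np.pi * Y / ny)
f0 = 0.2                                                 # carrier frequency (cycles/pixel), illustrative
I = 1.0 + 0.8 * np.cos(phase_true + 2 * np.pi * f0 * X)

# Step 1: pseudo phase-shifted patterns from one-pixel shifts of the start column
n_shift = 10
frames = np.stack([I[:, k:nx - n_shift + k].ravel() for k in range(n_shift)])

# Step 2: PCA; the first two principal components act as quadrature signals
frames = frames - frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames, full_matrices=False)
pc1 = Vt[0].reshape(ny, nx - n_shift)
pc2 = Vt[1].reshape(ny, nx - n_shift)

# Step 3: wrapped phase from the arctangent (still contains the carrier term,
# and is defined only up to sign and a constant offset)
phase_est = np.arctan2(pc2, pc1)
```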
Autism is often diagnosed at preschool or toddler age. This diagnosis often depends on behavioral tests. It is known that individuals with autism have brain signals that differ from those of typical persons, yet this difference is so slight that it is often difficult to distinguish from normal. However, electroencephalogram (EEG) signals carry a great deal of information reflecting brain function, and can therefore capture markers for autism, helping with early diagnosis and speeding treatment. This work investigates and compares the classification process for autism in open-eyed tasks and motor movement, using Principal Component Analysis (PCA) for features extracted in the time-frequency domain to reduce the data dimension. The results show that the proposed method gives accuracy in the range 90-100% for autism and normal children in the motor task and around 90% for detecting normal in open-eyed tasks, though it is difficult to detect autism in that task. ...
Time series is a series of observations over time. When there is one observation at each time instance, it is called a univariate time series (UTS), and when there is more than one observation, it is called a multivariate time series (MTS). While UTS datasets have been extensively explored, MTS datasets have not been broadly investigated. The techniques for UTS datasets, however, cannot simply be extended to MTS datasets, since a multivariate time series is different from multiple univariate time series. That is, an MTS item may not be broken into multiple univariate time series and analyzed separately, because this would result in the loss of the correlation information within the multivariate time series. In this dissertation, we introduce a set of techniques for multivariate time series analysis based on principal component analysis (PCA). As a similarity measure for MTS datasets, we present Eros (Extended Frobenius norm). Eros computes the similarity between two MTS items by comparing ...
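The definition of Eros is cut off above; as a hedged sketch of the published formulation (as commonly described, with illustrative weights and names): Eros compares two MTS items through the weighted absolute cosines between their corresponding principal directions, i.e. the eigenvectors of each item's covariance matrix.

```python
# Hedged sketch of the Eros similarity idea; weights and names are illustrative
import numpy as np

def eros(A, B, w):
    """Eros-style similarity between MTS items A, B (rows = time, cols = variables).

    A and B must have the same number of variables; w is a weight vector summing
    to 1 (in the original method it is derived from eigenvalues over the dataset).
    """
    # Principal directions (eigenvectors) of each item's covariance matrix
    _, Va = np.linalg.eigh(np.cov(A, rowvar=False))
    _, Vb = np.linalg.eigh(np.cov(B, rowvar=False))
    Va, Vb = Va[:, ::-1], Vb[:, ::-1]          # sort by decreasing eigenvalue
    # Weighted sum of |<a_i, b_i>| over corresponding principal directions
    cosines = np.abs(np.sum(Va * Vb, axis=0))
    return float(np.dot(w, cosines))

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 4))
B = rng.normal(size=(120, 4))
w = np.full(4, 0.25)                            # uniform weights, for illustration
print(eros(A, B, w))                            # value in [0, 1]
```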
The aim of this study was to forecast the returns of the Stock Exchange of Thailand (SET) Index by adding explanatory variables and a stationary autoregressive moving-average model of order p and q (ARMA(p, q)) to the mean equation of returns. In addition, we used Principal Component Analysis (PCA) to remove possible complications caused by multicollinearity. Afterwards, we forecast the volatility of the returns of the SET Index. Results showed that the ARMA(1,1), which includes multiple regression based on PCA, has the best performance. In forecasting the volatility of returns, the GARCH model performs best for one day ahead, and the EGARCH model performs best for five days, ten days and twenty-two days ahead.
When the number of training samples is limited, feature reduction plays an important role in the classification of hyperspectral images. In this paper, we propose a supervised feature extraction method based on discriminant analysis (DA) which uses the first principal component (PC1) to weight the scatter matrices. The proposed method, called DA-PC1, copes with the small sample size problem and does not have the limitation of linear discriminant analysis (LDA) on the number of extracted features. In DA-PC1, the dominant structure of the distribution is preserved by PC1 and the class separability is increased by DA. The experimental results show the good performance of DA-PC1 compared to some state-of-the-art feature extraction methods.
This MATLAB function performs principal component analysis on the square covariance matrix V and returns the principal component coefficients, also known as loadings.
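For readers without MATLAB, a hedged Python analogue of the same operation (eigendecomposition of a covariance matrix into loadings and component variances; names illustrative) is:

```python
# Hedged Python analogue of decomposing a covariance matrix into principal
# component coefficients (loadings) and component variances; names illustrative
import numpy as np

def pca_from_cov(V):
    """Return loadings (columns) and component variances from covariance matrix V."""
    eigvals, eigvecs = np.linalg.eigh(V)        # ascending order for symmetric V
    order = np.argsort(eigvals)[::-1]           # re-order to descending variance
    return eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
V = np.cov(X, rowvar=False)
coeff, latent = pca_from_cov(V)
print(coeff.shape, latent)                      # (3, 3) loadings, 3 component variances
```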
Kinetic modeling using a reference region is a common method for the analysis of dynamic PET studies. Available methods for outlining regions of interest representing reference regions are usually time-consuming and difficult and tend to be subjective; therefore, MRI is used to help physicians and experts to define regions of interest with higher precision. The current work introduces a fast and automated method to delineate the reference region of images obtained from an N-methyl-(11)C-2-(4-methylaminophenyl)-6-hydroxy-benzothiazole ((11)C-PIB) PET study on Alzheimer disease patients and healthy controls using a newly introduced masked volumewise principal-component analysis.. METHODS: The analysis was performed on PET studies from 22 Alzheimer disease patients (baseline, follow-up, and test/retest studies) and 4 healthy controls, that is, a total of 26 individual scans. The second principal-component images, which illustrate the kinetic behavior of the tracer in gray matter of the cerebellar ...
Given a set of points in Euclidean space, the first principal component corresponds to a line that passes through the multidimensional mean and minimizes the sum of squares of the distances of the points from the line. The second principal component corresponds to the same concept after all correlation with the first principal component has been subtracted from the points. The singular values (in Σ) are the square roots of the eigenvalues of the matrix XᵀX. Each eigenvalue is proportional to the portion of the variance (more correctly of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few ...
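A short numpy check of the two relationships stated above, on synthetic centered data: the singular values are the square roots of the eigenvalues of XᵀX, and the eigenvalues sum to the total squared distance of the points from their mean.

```python
# Minimal numpy check of the relationships described above (synthetic data)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
Xc = X - X.mean(axis=0)                      # center at the multidimensional mean

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]

# Singular values are the square roots of the eigenvalues of X^T X
print(np.allclose(s, np.sqrt(eigvals)))      # True

# Sum of eigenvalues = sum of squared distances of the points from their mean
print(np.allclose(eigvals.sum(), (Xc ** 2).sum()))  # True
```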
I think that what you describe is a standard application of multivariate functional data clustering. In the context of multivariate functional data, each data unit is treated as the realisation of a $d$-dimensional stochastic (often Gaussian) process $X := ( X_1, \dots , X_d )$. Jacques & Preda (the authors of the nice survey paper you attach) have (somewhat) recently published a paper on Model-based clustering for multivariate functional data (2014), which extends their earlier work on Clustering multivariate functional data (2012). At approximately the same time, Chiou et al. also published Multivariate functional principal component analysis: A normalization approach (2014). Note that the two approaches are quite different; Chiou's approach has a particular (very flexible) parametric association between the curve samples, while Jacques & Preda's is much more data-driven. Both of these works are based on multivariate functional principal component analysis (MvFPCA). Earlier applications were alluded to in ...
Evaluation models previously proposed by researchers to assess the drivability of a vehicle during engine start suffer from poor stability and poor accuracy. In this paper, a drivability evaluation model combining principal component analysis and support vector r...
TY - GEN
T1 - A novel dimensionality reduction technique based on independent component analysis for modeling microarray gene expression data
AU - Liu, Han
AU - Kustra, Rafal
AU - Zhang, Ji
PY - 2004/12/1
Y1 - 2004/12/1
N2 - DNA microarray experiments, generating thousands of gene expression measurements, are being used to gather information from tissue and cell samples regarding gene expression differences that will be useful in diagnosing disease. One challenge of microarray studies is the fact that the number n of samples collected is relatively small compared to the number p of genes per sample, which is usually in the thousands. In statistical terms, this very large number of predictors compared to a small number of samples or observations makes the classification problem difficult. This is known as the curse of dimensionality. An efficient way to address this problem is by using dimensionality reduction techniques. Principal Component Analysis (PCA) is a leading method for ...
Background: Bacteria employ a variety of adaptation strategies during the course of chronic infections. Understanding bacterial adaptation can facilitate the identification of novel drug targets for better treatment of infectious diseases. Transcriptome profiling is a comprehensive and high-throughput approach for characterization of bacterial clinical isolates from infections. However, exploitation of the complex, noisy and high-dimensional transcriptomic dataset is difficult and often hindered by low statistical power. Results: In this study, we have applied two kinds of unsupervised analysis methods, principal component analysis (PCA) and independent component analysis (ICA), to extract and characterize the most informative features from the transcriptomic dataset generated from cystic fibrosis (CF) Pseudomonas aeruginosa isolates. ICA was shown to be able to efficiently extract biologically meaningful features from the transcriptomic dataset and improve the clustering patterns of CF isolates. ...
NOTE: Where studies included discovery and validation cohorts, diagnostic metrics of the validation set were included for analysis. Abbreviations: EAC, esophageal adenocarcinoma; ESCC, esophageal squamous cell carcinoma; GAC, gastric adenocarcinoma; CRC, colorectal adenocarcinoma; UPLC-TQMS, ultra-performance liquid chromatography-triple quadrupole mass spectrometry; NMR, nuclear magnetic resonance spectroscopy; ESI-TOFMS, electrospray ionization time-of-flight mass spectrometry; RRLC, rapid resolution liquid chromatography; GC-MS, gas chromatography mass spectrometry; HPLC, high-performance liquid chromatography; FTICR-MS, Fourier transform ion cyclotron resonance mass spectrometry; MS/MS, tandem mass spectrometry; TQMRM, triple quadrupole multiple reaction monitoring; DI, direct ionization; SPME, solid phase microextraction; PLS-DA, partial least squares discriminant analysis; ROC, receiver operating characteristic curve; PCA, principal component analysis; OPLS-DA, orthogonal projection to latent structures ...
For anyone in need of a concise, introductory guide to principal components analysis, this book is a must. Through effective use of simple mathematical and geometrical examples and multiple real-life examples (such as crime statistics, indicators of drug abuse, and educational expenditures)--and by minimizing the use of matrix algebra--the reader can quickly master this technique and put it to immediate use. In addition, the author shows how this technique can be used in tandem with other multivariate analysis techniques, such as multiple regression and discriminant analysis. Flexible in his presentation, Dunteman speaks to students at differing levels, beginning or advanced, bringing them new material that is both accessible and useful. "Two of the best attributes of the book are the prolific use of good examples--primarily social science based--and the repetition of basics. . . . This book is a useful addition to the work in this area." --Issues in Researching Sexual Behavior. Most academic researchers and ...
In recent years, many algorithms based on kernel principal component analysis (KPCA) have been proposed, including kernel principal component regression (KPCR). KPCR can be viewed as a non-linearization of principal component regression (PCR), which uses ordinary least squares (OLS) for estimating its regression coefficients. We use PCR to dispose of the negative effects of multicollinearity in regression models. However, it is well known that the main disadvantage of OLS is its sensitivity to the presence of outliers. Therefore, KPCR can be inappropriate for data sets containing outliers. In this paper, we propose a novel nonlinear robust technique using a hybridization of KPCA and R-estimators. The proposed technique is compared to KPCR and gives better results than KPCR.
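A hedged sketch of the KPCR baseline that the paper improves on: kernel PCA provides nonlinear component scores, and ordinary least squares is fit on those scores. Synthetic data; the kernel, gamma and component count are illustrative, and the robust R-estimator variant proposed in the paper is not shown.

```python
# Kernel PCA for nonlinear component scores, then OLS on the scores (KPCR sketch)
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

kpcr = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf", gamma=0.5),  # nonlinear "principal components"
    LinearRegression(),                                   # OLS on the component scores
)
kpcr.fit(X, y)
print(kpcr.score(X, y))                                   # in-sample R^2
# A robust variant, as the paper proposes, would replace OLS with an R-estimator.
```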
TY - JOUR
T1 - All sparse PCA models are wrong, but some are useful. Part I
T2 - Computation of scores, residuals and explained variance
AU - Camacho, J.
AU - Smilde, A. K.
AU - Saccenti, E.
AU - Westerhuis, J. A.
PY - 2020/1/15
Y1 - 2020/1/15
N2 - Sparse Principal Component Analysis (sPCA) is a popular matrix factorization approach based on Principal Component Analysis (PCA) that combines variance maximization and sparsity with the ultimate goal of improving data interpretation. When moving from PCA to sPCA, there are a number of implications that the practitioner needs to be aware of. A relevant one is that scores and loadings in sPCA may not be orthogonal. For this reason, the traditional way of computing scores, residuals and explained variance that is used in classical PCA can lead to unexpected properties and therefore incorrect interpretations in sPCA. This also affects how sPCA components should be visualized. In this paper we illustrate this problem both theoretically and ...
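A hedged illustration of the central point, using scikit-learn on synthetic data (the penalty alpha is illustrative): classical PCA loadings are orthonormal, whereas sparse PCA loadings generally are not, which is why the classical formulas for scores, residuals and explained variance can mislead.

```python
# Classical PCA loadings are orthonormal; sparse PCA loadings generally are not
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 4))
X = latent @ rng.normal(size=(4, 20)) + 0.1 * rng.normal(size=(100, 20))
X -= X.mean(axis=0)

pca = PCA(n_components=3).fit(X)
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

print(np.allclose(pca.components_ @ pca.components_.T, np.eye(3)))    # True
print(np.allclose(spca.components_ @ spca.components_.T, np.eye(3)))  # typically False
```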
It is important to manage leaks in water distribution systems with smart water technologies. In order to reduce water loss, research on the main factors of the water pipe network affecting non-revenue water (NRW) is being actively carried out. In recent years, research has been conducted to estimate NRW using statistical analysis techniques such as Artificial Neural Networks (ANN) and Principal Component Analysis (PCA). Research on identifying the factors that affect NRW in the target area is actively underway. In this study, principal components selected through Multiple Regression Analysis (MRA) are reclassified and applied to NRW estimation using PCA-ANN. The results show that the principal components estimated through PCA are connected to the NRW estimation using ANN. In the detailed NRW estimation methodology presented in the study, which simulates PCA-ANN after selecting statistically significant factors by MRA, the forward method showed higher NRW estimation accuracy than the other MRA methods.
Background: In this paper we apply the principal-component analysis filter (Hotelling filter) to reduce noise from dynamic positron-emission tomography (PET) patient data, for a number of different radio-tracer molecules. We furthermore show how preprocessing images with this filter improves parametric images created from such a dynamic sequence. We use zero-mean unit variance normalization prior to performing a Hotelling filter on the slices of a dynamic time-series. The Scree-plot technique was used to determine which principal components should be rejected in the filter process. This filter was applied to [11C]-acetate on heart and head-neck tumors, [18F]-FDG on liver tumors and brain, and [11C]-Raclopride on brain. Simulations of blood and tissue regions with noise properties matched to real PET data were used to analyze how quantitation and resolution are affected by the Hotelling filter. Summing varying parts of a 90-frame [18F]-FDG brain scan, we created 9-frame dynamic scans with image statistics ...
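A hedged sketch of the filtering idea on a synthetic dynamic series: normalize each pixel's time course to zero mean and unit variance, project onto the leading principal components, reconstruct, and undo the normalization. The number of retained components, chosen in the paper via a Scree plot, is fixed arbitrarily here.

```python
# PCA ("Hotelling") filter sketch on a synthetic dynamic image series
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels = 30, 1000
signal = np.outer(np.linspace(1.0, 0.2, n_frames), rng.uniform(0.5, 1.5, n_pixels))
frames = signal + rng.normal(scale=0.3, size=(n_frames, n_pixels))   # noisy dynamic data

# Zero-mean, unit-variance normalization across time for each pixel
mu = frames.mean(axis=0)
sd = frames.std(axis=0)
Z = (frames - mu) / sd

# PCA over the time dimension and reconstruction from the leading components
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                            # retained components (Scree-plot choice, illustrative)
Z_filt = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]

frames_filtered = Z_filt * sd + mu               # undo the normalization
```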
Principal component analysis is a popular tool for performing dimensionality reduction on a dataset. PCA performs a linear transformation of a dataset (having possibly correlated variables) into a set of linearly uncorrelated variables (called principal components). The transformation is chosen so that the leading components capture as much of the variance of the data as possible. In practice, you would select a subset of the principal components to represent your dataset in a reduced dimension. The Principal Component Analysis card provides a visual representation of a dataset in a reduced dimension.
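A minimal sketch of that workflow with scikit-learn (synthetic data; the number of retained components is illustrative):

```python
# Minimal dimensionality-reduction workflow with scikit-learn (illustrative)
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # possibly correlated variables

pca = PCA(n_components=3)                       # keep a subset of the components
scores = pca.fit_transform(X)                   # data represented in reduced dimension

print(scores.shape)                             # (200, 3)
print(pca.explained_variance_ratio_)            # variance captured per component
```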
Inter-subject variability is a major hurdle for neuroimaging group-level inference, as it creates complex image patterns that are not captured by standard analysis models and jeopardizes the sensitivity of statistical procedures. A solution to this problem is to model random subject effects by using the redundant information conveyed by multiple imaging contrasts. In this paper, we introduce a novel analysis framework, where we estimate the amount of variance that is fit by a random effects subspace learned on other images; we show that a principal component regression estimator outperforms other regression models and that it fits a significant proportion (10% to 25%) of the between-subject variability. This proves for the first time that the accumulation of contrasts in each individual can provide the basis for more sensitive neuroimaging group analyses.
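A hedged sketch of a principal component regression estimator of the kind referred to above: PCA on the predictors, linear regression on the component scores, and cross-validated R² as the proportion of variability fit by the learned subspace. Data, dimensions and component count are synthetic and illustrative, not the authors' pipeline.

```python
# PCA on the predictors, then linear regression on the component scores (PCR)
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
latent = rng.normal(size=(80, 5))                          # low-rank structure in the predictors
Z = latent @ rng.normal(size=(5, 50)) + 0.3 * rng.normal(size=(80, 50))
y = latent @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + rng.normal(scale=0.5, size=80)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
r2 = cross_val_score(pcr, Z, y, cv=5, scoring="r2")
print(r2.mean())                                           # share of variability fit by the subspace
```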
The common task in matrix completion (MC) and robust principal component analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These problems have gained great attention from various areas in applied sciences recently, especially after the publication of the pioneering works of Candès et al. One fundamental result in MC and RPCA is that nuclear norm based convex optimizations lead to exact low-rank matrix recovery under suitable conditions. In this paper, we extend this result by showing that strongly convex optimizations can guarantee exact low-rank matrix recovery as well. The result in this paper not only provides sufficient conditions under which the strongly convex models lead to exact low-rank matrix recovery, but also guides us on how to choose suitable parameters in practical algorithms.
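For reference, the nuclear-norm convex program from Candès et al. that this result builds on (Principal Component Pursuit for RPCA) is the following; the strongly convex variants studied in the paper modify this baseline.

```latex
% Principal Component Pursuit (Candès et al.): recover a low-rank L and a
% sparse S from the observed matrix M; \lambda balances the two norms.
\begin{aligned}
\min_{L,\,S}\quad & \|L\|_{*} + \lambda \|S\|_{1}\\
\text{subject to}\quad & L + S = M
\end{aligned}
```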
A principal components analysis was carried out on male crania from the northeast quadrant of Africa and selected European and other African series. Individuals, not predefined groups, were the units of study, while nevertheless keeping group membership in evidence. The first principal component seems to largely capture size variation in crania from all of the regions. The same general morphometric trends were found to exist within the African and European crania, although there was some broad separation along a cline. Anatomically, the second principal component captures predominant trends denoting a broader to narrower nasal aperture combined with a similar shape change in the maxilla, an inverse relation between face-base lengths (projection) and base breadths, and a decrease in anterior base length relative to base breadth. The third principal component broadly describes trends within Africa and Europe: specifically, a change from a combination of a relatively narrower face and longer vault, ...
In this paper we proposed a novel classification system to distinguish among elderly subjects with Alzheimer's disease (AD), mild cognitive impairment (MCI), and normal controls (NC). The method employed the magnetic resonance imaging (MRI) data of 178 subjects consisting of 97 NCs, 57 MCIs, and 24 ADs. First, all these three-dimensional (3D) MRI images were preprocessed with atlas-registered normalization. Then, gray matter images were extracted and the 3D images were undersampled. Afterwards, principal component analysis was applied for feature extraction. In total, 20 principal components (PCs) were extracted from the 3D MRI data using the singular value decomposition (SVD) algorithm, and 2 PCs were extracted from additional information (consisting of demographics, clinical examination, and derived anatomic volumes) using alternating least squares (ALS). On the basis of the 22 features, we constructed a kernel support vector machine decision tree (kSVM-DT). The error penalty parameter C and kernel ...
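A hedged sketch of the feature-extraction and classification pipeline described above: PCA (computed via SVD inside scikit-learn) reduces the flattened images to a small number of component scores, which feed a kernel SVM. The kSVM-DT tree structure, the ALS step for clinical features, and the real MRI data are not reproduced; everything below is illustrative.

```python
# PCA feature extraction from high-dimensional "images", then a kernel SVM
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 5000))                 # flattened, undersampled image voxels (synthetic)
y = rng.integers(0, 3, size=178)                 # three diagnostic groups (NC/MCI/AD), random here

clf = make_pipeline(PCA(n_components=20, svd_solver="randomized"),
                    SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random labels
```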
Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score ...
The classical functional linear regression model (FLM) and its extensions, which are based on the assumption that all individuals are mutually independent, have been well studied and are used by many researchers. This independence assumption is sometimes violated in practice, especially when data with a network structure are collected in scientific disciplines including marketing, sociology and spatial economics. However, relatively few studies have examined the application of FLM to data with network structures. We propose a novel spatial functional linear model (SFLM) that incorporates a spatial autoregressive parameter and a spatial weight matrix into FLM to accommodate spatial dependencies among individuals. The proposed model is relatively flexible, as it takes advantage of FLM in handling high-dimensional covariates and of the spatial autoregressive (SAR) model in capturing network dependencies. We develop an estimation method based on functional principal component analysis (FPCA) and maximum ...
This chapter discusses several popular clustering functions and open source software packages in R and their feasibility of use on larger datasets. These will include the kmeans() function, the pvclust package, and the DBSCAN (density-based spatial clustering of applications with noise) package, which implement K-means, hierarchical, and density-based clustering, respectively. Dimension reduction methods such as PCA (principal component analysis) and SVD (singular value decomposition), as well as the choice of distance measure, are explored as methods to improve the performance of hierarchical and model-based clustering methods on larger datasets. These methods are illustrated through an application to a dataset of RNA-sequencing expression data for cancer patients obtained from the Cancer Genome Atlas Kidney Clear Cell Carcinoma (TCGA-KIRC) data collection from The Cancer Imaging Archive (TCIA).
We propose a simulated annealing algorithm (stochastic non-negative independent component analysis, SNICA) for blind decomposition of linear mixtures of non-negative sources with non-negative coefficients. The demixing is based on a Metropolis-type Monte Carlo search for least dependent components, with the mutual information between recovered components as a cost function and their non-negativity as a hard constraint. Elementary moves are shears in two-dimensional subspaces and rotations in three-dimensional subspaces. The algorithm is geared at decomposing signals whose probability densities peak at zero, the case typical in analytical spectroscopy and multivariate curve resolution. The decomposition performance on large samples of synthetic mixtures and experimental data is much better than that of traditional blind source separation methods based on principal component analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS, BTEM). ...
This study aims to compare academically advanced science students and gifted students in terms of attitude toward science and motivation toward science learning. The survey method was used for data collection with the help of two instruments: an Attitude Toward Science scale and a Motivation Toward Science Learning scale. The reliability and validity of the scores on the instruments were examined using principal component analysis with varimax rotation, owing to the existence of a new group for validation of the instruments. The study involved 93 advanced science students and 12 gifted students who had IQ scores higher than 130 on the WISC-R. The results of the study showed that the adapted instrument was valid and reliable for measuring motivation toward science learning in the context of advanced science classrooms. The comparisons of the groups in terms of the variables of the study showed that there is no statistically significant difference between the ...
Chaka sheep, named after Chaka Salt Lake, are adapted to a harsh, highly saline environment. They are known for their high-grade meat quality and are a valuable genetic resource in China. Furthermore, the Chaka sheep breed has been designated a geographical symbol of agricultural products by the Chinese Ministry of Agriculture. The genomes of 10 Chaka sheep were sequenced using next-generation sequencing and compared to those of additional Chinese sheep breeds (Mongolian: Bayinbuluke and Tan; Tibetan: Oula sheep) to explore the population structure, genetic diversity and positive selection signatures of the breed. Principal component analysis and a neighbor-joining tree indicated that Chaka sheep significantly diverged from Bayinbuluke, Tan, and Oula sheep. Moreover, they were found to have descended from unique ancestors (K = 2 and K = 3) according to the structure analysis. The Chaka sheep genome demonstrated genetic diversity comparable to that of the other three breeds, as indicated by observed heterozygosity (Ho), ...
The Dniester-Carpathian region has attracted much attention from historians, linguists, and anthropologists, but remains insufficiently studied genetically. We have analyzed a set of autosomal polymorphic loci and Y-chromosome markers in six autochthonous Dniester-Carpathian population groups: 2 Moldavian, 1 Romanian, 1 Ukrainian and 2 Gagauz populations. To gain insight into the population history of the region, the data obtained in this study were compared with corresponding data for other populations of Western Eurasia. The analysis of 12 Alu human-specific polymorphisms in 513 individuals from the Dniester-Carpathian region showed a high degree of homogeneity among Dniester-Carpathian as well as southeastern European populations. The observed homogeneity suggests either a common ancestry of all southeastern European populations or a strong gene flow between them. Nevertheless, tree reconstruction and principal component analyses allow the distinction between Balkan-Carpathian (Macedonians, ...
This course is in two halves: machine learning and complex networks. We will begin with an introduction to the R language and to visualisation and exploratory data analysis. We will describe the mathematical challenges and ideas in learning from data. We will introduce unsupervised and supervised learning through theory and through application of commonly used methods (such as principal components analysis, k-nearest neighbours, support vector machines and others). Moving to complex networks, we will introduce key concepts of graph theory and discuss model graphs used to describe social and biological phenomena (including Erdős-Rényi graphs, small-world and scale-free networks). We will define basic metrics to characterise data-derived networks, and illustrate how networks can be a useful way to interpret data. This level 7 (Masters) version of the module will have additional extension material for self-study incorporated into the projects. This will require a deeper understanding of the subject ...
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. In order to keep pace with the rapidly growing field of metabolomics and the heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components of the computer-based XCMS Online platform. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real-time notifications for data processing, and visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box plots, extracted ion chromatograms, and ...
We use spectral methods (SVD) to build statistical language models. The resulting vector models of language are then used to predict a variety of properties of words, including their entity type (e.g., person, place, organization ...), their part of speech, and their meaning (or at least their word sense). Canonical Correlation Analysis (CCA), a generalization of Principal Component Analysis (PCA), gives context-oblivious vector representations of words. More sophisticated spectral methods are used to estimate Hidden Markov Models (HMMs) and generative parsing models. These methods give state estimates for words and phrases based on their contexts, and probabilities for word sequences. These again can be used to improve performance on many NLP tasks. Core to this work is the use of the Eigenword, a real-valued vector associated with a word that captures its meaning in the sense that distributionally similar words have similar eigenwords. Eigenwords are computed as the singular vectors of the ...
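A hedged toy sketch of the CCA idea behind eigenwords: two views of each word (for example, left-context and right-context co-occurrence features) are projected into a shared low-dimensional space, giving one real-valued vector per word. The data, dimensions and view construction are purely illustrative, not the authors' pipeline.

```python
# Toy CCA between two "views" of the same words (illustrative only)
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_words, d_left, d_right, k = 500, 100, 100, 10
latent = rng.normal(size=(n_words, k))                     # hidden per-word "state"
X_left = latent @ rng.normal(size=(k, d_left)) + 0.1 * rng.normal(size=(n_words, d_left))
X_right = latent @ rng.normal(size=(k, d_right)) + 0.1 * rng.normal(size=(n_words, d_right))

cca = CCA(n_components=k)
U, V = cca.fit_transform(X_left, X_right)                  # per-word embeddings from each view
print(U.shape)                                             # (500, 10) "eigenword"-style vectors
```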
Mass cytometry enabled the measurement of more than 30 intracellular and cell surface markers on each single cell, with an additional six channels reserved for metal barcoding for sample multiplexing.30 The convenient single-tube labeling of multiplexed samples combined with semiautomatic analysis makes the technique highly efficient. We were able to recapitulate the expected hematologic response, and could also demonstrate the debulking of leukemic CD34 cells in the PB by immunophenotyping as early as one week after the start of therapy. Importantly, by simultaneously probing key intracellular phosphorylation targets of the Bcr-Abl1 signaling network,32 we monitored changes in signal transduction of individual cell types for each patient undergoing TKI therapy. Unsupervised principal component analysis of these early changes in signal transduction allowed patients to be identified according to their BCR-ABL1, indicating a possible future prognostic impact of this approach. The proportion of ...
The PM-bound metallic elements in 43 daily PM1 samples collected at Mount Tai during a summer campaign were analyzed by ICP-MS. The PM1 concentrations ranged between 11.02 and 83.71 µg m-3, with an average of 38.98 µg m-3, and were influenced by meteorological events, exhibiting an increasing trend in the early stage of rain, followed by a significant decrease denoting efficient scavenging. Higher elemental concentrations were detected at Mount Tai than at other overseas background sites. According to the enrichment factor (EF) and geo-accumulation index (Igeo) calculations, among the 16 considered elements, Mn, Al, Co, Sr, Mo, Fe, Ca, V, Ti and Ni in the PM1 were mainly of crustal origin, while Cu, Cr, As, Zn, Pb and Cd were primarily due to anthropogenic causes. Source identification via Pearson correlation analysis and principal component analysis showed that coal mining and coal burning activities, metal processing industries and vehicle emissions were common sources of heavy metals on Mount Tai; ...
Enhancers and promoters are cis-acting regulatory elements associated with lineage-specific gene expression. Previous studies showed that different categories of active regulatory elements are in regions of open chromatin, and each category is associated with a specific subset of post-translationally marked histones. These regulatory elements are systematically activated and repressed to promote commitment of hematopoietic stem cells along separate differentiation paths, including the closely related erythrocyte (ERY) and megakaryocyte (MK) lineages. However, the order in which these decisions are made remains unclear. To characterize the order of cell fate decisions during hematopoiesis, we collected primary cells from mouse bone marrow and isolated 10 hematopoietic populations to generate transcriptomes and genome-wide maps of chromatin accessibility and histone H3 acetylation at lysine 27 (H3K27ac). Principal component analysis of transcriptional and open chromatin profiles demonstrated that ...
BACKGROUND: Cryopreservation introduces iatrogenic damage to sperm cells due to excess production of reactive oxygen species (ROS) that can damage sperm macromolecules and alter the physiochemical properties of sperm cells. These altered properties can affect the biological potential of the sperm cell towards fertility. OBJECTIVE: The study was designed to assess the role of oxidative stress in sperm DNA damage upon cryopreservation. MATERIALS AND METHODS: Semen samples (160) were classified into fertile and infertile on the basis of Computer Assisted Semen Analysis (CASA), and cryopreserved. Thawed samples were analyzed for the 8OHdG marker, sperm chromatin dispersion (SCD)-based DNA fragmentation index (SCD-DFI) and ROS levels. Receiver Operating Characteristics (ROC) analysis was performed to find the specificity and sensitivity of SCD-DFI in assessing sperm DNA integrity. Principal component analysis (PCA) was performed to group semen parameters. RESULTS: SCD-DFI significantly correlates with 8OHdG in ...
Methods: We used principal component analysis and factor analysis to comprehensively evaluate the management quality of the 6 hospitals.
TY - JOUR
T1 - Two-way principal component analysis for matrix-variate data, with an application to functional magnetic resonance imaging data
AU - Huang, Lei
AU - Reiss, Philip T.
AU - Xiao, Luo
AU - Zipunnikov, Vadim
AU - Lindquist, Martin
AU - Crainiceanu, Ciprian M.
PY - 2017/4/1
Y1 - 2017/4/1
N2 - Many modern neuroimaging studies acquire large spatial images of the brain observed sequentially over time. Such data are often stored in the form of matrices. To model these matrix-variate data we introduce a class of separable processes using explicit latent process modeling. To account for the size and two-way structure of the data, we extend principal component analysis to achieve dimensionality reduction at the individual level. We introduce necessary identifiability conditions for each model and develop scalable estimation procedures. The method is motivated by and applied to a functional magnetic resonance imaging study designed to analyze the relationship between pain and brain ...
To explore the clinical patterns of patients with IgG4-related disease (IgG4-RD) based on laboratory tests and the number of organs involved. Twenty-two baseline variables were obtained from 154 patients with IgG4-RD. Based on principal component analysis (PCA), patients with IgG4-RD were classified into different subgroups using cluster analysis. Additionally, an IgG4-RD composite score (IgG4-RD CS) was calculated for each patient as a comprehensive score by principal component evaluation. Multiple linear regression was used to establish the ...
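A hedged sketch of the analysis pattern described above: PCA on the standardized baseline variables, clustering on the component scores, and a composite score formed as a variance-weighted sum of those scores. The component and cluster counts, the clustering algorithm, and the weighting are assumptions for illustration, not the authors' exact procedure.

```python
# PCA on standardized variables, clustering on the scores, and a composite score
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(154, 22))                       # 154 patients x 22 baseline variables (synthetic)

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=5)
scores = pca.fit_transform(Z)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# One common "principal component evaluation" composite: scores weighted by the
# proportion of variance each component explains (an assumption here)
composite = scores @ pca.explained_variance_ratio_[:5]
print(clusters[:10], composite[:5])
```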
Implements biplots (2D and 3D) of multivariate data based on principal components analysis, together with diagnostic tools for assessing the quality of the reduction.
The HPPRINCOMP procedure is a high-performance procedure that performs principal component analysis. It is a high-performance version of the PRINCOMP procedure in SAS/STAT software. PROC HPPRINCOMP accepts raw data as input and can create output data sets that contain eigenvalues, eigenvectors, and standardized or unstandardized principal component scores. Principal component analysis is a multivariate technique for examining relationships among several quantitative variables. The choice between using factor analysis and using principal component analysis depends in part on your research objectives. You should use the HPPRINCOMP procedure if you are interested in summarizing data and detecting linear relationships. You can use principal component analysis to reduce the number of variables in regression, clustering, and so on. Principal component analysis was originated by Pearson (1901) and later developed by Hotelling (1933). The application of principal components is discussed by Rao (1964); ...
The present study addresses the challenge of identifying the features of the centre of pressure (CoP) trajectory that are most sensitive to postural...
Principal component analysis (PCA) has gained popularity as a method for the analysis of high-dimensional genomic data. However, it is often difficult to interpret the results because the principal components are linear combinations of all variables, and the coefficients (loadings) are typically nonzero. These nonzero values also reflect poor estimation of the true vector loadings; for example, for gene expression data, biologically we expect only a portion of the genes to be expressed in any tissue, and an even smaller fraction to be involved in a particular process. Sparse PCA methods have recently been introduced for reducing the number of nonzero coefficients, but these existing methods are not satisfactory for high-dimensional data applications because they still give too many nonzero coefficients. Here we propose a new PCA method that uses two innovations to produce an extremely sparse loading vector: (i) a random-effect model on the loadings that leads to an unbounded penalty at the origin and
This article documents and examines the integration of grain markets in Europe across the early modern/late modern divide and across distances and regions. It relies on principal component analysis to identify market structures. The analysis finds that a European market emerged only in the nineteenth century, but the process had earlier roots. In early modern times a fall in trading costs was followed by an increase in market efficiency. Gradually expanding processes of integration unfolded in the long run. Early modern regional integration was widespread but uneven, with North-Western Europe reaching high levels of integration at a particularly early stage. Lowland European markets tended to be larger and better integrated than those in land-locked Europe, especially within large, centralised states. In the nineteenth century, national markets grew in old states, but continental and domestic dynamics had become strictly linked.
This article describes the major statistical analyses used in a large-scale follow-up study of prelingually deaf children implanted before 5 yrs of age. The data from this longitudinal project posed a number of challenges that required a compromise among statistical sophistication, ease of interpretation, consistency with analyses used following the initial wave of data collection, and attention to limited sample size and missing data. Primary analyses were based on principal components analysis to form composite measures of highly correlated variables followed by hierarchical multiple regression to determine the contribution of predictor sets ordered to reflect important causal assumptions and conceptual questions ...