Probability and Statistics Research Project
Name: Lakeisha M. Henderson
ID: @02181956
Spring 2007
Abstract
Table of Contents
Principal Component Analysis (PCA)
    Definition
    Uses of PCA
    Illustrative Example of PCA
    Method to Determine PCA
Basic Analysis of Variance (ANOVA)
    Purpose and Definition of ANOVA
    Illustrative Example of ANOVA
Risk Based Design Concepts
    Definition
    Predictions and Relation to Risk Based Designs
Principal Components Analysis (PCA)
Definition:
Principal Components Analysis is a method that reduces data dimensionality by performing a covariance analysis between factors. As such, it is suitable for data sets with many dimensions. It is a way of identifying patterns in data and expressing the data in a form that highlights their similarities and differences. Since patterns can be hard to find in high-dimensional data, where the luxury of graphical representation is not available, PCA is a powerful tool for analyzing such data. The other main advantage of PCA is that once these patterns have been found, the data can be compressed, i.e., the number of dimensions can be reduced, without much loss of information.
Technically speaking, PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA can be used for dimensionality reduction in a data set while retaining those characteristics of the data set that contribute most to its variance, by keeping lower-order principal components and ignoring higher-order ones. Such low-order components often contain the “most important” aspects of the data, although this is not guaranteed and depends on the application.
Figure 1. The blue lines represent two consecutive principal components. Note that they are at right angles to each other.
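As a rough illustration of this transformation (a sketch reusing the same made-up two-variable data set from the snippet above, not an example from the report), the principal components can be obtained as the eigenvectors of the covariance matrix, ordered by decreasing eigenvalue, and the data reduced to one dimension by projecting onto the first component:

import numpy as np

# Hypothetical two-variable data set (same values as the covariance sketch above).
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: the covariance matrix is symmetric
order = np.argsort(eigvals)[::-1]        # order the components by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The columns of eigvecs are the mutually orthogonal principal components.
# Keeping only the first column reduces the data from two dimensions to one
# while retaining the direction of greatest variance.
scores_1d = Xc @ eigvecs[:, :1]
print(eigvals)     # variance captured by each component
print(scores_1d)   # the data expressed along the first principal component

In practice the same components are often computed more stably from a singular value decomposition of the centered data matrix, but the eigendecomposition above matches the covariance-based description given in the text.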
PCA is also called the (discrete) Karhunen-Loève transform (or KLT, named after Kari Karhunen and Michel Loève) or the Hotelling transform (in honor of Harold Hotelling). Unlike other linear transforms, PCA does not have a fixed set of basis vectors. Its basis vectors depend on the data set.
Uses of PCA:
Some of the direct uses of PCA involve the identification of groups of inter-related variables and the reduction of the number of variables. However, one major