Sufficient dimension reduction for normal models

In Section 3 we present a class of locally efficient estimators and identify the efficient member. The determination of d then becomes a problem of determining the number of non-zero eigenvalues of the corresponding matrix, termed the kernel matrix in the literature.
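As a concrete illustration of the eigenvalue-counting step, here is a minimal numpy sketch; the kernel-matrix argument M, the threshold tol, and the function name are our own illustrative choices, not notation from the text:

    import numpy as np

    def estimate_d(M, tol=1e-8):
        # d is read off as the number of eigenvalues of the symmetric
        # kernel matrix M that are distinguishable from zero.
        eigvals = np.linalg.eigvalsh(M)    # eigenvalues in ascending order
        return int(np.sum(eigvals > tol))

In practice M is replaced by a sample estimate, so its trailing eigenvalues are only approximately zero, and a formal test or an information criterion (as discussed below) must decide which ones to count.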

Nearly all techniques for estimating E(Y | X) employ some type of dimension reduction for X, either estimated or imposed as an intrinsic part of the model or method. In Section 2 we propose a simple parameterization of the central subspace and highlight the semiparametric approach to estimating the central subspace.
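When the reduction targets the regression mean, the working model is usually displayed as below; the link g and the p x d matrix B are generic symbols of the SDR literature rather than notation defined in this text:

    % Mean dimension reduction: E(Y | X) depends on X only through B^T X,
    % with an unspecified link function g.
    E(Y \mid X) = g\left( B^{\mathrm{T}} X \right),
    \qquad B \in \mathbb{R}^{p \times d}, \quad d \le p .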

This advantage is not confined to prediction, but may accrue in other phases of a regression analysis as well. One typical semiparametric tool is to derive estimators from their corresponding influence functions.

The problem here is unfortunately much more complex than the one they considered, and their results do not apply and cannot be adapted. Moreover, deciding the amount of penalty through a data-driven procedure is usually difficult in practice.

In practice, deciding d is not a simple task for the non-parametric methods we reviewed in Section 2. We give a broad overview of the ideas underlying a particular class of dimension-reduction methods that includes principal components (PCs), along with an introduction to the corresponding methodology.

When the dimension p is large, such test procedures are computationally inefficient and the cumulative type-I error may not be negligible. In this review article we leave out the derivation and proof of this surprising phenomenon. New methods are proposed for prediction in regressions with many predictors.

Our analysis is thus readily applicable when some covariates are discrete or categorical. In addition, the consistency of the bootstrap procedure has not yet been established in the literature.

Sufficient dimension reduction

Empirical studies suggest that the performance of the BIC-type procedure can vary a great deal when different penalties are used, unless the sample size is very large.
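The following sketch makes the dependence on the penalty explicit; the default penalty d * log(n) / sqrt(n) is a hypothetical choice made only for illustration, which is exactly the arbitrariness the sentence above points at:

    import numpy as np

    def bic_choose_d(eigvals, n, penalty=None):
        # BIC-type rule: maximize the sum of the d largest eigenvalues of
        # the estimated kernel matrix minus a penalty increasing in d.
        eigvals = np.sort(np.asarray(eigvals))[::-1]
        if penalty is None:
            penalty = lambda d: d * np.log(n) / np.sqrt(n)  # illustrative only
        scores = [eigvals[:d].sum() - penalty(d) for d in range(len(eigvals) + 1)]
        return int(np.argmax(scores))

Changing the constant in front of the penalty can change the selected d, which is why empirical performance varies unless n is very large.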

For instance, in discriminant analysis, X is a random vector of features observed in one of a number of subpopulations indicated by the categorical response Y, and no discriminatory information will be lost if classifiers are restricted to the reduction R(X). In deriving the influence function family and its efficient member, we use the geometric technique illustrated in [2] and [24].
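Fisher's linear discriminant analysis is the textbook instance of this: with K classes the between-class covariance has rank at most K - 1, so classification based on at most K - 1 linear combinations of X loses no discriminatory information. A self-contained numpy sketch, with function and variable names of our own choosing:

    import numpy as np

    def lda_reduction(X, y, d):
        # Between-class (Sb) and within-class (Sw) scatter matrices;
        # Sw is assumed nonsingular for this sketch.
        classes = np.unique(y)
        mu = X.mean(axis=0)
        p = X.shape[1]
        Sw = np.zeros((p, p))
        Sb = np.zeros((p, p))
        for c in classes:
            Xc = X[y == c]
            Xc_centered = Xc - Xc.mean(axis=0)
            Sw += Xc_centered.T @ Xc_centered
            diff = (Xc.mean(axis=0) - mu)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
        # Leading generalized eigenvectors of (Sb, Sw) span the discriminant
        # subspace; d <= K - 1 directions suffice.
        vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
        order = np.argsort(vals.real)[::-1]
        W = vecs[:, order[:d]].real
        return X @ W    # the reduced features R(X), one row per observation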

Despite the variety of estimation methods, it is unclear whether any of these estimators is optimal in the sense that it can exhaustively estimate the entire central subspace while attaining the minimum possible asymptotic estimation variance.

This is partially caused by the complexity of estimating a space rather than a parameter. The main idea of the SED method is to cast the spectral decomposition problem resulting from an inverse regression method into a least squares problem.
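A minimal sketch of that recasting, assuming a symmetric positive semidefinite kernel estimate M: minimizing ||M - B C^T||_F^2 by alternating least squares recovers an orthonormal basis for the span of the d leading eigenvectors without an explicit eigendecomposition. The variable names are ours, and this is a generic least-squares formulation rather than the exact algorithm of the cited method:

    import numpy as np

    def spectral_by_least_squares(M, d, iters=200, seed=0):
        # Minimize ||M - B @ C.T||_F^2 over B, C in R^{p x d} by alternating
        # exact least-squares updates; for symmetric PSD M, span(B) converges
        # to the d-dimensional leading eigenspace.
        p = M.shape[0]
        B = np.random.default_rng(seed).standard_normal((p, d))
        for _ in range(iters):
            C = M @ B @ np.linalg.inv(B.T @ B)   # optimal C given B
            B = M @ C @ np.linalg.inv(C.T @ C)   # optimal B given C
        Q, _ = np.linalg.qr(B)                   # orthonormal basis of span(B)
        return Q

The attraction of the least-squares form is that weights or constraints can be added to the objective, which is awkward to do directly in an eigendecomposition.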

Efficiency bounds are of fundamental importance to the theoretical considerations. Sections 1a, 1b, 2 and 3 are devoted largely to this review. A potential advantage of sufficient dimension reduction is that predictions based on an estimated reduction R may be substantially less variable than those based on the full X, without introducing worrisome bias.

A Review on Dimension Reduction

In any case, the estimator of E(Y | X) is (2), and it is therefore computationally attractive.

Two models for Bayesian supervised dimension reduction (Kai Mao, Department of Statistical Science, Duke University, Durham, NC, U.S.A.): models with likelihoods and priors are given for both methods, and efficient posterior estimates of the effective dimension-reduction space can be obtained; one of the two uses a mixture model rather than a simple normal model for X | y.

This is the approach taken in. In statistics, sufficient dimension reduction (SDR) is a paradigm for analyzing data that combines the ideas of dimension reduction with the concept of sufficiency. Dimension reduction has long been a primary goal of regression analysis.
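In the standard notation of this literature, the defining condition and the central subspace read as follows (B is a p x d matrix; this display is ours):

    % Y depends on X only through the d linear combinations B^T X:
    Y \perp\!\!\!\perp X \mid B^{\mathrm{T}} X .
    % The central subspace is the intersection of all subspaces span(B)
    % satisfying this condition; under mild conditions it exists and is
    % itself a dimension-reduction subspace:
    \mathcal{S}_{Y \mid X} = \bigcap \left\{ \operatorname{span}(B) : Y \perp\!\!\!\perp X \mid B^{\mathrm{T}} X \right\},
    \qquad d = \dim\left( \mathcal{S}_{Y \mid X} \right).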

A linearity condition is required by all existing sufficient dimension reduction methods that deal with missing data. To remove the linearity condition, two new estimating equation procedures are proposed to handle missing response in sufficient dimension reduction: the complete-case estimating equation approach and the inverse probability weighted estimating equation approach.
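As a rough sketch of the second idea: if delta_i indicates whether Y_i is observed and pi(X_i) = P(delta_i = 1 | X_i) is the (estimated) observation probability, complete cases are reweighted by 1/pi so the weighted estimating equation stays unbiased. All names below are ours, and psi stands in for whatever estimating function the chosen SDR method supplies; this is a generic IPW construction, not the specific procedure of the cited work:

    import numpy as np

    def ipw_estimating_equation(psi, delta, pi_hat, clip=1e-3):
        # psi   : (n, q) array of estimating-function values psi(X_i, Y_i; beta);
        #         rows with delta == 0 may contain anything, they get weight 0
        # delta : (n,) 0/1 indicators that the response is observed
        # pi_hat: (n,) estimated P(delta = 1 | X)
        w = np.where(delta == 1, 1.0 / np.clip(pi_hat, clip, None), 0.0)
        return (w[:, None] * psi).mean(axis=0)   # set = 0 and solve over beta

The complete-case variant simply averages psi over the rows with delta == 1; it requires stronger missingness assumptions to remain unbiased.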

Sufficient dimension-reduction methods are designed to estimate a population parameter. The ideas of a sufficient reduction and the central subspace can be used to further our understanding of existing methodology and to guide the development of new methodology.

Sufficient dimension reduction and prediction in regression

We address these issues by using normal models for the conditional distribution. A Review on Dimension Reduction (Yanyuan Ma and Liping Zhu) surveys the current literature of dimension reduction with an emphasis on the two most popular models, in which the dimension reduction affects the conditional distribution and the conditional mean, respectively.
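One standard way to write such a normal inverse model, in the spirit of principal fitted components (our display; the paper's exact specification may differ):

    % Normal model for the conditional distribution of X given Y = y,
    % with Gamma of rank d and constant covariance Delta:
    X \mid (Y = y) \sim N_p\left( \mu + \Gamma \nu_y , \, \Delta \right),
    \qquad \Gamma \in \mathbb{R}^{p \times d} .
    % Under this model a minimal sufficient reduction is linear in X:
    R(X) = \Gamma^{\mathrm{T}} \Delta^{-1} X .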

condition; sliced inverse regression; sufficient dimension reduction.

Sufficient Reductions in Regressions With Exponential Family Inverse Predictors: we consider the problem of identifying sufficient reductions in regressions with predictors that can be all continuous, all categorical, or mixtures of continuous and categorical variables.
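A minimal display of the inverse-predictor idea, as our own paraphrase rather than the paper's statement: if the predictors given Y = y follow an exponential family, the conditional density takes the form below, and a sufficient reduction is linear in the canonical statistic t(X) rather than in X itself:

    % Exponential family model for the inverse predictors:
    f(x \mid y) = h(x) \exp\left\{ \eta_y^{\mathrm{T}} t(x) - \psi(\eta_y) \right\} .
    % A sufficient reduction is linear in t(X), where the columns of alpha
    % span the space in which the natural parameter eta_y varies with y:
    R(X) = \alpha^{\mathrm{T}} t(X) .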

Sufficient Dimension Reduction in Regression With Categorical.
