Group Invariance in Statistical Inference

In statistical analysis with large numbers of variables, the invariance approach is becoming increasingly popular because of its usefulness in deriving better statistical procedures.

In this book, multivariate statistical inference is presented through invariance (contributor: Narayan C. Giri). This fact is often present in mutually correlated multivariate data. In the case of multivariate data, no well-accepted measure of variation between a mean vector μ and a covariance matrix Σ is available.

In recent years Cox and Hinkley, Efron, and Amari (a, b), among others, have reconsidered the problem of estimating μ when the coefficient of variation is known, in the context of curved models. We want to estimate μ under the loss function (3.). Since G acts transitively on the parametric space, we conclude from (3.) ... To find the best equivariant estimator, one minimizes the risk R(θ, μ̂) over equivariant estimators μ̂. Since the columns of O, except the first, are arbitrary so long as they are orthogonal to the first column, it is easy to see that the components of μ̂ ...

The following theorem gives the best equivariant estimator (BEE) of μ. Maximizing (3.) ... The maximum likelihood estimator (mle) μ̂ is clearly equivariant and hence is dominated by the BEE of Theorem 3. In the univariate case the mle is ... Amari (a, b) proposed, through a geometric approach, what he called the dual mle, which is also equivariant.

As the group acts transitively on the parametric space, the risk function, using (3.) ... The joint probability density function of Y and W under the assumption μ = μ₀ ... Using (3.) ... The result that g(t) is strictly decreasing in t tells what one may intuitively do if one has an idea of the true value of C and observes many large values: normally one is suspicious of their effect on the sample mean, and there is a tendency to shrink the sample mean towards the origin. That is what our estimator does. The result that T(v) is strictly increasing in v relates the BEE of the mean for C known to the class of minimax estimators of the mean for C unknown.
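The shrinkage behavior described above hinges on equivariance under the scale group. A minimal numerical sketch (the estimator below is illustrative, not the BEE derived in the text): an estimator d is scale-equivariant, d(cx) = c d(x), whenever the data-dependent shrinkage factor is itself scale-invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

# An estimator d is equivariant under the scale group G = {x -> c x, c > 0}
# if d(c x) = c d(x) for every c > 0.
def sample_mean(x):
    return x.mean(axis=0)

def shrinkage(x):
    # Toy shrinkage toward the origin (illustrative only, not the BEE of
    # the text): the mean is scaled by a factor built from a
    # scale-invariant statistic, so equivariance is preserved.
    xbar = x.mean(axis=0)
    n, p = x.shape
    t2 = n * (xbar @ xbar) / np.trace(np.cov(x.T))  # scale-invariant
    return (t2 / (1.0 + t2)) * xbar

x = rng.normal(size=(20, 3))
c = 3.7
for d in (sample_mean, shrinkage):
    assert np.allclose(d(c * x), c * d(x))  # equivariance check
```

The shrinkage factor t2/(1 + t2) lies in (0, 1), so the estimate is always pulled toward the origin, mirroring the intuition in the text.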



Efron and Morris have shown that a necessary condition ... So a truncated version of our estimator could be a compromise between the best one can do when one knows the value of C and the worst one can do by using an incorrect value of C. The following interesting application of this model is given by Kent, Briden, and Mardia. The natural remanent magnetization (NRM) in rocks is known to have, in general, originated in one or more relatively short time intervals during rock-forming or metamorphic events, during which the NRM is frozen in by falling temperature, grain growth, etc.

The NRM acquired during each such event is a single vector magnetization parallel to the then-prevailing geomagnetic field and is called a component of the NRM. By thermal, alternating-field, or chemical demagnetization in stages, these components can be identified.


Resistance to these treatments is known as "stability of remanence". At each stage of the demagnetization treatment one measures the remanent magnetization as a vector in 3-dimensional space. These observations are represented by vectors X₁, ... The mean vectors are assumed to possess some specific structure, such as collinearity. If Eq. (3.) ... From (3.). To find the value of k which maximizes the likelihood we compute the matrix of mixed derivatives ∂² log L / ∂μ ∂μ'.

Hence the mle μ̂ ... The risk function of d₀ depends on C. This phenomenon changes markedly when C varies. When C is small, dₚ is markedly superior to the others. On the other hand, when C is large ... These conclusions are not exact, as the risk of d₀, d₁ ... The corresponding transformation on the sufficient statistic is given by (2.). A maximal invariant in the space of (X̄, S) under G is R.
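To make the notion of a maximal invariant concrete, here is a small sketch (names are ours, not from the text) for the orthogonal group acting on Rᵖ by x ↦ gx: the squared norm is invariant, and any two points of equal norm are linked by some orthogonal g, so it is a maximal invariant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Under the orthogonal group acting as x -> g x, R(x) = ||x||^2 is
# invariant; since any two points with equal norm are linked by some
# orthogonal g, R is in fact a maximal invariant.
def random_orthogonal(p, rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal g;
    # fixing column signs gives a Haar-distributed draw
    q, r = np.linalg.qr(rng.normal(size=(p, p)))
    return q * np.sign(np.diag(r))

x = rng.normal(size=4)
g = random_orthogonal(4, rng)
# invariance check: R(g x) = R(x)
assert np.isclose((g @ x) @ (g @ x), x @ x)
```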


The following theorem gives a characterization of the equivariant estimator of θ.

Exercises

Basu: Find the conditions under which the maximum likelihood estimator is equivariant.

References

Amari, S., Differential geometry of curved exponential families: curvatures and ..., Ann. ...
Cox, D. R. and Hinkley, D. V., ...
Efron, B., The geometry of exponential families, Ann. Statist.
Efron, B. and Morris, C., Stein's estimation rule and its competitors: an empirical Bayes approach, ...
Efron, B. and Morris, C., Stein's paradox in statistics, Sci. Amer.
Kariya, T., Giri, N., and Perron, F., ...
Kent, J., Briden, J., and Mardia, K., ...
Perron, F. and Giri, N., On the best equivariant ... normal population, 40, ...
Giri, N., Best equivariant ... models, J. Multivariate Analysis, 32, ...

Introduction

In Chapter 3 we dealt with some applications of invariance in statistical estimation.



We discuss in this chapter various testing problems concerning means of multivariate normal distributions. Testing problems concerning discriminant coefficients, as they are somewhat related to mean problems, will also be considered. This chapter will also include testing problems concerning the multiple correlation coefficient and a related problem concerning multiple correlation with partial information. We will be concerned with invariant tests only, and we will take a different approach to derive tests for these problems.

Rather than deriving the likelihood ratio tests and studying their optimum properties, we will look for a group under which the testing problem remains invariant and then find tests based on the maximal invariant under the group. When we find the optimum invariant test using the above approach, the likelihood ratio test can be no better, since it is itself an invariant test. Its pdf is given by (1.). In what follows we denote by X a p-dimensional linear vector space and by X' the dual space of X.

The uniqueness of the normal distribution follows from the following two facts: (a) the distribution of X is completely determined by the family of distributions of b'X ... For relevant results on the univariate and multivariate normal distributions we refer to Giri. We shall denote a p-variate normal with pdf (4.) ...


This is known as Hotelling's T². ..., and 0 otherwise. From (4.) ... As there exists a left invariant measure (Example 2.) ... Hence we obtain the following theorem for Problem 1. Let T₁ be the group of translations such that t₁ ∈ T₁ translates the last p - p₁ components of each Xᵅ, α = 1, ..., N. Hence we get the following theorem, using the arguments of Theorem 4.
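As a concrete sketch of the statistic just introduced (variable names are ours), Hotelling's T² for H₀: μ = μ₀ can be computed from the sample mean and covariance, and its invariance under nonsingular linear transformations of the data checked numerically.

```python
import numpy as np

rng = np.random.default_rng(2)

def hotelling_t2(x, mu0):
    # T^2 = N (xbar - mu0)' S^{-1} (xbar - mu0); with normal data and
    # H0: mu = mu0 true, (N - p) / (p (N - 1)) * T^2 has an F(p, N - p)
    # distribution.
    n, p = x.shape
    xbar = x.mean(axis=0)
    s = np.cov(x.T)                    # unbiased sample covariance
    d = xbar - mu0
    return n * (d @ np.linalg.solve(s, d))

x = rng.normal(size=(30, 3))           # simulated sample, mu0 = 0 true
t2 = hotelling_t2(x, np.zeros(3))
assert t2 >= 0.0                       # a nonnegative quadratic form

# invariance: for mu0 = 0, T^2 is unchanged under x -> x g' for any
# nonsingular g, the group action discussed in the text
g = rng.normal(size=(3, 3))
assert np.isclose(hotelling_t2(x @ g.T, np.zeros(3)), t2)
```

The invariance check is exactly why T² arises as a function of the maximal invariant under the general linear group.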

Let T be the translation group which translates the last ... A corresponding maximal invariant in the parametric space of (μ, Σ) is (δ₁, δ₂), defined by ... However, for fixed p, the likelihood ratio test is approximately optimum as the sample size N becomes large (Wald). Thus, if p is not large, it seems likely that the sample sizes commonly occurring in practice will be large enough for this result to hold.

However, if the dimension p is large, it might be that the sample size N must be extremely large for this result to apply.


The LBI test rejects H₂₀ whenever ... For testing H₂ ... It follows from (4.) ... Thus, from (4.) ...

The Classification Problem (Two Populations)

Given two different p-dimensional populations P₁, P₂ with probability density functions p₁, p₂ ... We assume for certainty that it belongs to one of the two populations. Let (π, 1 - π) be the a priori probability distribution on the states of nature.

Let us now specialize to the case of two p-variate normal populations with different means but the same covariance matrix: Fisher's discriminant function. In practice Σ and the μᵢ are usually unknown, and we consider estimation and testing problems for the discriminant coefficients. We consider two testing problems. Problem 4: it remains invariant under the group G of p × p nonsingular matrices g of the form ... Since the right-hand side of (4.) ... In other words ... For Problem 4, the likelihood ratio test is given in (4.). By Theorem 2. and Lemma 4. ... For Problem 5 the likelihood ratio test is given in (4.).
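For the two-population normal case above, here is a minimal numerical sketch (all numbers are illustrative) of Fisher's discriminant function and the classification rule it induces.

```python
import numpy as np

# For two p-variate normal populations with means mu1, mu2 and common
# covariance Sigma, Fisher's discriminant function is
#   U(x) = (mu1 - mu2)' Sigma^{-1} x,
# and with equal priors and costs the rule assigns x to population 1
# when U(x) exceeds the midpoint value (mu1 - mu2)' Sigma^{-1} (mu1 + mu2) / 2.
mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])

a = np.linalg.solve(sigma, mu1 - mu2)    # discriminant coefficients
cut = a @ (mu1 + mu2) / 2.0              # midpoint cutoff

def classify(x):
    return 1 if a @ x > cut else 2

# points at the two means are classified to their own populations
assert classify(mu1) == 1 and classify(mu2) == 2
```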

Here χ²ₖ(λ) denotes a noncentral chi-square random variable with noncentrality parameter λ and k degrees of freedom. Also ... It may be checked that a corresponding maximal invariant in the parametric space is ρ. For Problem 6 the test which rejects H ... is invariant. By the Neyman-Pearson Lemma we get the theorem. As in Theorem 4. ... We consider here the following two testing problems. Problem 7: partition ... and Σ as ..., where N has been reduced by one from what it was originally. For Problem 7 the likelihood ratio test is given in (4.). Using the Neyman-Pearson Lemma and (4.) ... Prove Equations (4.).

Prove (4.). Let π₁, ... In Problem 5 let T with ...

References

Ghosh, Invariance in Testing and Estimation, Tech. Report, Indian Statistical Institute, Calcutta, India. ... Soc., 54.

Introduction

The invariance principle, by restricting attention to invariant tests only, allows us to consider a subclass of the class of all available tests.

Naturally a question arises: under what conditions is an optimum invariant test also optimum among the class of all tests, if such optimality can be achieved at all? A powerful support for this comes from the celebrated unpublished work of Hunt and Stein, popularly known as the Hunt-Stein theorem, who towards the end of the Second World War proved that under certain conditions on the transformation group G there exists an invariant test of level α which is also minimax, i.e. ... Though many proofs of this theorem have now appeared in the literature, the version which appeared in Lehmann is probably closest in spirit to that originally developed by Hunt and Stein.

Pitman gave intuitive reasons for the use of best invariant procedures in hypothesis testing problems concerning location and scale parameters. Wald had the idea that for certain nonsequential location parameter estimation problems, under certain restrictions on the group, there exists an invariant estimator which is minimax. Peisakoff in his Ph.D. thesis ... Kiefer proved an analogue of the Hunt-Stein theorem for continuous ... Wesler generalized it to modified minimax tests based on slices of the parametric space. It is well known that for statistical inference problems we can, without any loss of generality, characterize statistical tests as functions of a sufficient statistic instead of the sample observations.

Such a characterization introduces considerable simplifications to the sample space without losing any information concerning the problem at hand. Though such a characterization in terms of the maximal invariant is too strong a result to expect, the Hunt-Stein theorem has made a considerable contribution in that direction.

We shall present only the statement of this theorem; for a detailed discussion and a proof the reader is referred to Lehmann or Ghosh. Let G be the group ...

Hunt-Stein Theorem. Let B be a σ-field of subsets of G such that ... gx ... is in A × B ..., right invariant in the sense that there exists a sequence of distribution functions νₙ ... Suppose ...

It is a remarkable feature of this theorem that its assumptions have nothing to do with the statistical aspects of the problem; they involve only the group G. However, for the problem of admissibility of statistical tests the situation is more complicated.

If G is a finite or a locally compact group, the best invariant test is admissible. For other groups the nature of V plays a dominant role. The proof of Theorem 5. ... Let m denote the number of elements of G. We define ... As observed in Chapter 2, invariant measures exist for many groups and they are essentially unique. But frequently they are not finite, and as a result they cannot be taken as probability measures.
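For the finite-group case just mentioned, with m the number of elements of G, any test can be made invariant by averaging it over the group. A toy sketch (our notation) with the two-element sign-change group:

```python
import numpy as np

rng = np.random.default_rng(4)

# For a finite group G with m elements acting on the sample space, any
# test phi can be made invariant by averaging:
#   psi(x) = (1/m) * sum_{g in G} phi(g x).
def phi(x):
    # an arbitrary, non-invariant test function (illustrative)
    return 1.0 if x.sum() > 1.0 else 0.0

group = [lambda x: x, lambda x: -x]      # G = {identity, sign change}, m = 2

def psi(x):
    return sum(phi(g(x)) for g in group) / len(group)

x = rng.normal(size=3)
# psi is invariant: psi(g x) = psi(x) for every g in G, while phi
# generally is not
assert np.isclose(psi(-x), psi(x))
```

Since psi is an average of tests, it again takes values in [0, 1], and its risk is an average of the risks of the phi(g·), which is the mechanism behind admissibility of the best invariant test for finite G.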

The group G_T(p) of nonsingular lower triangular matrices of order p also satisfies the conditions of this theorem (see Lehmann, p. ...). For each point (δ, η) in the parametric space ... This is a local theory in the sense that p(x; λ, η) is close to p(x; 0, η) ... Obviously, then, every test of level α would be locally minimax in the sense of the trivial criterion obtained by not subtracting α in the numerator and the denominator of (5.). It may be remarked that our method of proof of (5.) ...

A result of this type can be proved under various possible types of conditions, of which we choose a form that is convenient in many applications, stating other possible generalizations and simplifications as remarks. Using (5.) ... the uniformity of the last condition is unnecessary. The boundedness of U and the equicontinuity of the distribution of U can be similarly weakened. Specifically, one may be interested to know whether a level α critical region R which satisfies (5.) ... rather than just one in Theorem 5. As further refinements are invoked ... The theory of locally minimax tests as developed above, and the theory of asymptotically minimax tests (far in distance from the null hypothesis) to be developed later in this chapter, serve two purposes.

First, the obvious point of demonstrating such properties for their own sake, though well-known valid doubts remain. Secondly, and in our opinion more importantly, these properties can give an indication of what to look for in the way of genuine minimax or admissibility properties of certain tests, even though the latter do not follow from the local or asymptotic properties.

Consider Problem 1 of Chapter 4. The general linear group G(p) of nonsingular matrices of order p operates as ... However, as discussed earlier (see also James and Stein, p. ...) ... this theorem does apply to the subgroup G_T(p). Thus, for each λ, there is a level α test which is almost invariant, and hence for this problem invariant under G_T(p) (see Lehmann, p. ...).

From the local point of view, the denominator of (5.) ... In place of the one-dimensional maximal invariant R under G_T(p) we obtain a p-dimensional maximal invariant (R₁, ...). We now verify the validity of Theorem 2. From (5.). Consider Problem 2 of Chapter 4. Consider Problem 3 of Chapter 4. The nuisance parameter in this ... Hotelling's T² ... So Hotelling's T² ... However the subgroup (G_T(p), T), where G_T(p) is the multiplicative group of nonsingular lower triangular matrices of order p whose first column contains only zeros except for the first element, satisfies the Hunt-Stein conditions.

The action of the translation group T is to reduce the ... We treat the latter formulation, considering ... to have zero mean and positive definite covariance matrix Σ, and for invariance ... A maximal invariant in the ... From Theorem 4. ... The first-order terms in the expressions, which involve only mathematical expectations of these quantities, correspond to each other.

Thus any test based on R with constant power on ... is minimax for Problem 7 if and only if it is Bayes. It is obvious that the assumption of Theorem 5. ... Hence we prove the following theorem. Moreover, any region R of the form (5.) ... The R²-test which rejects H₂₀ whenever ... depends only on ρ at ρ² ... From Sec. ... So we get the following theorem. In order for an invariant test of H₀ against H₁ to be minimax ... The rejection region of the most powerful test is obtained from (5.). The reader familiar with large-sample theory may recall that in this setting it is difficult to compare directly approximations to such small probabilities for different families of tests, and one instead compares their logarithms.

While our considerations are asymptotic in a sense not involving sample sizes, we encounter the same difficulty, which accounts for the form (5.). Assume that (5.) holds. There are two possible cases, (5.) ... The assumption (5.) ... For every p, N, α ... Again (5.) ... The fact that Hotelling's test satisfies (3.) ... Using Theorem 4. From Theorem 5.

But from Theorem 5. ... Giri, Kiefer and Stein attacked the problem of the minimax property of Hotelling's T²-test among all level α tests and proved its minimax property for the very special case p = 2, N = 3 by solving a Fredholm integral equation of the first kind, which is transformed into an "overdetermined" linear differential equation of first order. Linnik, Pliss and Salaevskii extended this result to N = 4 and p = 2, using a slightly more complicated argument to construct an overdetermined boundary problem with a linear differential operator.

In the setting of Examples 5. ... Giri, Kiefer and Stein (a); we then give the method of proof of Linnik, Pliss and Salaevskii. In particular, Hotelling's T² ...

An examination of the integrand of (5.) ... Obviously (5.) ... On the other hand, if there are a ... and a constant c₁ for which (5.) ... Of course we do not assert that the left-hand side of (5.) ... The computation is somewhat simplified by the fact that for fixed c and λ we can at this point compute the unique value of c₁ for which (5.) holds. Note that for p = 1, Γ₁ is a single point, but the dependence on γ in other cases is genuine. We could now try to solve (5.). Instead we expand both sides of (5.). This is solved by treatment of the corresponding homogeneous equation and by variation of parameters to yield ...

However, there is nothing in the theory of the Stieltjes transform which tells us that an m(x, γ) ... The first condition follows from (5.). The condition (d) follows from the fact that m(x, γ) ... To prove (5.) ... The associated solution ψ to the hypergeometric equation has the representation ... We now prove (5.). By direct evaluation in terms of elementary integrals when γ = 0 we get, using (5.) ... We now verify (5.). From (5.). With a view to solving this problem we now consider the equation ...

Multiplying the r-th equation in (5.) ... It can also be shown that any linear combination of the solutions in (5.) ... In the following paragraph we show that ... does not vanish in the interval ... Let us denote it by x₀. Let us now return to (5.). Now, to obtain (5.), consider the setting of Example 5.: the R²-test ... This subgroup satisfies the conditions of the Hunt-Stein theorem. An examination of the integrand in (5.) ... Also write ... Hence the expression in square brackets equals one.

Instead they proceeded as for Hotelling's T². That proof does not rely on the somewhat heuristic development which follows, but we nevertheless sketch that development to give an idea of where the ... of (5.) comes from. The generating function ... Solving (5.) ... The first condition will follow from (5.) ... The former is obvious. To prove the positivity of c ...

The proofs of (5.) ... The first expression in square brackets in (5.) ... The expression inside the square brackets is easily seen to be zero by computing the coefficient of z ... We now verify (5.). The integrand in (5.) ... Hence we prove (5.). Consider the setting of Sec. ...

Exercises

1. Prove in detail (5.).
2. Prove (5.).
3. Prove that in Problem 8 of Chapter 4 no invariant test under G_T ...

References

Behara, M. and Giri, N., Locally and asymptotically minimax tests of some multivariate decision problems, Archiv der Mathematik, 4.
Giri, N., Locally and asymptotically minimax tests of a multivariate problem.

Giri, N. and Kiefer, J., Local and asymptotic minimax properties of multivariate tests, Ann. Math. Statist.
Giri, N. and Kiefer, J., Minimax character of the R²-test in the simplest case, Ann. Math. Statist., 34.
Giri, N., Kiefer, J., and Stein, C., Minimax properties of the T²-test in the simplest case, Ann. Math. Statist.
James, W. and Stein, C., Estimation with quadratic loss, Proc. Fourth Berkeley Symp.
Kiefer, J., Invariance, minimax sequential estimation, and continuous time processes, Ann. Math. Statist.
Linnik, Yu. V., Pliss, V. A., and Salaevskii, O. V., ..., Soviet Math. Doklady, 3.
Linnik, Yu. V., Approximately minimax detection of a vector signal on a Gaussian background, Soviet Math. Doklady, 7.
Peisakoff, M., Transformation Parameters, Ph.D. thesis.
Pitman, E. J. G., Tests of hypotheses concerning location and scale parameters, Biometrika, 31, 200.
Simaika, An optimum property of two statistical tests, Biometrika, 33.
Stein, C., The admissibility of Hotelling's T²-test, Ann.


Math. Statist., 27.
Wald, A., Contributions to the theory of statistical estimation and testing hypotheses, Ann. Math. Statist., 10.
Wesler, O., Invariance theory and a modified minimax principle, Ann. Math. Statist.

In multivariate analysis the role of the multivariate normal distribution is of utmost importance, for the obvious reason that many results relating to the univariate normal distribution have been successfully extended to the multivariate normal distribution.

However, in actual practice the assumption of multinormality does not always hold, and the verification of multinormality in a given set of data is often very cumbersome, if not impossible. Very often the optimum statistical procedures derived under the assumption of multivariate normality remain optimum when the underlying distribution is a member of a family of elliptically symmetric distributions.

Elliptically Symmetric Distributions (Univariate)

The members of S have the same mean and the same correlation matrix.

The class S contains probability densities whose contours of equal density ... This family of distributions satisfies most properties of the multivariate normal.
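As a numerical sketch of this family (the constructive representation below is a standard one we assume, not taken from the text): an elliptically symmetric vector can be written as X = μ + R·A·U, with U uniform on the unit sphere, R an independent nonnegative radius, and AA' = Σ; choosing R chi-distributed with p degrees of freedom recovers the multivariate normal member.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stochastic representation of an elliptically symmetric vector:
#   X = mu + R * A @ U,  U uniform on the unit sphere, R >= 0 an
# independent radius, A A' = Sigma. Different radius laws give the
# normal, multivariate t, and other members of the family.
def elliptical_sample(n, mu, sigma, radius, rng):
    p = len(mu)
    a = np.linalg.cholesky(sigma)                      # A with A A' = Sigma
    z = rng.normal(size=(n, p))
    u = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on sphere
    r = radius(n, rng)                                 # independent radii
    return mu + r[:, None] * (u @ a.T)

mu = np.zeros(2)
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
# chi-distributed radius with p = 2 d.f. gives exactly the normal case
x = elliptical_sample(50_000, mu, sigma,
                      lambda n, g: np.sqrt(g.chisquare(2, n)), rng)
# the sample covariance should then approximate Sigma
assert np.allclose(np.cov(x.T), sigma, atol=0.1)
```

Swapping in, say, a radius proportional to the square root of p·F(p, ν) would give the multivariate t member with the same elliptical contours, which is the point of the family.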