Please click here to download the Matlab source codes for regenerating the results of the following paper. The article and its source codes can be cited as follows: Amin Zollanvari, Alex Pappachen James, Reza Sameni, “A Theoretical Analysis of the Peaking Phenomenon in Classification”.

(The article is currently under consideration at Elsevier Pattern Recognition Letters, November 2017)

A Theoretical Analysis of the Peaking Phenomenon in Classification

Amin Zollanvari, Alex Pappachen James, Reza Sameni

A. Zollanvari and A. P. James are with the Department of Electrical and Computer Engineering, Nazarbayev University, Kazakhstan (emails: amin.zollanvari@nu.edu.kz; apj@ieee.org); R. Sameni is with the School of Electrical and Computer Engineering, Shiraz University, Iran (email: rsameni@shirazu.ac.ir).

Abstract

The peaking phenomenon is often referred to as the rationale behind dimensionality reduction in classification. Concomitant with the lack of a mathematical framework for analytically studying this phenomenon is an attitude among practitioners of constantly and significantly reducing dimensionality. In this letter, we placed the problem in a rigorous asymptotic framework that allowed us to analytically study the phenomenon. Such a focus is warranted because it can provide more insight into the underlying factors of the phenomenon.

Index Terms

Peaking Phenomenon, Linear Discriminant Analysis, Classification Error Rate, Multiple Asymptotic Analysis

I. Introduction

In a classification setting, dimensionality reduction is generally attested to by the peaking phenomenon, which, as McLachlan describes it [1, p. 391], states: “For training samples of finite size, the performance of a given discriminant rule in a frequentist framework does not keep on improving as the number p of feature variables is increased. Rather, its overall unconditional error rate will stop decreasing and start to increase as p is increased beyond a certain threshold.”

Commonly, the point after which adding more features (variables) deteriorates the performance of the classifier is referred to as the optimal number of features [2, 3]. In pattern recognition, the peaking phenomenon was initially observed in Hughes’ work on characterizing the expected performance of a discrete classification rule as a function of sample size and dimensionality [4]. As pointed out in [2, 5], the performance of a Bayesian classifier cannot worsen as long as the old set of features is recoverable from the new set. For example, the peaking phenomenon in Hughes’ work is due to estimating the cell probabilities from a finite number of observations and plugging them into the Bayes rule, which leads to a suboptimal procedure [5–7]. These suboptimal procedures are what McLachlan refers to as “rules in a frequentist framework” (see the comment above).

These studies and many other comments about the peaking phenomenon (see  [1, p. 15];  [8, p. 561]; and  [3]) or even the use of a phrase such as “curse of dimensionality” when this phenomenon is actually meant (e.g., see  [9, p. 25];  [10, p. 101]; and  [11, p. 9]) all point in one direction: in a non-Bayesian framework and above a certain dimensionality, the performance of the classifier starts deteriorating instead of improving steadily  [12].

There is rarely an analytical investigation of this phenomenon. The lack of an analytical study is mainly attributed to the paucity of a proper mathematical framework. Even when an analytical study exists, the underlying framework is unsuited to studying the phenomenon. For example, consider the work of Jain and Waller, who analytically studied the phenomenon in connection with linear discriminant analysis (LDA) in a multivariate Gaussian model of binary classification. Let δ_p denote the Mahalanobis distance between two p-dimensional Gaussian distributions N(μ_{i,p}, Σ_p), i = 0, 1; to wit,

\delta_p \triangleq \sqrt{(\mu_{0,p} - \mu_{1,p})^T \Sigma_p^{-1} (\mu_{0,p} - \mu_{1,p})}.   (1)

Jain and Waller [2] employed Bowker and Sitgreaves’ expressions for the expected error of LDA [13, 14] to show that the minimal increase in δ_p that justifies adding another feature to avoid peaking is approximately δ_{p+1} − δ_p ≈ δ_p²/(2n − p − 3), where n is the sample size in each class (a small worked example follows this paragraph). Nevertheless, this type of analysis has two limitations. First, the analysis only looks into adding one feature at a time. What if the cumulative discriminatory effect of adding many features overrides the detrimental effect of additional features? Second, the underlying mathematical groundwork used to obtain Bowker and Sitgreaves’ expressions is based on classical large-sample, fixed-dimension asymptotics (n → ∞, p fixed). In general, the result of such an asymptotic analysis is valid in the finite-sample regime only when the sample size is much larger than the number of variables [15]. As a result, one may not use this type of analysis to study the effect of having a number of features comparable to, or far exceeding, the sample size.
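To give a concrete sense of scale for this criterion (an illustrative calculation based on the approximation above, not a value reported in [2]; the numbers n = 50, p = 10, and δ_p = 2 are arbitrary), the threshold is

\delta_{p+1} - \delta_p \approx \frac{\delta_p^2}{2n - p - 3} = \frac{4}{100 - 10 - 3} \approx 0.046,

so the eleventh feature would need to raise the Mahalanobis distance by roughly 0.046 to be worth adding under this rule.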

In this letter, we employ the double asymptotic machinery grounded in the work of pioneers such as Raudys and Serdobolskii in pattern recognition [16–18] and Wigner in physics [19]. We extend the framework to a multiple asymptotic setting that allows us to analytically study the effect of adding many features on the expected error of a simple classifier.

II. A Prototypical Example of Simple Classifiers

Consider a set of n_T = n_0 + n_1 independent and identically distributed training samples in ℝ^p, where x_1, x_2, …, x_{n_0} come from population Π_0 and x_{n_0+1}, x_{n_0+2}, …, x_{n_T} come from population Π_1. Population Π_i is assumed to follow a multivariate Gaussian distribution N(μ_i, Σ), for i = 0, 1. With this assumption, the Bayes plug-in rule with a known covariance matrix is obtained by the linear discriminant,

W(\bar{x}_0, \bar{x}_1, x) = \left(x - \frac{\bar{x}_0 + \bar{x}_1}{2}\right)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1),   (2)

where \bar{x}_0 = \frac{1}{n_0}\sum_{i=1}^{n_0} x_i and \bar{x}_1 = \frac{1}{n_1}\sum_{i=n_0+1}^{n_T} x_i are the sample means of the two classes. Following, for example, [20, 21], we assume the covariance matrix Σ is known. This assumption is made not only to simplify the analysis but, more importantly, to avoid singularity of the sample covariance matrix when p > n_T, a setting that allows us to study the effect of incorporating many variables in the classifier. The designed LDA classifier, ψ(x), is then given by

\psi(x) = \begin{cases} 1, & \text{if } W(\bar{x}_0, \bar{x}_1, x) \le 0, \\ 0, & \text{if } W(\bar{x}_0, \bar{x}_1, x) > 0. \end{cases}   (3)

When Σ in (2) is replaced by the identity matrix, this classifier reduces to the Euclidean-distance classifier  [22]. For i = 0,1, the actual error of ψ(x) is given by

\varepsilon_i = P\big((-1)^i W(\bar{x}_0, \bar{x}_1, x) \le 0 \mid x \in \Pi_i, \bar{x}_0, \bar{x}_1\big)
= \Phi\left(\frac{(-1)^{i+1}\left(\mu_i - \frac{\bar{x}_0 + \bar{x}_1}{2}\right)^T \Sigma^{-1}(\bar{x}_0 - \bar{x}_1)}{\sqrt{(\bar{x}_0 - \bar{x}_1)^T \Sigma^{-1}(\bar{x}_0 - \bar{x}_1)}}\right),   (4)

where Φ(·) denotes the cumulative distribution function of a standard normal random variable. Then the (overall) error ε is the probability of misclassification across both classes; to wit, ε = α_0 ε_0 + α_1 ε_1, where α_i is the prior probability of class i.
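As an illustration of (2)–(4), the following Python sketch (the paper's companion code is in Matlab; this fragment is an independent toy example whose dimensions, sample sizes, and means are arbitrary choices) draws training data from two Gaussian classes with a known common covariance, forms the discriminant W, and evaluates the exact conditional errors from (4).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# illustrative problem sizes and parameters (not taken from the paper)
p, n0, n1 = 30, 25, 25
mu0, mu1 = np.zeros(p), 0.15 * np.ones(p)
Sigma = np.eye(p)                         # known common covariance matrix
Sigma_inv = np.linalg.inv(Sigma)

# training data and per-class sample means
X0 = rng.multivariate_normal(mu0, Sigma, size=n0)
X1 = rng.multivariate_normal(mu1, Sigma, size=n1)
xbar0, xbar1 = X0.mean(axis=0), X1.mean(axis=0)

def W(x):
    """Linear discriminant (2) with known covariance."""
    return (x - (xbar0 + xbar1) / 2) @ Sigma_inv @ (xbar0 - xbar1)

def psi(x):
    """Classifier (3): class 1 if W(x) <= 0, class 0 otherwise."""
    return 1 if W(x) <= 0 else 0

# exact conditional errors (4); computable because the true means are known
d = xbar0 - xbar1
s = np.sqrt(d @ Sigma_inv @ d)
eps0 = norm.cdf(-(mu0 - (xbar0 + xbar1) / 2) @ Sigma_inv @ d / s)
eps1 = norm.cdf(+(mu1 - (xbar0 + xbar1) / 2) @ Sigma_inv @ d / s)

x_new = rng.multivariate_normal(mu1, Sigma)   # a test point from class 1
print(psi(x_new), eps0, eps1, 0.5 * (eps0 + eps1))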

III. Double and Multiple Asymptotic Conditions

Hereafter, and for simplicity, we assume n0 = n1 = n. Consider the following sequence of Gaussian discrimination problems relative to the classifier (3):

\Xi_p = \{\Pi_{0,p}, \Pi_{1,p}, \mu_{0,p}, \mu_{1,p}, \Sigma_p, n_p, \varepsilon_{i,p}\},   (5)

where subindex p denotes dependency of parameters on p and Ξp is restricted by the following conditions:

A.
n_p → ∞, p → ∞, p/n_p → J < ∞.
B.
The Mahalanobis distance (1) is finite and lim_{p→∞} δ_p = δ̄_p, where δ̄_p is an arbitrary limiting constant.

Condition B assures the existence of limits of the relevant statistics [23, 24]. Under the usual ‘double asymptotic’ conditions A and B (abbreviated d.a.), we have (see Theorem 1 and its proof in [23]),

\lim_{p\to\infty} E[\varepsilon_p] = \operatorname{plim}_{p\to\infty} \varepsilon_p = \Phi\left(\frac{-\bar{\delta}_p^2}{2\sqrt{\bar{\delta}_p^2 + 2J}}\right),   (6)

where

\lim_{p\to\infty}(\cdot) \triangleq \lim_{\substack{p\to\infty,\; n_p\to\infty,\; p/n_p\to J \\ \delta_p = O(1),\; \delta_p\to\bar{\delta}_p}} (\cdot),   (7)

where plim_{p→∞}(·) denotes convergence in probability under a similar set of conditions as in (7). To compare the expected error of ψ(x) defined for observations of dimension p_1 with that of the classifier defined in a higher-dimensional space of dimension p_1 + p_2, we generalize the double asymptotic conditions A, B, and (5) to a multiple asymptotic setting. To simplify notation, let

p_{j,k} \triangleq p_j + p_{j+1} + \cdots + p_k,   (8)

where p_{j,j} ≜ p_j, and j is an integer, 0 < j ≤ k. To formalize the setting, assume the following sequence of Gaussian discrimination problems:

\Xi_{p_{j,k}} = \{\Pi_0, \Pi_1, \mu_0, \mu_1, \Sigma, n_0, n_1, \varepsilon_i\},   (9)

where all parameters depend on pj,k (omitted to simplify the notation), and Ξpj,k is restricted by the following conditions:

A.
p_m → ∞, n → ∞, p_m/n → J_m < ∞, for m = j, …, k,
B.
For j ≤ m ≤ k,

\lim_{p_j\to\infty,\,\ldots,\,p_m\to\infty} \delta_{p_{j,m}}^2 = \bar{\delta}_{p_{j,m}}^2,   (10)

where δ²_{p_{j,m}} is obtained from (1) by replacing p with p_{j,m}.

Intuitively, the studied problems consist of n observations from k feature sets of sizes p_j, 0 < j ≤ k. The multiple asymptotic assumptions imply that the feature-sample space scales up at the constant ratios J_j as n → ∞. This can be envisaged as a stretching of the feature-sample space in both directions (features and samples) at constant ratios.

IV. The Possibility of a Multi-hump Phenomenon

From condition B in (10), it is straightforward to see that,

\bar{\Delta}_{p_j,p_k} \triangleq \lim_{p_1\to\infty,\ldots,p_k\to\infty} \left(\delta_{p_{1,k}}^2 - \delta_{p_{1,j}}^2\right) = \bar{\delta}_{p_{1,k}}^2 - \bar{\delta}_{p_{1,j}}^2, \quad 1 \le j < k.   (11)

Lemma 1: For integers j and k where 1 ≤ j < k, we have:

(i)
Δ̄_{p_j,p_k} ≥ 0,
(ii)
Δ̄_{p_1,p_k} = Δ̄_{p_1,p_2} + Δ̄_{p_2,p_3} + ⋯ + Δ̄_{p_{k−1},p_k}.

Proof: Let ε^B_{i,p_{1,j}} denote the error of the Bayes (optimal) classifier for p_{1,j} variables over class i = 0, 1. Replacing the sample means in (4) by the actual means, we have ε^B_{i,p_{1,j}} = Φ(−δ_{p_{1,j}}/2). Consider another set of p_{1,k} variables from which the previous set of p_{1,j} variables is recoverable; that is to say, the smaller set of variables is a subset of the larger one [5]. Theorem 2 in [5] yields ε^B_{i,p_{1,k}} ≤ ε^B_{i,p_{1,j}} and, therefore, δ_{p_{1,j}} ≤ δ_{p_{1,k}}. Using this result in (11) leads to (i). The proof of (ii) is straightforward from condition B and (11).

Theorem 1: Consider the sequence of Gaussian problems defined by (9). For a positive integer k we have

\lim_{p_1,\ldots,p_k\to\infty} E[\varepsilon_{p_{1,k}}] = \operatorname{plim}_{p_1,\ldots,p_k\to\infty} \varepsilon_{p_{1,k}} = \Phi\left(\frac{-\left(\bar{\delta}_{p_1}^2 + \bar{\Delta}_{p_1,p_k}\right)}{2\sqrt{\bar{\delta}_{p_1}^2 + \bar{\Delta}_{p_1,p_k} + 2\sum_{l=1}^{k} J_l}}\right),   (12)

where Δ̄_{p_1,p_k} is obtained from Lemma 1(ii).

Proof: From (11), setting j = 1 gives δ̄²_{p_{1,k}} = δ̄²_{p_1} + Δ̄_{p_1,p_k}. Therefore, (12) is obtained from (6) by replacing δ̄²_p, p → ∞, and J by δ̄²_{p_{1,k}}, p_{1,k} → ∞, and J_1 + ⋯ + J_k, respectively.

Hereafter, and for ease of notation, we only consider the ordinary convergence of the expected error and leave the convergence in probability of the true error implicit. Using Theorem 1, we seek the condition under which E[ε_{p_1}] ⋚ E[ε_{p_1+p_2}]. In this regard, we have the following theorem.

Theorem 2: Consider the sequence of Gaussian problems defined by (9). Then

\lim_{p_1\to\infty} E[\varepsilon_{p_1}] \lesseqgtr \lim_{p_1,p_2\to\infty} E[\varepsilon_{p_1+p_2}] \;\Rightarrow\; \left(\bar{\delta}_{p_1}^2 + 2J_1\right)\bar{\Delta}_{p_1,p_2}^2 + \left(\bar{\delta}_{p_1}^4 + 4J_1\bar{\delta}_{p_1}^2\right)\bar{\Delta}_{p_1,p_2} - 2J_2\bar{\delta}_{p_1}^4 \lesseqgtr 0.   (13)

Proof: From (6), by using Theorem 1 for p_1 + p_2 and the monotonicity of Φ(·), inequality (13) is obtained.

Solving (13) with respect to J_2 and Δ̄_{p_1,p_2} leads to the following corollaries:

Corollary 1:

J_2 \gtreqless g\left(\bar{\delta}_{p_1}^2, J_1, \bar{\Delta}_{p_1,p_2}\right) \;\Rightarrow\; \lim_{p_1\to\infty} E[\varepsilon_{p_1}] \lesseqgtr \lim_{p_1,p_2\to\infty} E[\varepsilon_{p_1+p_2}],   (14)

where

g(a,b,c) \triangleq \frac{c}{2} + \frac{1}{a}\left(2bc + \frac{c^2}{2}\right) + \frac{bc^2}{a^2}.   (15)

Corollary 2:

\bar{\Delta}_{p_1,p_2} \lesseqgtr h\left(\bar{\delta}_{p_1}^2, J_1, J_2\right) \;\Rightarrow\; \lim_{p_1\to\infty} E[\varepsilon_{p_1}] \lesseqgtr \lim_{p_1,p_2\to\infty} E[\varepsilon_{p_1+p_2}],   (16)

where

h(a,b,c) \triangleq \frac{4ac}{\sqrt{a^2 + 8(b+c)(a+2b)} + (a+4b)}.   (17)

In general, we have the following theorem, which shows the possibility of a multi-hump behavior of the expected error of the classifier (in a multiple asymptotic sense) as more feature sets are added, each with limiting ratio J_{i+1} and contribution Δ̄_{p_i,p_{i+1}} to the limiting Mahalanobis distance δ̄²_{p_{1,i}}.

Theorem 3: Consider the sequence of Gaussian problems defined by (9) and let k be a positive integer. Then, depending on the underlying parameters of the sequence of Gaussian problems, (Δ̄_{p_i,p_{i+1}}, δ̄²_{p_{1,i}}, J_1 + ⋯ + J_i, J_{i+1}), any sequence of inequalities constructed by choosing either < or > is possible in

\lim_{p_1\to\infty} E[\varepsilon_{p_1}] \lesseqgtr \lim_{p_1,p_2\to\infty} E[\varepsilon_{p_{1,2}}] \lesseqgtr \cdots \lesseqgtr \lim_{p_1,\ldots,p_k\to\infty} E[\varepsilon_{p_{1,k}}].   (18)

Proof: Proceeding as in the proof of (13), for 1 ≤ i ≤ k − 1 and for arbitrary δ̄²_{p_{1,i}} > 0 and Δ̄_{p_i,p_{i+1}} > 0, we obtain

\bar{\Delta}_{p_i,p_{i+1}} \lesseqgtr h\left(\bar{\delta}_{p_{1,i}}^2, J_1 + \cdots + J_i, J_{i+1}\right) \;\Rightarrow\; \lim_{p_1,\ldots,p_i\to\infty} E[\varepsilon_{p_{1,i}}] \lesseqgtr \lim_{p_1,\ldots,p_{i+1}\to\infty} E[\varepsilon_{p_{1,i+1}}],   (19)

where h(·,·,·) is defined in (17). Similar results can be obtained by considering g(δ̄²_{p_{1,i}}, J_1 + ⋯ + J_i, Δ̄_{p_i,p_{i+1}}) defined in (15). In that case we have

J_{i+1} \gtreqless g\left(\bar{\delta}_{p_{1,i}}^2, J_1 + \cdots + J_i, \bar{\Delta}_{p_i,p_{i+1}}\right) \;\Rightarrow\; \lim_{p_1,\ldots,p_i\to\infty} E[\varepsilon_{p_{1,i}}] \lesseqgtr \lim_{p_1,\ldots,p_{i+1}\to\infty} E[\varepsilon_{p_{1,i+1}}]. \;\blacksquare   (20)

Theorem 2 and its corollaries have important consequences. Consider (14). The function g(δ̄²_{p_1}, J_1, Δ̄_{p_1,p_2}) depends on: (1) δ̄²_{p_1}, the limiting Mahalanobis distance of infinitely many features added at the rate J_1; and (2) Δ̄_{p_1,p_2}, the increase in the limiting Mahalanobis distance obtained by adding a second infinite set of features with limiting ratio J_2. One may therefore interpret g(δ̄²_{p_1}, J_1, Δ̄_{p_1,p_2}) as the asymptotic relative cumulative effectiveness of the second infinite set of features with respect to the first. According to (14), if the asymptotic ratio of the number of additional features to the sample size (i.e., J_2) is larger (smaller) than this relative effectiveness, then the expected true error induced by the second set of features increases (decreases). A consequence of Corollary 1 is thus that the peaking phenomenon, as classically stated, does not necessarily exist in an asymptotic sense: from (18) we observe that adding (unboundedly) various sets of features can lead to an arbitrary oscillating sequence of expected errors.

V. Implications in Finite-Sample Regime

In order to apply and interpret the asymptotic expressions of the previous section in a finite-sample regime, we need to replace J_m and Δ̄_{p_j,p_k} with their finite-sample equivalents [25, 26, 23]; that is to say, with p_m/n and Δ_{p_j,p_k} ≜ δ²_{p_{1,k}} − δ²_{p_{1,j}} (cf. (11)), respectively. Consequently, Corollary 1 suggests the following finite-sample approximation to quantify the number of additional features, p_2, that can be added to obtain a larger or smaller expected true error with respect to the first p_1 features:

p_2 \gtreqless n\, g\!\left(\delta_{p_1}^2, \frac{p_1}{n}, \Delta_{p_1,p_2}\right) \;\Rightarrow\; E[\epsilon_{n,p_1}] \lesseqgtr E[\epsilon_{n,p_1+p_2}],   (21)

where g(·,·,·) is defined in (15). Here, δ²_{p_1} is the Mahalanobis distance between the class-conditional densities defined over p_1 features, and Δ_{p_1,p_2} is the increase in the Mahalanobis distance obtained by adding p_2 features to the first p_1 features. Note that, for a fixed p_1, increasing the quantities on the right side of inequality (21) (i.e., n and Δ_{p_1,p_2}) leads to a smaller expected error, whereas p_2 is the only factor that results in a larger expected error. Therefore, (21) implies that as long as the number of additional features is greater (smaller) than their cumulative efficacy in decreasing the error rate, the expected error increases (decreases).
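As an illustration of how (21) can be used (a Python sketch with arbitrary parameter values, not the authors' Matlab code), the fragment below computes the threshold n g(δ²_{p_1}, p_1/n, Δ_{p_1,p_2}), compares it with a candidate p_2, and cross-checks the predicted direction against the finite-sample approximation of the expected error obtained from (6).

import numpy as np
from scipy.stats import norm

def g(a, b, c):
    """Function g in (15)."""
    return c / 2.0 + (2.0 * b * c + c**2 / 2.0) / a + b * c**2 / a**2

def approx_error(delta2, p, n):
    """Finite-sample approximation of the expected error, from (6) with J -> p/n."""
    return norm.cdf(-delta2 / (2.0 * np.sqrt(delta2 + 2.0 * p / n)))

n, p1 = 50, 100        # per-class sample size and initial dimension (arbitrary)
delta2_p1 = 1.0        # squared Mahalanobis distance of the first p1 features (arbitrary)
Delta = 0.25           # increase in delta^2 contributed by the additional features (arbitrary)
p2 = 400               # number of additional features under consideration (arbitrary)

threshold = n * g(delta2_p1, p1 / n, Delta)
predicted_increase = p2 > threshold   # rule (21): error increases iff p2 exceeds the threshold

err_before = approx_error(delta2_p1, p1, n)
err_after  = approx_error(delta2_p1 + Delta, p1 + p2, n)
print(threshold, predicted_increase, err_after > err_before)   # last two values agree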

Special Case 1: If n g(δ²_{p_1}, p_1/n, Δ_{p_1,p_2}) on the right side of (21) is less than 1, then the inequality with “<” is not feasible and the expected error rate always increases by adding p_2 features.

Special Case 2: If Δ_{p_1,p_2} = 0 (i.e., no contribution to the Mahalanobis distance by adding p_2 features), the expected true error keeps increasing by adding any number of features, i.e., E[ϵ_{n,p_1}] < E[ϵ_{n,p_1+p_2}].

Nevertheless, n g(δ²_{p_1}, p_1/n, Δ_{p_1,p_2}) can in general be quite large and, therefore, the expected error rate may decrease or increase depending on p_2. We note that the feasibility of the analogous inequality was not a concern in the asymptotic sense (i.e., the feasibility of (14)), because g(δ̄²_{p_1}, J_1, Δ̄_{p_1,p_2}) is a fixed positive value and there is always a positive J_2 that satisfies the inequality there (a similar argument holds for Theorem 3).

Special Case 3: Let Δ_{p_1,p_2} > 0 and Δ_{p_1+p_2,p_3} > 0. Then, similarly to Theorem 3, by taking t ≜ n g(δ²_{p_1}, p_1/n, Δ_{p_1,p_2}) and s ≜ n g(δ²_{p_1} + Δ_{p_1,p_2}, (p_1 + p_2)/n, Δ_{p_1+p_2,p_3}), we have (assuming p_3 < s is feasible):

p_2 > t, p_3 < s ⇒ E[ϵ_{n,p_1}] < E[ϵ_{n,p_1+p_2}] > E[ϵ_{n,p_1+p_2+p_3}],

which shows that in a finite-sample setting the expected error can first increase and then decrease. Therefore, the peaking phenomenon, i.e., the existence of a certain point after which the error rate keeps increasing, does not necessarily hold. In practice, the increase or decrease of the expected error depends on the nature of the problem (on δ²_{p_1}, Δ_{p_1,p_2}, Δ_{p_1+p_2,p_3}, the numbers of features p_1, p_2, and p_3, and the classifier).
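The following toy computation (arbitrary numbers, using the finite-sample approximation of (6) rather than exact expectations) illustrates Special Case 3: the approximate expected error rises when a weakly informative second feature set is added and falls again when a sufficiently informative third set is added.

import numpy as np
from scipy.stats import norm

def approx_error(delta2, p, n):
    """Finite-sample approximation of the expected error, from (6) with J -> p/n."""
    return norm.cdf(-delta2 / (2.0 * np.sqrt(delta2 + 2.0 * p / n)))

n = 50
p1, p2, p3 = 100, 300, 600      # sizes of the three feature sets (arbitrary)
d2_p1 = 1.0                     # delta^2 of the first p1 features (arbitrary)
D12, D23 = 0.1, 3.0             # contributions of the second and third sets (arbitrary)

e1 = approx_error(d2_p1, p1, n)
e2 = approx_error(d2_p1 + D12, p1 + p2, n)
e3 = approx_error(d2_p1 + D12 + D23, p1 + p2 + p3, n)
print(e1, e2, e3)   # expected pattern here: e1 < e2 and e2 > e3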

VI. Numerical Experiments

As described in  [3], “usually the best features are added first, and less-useful features are added later”—this is the strategy we follow in both experiments presented in this section.

Experiment 1: In the first experiment, we examine the finite-sample accuracy of expressions (6) and (21), the former and the latter being, respectively, the foundation and a consequence of (12). Let a_{(p)} denote a column vector of size p with identical elements a. Suppose the two populations follow 1000-dimensional multivariate Gaussian distributions with μ_0 = 0_{(1000)}, μ_1 = [0.1_{(100)}^T, 0.05_{(900)}^T]^T, and a common identity covariance matrix. Note that, as (6) suggests, the influence of distributional parameters such as the means and the covariance matrix on the expected error of classifier (3) is summarized in the Mahalanobis distance defined in (1).

We examine a sequence of classifiers to determine the expected error as a function of dimensionality. For a fixed p, we generate a set of n = 50, 100, 200 training observations of dimension p from each class, construct the classifier using (2) and (3), and find the exact error rate from (4). Note that using (4) to determine the exact error rate is possible because we have the actual distributional parameters. For each p, we repeat this procedure 100 times to obtain a distribution of the error rate.
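A condensed Python version of this Monte Carlo procedure is sketched below (the released code is in Matlab; here we fix n = 50 from the listed values and use a coarse grid of p purely for illustration), together with the finite-sample approximation obtained from (6).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

q, n, reps = 1000, 50, 100
mu0 = np.zeros(q)
mu1 = np.concatenate([0.1 * np.ones(100), 0.05 * np.ones(900)])  # Experiment 1 means
# the common covariance is the identity, so Sigma^{-1} is implicit below

def exact_error(xbar0, xbar1, m0, m1):
    """Exact overall error (4) with equal priors and identity covariance."""
    d = xbar0 - xbar1
    s = np.sqrt(d @ d)
    e0 = norm.cdf(-(m0 - (xbar0 + xbar1) / 2) @ d / s)
    e1 = norm.cdf(+(m1 - (xbar0 + xbar1) / 2) @ d / s)
    return 0.5 * (e0 + e1)

for p in [50, 100, 200, 500, 1000]:                  # coarse grid of dimensions
    mc = []
    for _ in range(reps):
        X0 = rng.standard_normal((n, p)) + mu0[:p]   # N(mu0[:p], I_p) samples
        X1 = rng.standard_normal((n, p)) + mu1[:p]   # N(mu1[:p], I_p) samples
        mc.append(exact_error(X0.mean(0), X1.mean(0), mu0[:p], mu1[:p]))
    delta2 = np.sum(mu1[:p] ** 2)                    # squared Mahalanobis distance (Sigma = I)
    approx = norm.cdf(-delta2 / (2 * np.sqrt(delta2 + 2 * p / n)))
    print(p, np.mean(mc), approx)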

Fig. 1(a), (c), (e) plot the expected error rate of the classifier as a function of p using the aforementioned Monte Carlo procedure, as well as the finite-sample approximation obtained from (6) by replacing J and δ̄²_p with p/n and δ²_p, respectively. The results show substantial agreement between the Monte Carlo estimates and the analytical approximations. Fig. 1(b), (d), (f) present the magnitude of the function n g(δ²_{p_1}, p_1/n, Δ_{p_1,p_2}) together with p_2. Here, we take p_1 = 100 and seek the number of features p_2 to add to p_1 in order to obtain an identical expected error rate.


Fig. 1.   Left column: comparison of the expected error and its standard deviation bar as a function of p, using the double asymptotic approximation and Monte Carlo estimates, for n = 50 (a); n = 100 (c); and n = 200 (e). The expected error rate has a notch at p_1 = 100; however, by adding p_2 more variables, an expected error identical to that at p_1 is obtained. Going beyond p = p_1 + p_2 results in smaller expected errors. The analytical and empirical results show substantial agreement. Right column: the magnitude of the function n g(δ²_{100}, 100/n, Δ_{100,p_2}) together with p_2, for n = 50 (b); n = 100 (d); and n = 200 (f). Note that at the point where the p_2 line crosses the n g(δ²_{100}, 100/n, Δ_{100,p_2}) curve (plots in the right column), we have E[ϵ_{n,p_1}] = E[ϵ_{n,p_1+p_2}] (from the empirical results in the left column).


Experiment 2: Having established the accuracy of the finite-sample approximation obtained from the asymptotic results, in this experiment we use these analytic expressions to study the various forms the expected error can take as a function of p. Suppose that the class-conditional densities follow multivariate Gaussian distributions with a Mahalanobis distance determined by the three models depicted in Fig. 2(a). To obtain these models, one may consider two q = 50,000-dimensional Gaussian distributions with the following parameters and take the first p dimensions: Σ = I_q, μ_0 = 0_{(q)}, and,

case I: μ_1 = [(0.5, 0.4)_{5}^T, (0.4, 0.1)_{10}^T, (0.1, 0.05)_{2000}^T, (0.05, 0)_{q−2015}^T]^T;
case II: μ_1 = e^{−y}, y ∈ (0, 50)_q;
case III: μ_1 = (0.04, 0.03)_q,

where (a, b)_r denotes a column vector containing r equally spaced numbers between a and b. Fig. 2(b) plots the expected error of the classifier (3) as a function of dimensionality. As we observe, depending on the underlying distributional parameters, the expected error curve can exhibit a single valley (the classical notion of the peaking phenomenon), a multi-hump behavior, or a monotonically decreasing trend.
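The three mean models of Experiment 2 can be set up as in the following Python sketch (the per-class sample size n is not restated in the text above, so the value used here is an assumption for illustration). With Σ = I_q, the squared Mahalanobis distance of the first p features is simply the squared norm of the first p entries of μ_1, and the approximate expected-error curve then follows from (6).

import numpy as np
from scipy.stats import norm

q, n = 50_000, 50    # n is an assumed per-class sample size for this illustration

def seg(a, b, r):
    """(a, b)_r: vector of r equally spaced numbers between a and b."""
    return np.linspace(a, b, r)

mu1 = {
    "case I":   np.concatenate([seg(0.5, 0.4, 5), seg(0.4, 0.1, 10),
                                seg(0.1, 0.05, 2000), seg(0.05, 0.0, q - 2015)]),
    "case II":  np.exp(-seg(0.0, 50.0, q)),
    "case III": seg(0.04, 0.03, q),
}

ps = np.arange(100, q + 1, 100)
for name, m in mu1.items():
    delta2 = np.cumsum(m ** 2)[ps - 1]                 # delta_p^2 for Sigma = I, mu0 = 0
    err = norm.cdf(-delta2 / (2 * np.sqrt(delta2 + 2 * ps / n)))
    print(name, err[:3], err[-3:])                     # rough shape of the error curve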


Fig. 2.   (a) Mahalanobis distance as a function of dimensionality for the distributional parameters defined in Experiment 2; (b) expected error as a function of dimensionality. Solid, dashed, and dash-dotted lines correspond to cases I, II, and III of Experiment 2, respectively.


VII. Conclusion

In this letter, we conducted a multiple asymptotic analysis in connection with a simple classifier to analytically study the so-called peaking phenomenon. Our analytical (and empirical) results provide a different picture of the phenomenon: an initial increase (decrease) in the expected error of a classification rule might stop and turn into a decrease (increase), depending on the cumulative discriminatory effect of the additional predictors.

References

[1]    G. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, Wiley, New York, 2004.

[2]    A. Jain and W. Waller, “On the optimal number of features in the classification of multivariate Gaussian data,” Pattern Recogn., vol. 10, pp. 365–374, 1978.

[3]    S. J. Raudys and A. K. Jain, “Small sample size effects in statistical pattern recognition: Recommendations for practitioners,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, pp. 252–264, 1991.

[4]    G. F. Hughes, “On the mean accuracy of statistical pattern recognizers,” IEEE Trans. Information Theory, vol. 14, no. 1, pp. 55–63, 1968.

[5]    W. Waller and A. Jain, “On the monotonicity of the performance of Bayesian classifiers,” IEEE Trans. Information Theory, vol. 24, pp. 392–394, 1978.

[6]    K. Abend and T. J. Harley, Jr., “Comments on ‘On the mean accuracy of statistical pattern recognizers’,” IEEE Trans. Information Theory, vol. 14, no. 1, pp. 420–423, 1969.

[7]    J. M. Van Campenhout, “On the peaking of the Hughes mean recognition accuracy: the resolution of an apparent paradox,” IEEE Trans. Systems, Man and Cybernetics, vol. 8, pp. 390–395, 1978.

[8]    L. Devroye, L. Gyorfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, New York, 1996.

[9]    U.M. Braga-Neto and E.R. Dougherty, Error Estimation for Pattern Recognition, Wiley-IEEE Press, New Jersey, 2015.

[10]    G. Niu, Data-Driven Technology for Engineering Systems Health Management, Science Press-Springer, Beijing, 2017.

[11]    N. Zheng and J. Xue, Statistical Learning and Pattern Analysis for Image and Video Processing, Springer, New York, 2009.

[12]    B. Chandrasekaran and A. K. Jain, “Quantization complexity and independent measurements,” IEEE Trans. on Computers, vol. 23, pp. 102–106, 1974.

[13]    R. Sitgreaves, “Some results on the distribution of the W-classification,” in Studies in Item Analysis and Prediction, H. Solomon, Ed., pp. 241–251. Stanford University Press, 1961.

[14]    A. H. Bowker and R. Sitgreaves, “An asymptotic expansion for the distribution function of the W-classification statistic,” in Studies in Item Analysis and Prediction, H. Solomon, Ed., pp. 292–310. Stanford University Press, 1961.

[15]    B. Efron, “Bayesians, frequentists, and scientists,” Journal of the American Statistical Association, vol. 100, pp. 1–5, 2005.

[16]    S. Raudys, “On determining training sample size of a linear classifier,” Computer Systems, vol. 28, pp. 79–87, 1967, in Russian.

[17]    S. Raudys and D. M. Young, “Results in statistical discriminant analysis: A review of the former Soviet Union literature,” Journal of Multivariate Analysis, vol. 89, pp. 1–35, 2004.

[18]    V. I. Serdobolskii, “On minimum error probability in discriminant analysis,” Soviet Math. Dokl., vol. 27, no. 3, pp. 720–725, 1983.

[19]    E. P. Wigner, “On the distribution of the roots of certain symmetric matrices,” Ann. Math., vol. 67, no. 2, pp. 325–327, 1958.

[20]    M.A. Moran, “On the expectation of errors of allocation associated with a linear discriminant function,” Biometrika, vol. 62, no. 1, pp. 141–148, 1975.

[21]    M. J. Sorum, “Estimating the expected probability of misclassification for a rule based on the linear discriminant function: Univariate normal case,” Technometrics, vol. 15, pp. 329–339, 1973.

[22]    S. Raudys, Statistical and Neural Classifiers, An Integrated Approach to Design, Springer-Verlag, London, 2001.

[23]    A. Zollanvari, U. M. Braga-Neto, and E. R. Dougherty, “Analytic study of performance of error estimators for linear discriminant analysis,” IEEE Trans. Sig. Proc., vol. 59, no. 9, pp. 4238–4255, 2011.

[24]    V. I. Serdobolskii, Multivariate Statistical Analysis: A High-Dimensional Approach, Kluwer Academic Publishers, 2000.

[25]    M. Zhang, F. Rubio, D. P. Palomar, and X. Mestre, “Finite-sample linear filter optimization in wireless communications and financial systems,” IEEE Trans. Sig. Proc., vol. 61, no. 20, pp. 5014–5025, 2013.

[26]    F. Rubio, X. Mestre, and D. P. Palomar, “Performance analysis and optimal selection of large minimum variance portfolios under estimation risk,” IEEE J. Sel. Topics Signal Process, vol. 6, no. 4, pp. 337–350, 2012.