qfratio: Moments of Ratios of Quadratic Forms

\[ \DeclareMathOperator{\qfrE}{E} \DeclareMathOperator{\qfrtr}{tr} \newcommand{\qfrGmf}[1]{\Gamma \! \left( #1 \right)} \newcommand{\qfrbrc}[1]{\left[ {#1} \right]} \newcommand{\qfrCit}[7]{ C^{#1, #2, #3}_{#4} \! \left( #5, #6, #7 \right) } \newcommand{\qfrrf}[2][k]{\left( #2 \right)_{#1}} \newcommand{\qfrdk}[2][k]{ d_{#1} \! \left( #2 \right) } \newcommand{\qfrdijk}[4][k]{ d_{#1} \! \left( #2, #3, #4 \right) } \newcommand{\qfrdijkl}[5][k]{ d_{#1} \! \left( #2, #3, #4, #5 \right) } \newcommand{\qfrdtk}[3][k]{ \tilde{d}_{#1} \! \left( #2, #3 \right) } \newcommand{\qfrhij}[3][k]{ h_{#1} \! \left( #2, #3 \right) } \newcommand{\qfrhijk}[4][k]{ h_{#1} \! \left( #2, #3, #4 \right) } \newcommand{\qfrhtij}[3][k]{ \tilde{h}_{#1} \! \left( #2; #3 \right) } \newcommand{\qfrhtijk}[4][k]{ \tilde{h}_{#1} \! \left( #2; #3, #4 \right) } \newcommand{\qfrhhij}[3][k]{ \hat{h}_{#1} \! \left( #2; #3 \right) } \newcommand{\qfrhhijk}[4][k]{ \hat{h}_{#1} \! \left( #2; #3, #4 \right) } \renewcommand{\det}[1]{\left\lvert #1 \right\rvert} \newcommand{\qfrlmax}{\lambda_{\max}} \newcommand{\qfrmvnorm}[3][n]{N_{#1} \! \left( #2 , #3 \right)} \]

This document provides an overview of the moment-related functionalities of the R package qfratio (CRAN; GitHub). Distribution-related functionalities are documented in a separate vignette: vignette("qfratio_distr").

Symbols used

The use of \(n\) for the number of variables and \(p, q, \dots\) for exponents follows Bao and Kan (2013).

Target

The package primarily concerns moments of ratios of quadratic forms of the following forms: \[\begin{equation} \frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q}, \quad \frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q \left( \mathbf{x}^T \mathbf{D} \mathbf{x} \right)^r}. \end{equation}\] In this package, these are called a simple and multiple ratio, respectively, following Smith (1989) (to be precise, the latter is a very specific case of the multiple ratios treated therein).

Quantities of the above forms frequently occur in the statistical and econometric literature (Mathai and Provost, 1992; Hillier et al., 2009, 2014). In evolutionary biology, the quantities known as evolvability measures have the same general form, and evaluation of their expectation (average or mean evolvability measures) can be of practical interest (Hansen and Houle, 2008; Melo et al., 2016; Watanabe, 2023).

The package is mainly for evaluating the moments of the above quantities assuming the multivariate normality of \(\mathbf{x}\), that is, \[\begin{equation} \qfrE \left( \frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q} \right), \quad \qfrE \left( \frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q \left( \mathbf{x}^T \mathbf{D} \mathbf{x} \right)^r} \right), \end{equation}\] where \(\mathbf{x} \sim \qfrmvnorm{\boldsymbol{\mu}}{\boldsymbol{\Sigma}}\).

First examples

Here are simple example use cases:

## Simple matrices
nv <- 4
A <- diag(1:nv)
B <- diag(sqrt(nv:1))
D <- diag((nv:1) ^ 2)

## Expectation of (x^T A x)^2 / (x^T x)^1 where x ~ N(0, I)
qfrm(A, p = 2, q = 1) # By default, B = I
#> 
#>  Moment of ratio of quadratic forms
#> 
#> Moment = 26.66667
#> This value is exact

## Expectation of (x^T A x)^2 / (x^T B x)^2 where x ~ N(0, I)
qfrm(A, B, p = 2) # By default, q = p
#> 
#>  Moment of ratio of quadratic forms
#> 
#> Moment = 3.467871, Error = 0 (one-sided)
#> Possible range:
#>  3.46787143 3.46787143

## Expectation of (x^T A x)^1/2 / (x^T B x)^1 where x ~ N(0, I)
qfrm(A, B, p = 1/2, q = 1)
#> 
#>  Moment of ratio of quadratic forms
#> 
#> Moment = 0.6652398
#> Error bound unavailable; recommended to inspect plot() of this object

## Expectation of (x^T A x)^2 / ((x^T B x)^1 (x^T D x)^1) where x ~ N(0, I)
qfmrm(A, B, D, p = 2, q = 1, r = 1)
#> 
#>  Moment of ratio of quadratic forms
#> 
#> Moment = 1.135126
#> Error bound unavailable; recommended to inspect plot() of this object

## Expectation of (x^T A x)^1 / (x^T B x)^1 where x ~ N(mu, Sigma)
## with arbitrary mu and Sigma
mu <- 1:nv / nv
Sigma <- diag(runif(nv) * 3)
qfrm(A, B, p = 1, q = 1, mu = mu, Sigma = Sigma, m = 300)
#> 
#>  Moment of ratio of quadratic forms
#> 
#> Moment = 2.210412, Error = 0 (two-sided)
#> Possible range:
#>  2.21041201 2.21041201

In each of the above examples, the function qfrm() (quadratic form ratio moment) or qfmrm() (quadratic form multiple ratio moment) is called to calculate the moment. The function calls are followed by outputs from the print method for the results, which show the values of the moments.

You might wonder what it means for a result to be “exact” or to have an “error”, or what the unexplained argument m = 300 in the last example does. Understanding this requires a brief excursion into the math involved.

Mathematical details

Moment existence conditions

Certain conditions must be met for the above quantities to be well defined and to have finite moments.

For the simple ratio, the necessary and sufficient conditions for the moment to exist are stated in Bao and Kan (2013, proposition 1):

  1. When \(\mathbf{B}\) is positive definite, the moment exists if and only if \(\frac{n}{2} + p > q\).
  2. When \(\mathbf{B}\) is positive semidefinite with rank \(l\), consider a matrix of its eigenvectors \(\mathbf{P} = (\mathbf{P}_1, \mathbf{P}_2)\), with \(\mathbf{P}_1\) and \(\mathbf{P}_2\) being \(n \times l\) and \(n \times (n - l)\) submatrices corresponding to the nonzero and zero eigenvalues, respectively, and let \(\tilde{\mathbf{A}}_{12} = \mathbf{P}_1^T \mathbf{A} \mathbf{P}_2\) and \(\tilde{\mathbf{A}}_{22} = \mathbf{P}_2^T \mathbf{A} \mathbf{P}_2\). Then,
    • when \(\tilde{\mathbf{A}}_{12} = \mathbf{0}_{l \times (n - l)}\) and \(\tilde{\mathbf{A}}_{22} = \mathbf{0}_{(n - l) \times (n - l)}\), the moment exists if and only if \(\frac{l}{2} + p > q\);
    • when \(\tilde{\mathbf{A}}_{12} \neq \mathbf{0}_{l \times (n - l)}\) and \(\tilde{\mathbf{A}}_{22} = \mathbf{0}_{(n - l) \times (n - l)}\), the moment exists if and only if \(\frac{l + p}{2} > q\); and
    • when \(\tilde{\mathbf{A}}_{22} \neq \mathbf{0}_{(n - l) \times (n - l)}\), the moment exists if and only if \(\frac{l}{2} > q\).

For the multiple ratio, the existence conditions are more difficult to define. When the null space of \(\mathbf{B}\) is a subspace of that of \(\mathbf{D}\) (or vice versa), the existence problem reduces to that of the simple ratio, and the above conditions apply with the rank of \(\mathbf{B}\) substituted for \(l\) and \(q + r\) for \(q\) (Watanabe, 2023). However, when both \(\mathbf{B}\) and \(\mathbf{D}\) are singular and neither null space is a subspace of the other, no general existence conditions seem to be known in the literature. See the documentation of qfrm and qfmrm for their behaviors when the existence conditions are not satisfied.
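
Returning to the simple ratio, condition 2 above can be scripted directly. The following is a minimal sketch (not a package function; moment_exists_simple and tol are ad hoc names and choices introduced here for illustration), assuming B is positive semidefinite:

## Minimal sketch: existence check for the simple-ratio moment
## following conditions 1 and 2 above (B assumed positive semidefinite)
moment_exists_simple <- function(A, B, p, q, tol = 1e-12) {
    n <- nrow(B)
    eigB <- eigen(B, symmetric = TRUE)
    l <- sum(eigB$values > tol)            # rank of B
    if (l == n) return(n / 2 + p > q)      # condition 1: B positive definite
    P1 <- eigB$vectors[, seq_len(l), drop = FALSE]   # nonzero eigenvalues
    P2 <- eigB$vectors[, -seq_len(l), drop = FALSE]  # zero eigenvalues
    A12 <- crossprod(P1, A %*% P2)
    A22 <- crossprod(P2, A %*% P2)
    if (all(abs(A22) < tol)) {
        if (all(abs(A12) < tol)) return(l / 2 + p > q)
        return((l + p) / 2 > q)
    }
    l / 2 > q
}

## Example: singular B; here A22 != 0, so the condition is l / 2 > q (TRUE)
moment_exists_simple(diag(1:4), diag(c(1, 1, 1, 0)), p = 1, q = 1)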

Expressions for moments

There are several equivalent expressions for moments of ratios of quadratic forms in normal variables, only some of which are amenable to straightforward numerical evaluation. The package uses one class of such expressions, based on a series expansion and term-by-term integration of a ratio (Smith, 1989, 1993; Hillier, 2001; Hillier et al., 2009, 2014; Bao and Kan, 2013).

The following expressions assume \(\mathbf{x} \sim \qfrmvnorm{\boldsymbol{\mu}}{\mathbf{I}_n}\). For arbitrary but nonsingular \(\boldsymbol{\Sigma}\), it is always possible to transform the variables and argument matrices by using \(\mathbf{x}_{\mathrm{new}} = \mathbf{K}^{-1} \mathbf{x}\), \(\mathbf{A}_{\mathrm{new}} = \mathbf{K}^T \mathbf{A} \mathbf{K}\), etc., where \(\mathbf{K}\) is an \(n \times n\) matrix that satisfies \(\mathbf{K} \mathbf{K}^T = \boldsymbol{\Sigma}\) (Mathai and Provost, 1992, chap. 3). When \(\boldsymbol{\Sigma}\) is singular, certain conditions need to be met by the argument matrices, \(\boldsymbol{\Sigma}\), and \(\boldsymbol{\mu}\) for this transformation, and hence the following expressions, to be valid (Watanabe, 2023, appendix C).
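
This standardization can be reproduced by hand. The sketch below uses the (lower-triangular) Cholesky factor of \(\boldsymbol{\Sigma}\) as \(\mathbf{K}\), which is one of many valid choices, and assumes a nonsingular \(\boldsymbol{\Sigma}\); the two qfrm() calls should print the same moment up to numerical error.

## Standardizing transformation by hand: K K^T = Sigma, x_new = K^{-1} x
nv <- 4
A <- diag(1:nv)
B <- diag(sqrt(nv:1))
mu <- 1:nv / nv
Sigma <- diag(nv) + 0.2               # an arbitrary nonsingular covariance
K <- t(chol(Sigma))                   # lower-triangular K with K %*% t(K) = Sigma
qfrm(A, B, p = 1, q = 1, mu = mu, Sigma = Sigma, m = 300)
qfrm(t(K) %*% A %*% K, t(K) %*% B %*% K, p = 1, q = 1,
     mu = solve(K, mu), m = 300)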

For the simple ratio with an arbitrary (positive real) \(p\), when the moment exists, the expression is (Bao and Kan, 2013, eq. (12)) \[\begin{equation} \begin{aligned} \qfrE \left( \frac{(\mathbf{x}^T \mathbf{A} \mathbf{x})^p}{(\mathbf{x}^T \mathbf{B} \mathbf{x})^q} \right) = & 2^{p - q} \beta_{\mathbf{A}}^{-p} \beta_{\mathbf{B}}^{q} \qfrGmf{ \frac{n}{2} + p - q } \\ & \quad \cdot \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \frac{ \qfrrf[i]{-p} \qfrrf[j]{q} }{ \qfrGmf{\frac{n}{2} + i + j } } \\ & \qquad \cdot \qfrhij[i,j]{ \mathbf{I}_n - \beta_{\mathbf{A}} \mathbf{A} }{ \mathbf{I}_n - \beta_{\mathbf{B}} \mathbf{B} } , \end{aligned} \tag{1} \end{equation}\] and for a positive-integral \(p\), this can be simplified into (Bao and Kan, 2013, eq. (13), correcting a typo): \[\begin{equation} \begin{aligned} \qfrE \left( \frac{(\mathbf{x}^T \mathbf{A} \mathbf{x})^p}{(\mathbf{x}^T \mathbf{B} \mathbf{x})^q} \right) = & 2^{p - q} \beta_{\mathbf{B}}^{q} p! \qfrGmf{ \frac{n}{2} + p - q } \sum_{j=0}^{\infty} \frac{ \qfrrf[j]{q} }{ \qfrGmf{ \frac{n}{2} + p + j } } \qfrhtij[p;j]{ \mathbf{A} }{ \mathbf{I}_n - \beta_{\mathbf{B}} \mathbf{B} } . \end{aligned} \tag{2} \end{equation}\]

For the multiple ratio, the expression for an arbitrary \(p\) is (after Smith, 1989, eq. (5.2)) \[\begin{equation} \begin{aligned} \qfrE \! \left( \frac{(\mathbf{x}^T \mathbf{A} \mathbf{x})^p}{(\mathbf{x}^T \mathbf{B} \mathbf{x})^q (\mathbf{x}^T \mathbf{D} \mathbf{x})^r} \right) = & 2^{p - q - r} \beta_{\mathbf{B}}^{q} \beta_{\mathbf{D}}^{r} \qfrGmf{ \frac{n}{2} + p - q - r } \\ & \quad \cdot \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \frac{ \qfrrf[i]{-p} \qfrrf[j]{q} \qfrrf[k]{r} }{ \qfrGmf{ \frac{n}{2} + i + j + k } } \\ & \quad \quad \cdot \qfrhijk[i, j, k]{ \mathbf{I}_n - \beta_{\mathbf{A}} \mathbf{A} }{ \mathbf{I}_n - \beta_{\mathbf{B}} \mathbf{B} }{ \mathbf{I}_n - \beta_{\mathbf{D}} \mathbf{D} }, \end{aligned} \tag{3} \end{equation}\] and for a positive-integral \(p\) (Watanabe, 2023, eq. (B28)), \[\begin{equation} \begin{aligned} \qfrE \! \left( \frac{(\mathbf{x}^T \mathbf{A} \mathbf{x})^p}{(\mathbf{x}^T \mathbf{B} \mathbf{x})^q (\mathbf{x}^T \mathbf{D} \mathbf{x})^r} \right) = & 2^{p - q - r} \beta_{\mathbf{B}}^{q} \beta_{\mathbf{D}}^{r} p! \qfrGmf{ \frac{n}{2} + p - q - r } \\ & \quad \cdot \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \frac{ \qfrrf[j]{q} \qfrrf[k]{r} }{ \qfrGmf{ \frac{n}{2} + p + j + k } } \\ & \quad \quad \cdot \qfrhtijk[p; j,k]{ \mathbf{A} }{ \mathbf{I}_n - \beta_{\mathbf{B}} \mathbf{B} }{ \mathbf{I}_n - \beta_{\mathbf{D}} \mathbf{D} }, \end{aligned} \tag{4} \end{equation}\]

In these expressions, \(\beta_{\cdot}\) are arbitrary scaling constants that satisfy \(0 < \beta < 2 / \qfrlmax\), with \(\qfrlmax\) being the largest eigenvalue of the argument matrix. In addition, \(h_{k_1, \dots k_s}\) and \(\tilde{h}_{k_1; k_2 \dots k_s}\) are the coefficients of \(t_1^{k_1} \dots t_s^{k_s}\) in the following power series expansions: \[\begin{multline} \det{ \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s }^{-\frac{1}{2}} \\ \cdot \exp \left( \frac{ (1 - t_1 - \dots - t_s) \boldsymbol{\mu}^T \left( \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s \right)^{-1} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\mu} }{2} \right) \\ = \sum_{k_1 = 0}^{\infty} \dots \sum_{k_s = 0}^{\infty} \qfrhijk[k_1, \dots, k_s]{ \mathbf{A}_1 }{\dots}{ \mathbf{A}_s } t_1^{k_1} \dots t_s^{k_s}, \label{gfun_hij} \\ \end{multline}\] \[\begin{multline} \det{ \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s }^{-\frac{1}{2}} \\ \cdot \exp \left( \frac{ (1 - t_2 - \dots - t_s) \boldsymbol{\mu}^T \left( \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s \right)^{-1} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\mu} }{2} \right) \\ = \sum_{k_1 = 0}^{\infty} \dots \sum_{k_s = 0}^{\infty} \qfrhtijk[k_1; k_2, \dots, k_s]{ \mathbf{A}_1 }{ \mathbf{A}_2, \dots }{ \mathbf{A}_s } t_1^{k_1} t_2^{k_2} \dots t_s^{k_s}. \end{multline}\] (These notations follow Bao and Kan (2013); Hillier et al. (2014) used the symbol \(h\) for \(\tilde{h}\) here.)

A common special case is when \(\boldsymbol{\mu} = \mathbf{0}_n\), in which case the coefficients \(d\) in the following expansion are used: \[\begin{equation} \det{ \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s }^{-\frac{1}{2}} = \sum_{k_1 = 0}^{\infty} \dots \sum_{k_s = 0}^{\infty} \qfrdijk[k_1, \dots, k_s]{ \mathbf{A}_1 }{\dots}{ \mathbf{A}_s } t_1^{k_1} \dots t_s^{k_s} . \end{equation}\]

Actually, the above expressions (1)–(4) can be written in terms of \(d\) with an additional argument (Hillier et al., 2009; Bao and Kan, 2013), but those alternative expressions are not utilized in the package.

A theoretically important result is that this \(d\) is related to the top-order zonal and invariant polynomials of matrix arguments, \(C\) (Chikuse, 1987; Hillier et al., 2009): \[\begin{align} \qfrdijk[k_1, \dots, k_s]{ \mathbf{A}_1 }{\dots}{ \mathbf{A}_s } = \frac{ \qfrrf[k_1 + \dots + k_s]{\frac{1}{2}} }{ k_1 ! \dots k_s !} \qfrCit{\qfrbrc{k_1}}{\dots}{\qfrbrc{k_s}}{\qfrbrc{k_1 + \dots + k_s}}{ \mathbf{A}_1 }{\dots}{ \mathbf{A}_s } , \end{align}\] where \(\qfrbrc{k_i}\) and the like denote the top-order partitions of \(k_i\), in the notation of Smith (1989).

Note that the order of the argument matrices is arbitrary in \(d\) or \(h\), but not in \(\tilde{h}\) where \(\mathbf{A}_1\) is not interchangeable with the other arguments.

Special cases

A notable special case for moments of simple ratios is when \(p\) is a positive integer and \(\mathbf{B} = \mathbf{I}_n\). In this case, (2) simplifies into (Hillier et al., 2014, theorem 4) \[\begin{equation} \qfrE \left( \frac{(\mathbf{x}^T \mathbf{A} \mathbf{x})^p}{(\mathbf{x}^T \mathbf{x})^q} \right) = 2^{p - q} p! \sum_{k=0}^{p} \frac{ \qfrGmf{ \frac{n}{2} + p - q + k } }{ 2^k k! \qfrGmf{ \frac{n}{2} + p + k } } {}_1 F_1 \left( q ; \frac{n}{2} + p + k ; - \frac{\boldsymbol{\mu}^T \boldsymbol{\mu}}{2} \right) a_{p, k} \left( \mathbf{A}, \boldsymbol{\mu} \right) , \tag{5} \end{equation}\] where \({}_1 F_1 \left( \cdot ; \cdot ; \cdot \right)\) is the confluent hypergeometric function, and \(a_{p, k}\) are the coefficients of \(t^p\) in \[\begin{align} & \det{ \mathbf{I}_n - t \mathbf{A} }^{-\frac{1}{2}} \left( \boldsymbol{\mu}^T \left( \mathbf{I}_n - t \mathbf{A} \right)^{-1} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\mu} \right) ^ k = \sum_{i = k}^{\infty} a_{i, k} \left( \mathbf{A}, \boldsymbol{\mu} \right) t^{i}. \end{align}\] As the lowest-order term in this last expansion is that of \(t^k\), \(a_{p, k} = 0\) for \(k > p\), so that the series in (5) truncates at \(k = p\). This truncation enables quasi-exact evaluation of the moment, given accurate evaluation of the hypergeometric function.

When, in addition, \(\boldsymbol{\mu} = \mathbf{0}_n\), (5) simplifies further because \(a_{p, k} = 0\) for \(k > 0\), so that only the term for \(k = 0\) remains. Similar simplifications occur in (1), (3), and (4) when any of the argument matrices is \(\mathbf{I}_n\) and \(\boldsymbol{\mu} = \mathbf{0}_n\), although those expressions still involve a single or double infinite series.
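
For example, the quasi-exact evaluation via (5) can be cross-checked against a Monte Carlo estimate obtained with rqfr() (introduced later in this document):

## Quasi-exact evaluation via (5) vs. a Monte Carlo check
nv <- 4
A <- diag(1:nv)
mu <- rep.int(1, nv)
qfrm(A, p = 2, q = 1, mu = mu)  # B = I_n and integral p: uses (5)
mean(rqfr(10000, A, diag(nv), p = 2, q = 1, mu = mu))  # should be close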

Numerical evaluation

The above expressions (1)–(4) are exact, but their exact values cannot generally be evaluated as they involve infinite series, except when the conditions for (5) apply. Fortunately, the series are absolutely convergent, so that their partial sums up to certain orders typically yield a sufficiently accurate result. This is what the qfrm() and qfmrm() functions do. The order of evaluation is determined by the argument m; the functions calculate the terms from \(i = j = k = 0\) to \(i + j + k \leq m\) (as \(i, j, k\) appear in the above summations) and then return a vector of length \(m + 1\) that contains the sums of the same-order terms, along with other quantities.

The default is m = 100, which sufficed for most of the above examples, but for large problems the user may need a larger value (which can be hundreds of thousands for very large problems; see below).
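
For example, one can re-run an evaluation with a different m and inspect the profile of the partial sums with plot():

## Effect of the truncation order m
nv <- 4
A <- diag(1:nv)
B <- diag(sqrt(nv:1))
qfrm(A, B, p = 1/2, q = 1, m = 50)           # coarser truncation
res <- qfrm(A, B, p = 1/2, q = 1, m = 200)   # finer truncation
res                                          # compare the printed moments
plot(res)                                    # partial sums against the order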

The coefficients \(d\), \(h\), \(\tilde{h}\) are evaluated by recursive algorithms described below. At the core of this package are internal functions to implement these recursive algorithms, which can be computationally intensive as the problem becomes larger.

Truncation error

A great advantage in the above expressions is that an error bound is available for a partial (truncated) sum for the simple ratio, provided that \(p\) is a positive integer and \(\mathbf{B}\) is positive definite. By denoting the expression (2) as \(M \left( \mathbf{A}, \mathbf{B}, p, q \right)\) and the partial sum of the same up to \(j = m\) as \(\hat{M}_m \left( \mathbf{A}, \mathbf{B}, p, q \right)\), the error bound is (Hillier et al., 2014, theorem 7) \[\begin{multline} \lvert M \left( \mathbf{A}, \mathbf{B}, p, q \right) - \hat{M}_m \left( \mathbf{A}, \mathbf{B}, p, q \right) \rvert \\ \leq \frac{ 2^{p - q} \beta_{\mathbf{B}}^{q} p! \qfrGmf{ \frac{n}{2} + p - q } \qfrrf[m+1]{q} } { \qfrGmf{ \frac{n}{2} + p + m + 1 } } \left[ \frac{ \exp \left( \frac{ \bar{\boldsymbol{\mu}}^T \bar{\boldsymbol{\mu}} - \boldsymbol{\mu}^T \boldsymbol{\mu} }{2} \right) \qfrdtk[p]{ \bar{A} }{ \bar{\boldsymbol{\mu}} } }{ \det{\beta_{\mathbf{B}} \mathbf{B}}^{\frac{1}{2}} } - \sum_{j=0}^{m} \qfrhhij[p;j]{ \mathbf{A}^+ }{ \mathbf{I}_n - \beta_{\mathbf{B}} \mathbf{B} } \right] , \end{multline}\] where \(\mathbf{A}^+\) is a symmetric matrix constructed from the eigenvectors and “positivized” eigenvalues of \(\mathbf{A}\) (above), \(\bar{\boldsymbol{\mu}} = \sqrt{2}\left( \beta_\mathbf{B} \mathbf{B} \right)^{-\frac{1}{2}} \boldsymbol{\mu}\), \(\bar{\mathbf{A}} = \beta_{\mathbf{B}}^{-1} \mathbf{B}^{-\frac{1}{2}} \mathbf{A}^+ \mathbf{B}^{-\frac{1}{2}}\), and \(\tilde{d}\) and \(\hat{h}\) are coefficients in the following generating functions: \[\begin{equation} \det{ \mathbf{I}_n - t \mathbf{A} }^{-\frac{1}{2}} \exp \left( \frac{ \boldsymbol{\mu}^T \left( \mathbf{I}_n - t \mathbf{A} \right)^{-1} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\mu} }{2} \right) = \sum_{k = 0}^{\infty} \qfrdtk[k]{ \mathbf{A} }{ \boldsymbol{\mu} } t^{k} , \end{equation}\] \[\begin{multline} \det{ \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s }^{-\frac{1}{2}} \\ \quad \cdot \exp \left( \frac{ (1 + t_2 + \dots + t_s) \boldsymbol{\mu}^T \left( \mathbf{I}_n - t_1 \mathbf{A}_1 - \dots - t_s \mathbf{A}_s \right)^{-1} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\mu} }{2} \right) \\ = \sum_{k_1 = 0}^{\infty} \dots \sum_{k_s = 0}^{\infty} \qfrhhijk[k_1; k_2, \dots, k_s]{ \mathbf{A}_1 }{ \mathbf{A}_2, \dots }{ \mathbf{A}_s } t_1^{k_1} t_2^{k_2} \dots t_s^{k_s} . \end{multline}\]

For the multiple ratio, a similar error bound is available when \(\mathbf{D} = \mathbf{I}_n\), in addition to \(p\) being a positive integer. By denoting the expression (4) as \(M \left( \mathbf{A}, \mathbf{B}, \mathbf{D}, p, q, r \right)\), and the partial sum of (4) up to \(j + k = m\) as \(\hat{M}_m \left( \mathbf{A}, \mathbf{B}, \mathbf{D}, p, q, r \right)\), the error bound is \[\begin{equation} \begin{aligned} & \lvert M \left( \mathbf{A}, \mathbf{B}, \mathbf{I}_n, p, q, r \right) - \hat{M}_m \left( \mathbf{A}, \mathbf{B}, \mathbf{I}_n, p, q, r \right) \rvert \\ & \quad \leq \frac{ 2^{p - q - r} \beta_{\mathbf{B}}^{q} p! \qfrGmf{ \frac{n}{2} + p - q - r } \qfrrf[m+1]{\max \left( q, r \right)} } { \qfrGmf{ \frac{n}{2} + p + m + 1 } } \\ & \quad \quad \cdot \left[ \frac{ \exp \left( \frac{ \tilde{\boldsymbol{\mu}}^T \tilde{\boldsymbol{\mu}} - \boldsymbol{\mu}^T \boldsymbol{\mu} }{2} \right) \qfrdtk[p]{ \bar{A} }{ \tilde{\boldsymbol{\mu}} } }{ \det{\beta_{\mathbf{B}} \mathbf{B}}^{\frac{1}{2}} } - \sum_{k=0}^{m} \sum_{j=0}^{k} \qfrhhijk[p;k-j,j]{ \mathbf{A}^+ }{ \mathbf{I}_n - \beta_{\mathbf{B}} \mathbf{B} }{ \mathbf{0}_{n \times n} } \right] , \end{aligned} \end{equation}\] where \(\tilde{\boldsymbol{\mu}} = \sqrt{3}\left( \beta_\mathbf{B} \mathbf{B} \right)^{-\frac{1}{2}} \boldsymbol{\mu}\) and other notations are as above.

Recursions

The coefficients \(h\), \(\tilde{h}\), and \(\hat{h}\) in the above series expressions are evaluated by the “super-short” recursive algorithm developed by Hillier and colleagues (Bao and Kan, 2013; Hillier et al., 2014). That is, \[\begin{align} h_{i,j,k} = & \frac{\qfrtr{\mathbf{G}_{i,j,k}} + \boldsymbol{\mu}^T \mathbf{g}_{i,j,k}}{2(i + j + k)}, \\ \tilde{h}_{i,j,k} = & \frac{\qfrtr{\tilde{\mathbf{G}}_{i,j,k}} + \boldsymbol{\mu}^T \tilde{\mathbf{g}}_{i,j,k}}{2(i + j + k)}, \\ \hat{h}_{i,j,k} = & \frac{\qfrtr{\hat{\mathbf{G}}_{i,j,k}} + \boldsymbol{\mu}^T \hat{\mathbf{g}}_{i,j,k}}{2(i + j + k)}, \end{align}\] where \[\begin{align} \mathbf{G}_{i,j,k} = & \mathbf{A}_1 \left( h_{i-1,j,k} \mathbf{I}_n + \mathbf{G}_{i-1,j,k} \right) \\ & + \mathbf{A}_2 \left( h_{i,j-1,k} \mathbf{I}_n + \mathbf{G}_{i,j-1,k} \right) \\ & + \mathbf{A}_3 \left( h_{i,j,k-1} \mathbf{I}_n + \mathbf{G}_{i,j,k-1} \right) , \\ \mathbf{g}_{i,j,k} = & \left( \mathbf{G}_{i,j,k} - \mathbf{G}_{i-1,j,k} - \mathbf{G}_{i,j-1,k} - \mathbf{G}_{i,j,k-1} \right) \boldsymbol{\mu} \\ & - \left( h_{i-1,j,k} + h_{i,j-1,k} + h_{i,j,k-1} \right) \boldsymbol{\mu} \\ & + \mathbf{A}_1 \mathbf{g}_{i-1,j,k} + \mathbf{A}_2 \mathbf{g}_{i,j-1,k} + \mathbf{A}_3 \mathbf{g}_{i,j,k-1} , \\ \tilde{\mathbf{G}}_{i,j,k} = & \mathbf{A}_1 \left( \tilde{h}_{i-1,j,k} \mathbf{I}_n + \tilde{\mathbf{G}}_{i-1,j,k} \right) \\ & + \mathbf{A}_2 \left( \tilde{h}_{i,j-1,k} \mathbf{I}_n + \tilde{\mathbf{G}}_{i,j-1,k} \right) \\ & + \mathbf{A}_3 \left( \tilde{h}_{i,j,k-1} \mathbf{I}_n + \tilde{\mathbf{G}}_{i,j,k-1} \right) , \\ \tilde{\mathbf{g}}_{i,j,k} = & \left( \tilde{\mathbf{G}}_{i,j,k} - \tilde{\mathbf{G}}_{i,j-1,k} - \tilde{\mathbf{G}}_{i,j,k-1} \right) \boldsymbol{\mu} \\ & - \left( \tilde{h}_{i,j-1,k} + \tilde{h}_{i,j,k-1} \right) \boldsymbol{\mu} \\ & + \mathbf{A}_1 \tilde{\mathbf{g}}_{i-1,j,k} + \mathbf{A}_2 \tilde{\mathbf{g}}_{i,j-1,k} + \mathbf{A}_3 \tilde{\mathbf{g}}_{i,j,k-1} , \\ \hat{\mathbf{G}}_{i,j,k} = & \mathbf{A}_1 \left( \hat{h}_{i-1,j,k} \mathbf{I}_n + \hat{\mathbf{G}}_{i-1,j,k} \right) \\ & + \mathbf{A}_2 \left( \hat{h}_{i,j-1,k} \mathbf{I}_n + \hat{\mathbf{G}}_{i,j-1,k} \right) \\ & + \mathbf{A}_3 \left( \hat{h}_{i,j,k-1} \mathbf{I}_n + \hat{\mathbf{G}}_{i,j,k-1} \right) , \\ \hat{\mathbf{g}}_{i,j,k} = & \left( \hat{\mathbf{G}}_{i,j,k} + \hat{\mathbf{G}}_{i,j-1,k} + \hat{\mathbf{G}}_{i,j,k-1} \right) \boldsymbol{\mu} \\ & + \left( \hat{h}_{i,j-1,k} + \hat{h}_{i,j,k-1} \right) \boldsymbol{\mu} \\ & + \mathbf{A}_1 \hat{\mathbf{g}}_{i-1,j,k} + \mathbf{A}_2 \hat{\mathbf{g}}_{i,j-1,k} + \mathbf{A}_3 \hat{\mathbf{g}}_{i,j,k-1} , \\ \end{align}\] with the initial conditions \(h_{0,0,0} = \tilde{h}_{0,0,0} = \hat{h}_{0,0,0} = 1\) (as per Hillier et al. (2014), but not \(0\), as incorrectly printed in Bao and Kan (2013)), \(\mathbf{G}_{0,0,0} = \tilde{\mathbf{G}}_{0,0,0} = \hat{\mathbf{G}}_{0,0,0} = \mathbf{0}_{n \times n}\), and \(\mathbf{g}_{0,0,0} = \tilde{\mathbf{g}}_{0,0,0} = \hat{\mathbf{g}}_{0,0,0} = \mathbf{0}_n\). If any of the indices is negative, the term is treated as \(0\).

The recursion for \(d\) can be obtained by simply inserting \(\boldsymbol{\mu} = \mathbf{0}_n\) (and hence omitting \(\mathbf{g}\) altogether) in that for \(h\), and those for the two-argument versions are obtained by omitting any terms with \(\mathbf{A}_3\) or \(k - 1\) as an index from the corresponding three-argument versions.
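
To make the recursion concrete, here is a minimal R transcription of the two-matrix case for \(h\), obtained by dropping the \(\mathbf{A}_3\) terms as just described. It is for illustration only; h_two is an ad hoc name, and the package internals are far more efficient.

## Illustrative two-matrix recursion for h_{i,j}(A1, A2) with mean mu
## (the A_3 terms of the three-matrix recursion above are dropped)
h_two <- function(A1, A2, mu, m1, m2) {
    n <- nrow(A1)
    h <- matrix(0, m1 + 1, m2 + 1)           # h[i + 1, j + 1] = h_{i,j}
    G <- array(0, c(n, n, m1 + 1, m2 + 1))   # G_{i,j}
    g <- array(0, c(n, m1 + 1, m2 + 1))      # g_{i,j}
    h[1, 1] <- 1                             # h_{0,0} = 1; G and g start at zero
    for (i in 0:m1) for (j in 0:m2) {
        if (i == 0 && j == 0) next
        Gij <- matrix(0, n, n)
        gij <- numeric(n)
        if (i > 0) {   # contributions from index (i - 1, j)
            Gij <- Gij + A1 %*% (h[i, j + 1] * diag(n) + G[, , i, j + 1])
            gij <- gij + A1 %*% g[, i, j + 1] - G[, , i, j + 1] %*% mu -
                h[i, j + 1] * mu
        }
        if (j > 0) {   # contributions from index (i, j - 1)
            Gij <- Gij + A2 %*% (h[i + 1, j] * diag(n) + G[, , i + 1, j])
            gij <- gij + A2 %*% g[, i + 1, j] - G[, , i + 1, j] %*% mu -
                h[i + 1, j] * mu
        }
        gij <- gij + Gij %*% mu
        G[, , i + 1, j + 1] <- Gij
        g[, i + 1, j + 1] <- gij
        h[i + 1, j + 1] <- (sum(diag(Gij)) + sum(mu * gij)) / (2 * (i + j))
    }
    h
}

## Check: with mu = 0, h_{1,0} = tr(A1) / 2
A1 <- diag(1:3)
A2 <- diag(3:1)
h_two(A1, A2, mu = rep(0, 3), m1 = 2, m2 = 2)[2, 1]  # sum(1:3) / 2 = 3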

The recursion for \(\qfrdtk{\mathbf{A}}{\boldsymbol{\mu}}\) is (Bao and Kan, 2013; Hillier et al., 2014): \[\begin{equation} \tilde{d}_{k} = \frac{1}{2k} \sum_{i=1}^n \left(u_{k,i} + v_{k,i} \right), \end{equation}\] where \[\begin{align} u_{k,i} = & \lambda_i \left( \tilde{d}_{k-1} + u_{k-1,i} \right), \\ v_{k,i} = & \delta_i u_{k,i} + \lambda_i v_{k-1,i}, \end{align}\] with \(\lambda_i\) being the \(i\)th eigenvalue of \(\mathbf{A}\), \(\delta_i = \left( \mathbf{h}_i^T \boldsymbol{\mu} \right)^2\), \(\mathbf{h}_i\) being the eigenvector of \(\mathbf{A}\) corresponding to \(\lambda_i\), and the initial conditions are \(\tilde{d}_0 = 1\) and \(u_{0,i} = v_{0,i} = 0\) for all \(i\).

The recursion for \(a_{p,k} \left( \mathbf{A}, \boldsymbol{\mu} \right)\) is (Hillier et al., 2014, eqs. (31), (32), noting that \(w\) should be indexed by \(k\) as well): \[\begin{equation} a_{p,k} = \sum_{i=1}^n \delta_i w_{p,k,i}, \end{equation}\] where \[\begin{equation} w_{p,k,i} = \lambda_i \left( a_{p-1,k-1} + w_{p-1,k,i} \right), \end{equation}\] with \(a_{p,0} = \qfrdk[p]{\mathbf{A}}\) and \(w_{0,k,i} = 0\) for any \(k, i\).

The recursion for \(\qfrdk{\mathbf{A}}\) can be obtained either as that for \(\qfrdtk{\mathbf{A}}{\mathbf{0}_n}\) (where all \(v\)’s are \(0\)) or by dropping any terms with \(\mathbf{A}_2\) or \(j-1\) as an index from that for the two-argument version of \(d\) above.
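
For illustration, the following is a direct transcription of this one-matrix recursion (d_upto is an ad hoc name introduced here; the package's own implementations are d1_i() and its C++ equivalent):

## d_k(A) obtained as dtil_k(A, 0): all v's vanish and only u remains
d_upto <- function(A, m) {
    lambda <- eigen(A, symmetric = TRUE, only.values = TRUE)$values
    d <- numeric(m + 1)
    d[1] <- 1                      # d_0 = 1
    u <- numeric(length(lambda))   # u_{0,i} = 0
    for (k in seq_len(m)) {
        u <- lambda * (d[k] + u)   # u_{k,i} = lambda_i (d_{k-1} + u_{k-1,i})
        d[k + 1] <- sum(u) / (2 * k)
    }
    d
}

## d_0, ..., d_3 for A = diag(1:4); e.g., d_1 = tr(A) / 2 = 5
d_upto(diag(1:4), 3)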

Functions

The package is designed for evaluation of moments of ratios of quadratic forms using the recursive algorithms described above. The primary front-end functions are qfrm() and qfmrm(), which return a qfrm object containing terms in the truncated series and error bound when available. Simple print and plot methods are defined for the qfrm class, for quick inspection of evaluation results.

qfrm(): quadratic form ratio moment

This function is for evaluating moments of the simple ratios of quadratic forms, \(\qfrE \left( \frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q} \right)\), where \(\mathbf{x} \sim \qfrmvnorm{\boldsymbol{\mu}}{\boldsymbol{\Sigma}}\).

Usage

qfrm(A, B, p = 1, q = p, m = 100L,
     mu = rep.int(0, n), Sigma = diag(n),
     tol_zero = .Machine$double.eps * 100,
     tol_sing = tol_zero, ...)

A and B are the argument matrices; when one is missing, \(\mathbf{I}_n\) is inserted.

m determines the order of evaluation and is thus one of the most important arguments for the user to specify.

mu and Sigma specify the mean and covariance of \(\mathbf{x}\), so by default \(\mathbf{x} \sim \qfrmvnorm{\mathbf{0}_n}{\mathbf{I}_n}\).

tol_zero and tol_sing determine the numerical tolerances for assessing identity and singularity, respectively; they usually need not be changed except when handling edge cases.

Inside the function

When Sigma is provided and not \(\mathbf{I}_n\), this function calls itself recursively with A, B, and mu transformed as described above (and dropping Sigma).

As the evaluation is based on different series expressions depending on whether \(\mathbf{B} = \mathbf{I}_n\) or not and whether \(p\) is an integer or not, the actual calculation is done by one of the following four “internal” functions, for which qfrm() acts as a switcher:

| Function | Argument matrices | \(p\) | Expression | Error bound | Number of recursions |
|---|---|---|---|---|---|
| qfrm_ApIq_int() | \(\mathbf{B} = \mathbf{I}_n\) | integer | (5) | Exact* | \(\sim p\) |
| qfrm_ApIq_npi() | \(\mathbf{B} = \mathbf{I}_n\) | non-positive-integer | (1) (simplified when \(\boldsymbol{\mu} = \mathbf{0}_n\)) | Available only when \(\boldsymbol{\mu} = \mathbf{0}_n\) | \(\sim m\) when \(\boldsymbol{\mu} = \mathbf{0}_n\); \(\sim m^2 / 2\) otherwise |
| qfrm_ApBq_int() | \(\mathbf{B}\) arbitrary | integer | (2) | Available | \(\sim (p + 1) m\) |
| qfrm_ApBq_npi() | \(\mathbf{B}\) arbitrary | non-positive-integer | (1) | Unavailable | \(\sim m^2 / 2\) |

*: assuming accurate evaluation of the confluent hypergeometric function when \(\boldsymbol{\mu} \neq \mathbf{0}_n\)

These functions are exported to the package namespace, so they can be used directly if necessary.
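
For instance, assuming the worker functions take the same core arguments as qfrm() (a reasonable but unchecked assumption here), the switcher can be bypassed:

## Calling a worker directly (assuming qfrm()-like arguments)
nv <- 4
A <- diag(1:nv)
B <- diag(sqrt(nv:1))
qfrm_ApBq_int(A, B, p = 2, q = 2)  # what qfrm(A, B, p = 2) dispatches to above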

These internal functions do the following operations:

  1. Check structures of arguments (e.g., A, B being square, p, q being scalar)
  2. Symmetrize the argument matrices (A <- (A + t(A)) / 2; same for B)
  3. Rotate A, B, and mu with the eigenvectors of B, to
    • check for the moment existence conditions (above)
    • see whether A is co-diagonalized, in which case computation can be simplified
  4. Calculate \(d\), \(h\), or \(\tilde{h}\) up to the order m with an internal function
  5. Multiply the result with the appropriate coefficients in (1), (2) or (5) with an internal function
  6. In qfrm_ApBq_npi(), the same-order terms in the double series are summed up
  7. Conduct a rough check of numerical convergence
  8. Calculate the sequence of error bounds as in steps 4–6, when applicable
  9. Return a structured object including the series and error bound

By default, steps 4–8 are handled by internal C++ functions. Steps 7 and 8 can be suppressed if desired (by using check_convergence = “none” or FALSE and error_bound = FALSE, respectively).
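
For example, when only the point estimate is of interest, both checks can be turned off with the options just mentioned:

## Skip the convergence check and the error-bound evaluation
qfrm(diag(1:4), diag(sqrt(4:1)), p = 2,
     check_convergence = FALSE, error_bound = FALSE)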

qfmrm(): quadratic form multiple ratio moment

This function is for evaluating moments of the multiple ratios of quadratic forms, \(\qfrE \left( \frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q\left( \mathbf{x}^T \mathbf{D} \mathbf{x} \right)^r} \right)\), where \(\mathbf{x} \sim \qfrmvnorm{\boldsymbol{\mu}}{\boldsymbol{\Sigma}}\).

It works in a way very similar to qfrm(), just with the additional arguments D and r. The corresponding “internal” functions are:

| Function | Argument matrices | \(p\) | Expression | Error bound | Number of recursions |
|---|---|---|---|---|---|
| qfmrm_ApBIqr_int() | \(\mathbf{D} = \mathbf{I}_n\) | integer | (4) (simplified when \(\boldsymbol{\mu} = \mathbf{0}_n\)) | Available | \(\sim (p + 1) m\) when \(\boldsymbol{\mu} = \mathbf{0}_n\); \(\sim (p + 1) m^2 / 2\) otherwise |
| qfmrm_ApBIqr_npi() | \(\mathbf{D} = \mathbf{I}_n\) | non-positive-integer | (3) (simplified when \(\boldsymbol{\mu} = \mathbf{0}_n\)) | Unavailable | \(\sim m^2 / 2\) when \(\boldsymbol{\mu} = \mathbf{0}_n\); \(\sim m^3 / 6\) otherwise |
| qfmrm_IpBDqr_gen() | \(\mathbf{A} = \mathbf{I}_n\) | general | (4) simplified (when \(\boldsymbol{\mu} = \mathbf{0}_n\)) or (3) (otherwise) | Unavailable | \(\sim m^2 / 2\) when \(\boldsymbol{\mu} = \mathbf{0}_n\); \(\sim m^3 / 6\) otherwise |
| qfmrm_ApBDqr_int() | Arbitrary | integer | (4) | Unavailable | \(\sim (p + 1) m^2 / 2\) |
| qfmrm_ApBDqr_npi() | Arbitrary | non-positive-integer | (3) | Unavailable | \(\sim m^3 / 6\) |

Usage

qfmrm(A, B, D, p = 1, q = p/2, r = q, m = 100L,
      mu = rep.int(0, n), Sigma = diag(n),
      tol_zero = .Machine$double.eps * 100,
      tol_sing = tol_zero, ...)

Essentially the same as qfrm().

qfm_Ap_int(), qfpm_ABpq_int(), qfpm_ABDpqr_int(): quadratic form (product) moment

These functions are for evaluating moments of (products of) quadratic forms, \(\qfrE \left( \left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p \right)\), \(\qfrE \left( \left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p \left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q \right)\), and \(\qfrE \left( \left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p \left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q \left( \mathbf{x}^T \mathbf{D} \mathbf{x} \right)^r \right)\), where \(\mathbf{x} \sim \qfrmvnorm{\boldsymbol{\mu}}{\boldsymbol{\Sigma}}\), using the algorithm described in Hillier et al. (2014, secs. 3–4). The exponents must be positive integers.

In principle, the same algorithm can be applied to a product of more than three quadratic forms, but this package has not implemented functions for them.

As these expressions only involve finite series, all the results are exact. The functions return a qfpm object, for which a simple print method is defined.

Usage

qfpm_ABDpqr_int(A, B, D, p = 1, q = 1, r = 1,
                mu = rep.int(0, n), Sigma = diag(n),
                use_cpp = TRUE, cpp_method = "double",
                tol_zero = .Machine$double.eps * 100,
                tol_sing = tol_zero)

The other two functions have the same arguments, just dropping the inapplicable argument matrices and exponents.

cpp_method is a dummy argument and ignored in these functions.
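
As a quick sanity check, for \(\mathbf{x} \sim \qfrmvnorm{\mathbf{0}_n}{\mathbf{I}_n}\) the low-order product moments have simple closed forms, e.g., \(\qfrE \left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right) = \qfrtr{\mathbf{A}}\) and \(\qfrE \left( (\mathbf{x}^T \mathbf{A} \mathbf{x}) (\mathbf{x}^T \mathbf{B} \mathbf{x}) \right) = \qfrtr{\mathbf{A}} \qfrtr{\mathbf{B}} + 2 \qfrtr{\mathbf{A} \mathbf{B}}\) for symmetric \(\mathbf{A}\) and \(\mathbf{B}\), which the exact results should reproduce:

## Exact product moments vs. closed forms for x ~ N(0, I)
nv <- 4
A <- diag(1:nv)
B <- diag(sqrt(nv:1))
qfm_Ap_int(A, p = 1)               # should equal sum(diag(A)) = 10
qfpm_ABpq_int(A, B, p = 1, q = 1)
sum(diag(A)) * sum(diag(B)) + 2 * sum(diag(A %*% B))  # closed form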

rqfr(), rqfmr(), rqfp(): random number generation

These functions generate a random sample of the simple ratio \(\frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q}\), multiple ratio \(\frac{\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p} {\left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q\left( \mathbf{x}^T \mathbf{D} \mathbf{x} \right)^r}\), and product \(\left( \mathbf{x}^T \mathbf{A} \mathbf{x} \right)^p \left( \mathbf{x}^T \mathbf{B} \mathbf{x} \right)^q \left( \mathbf{x}^T \mathbf{D} \mathbf{x} \right)^r\), respectively, of quadratic forms in normal variables \(\mathbf{x} \sim \qfrmvnorm{\boldsymbol{\mu}}{\boldsymbol{\Sigma}}\).

They can be used for verifying results from the above functions, or as a Monte Carlo surrogate when numerical convergence cannot be obtained within reasonable computational memory and time.

Usage

rqfr(nit = 1000L, A, B, p = 1, q = p, mu, Sigma, use_cpp = TRUE)
rqfmr(nit = 1000L, A, B, D, p = 1, q = p/2, r = q, mu, Sigma, use_cpp = TRUE)
rqfp(nit = 1000L, A, B, D, p = 1, q = 1, r = 1, mu, Sigma, use_cpp = TRUE)

Similar to the regular random number generation functions, e.g., stats::rnorm(); the first argument nit is the desired size of the random sample (or number of iterations), followed by the other parameters.
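
For example, the exact moment from the first example above (26.66667) can be checked with a Monte Carlo estimate:

## Monte Carlo check of the first example above
nv <- 4
A <- diag(1:nv)
qfr_sample <- rqfr(10000, A, diag(nv), p = 2, q = 1)  # mu, Sigma omitted: standard normal
mean(qfr_sample)  # should be close to 26.66667, up to Monte Carlo error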

d1_i(), d2_*j_*(), d3_*jk_*(), etc.: recursion algorithms

The package has a number of internal functions that implement the recursive algorithms described above. They are usually not accessed by the user, but are documented herein to aid understanding of the package structure. Note also that, by default, the actual calculations are handled by C++ equivalents, which are not exposed to the user.

Arranged by the target quantity and the number of argument matrices, these are:

| Target | One matrix | Two matrices | Three matrices |
|---|---|---|---|
| \(d\) | d1_i() | d2_ij_*(), d2_pj_*() | d3_ijk_*(), d3_pjk_*() |
| \(\tilde{d}\) | dtil1_i_*() | dtil2_pq_*() | dtil3_pqr_*() |
| \(h\) | | h2_ij_*() | h3_ijk_*() |
| \(\tilde{h}\) | | htil2_pj_*() | htil3_pjk_*() |
| \(\hat{h}\) | | hhat2_pj_*() | hhat3_pjk_*() |
| \(a\) | a1_pk() | | |

The functions with *_ij_* or *_ijk_* calculate terms in double and triple infinite series, with the dimensions of the output determined by the argument m (typically used with non-integer \(p\); (1) or (3) above), whereas *_pj_* and *_pjk_* are for terms in series truncated along the first argument matrix (used with integral \(p\); (2) or (4)). The one-matrix functions cover either of these two cases, and dtil2_pq_*() and dtil3_pqr_*() assume finite series along all argument matrices.

All these functions, excepting d1_i() and a1_pk(), have two versions ending in *_m or *_v, which stand for the matrix and vector “methods”. The latter is for the situations where all the argument matrices are co-diagonalized by the same eigenvectors, in which case the above recursive algorithms reduce from full matrix arithmetic to element-wise operations of the eigenvalues (and the rotated mean vector), which is much less computationally intensive.

Most of these functions automatically scale the result to avoid numerical overflow, and (the log of) the scaling factors are appended as an attribute of the return value, so that the front-end functions can scale the results back to the original scale.

Using the functionality

Checking for numerical convergence

The argument tol_conv adjusts the tolerance used in assessing numerical convergence; by default (check_convergence = “relative”), a warning is thrown when the ratio of the last term in the series to its partial sum is larger than this tolerance, whose default is .Machine$double.eps ^ (1/4) (approximately 1.2e-4). Using a smaller value for this argument makes the check stricter and safeguards against accepting a poorly converged result, although it is up to the user’s discretion at which point the result is deemed satisfactory.
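
For instance, a stricter relative tolerance can be requested through tol_conv:

## Stricter convergence tolerance (the default is .Machine$double.eps ^ (1/4))
qfrm(diag(1:4), diag(sqrt(4:1)), p = 1/2, q = 1, m = 200,
     tol_conv = .Machine$double.eps ^ (1/2))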

Other options for the convergence check are controlled through the check_convergence argument; see the documentation of qfrm and qfmrm for details.

Computational cost

The computational cost of the recursive algorithms increases drastically with the order of evaluation \(m\). An approximate number of recursions required in each function is listed in the tables for the qfrm and qfmrm functions above; the computational memory and time increase at least proportionally to this value. Of course, the computational cost is also influenced by the number of variables \(n\).

In extremely large problems, therefore, it may be impossible to obtain satisfactory numerical convergence within a reasonable amount of time and memory. A potential practical alternative is a Monte Carlo approach using rqfr() or rqfmr().

## A large problem
nv <- 200
large_A <- diag(c(1000, rep.int(1, nv - 1)))
large_B <- diag(c(rep.int(1, nv - 1), 1000))
large_D <- diag((nv:1) ^ 2)

res <- qfmrm(large_A, large_B, large_D, 1, 1/2, 1/2, m = 500)
#> Warning in qfmrm_ApBDqr_int(A = A, B = B, D = D, p = p, q = q, r = r, m = m, : Last term is >1.2e-04 times as large as the series,
#>   suggesting non-convergence. Consider using larger m
res
#> 
#>  Moment of ratio of quadratic forms
#> 
#> Moment = 0.02519373
#> Error bound unavailable; recommended to inspect plot() of this object

plot(res) # Far from convergence


## This problem needs much larger m (>5000)
## taking computational time + memory
## (though still manageable with a regular machine in this particular case)

## Monte Carlo sample
MCres <- rqfmr(10000, large_A, large_B, large_D, 1, 1/2, 1/2)

## Monte Carlo 95% CI of the moment
mean(MCres) + sd(MCres) / sqrt(10000) * qt(c(0.025, 0.975), 10000 - 1)
#> [1] 0.02958917 0.03117620

References

Bao, Y. and Kan, R. (2013) On the moments of ratios of quadratic forms in normal random variables. Journal of Multivariate Analysis, 117, 229–245. doi:10.1016/j.jmva.2013.03.002.
Chikuse, Y. (1980) Invariant polynomials with matrix arguments and their applications. In: Gupta, R. P., (ed.), Multivariate Statistical Analysis. Amsterdam: North-Holland. pp. 53–68.
Chikuse, Y. (1987) Methods for constructing top order invariant polynomials. Econometric Theory, 3, 195–207. doi:10.1017/S026646660001029X.
Davis, A. W. (1980) Invariant polynomials with two matrix arguments, extending the zonal polynomials. In: Krishnaiah, P. R., (ed.), Multivariate Analysis—V. Amsterdam: North-Holland. pp. 287–299.
Hansen, T. F. and Houle, D. (2008) Measuring and comparing evolvability and constraint in multivariate characters. Journal of Evolutionary Biology, 21, 1201–1219. doi:10.1111/j.1420-9101.2008.01573.x.
Hillier, G. (2001) The density of a quadratic form in a vector uniformly distributed on the n-sphere. Econometric Theory, 17, 1–28. doi:10.1017/S026646660117101X.
Hillier, G., Kan, R. and Wang, X. (2009) Computationally efficient recursions for top-order invariant polynomials with applications. Econometric Theory, 25, 211–242. doi:10.1017/S0266466608090075.
Hillier, G., Kan, R. and Wang, X. (2014) Generating functions and short recursions, with applications to the moments of quadratic forms in noncentral normal vectors. Econometric Theory, 30, 436–473. doi:10.1017/S0266466613000364.
Mathai, A. M. and Provost, S. B. (1992) Quadratic Forms in Random Variables: Theory and Applications. New York, New York: Marcel Dekker.
Melo, D., Garcia, G., Hubbe, A., Assis, A. P. and Marroig, G. (2016) EvolQG—an R package for evolutionary quantitative genetics [version 3; referees: 2 approved, 1 approved with reservations]. F1000Research, 4, 925. doi:10.12688/f1000research.7082.3.
Muirhead, R. J. (1982) Aspects of Multivariate Statistical Theory. Hoboken, New Jersey: John Wiley & Sons. doi:10.1002/9780470316559.
Smith, M. D. (1989) On the expectation of a ratio of quadratic forms in normal variables. Journal of Multivariate Analysis, 31, 244–257. doi:10.1016/0047-259X(89)90065-1.
Smith, M. D. (1993) Expectations of ratios of quadratic forms in normal variables: Evaluating some top-order invariant polynomials. Australian Journal of Statistics, 35, 271–282. doi:10.1111/j.1467-842X.1993.tb01335.x.
Watanabe, J. (2023) Exact expressions and numerical evaluation of average evolvability measures for characterizing and comparing G matrices. Journal of Mathematical Biology, 86, 95. doi:10.1007/s00285-023-01930-8.