Xianglu Wang is currently a master's student at the School of Management, University of Science and Technology of China (USTC). He received his B.S. degree from USTC in 2019. His research mainly focuses on high-dimensional variable selection and inference.
Graphical Abstract
The process of recovering the precision matrix in the presence of additive and multiplicative measurement errors.
Abstract
It is well known that regression methods designed for clean data will lead to erroneous results if directly applied to corrupted data. Despite the recent methodological and algorithmic advances in Gaussian graphical model estimation, how to achieve efficient and scalable estimation under contaminated covariates is unclear. Here a new methodology called convex conditioned innovative scalable efficient estimation (COCOISEE) for Gaussian graphical models under both additive and multiplicative measurement errors is developed. It combines the strengths of the innovative scalable efficient estimation in the Gaussian graphical model and the nearest positive semidefinite matrix projection, thus enjoying stepwise convexity and scalability. Comprehensive theoretical guarantees are provided and the effectiveness of the proposed methodology is demonstrated through numerical studies.
Public Summary
We propose a new methodology, COCOISEE, to achieve scalable and interpretable estimation for Gaussian graphical models under both additive and multiplicative measurement errors.
The method is stepwise convex, computationally stable, efficient and scalable.
Both theoretical and simulation results verify the feasibility of our method.
With the advent of massive data in recent years, great attention has been paid to identifying relationships among multiple responses and predictors in various applications, including multi-task learning in machine learning[1–3], imaging genetics[4, 5], and genetic association studies[6, 7]. Specifically, in cancer genomic studies, some miRNAs are known to regulate the protein expression of various genes in cellular processes, and their dysregulation plays a crucial role in human cancer. Hence, investigating the relationship between miRNA expression and protein expression is of great importance for human cancer diagnosis[8, 9]. A standard approach for quantifying these relationships is to fit a univariate regression model for each response separately via least squares estimation. Although easy to implement, this approach ignores the dependence among the response variables and is not applicable to high-dimensional cases. Thus, it is necessary to develop statistical tools that account for the dependence structure of the responses in multi-response regression under high-dimensional settings.
Enormous effort has been devoted to variable selection and coefficient estimation in high-dimensional multi-response regression. One popular approach is the reduced-rank regression model, including several foundational works[10–12]. Building on this framework, several methods that adopt a latent-factor point of view have been proposed to estimate the unknown coefficient matrix[13, 14]. Another class of methods relies on different kinds of regularization[15–18]. In particular, one line of research on regularization methods exploits structural prior knowledge of the coefficient matrix, that is, the set of variables is assumed to be organized into several groups. We refer to Refs. [19–21] for more details on this kind of method.
To further quantify uncertainty, important progress has been made in high-dimensional multi-response settings. For instance, Greenlaw et al.[5] developed a hierarchical Bayesian model and constructed confidence intervals for the unknown coefficient matrix. More recently, by generalizing the low-dimensional projection estimator (LDPE)[22] for the univariate response case, Chevalier et al.[23] proposed the desparsified multi-task lasso and applied it to source imaging. However, when deriving the asymptotic distributions of the estimators, this approach requires the number of nonzero rows of the coefficient matrix to be s = o(\sqrt{n}/\log(p)) for p covariates and n samples. To relax this requirement, we adopt the two-step projection technique proposed by Li et al.[24], which treats important variables differently from the others and can greatly improve inference efficiency.
In this paper, we develop a new methodology for statistical inference in high-dimensional multi-task regression. By taking the group structure of the unknown coefficient matrix into consideration, the proposed estimator is constructed in a row-wise manner based on the two-step projection technique, which reduces the estimation bias induced by important signals. Under suitable conditions, we establish the asymptotic normality of the proposed two-step projection estimator and construct corresponding confidence intervals for all components of the unknown coefficient matrix. Moreover, the satisfactory numerical performance of the proposed method strongly supports the theoretical results.
The rest of the paper is organized as follows. Section 2 presents the model setting and the new inference procedure for high-dimensional multi-response regression. Theoretical properties are established in Section 3. Numerical results and a real data analysis are provided in Sections 4 and 5, respectively. We conclude this work and discuss possible future work in Section 6. The proofs of the main theoretical properties are relegated to the Appendix.
Notations. For any matrix {\boldsymbol{B}} = ( {{\boldsymbol{b}}^1}^{\top},\cdots, {{\boldsymbol{b}}^p}^{\top})^{\top} = ( {\boldsymbol{b}}_1,\cdots, {\boldsymbol{b}}_q) = (b_{i,j}) \in {\mathbb{R}}^{p \times q}, denote by {\boldsymbol{b}}^i and {\boldsymbol{b}}_j its ith row and jth column, respectively. We use \| {\boldsymbol{B}}\|_{\rm F} to denote the Frobenius norm of {\boldsymbol{B}}. Given any index set K, {\boldsymbol{B}}_{K} denotes the submatrix of {\boldsymbol{B}} consisting of the columns indexed by K. We write \| {\boldsymbol{B}}\|_{\ell_{2,1}} = \displaystyle \sum\limits_{i = 1}^{p} \| {\boldsymbol{b}}^i\| and \| {\boldsymbol{B}}\|_{\ell_{2,\infty}} = \max \limits_{1\leqslant i \leqslant p} \| {\boldsymbol{b}}^i\|, where \| {\boldsymbol{b}}^i\| = \left( {\displaystyle \sum\limits_{j = 1}^{q}|b_{i,j}|^2} \right)^{1/2} denotes the Euclidean norm of the vector {\boldsymbol{b}}^i. Denote by \mathrm{supp}( {\boldsymbol{B}}) = \{ i \in \{1,\cdots,p \} : {\boldsymbol{b}}^i \neq {\bf{0}} \} the set of indices of the nonzero rows of {\boldsymbol{B}}, with cardinality |\mathrm{supp}( {\boldsymbol{B}})|.
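For concreteness, these quantities can be evaluated in a few lines of numpy (an illustrative sketch; the variable names are ours):

```python
import numpy as np

B = np.arange(12.0).reshape(4, 3)                        # a 4 x 3 coefficient matrix

frob  = np.linalg.norm(B, "fro")                         # ||B||_F
l21   = np.linalg.norm(B, axis=1).sum()                  # ||B||_{2,1}: sum of row norms
l2inf = np.linalg.norm(B, axis=1).max()                  # ||B||_{2,infty}: largest row norm
supp  = np.flatnonzero(np.linalg.norm(B, axis=1) > 0)    # indices of nonzero rows
```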
2. Inference procedure via two-step projection estimator
2.1 Model setting
Consider the multi-task learning problem with the high-dimensional multi-response linear regression model
{\boldsymbol{y}} = {\boldsymbol{B}}^{\top} {\boldsymbol{x}} + {\boldsymbol{e}}, \qquad (1)
where {\boldsymbol{y}} = (y_{1}, \cdots, y_{q})^{\top} is a q-dimensional response vector, {\boldsymbol{B}} = ( {{\boldsymbol{b}}^1}^{\top},\cdots, {{\boldsymbol{b}}^p}^{\top})^{\top} = ( {\boldsymbol{b}}_1,\cdots, {\boldsymbol{b}}_q) = (b_{i,j}) \in {\mathbb{R}}^{p \times q} is the unknown coefficient matrix, {\boldsymbol{x}} = (x_{1}, \cdots , x_{p})^{\top} is the p-dimensional covariate vector, and {\boldsymbol{e}} is the q-dimensional error vector, which is independent of {\boldsymbol{x}}. Following Li et al.[25], the dimension p is allowed to be much larger than the sample size n, and the dimension q is regarded as fixed throughout this paper.
Suppose that we have n independent observations ( {\boldsymbol{x}}^{i}, {\boldsymbol{y}}^{i})_{i = 1}^n from ( {\boldsymbol{x}}, {\boldsymbol{y}}) in model (1). In matrix notation, the multi-response linear regression model (1) can be rewritten as
{\boldsymbol{Y}} = {\boldsymbol{X}} {\boldsymbol{B}} + {\boldsymbol{E}}, \qquad (2)
where {\boldsymbol{Y}} = ( {\boldsymbol{y}}_1,\cdots, {\boldsymbol{y}}_q) \in {\mathbb{R}}^{n \times q} is the response matrix, {\boldsymbol{X}} = ( {\boldsymbol{x}}_1,\cdots, {\boldsymbol{x}}_p) \in {\mathbb{R}}^{n \times p} is the design matrix, and {\boldsymbol{E}} = ( {\boldsymbol{e}}_1,\cdots, {\boldsymbol{e}}_q) = (e_{i,j})\in {\mathbb{R}}^{n \times q} is the random error matrix, which is independent of {\boldsymbol{X}}. Without loss of generality, we assume that the e_{i,j}'s are independent and identically distributed random variables with mean zero and variance \sigma^2.
We aim to construct confidence intervals for the coefficients in {\boldsymbol{B}}. In contrast to the classical assumption that {\boldsymbol{B}} is row-sparse, we allow {\boldsymbol{B}} to have a more complex sparsity structure. More specifically, we first establish the relationship between the distance correlation[25, 26] and the sparsity structure. Define the population quantity
\omega_{i} = \mathrm{dcorr}^2(x_{i}, {\boldsymbol{y}})
with 1\leqslant i \leqslant p for the effects caused by {\boldsymbol{b}}^i. Here, the distance correlation \mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) between two random vectors {\boldsymbol{u}} \in {\mathbb{R}}^{d_{u}} and {\boldsymbol{v}} \in {\mathbb{R}}^{d_{v}} is defined as
\mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) = \dfrac{\mathrm{dcov}( {\boldsymbol{u}}, {\boldsymbol{v}})}{\sqrt{\mathrm{dcov}( {\boldsymbol{u}}, {\boldsymbol{u}})\, \mathrm{dcov}( {\boldsymbol{v}}, {\boldsymbol{v}})}} \ \ {\text{with}} \ \ \mathrm{dcov}^2( {\boldsymbol{u}}, {\boldsymbol{v}}) = \displaystyle\int_{ {\mathbb{R}}^{d_u+d_v}} \dfrac{| f_{ {\boldsymbol{u}}, {\boldsymbol{v}}}( {\boldsymbol{t}}, {\boldsymbol{s}}) - f_{ {\boldsymbol{u}}}( {\boldsymbol{t}}) f_{ {\boldsymbol{v}}}( {\boldsymbol{s}})|^{2}}{c_{d_u} c_{d_v} \| {\boldsymbol{t}}\|^{1+d_u} \| {\boldsymbol{s}}\|^{1+d_v}}\, {\rm{d}} {\boldsymbol{t}}\, {\rm{d}} {\boldsymbol{s}},
where the constant c_{d} = \dfrac{\pi^{(1+d)/2}}{\Gamma((1+d)/2)} for d = d_{u}, d_{v} , and f_{ {\boldsymbol{u}}, {\boldsymbol{v}}}( {\boldsymbol{t}}, {\boldsymbol{s}}), f_{ {\boldsymbol{u}}}( {\boldsymbol{t}}) and f_{ {\boldsymbol{v}}}( {\boldsymbol{s}}) are the characteristic functions of ( {\boldsymbol{u}}, {\boldsymbol{v}}), {\boldsymbol{u}} and {\boldsymbol{v}}, respectively. Please refer to Ref. [26] for more details about the distance correlation.
Because \mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) = 0 if and only if {\boldsymbol{u}} and {\boldsymbol{v}} are independent, we establish the following relationship between the population quantity \omega_{i} and the structure of {\boldsymbol{B}}:
\omega_{i} = 0\Leftrightarrow {\boldsymbol{b}}^i = {\bf{0}}, \ \ \ \ 1\leqslant i \leqslant p.
Similar to Li et al.[25], we regard x_i as an important predictor if \omega_i\geqslant cn^{-\kappa}, where c>0 and 0\leqslant \kappa < 1/2 are constants. Moreover, we collect the indices of all important predictors in the set
S_{A} = \{ 1 \leqslant i \leqslant p : \omega_{i} \geqslant c n^{-\kappa} \}.
The indices of the remaining unimportant predictors are collected in S_A^c. Correspondingly, any {\boldsymbol{b}}^i with i\in S_A is regarded as an important signal in the form of a vector, while any {\boldsymbol{b}}^i with i\in S_A^c is treated as a weak signal.
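To make the screening quantity concrete, the following Python sketch computes the sample distance correlation through the usual pairwise-distance double-centering formulation of Ref. [26] and ranks predictors accordingly; the helper names dist_corr and dc_screen and the top-k retention rule are our illustrative choices, and the DC-SIS implementation of Ref. [25] may differ in such details.

```python
import numpy as np

def _centered_dist(a):
    # Pairwise Euclidean distance matrix, double-centered.
    d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
    return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

def dist_corr(u, v):
    # Sample distance correlation between samples u (n x d_u) and v (n x d_v).
    u = u.reshape(len(u), -1)
    v = v.reshape(len(v), -1)
    A, B = _centered_dist(u), _centered_dist(v)
    dcov2_uv = max((A * B).mean(), 0.0)
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2_uv / denom) if denom > 0 else 0.0

def dc_screen(X, Y, top_k):
    # Rank predictors by the squared sample distance correlation with the response.
    omega = np.array([dist_corr(X[:, i], Y) ** 2 for i in range(X.shape[1])])
    return np.argsort(omega)[::-1][:top_k]
```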
2.2 Two-step projection estimator
Borrowing ideas from Li et al.[24], the proposed estimator is constructed in a row-wise manner, i.e.,
{ \widehat{\boldsymbol{b}}}^j = \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{Y}}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}} \qquad (3)
for any j \in \{1,\cdots,p\}, where the two-step projection residual vector {\boldsymbol{z}}_j is defined by the following two-step procedure:
Step 1: Given a prescreened set \hat{S} for the identifiable signals in {\boldsymbol{B}}, we obtain the following residual vector by an exact orthogonalization of {\boldsymbol{x}}_k against {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}:
{\boldsymbol{\psi}}_{k}^{(j)} = ( {\boldsymbol{I}}_{n} - {\boldsymbol{P}}_{\hat{S}\backslash \{j\}}) {\boldsymbol{x}}_{k}, \quad k \in \hat{S}^{c} \cup \{j\}, \qquad (4)
where {\boldsymbol{P}}_{\hat{S}\backslash \{j\}} = {\boldsymbol{X}}_{\hat{S}\backslash \{j\}} ( {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}^{\top} {\boldsymbol{X}}_{\hat{S}\backslash \{j\}})^{-1} {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}^{\top} is the projection matrix onto the column space of {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}.
Step 2: Then, {\boldsymbol{z}}_{j} is constructed as the residual of the lasso regression of {\boldsymbol{\psi}}_{j}^{(j)} against {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)}. That is,
{\boldsymbol{z}}_{j} = {\boldsymbol{\psi}}_{j}^{(j)} - {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)}\, {\hat{\boldsymbol{\nu}}}_{j}(\lambda_{j}), \qquad (5)
where {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} is the matrix composed of the column vectors {\boldsymbol{\psi}}_{k}^{(j)} for k \in \hat{S}^{c}\backslash \{j\}, and {\hat{\boldsymbol{\nu}}}_{j}(\lambda_{j}) is the lasso estimator depending on the regularization parameter \lambda_{j}.
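The two steps can be sketched as follows (an illustrative Python sketch rather than the authors' implementation; scikit-learn's Lasso, whose penalty is scaled by 1/(2n), stands in for the lasso fit in Step 2, and the helper name two_step_residual is ours):

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_step_residual(X, j, S_hat, lam):
    # Two-step projection residual z_j for column j, given a prescreened set S_hat.
    n, p = X.shape
    S_set = set(S_hat)
    S  = sorted(S_set - {j})                              # \hat{S} \ {j}
    Sc = [k for k in range(p) if k not in S_set and k != j]  # \hat{S}^c \ {j}

    # Step 1: exact orthogonalization of every remaining column against X_S.
    if S:
        XS = X[:, S]
        P = XS @ np.linalg.solve(XS.T @ XS, XS.T)         # projection onto col(X_S)
        Psi = X - P @ X
    else:
        Psi = X.copy()

    # Step 2: relaxed orthogonalization -- lasso of psi_j on psi_{Sc}, keep the residual.
    psi_j, psi_Sc = Psi[:, j], Psi[:, Sc]
    nu = Lasso(alpha=lam, fit_intercept=False).fit(psi_Sc, psi_j).coef_
    return psi_j - psi_Sc @ nu                            # z_j
```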
Based on the above two-step strategy, the two-step projection residual vector {\boldsymbol{z}}_{j} satisfies the following two properties:
(a) It is strictly orthogonal to {\boldsymbol{X}}_{\hat{S} \backslash \{j\}}, which consists of important covariates.
(b) It is orthogonal in a relaxed sense to {\boldsymbol{X}}_{\hat{S}^{c} \backslash \{j\}}, which consists of the other covariates.
This kind of hybrid orthogonalization brings the benefit of reducing the estimation bias of { \widehat{\boldsymbol{b}}}^j. To see this, plugging model (2) into (3) yields
{ \widehat{\boldsymbol{b}}}^j = {\boldsymbol{b}}^j + \displaystyle \sum\limits_{k \neq j} \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{k}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}\, {\boldsymbol{b}}^k + \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{E}}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}} = {\boldsymbol{b}}^j + \displaystyle \sum\limits_{k \in \hat{S}^{c}\backslash \{j\}} \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{k}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}\, {\boldsymbol{b}}^k + \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{E}}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}.
It is easy to see from the above that the influence of the identifiable signals in \hat{S} \backslash \{j\} is eliminated from the bias term of the estimation error.
Denote \tau_{j} = \| {\boldsymbol{z}}_{j}\| / | {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}|. Then the confidence interval for b_{j,k} is constructed as
{\rm{CI}}_{j,k} = \left[ \hat{b}_{j,k} - \varPhi^{-1}(1-\alpha/2)\, \tau_{j} \hat{\sigma},\ \hat{b}_{j,k} + \varPhi^{-1}(1-\alpha/2)\, \tau_{j} \hat{\sigma} \right] \qquad (6)
for any j = 1,\cdots,p and k = 1,\cdots,q, where \varPhi denotes the standard normal distribution function, \alpha is the significance level, and \hat{\sigma} is a consistent estimator of \sigma. Following Chevalier et al.[23] and Reid et al.[27], in this paper we suggest the cross-validation-based variance estimator
\hat{\sigma}^2 = \dfrac{1}{q} \displaystyle \sum\limits_{t = 1}^{q} \dfrac{\| \hat{\boldsymbol{e}}_t\|^2}{n-\hat{s}},
where \hat {\boldsymbol{e}}_t is the t th column of { \hat{\boldsymbol{E}}} = {\boldsymbol{Y}}- {\boldsymbol{X}} \hat {\boldsymbol{B}}_{\rm CV}, with \hat {\boldsymbol{B}}_{\rm CV} being the multivariate group lasso estimator tuned by cross-validation, and \hat{s} = |{\rm supp}( \hat {\boldsymbol B}_{\rm CV})| is the number of nonzero rows of \hat {\boldsymbol{B}}_{\rm CV}.
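Assuming the forms stated above, the row-wise estimate, the noise factor \tau_{j}, the variance estimate, and the entrywise intervals can be computed as in the following sketch (helper names ours; the exact normalization of the variance estimator follows our reading of the Reid-et-al.-type residual estimator):

```python
import numpy as np
from scipy.stats import norm

def sigma_hat_cv(Y, X, B_cv):
    # Residual-based noise estimate from a cross-validated pilot fit B_cv (p x q).
    E_hat = Y - X @ B_cv
    s_hat = np.sum(np.linalg.norm(B_cv, axis=1) > 0)        # number of nonzero rows
    return np.sqrt(((E_hat ** 2).sum(axis=0) / (len(Y) - s_hat)).mean())

def tpe_row_ci(X, Y, j, z_j, sigma_hat, alpha=0.05):
    # Row estimate of b^j and (1 - alpha) confidence intervals for its q entries.
    denom = z_j @ X[:, j]
    b_hat = (z_j @ Y) / denom                               # row-wise estimate
    tau_j = np.linalg.norm(z_j) / abs(denom)                # noise factor
    half = norm.ppf(1 - alpha / 2) * tau_j * sigma_hat
    return b_hat, np.stack([b_hat - half, b_hat + half], axis=1)
```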
Note that a prescreened set \hat{S} for the identifiable signals in {\boldsymbol{B}} is necessary for the proposed method. In this paper, we suggest utilizing DC-SIS[25] to obtain a suitable prescreened set \hat{S}, and its sure screening property is provided in Proposition 1. The complete procedure, termed TPE, is summarized in Algorithm 1. However, we cannot guarantee that all truly important signals are retained in practice. In view of this potential scenario, we also propose a variant two-step estimator based on the self-bias correction idea.
Given a preliminary estimate such as the multivariate group lasso estimate {\hat {\boldsymbol{B}}}_{0} = ( {\hat {\boldsymbol{b}}}_{0}^{1\top},\cdots, {\hat {\boldsymbol{b}}}_{0}^{p\top})^{\top}, the variant two-step estimator {\hat {\boldsymbol{B}}}_{\rm{V}} = ( {\hat {\boldsymbol{b}}}_{\rm{V}}^{1\top},\cdots, {\hat {\boldsymbol{b}}}_{\rm{V}}^{p\top})^{\top} is defined row by row as
{\hat {\boldsymbol{b}}}_{\rm{V}}^{j} = {\hat {\boldsymbol{b}}}_{0}^{j} + \dfrac{ {\boldsymbol{z}}_{j}^{\top}( {\boldsymbol{Y}} - {\boldsymbol{X}} {\hat {\boldsymbol{B}}}_{0})}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}, \quad j = 1,\cdots,p.
If some truly important features are omitted in the prescreening step, the variant procedure can be a reliable choice since the magnitude of the bias term depends on \displaystyle \sum\nolimits_{k \in \hat{S}^{c}, k \neq j} \| {\boldsymbol{b}}^k - { \hat{\boldsymbol{b}}}_{0}^k\| instead of the diverging term \displaystyle \sum\nolimits_{k \in \hat{S}^{c}, k \neq j} \| {\boldsymbol{b}}^k\|.
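A corresponding sketch of the variant row estimate, assuming the self-bias-correction form given above, with a pilot estimate B0 such as the multivariate group lasso fit (helper name ours):

```python
import numpy as np

def variant_tpe_row(X, Y, j, z_j, B0):
    # Self-bias-corrected row estimate: start from the pilot row B0[j] and debias it.
    return B0[j] + (z_j @ (Y - X @ B0)) / (z_j @ X[:, j])
```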
Algorithm 1 TPE algorithm
Require: {\boldsymbol{X}} \in {\mathbb{R}}^{n \times p}, {\boldsymbol{Y}} \in {\mathbb{R}}^{n \times q}, a prescreened set \hat{S}, a significance level \alpha;
Ensure: {\rm{CI}}_{j,k} for j = 1,\cdots,p and k = 1,\cdots,q.
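The following driver sketches how the Require inputs are turned into the Ensure outputs; it reuses two_step_residual, sigma_hat_cv, and tpe_row_ci from the earlier sketches and is meant only to convey the flow of Algorithm 1, not the authors' exact implementation:

```python
import numpy as np

def tpe(X, Y, S_hat, B_cv, lam, alpha=0.05):
    # One pass of the TPE procedure: confidence intervals CI[j, k] for all j, k.
    sigma = sigma_hat_cv(Y, X, B_cv)            # noise level from a pilot fit
    p, q = X.shape[1], Y.shape[1]
    CI = np.empty((p, q, 2))
    for j in range(p):
        z_j = two_step_residual(X, j, S_hat, lam)
        _, CI[j] = tpe_row_ci(X, Y, j, z_j, sigma, alpha)
    return CI
```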
3. Theoretical properties
In this section, we provide statistical properties for the proposed method TPE. First, we need to clarify some conditions on the model.
Condition 1. The rows of {\boldsymbol{X}} are independent and identically distributed (i.i.d.) from N( {\bf{0}},{\boldsymbol{\varSigma}}_{X}), and the eigenvalues of {\boldsymbol{\varTheta}} = {\boldsymbol{\varSigma}}_{X}^{-1} = (\theta_{i,j}) are bounded within the interval [1/L,L] for some L \geqslant 1.
Condition 2. s^{*} = \max_{1 \leqslant j \leqslant p} s_{j} = o(n/\log (p)), where s_{j} = |\{ k \in \{1,\cdots,p\}: k \neq j, {\theta}_{j,k} \neq 0 \}| is the sparsity with respect to the rows of {\boldsymbol{\varTheta}}.
Condition 3. s = o(\max\{n/\log (p), n/s^{*} \}) with s = |S_{A}| , and \displaystyle \sum\nolimits_{k \in S_{A}^{c}} \| {\boldsymbol{b}}^k\| = o(1/\sqrt{\log (p)}).
Conditions 1 and 2 are the same as those in Ref. [24], which provide theoretical guarantees for estimation and prediction consistency in the two-step procedure. The first part of Condition 1 is a common Gaussian assumption, which can be relaxed to a general case such as sub-Gaussian. The second part of Condition 1 assumes that the eigenvalues of {\boldsymbol{\varTheta}} are well bounded from below and above, which is used for characterizing the identifiability of the design matrix {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} in Eq. (5) based on the bounded sign-restricted cone invertibility factor[28]. Condition 2 imposes a typical constraint on the maximum column sparsity of {\boldsymbol{\varTheta}}, which is needed to guarantee consistent estimation in the two-step procedure.
Condition 3 reflects the main contributions of the proposed method. The first part of this condition allows the number of identifiable signals to satisfy s = o(\max\{n/\log (p), n/s^{*} \}), which is much weaker than the requirement o(\sqrt{n}/\log (p)) in Ref. [23]. Moreover, the order of s can be much larger than o(n/\log (p)) if s^{*} \ll \log(p). The second part of Condition 3 imposes a constraint on the weak signals in S_{A}^{c}, which guarantees that their influence cannot break the inference procedure. Next, the following definition characterizes the theoretical properties of a suitable prescreened set \hat{S}.
Definition 1 (suitable set). \hat{S} is called a suitable prescreened set if it satisfies: (a) \hat{S} is independent of ( {\boldsymbol{X}}, {\boldsymbol{Y}}); (b) with probability at least 1-\epsilon_{n,p} , S_{A} \subset \hat{S} and |\hat{S}| = O(s) , where \epsilon_{n,p} is asymptotically vanishing.
Definition 1 is similar to the definition of an acceptable set in Ref. [24]. The first part of this definition is applied to eliminate the dependence between \hat{S} and the proposed estimator, which can be achieved by the common sample splitting technique. The second part of this definition assumes the sure screening property of \hat{S} , which can be justified by the following proposition.
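As one concrete reading of part (a), the screening step can be run on one half of the data and the inference procedure on the other half; the equal split and the reuse of dc_screen from the earlier sketch are our illustrative choices:

```python
import numpy as np

def split_and_screen(X, Y, top_k, seed=0):
    # Use one half of the samples to build S_hat, keep the other half for inference.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    first, second = idx[: len(X) // 2], idx[len(X) // 2 :]
    S_hat = dc_screen(X[first], Y[first], top_k)   # screening half
    return S_hat, X[second], Y[second]             # inference half
```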
Proposition 1. Under Condition 1, there exists a constant c_{0}>0 such that
for any j = 1,\ldots, p , where {\boldsymbol{\varLambda}}_{j,\cdot} denotes the j th row of {\boldsymbol{\varLambda}}.
Part b: Further assume that Conditions 1–3 hold. For some constants \epsilon > 0 and \delta \geqslant 1, let \lambda_{j} = (1+\epsilon)\sqrt{2\delta \log(p)/(n\theta_{j,j})} with j = 1,\ldots, p , where \lambda_{j} is the regularization parameter in Eq. (5). Then, for any j = 1,\ldots, p , with probability at least 1-\epsilon_{n,p}-o(p^{1-\delta}) , we have
The first part of Theorem 1 shows that the error \sqrt{n}( \hat {\boldsymbol{B}}- {\boldsymbol{B}}) can be decomposed into a Gaussian term {\boldsymbol{\varLambda}} with zero mean and a bias term {\boldsymbol{\varDelta}}. The second part shows that the bias term {\boldsymbol{\varDelta}} is asymptotically negligible with high probability. In particular, we prove that \hat{\theta}_{j,j}^{1/2} converges to {\theta}_{j,j}^{-1/2} with asymptotic probability one, which ensures the effectiveness of the inference in terms of the length of the confidence interval. It is worth noting that the product of the noise factor \tau_{j} and n^{1/2} converges to the same constant as that of the estimator in Ref. [23] with asymptotic probability one. Since the noise factor is proportional to the standard deviation of the estimator, the lengths of the confidence intervals for the two estimators are asymptotically equal. Based on this theorem, we immediately obtain the asymptotic properties of all elements of the proposed two-step projection estimator, as shown in the following corollary.
Corollary 1. Under the conditions in Theorem 1, with a given significance level \alpha , we further have
Note that Corollary 1 still holds if the noise level \sigma is replaced by a consistent estimator \hat{\sigma}. Therefore this corollary provides theoretical guarantees for the constructed confidence interval in (6).
4. Simulation studies
In this section, we conduct simulation studies to investigate the performance of the proposed method in comparison with the generalization of LDPE[22] to multi-task regression[23] (denoted by MLDPE for simplicity). The implementation of \hat {\boldsymbol{B}}_{\rm CV} with \ell_{2,1} group regularization is performed via the R package RMTL[29]. Based on \hat {\boldsymbol{B}}_{\rm CV}, we obtain a consistent estimator \hat{\sigma} of \sigma by applying the method described in Section 2.2. Moreover, DC-SIS[25] is utilized to obtain a suitable prescreened set \hat{S} of important predictors. To accurately control the size of \hat{S}, we compute least squares estimates on subsets of \hat{S} and then use a BIC-type criterion[30] to choose the best subset.
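The subset-selection step can be sketched as follows; the nested-subset search and the generic Gaussian BIC used here are our illustrative choices, and the exact criterion of Ref. [30] may differ:

```python
import numpy as np

def bic_subset(X, Y, ranked_idx):
    # Choose among nested subsets of the screened predictors (ordered by importance)
    # using least squares fits and a generic Gaussian BIC.
    n, q = Y.shape
    best, best_bic = ranked_idx[:1], np.inf
    for k in range(1, len(ranked_idx) + 1):
        S = ranked_idx[:k]
        XS = X[:, S]
        B_ls, *_ = np.linalg.lstsq(XS, Y, rcond=None)   # least squares on the subset
        rss = ((Y - XS @ B_ls) ** 2).sum()
        bic = n * q * np.log(rss / (n * q)) + k * q * np.log(n)
        if bic < best_bic:
            best, best_bic = S, bic
    return best
```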
Regarding the generation of the design matrix and the error matrix, we conduct simulations based on the following two models, under both the sparse setting and the approximately sparse setting.
Model 1: The rows of the design matrix {\boldsymbol{X}} are sampled as independent and identically distributed copies from N( {\bf{0}},{\boldsymbol{\varSigma}}_{X}), where {\boldsymbol{\varSigma}}_{X} = (0.5^{|i-j|})_{p\times p}. The entries of {\boldsymbol{E}} are independent and identically distributed N(0,\sigma^{2}) with \sigma = 1 .
Model 2: The entries of the design matrix {\boldsymbol{X}} = (x_{i,j}) are Bernoulli random variables with a success probability of 0.8. All the columns of {\boldsymbol{X}} are centered to have zero mean. The entries of {\boldsymbol{E}} are generated from a t-distribution with 10 degrees of freedom.
We set the number of responses q = 200 . Then, the sample size n , the number of predictors p and the coefficient matrices {\boldsymbol{B}} in different settings are constructed as follows:
Sparse setting: We set (n,p) = (100,200), (150,400), (200,800), respectively. Similar to Ref. [31], the elements in the first five rows of {\boldsymbol{B}} are drawn from a uniform distribution on [-5,-1]\cup[1,5] , and the elements in other rows are set to be 0.
Approximately sparse setting: We set (n,p) = (100,400), (150,600), \,(200,1000), respectively. Similar to Refs. [22, 24], the j th important signal satisfies \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ} with \lambda_{\rm univ} = \sqrt{2\log(p)/n} for j = 40,\;80,\;120,\;160,\;200, and \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ}/j^{2} for all other j . More specifically, to generate the j th row of {\boldsymbol{B}} with \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ}, we first generate a q -dimensional vector {\boldsymbol{v}} = (v_{1}, ..., v_{q}) with items v_{k} \sim U[0,1],\, k = 1,\ldots,q. Then, we normalize {\boldsymbol{v}} such that \| {\boldsymbol{v}}\| = 1. Finally, we set {\boldsymbol{b}}^j = 3\lambda_{\rm univ} {\boldsymbol{v}}.
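For concreteness, the sparse setting under Model 1 can be generated as in the following sketch (the random seed and helper name are ours):

```python
import numpy as np

def generate_sparse_model1(n, p, q, seed=0):
    # Model 1 design: rows i.i.d. N(0, Sigma) with Sigma_{ij} = 0.5^{|i-j|}; N(0,1) errors.
    rng = np.random.default_rng(seed)
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    # Sparse setting: first five rows of B drawn uniformly from [-5,-1] U [1,5], rest zero.
    B = np.zeros((p, q))
    mag = rng.uniform(1.0, 5.0, size=(5, q))
    B[:5] = mag * rng.choice([-1.0, 1.0], size=(5, q))
    E = rng.standard_normal((n, q))
    return X, B, X @ B + E
```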
The primary purpose of our simulation is to construct 95% confidence intervals for the regression coefficients in {\boldsymbol{B}}. In each setting, we run 100 replications and calculate the same three performance measures as in Ref. [24]: the average coverage probability for all regression coefficients (CPA), the average coverage probability for important coefficients (CPI), and the average length of the confidence intervals for all regression coefficients (Length).
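Given the p x q x 2 array of intervals produced above, the three measures can be computed directly (an illustrative sketch; important denotes the index set of important rows):

```python
import numpy as np

def performance(CI, B, important):
    # CPA / CPI: coverage over all (or important) coefficients; Length: average CI width.
    covered = (CI[..., 0] <= B) & (B <= CI[..., 1])
    cpa = covered.mean()
    cpi = covered[important].mean()
    length = (CI[..., 1] - CI[..., 0]).mean()
    return cpa, cpi, length
```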
Tables 1 and 2 summarize the results for the two methods in different settings. Clearly, the results in the sparse setting and the approximately sparse setting are similar. In the Gaussian settings, it can be seen from CPA (and CPI) that the average coverage probabilities for all regression coefficients (and for important coefficients) of the proposed method are approximately 95%, whereas those of MLDPE deviate slightly from 95%. In terms of Length, the average lengths of the confidence intervals for the two methods are roughly the same. In the non-Gaussian settings, the performance of both TPE and MLDPE tends to worsen.
Table 1. Comparison of performance measures for the two methods in the sparse setting.
In each setting, we further set \hat{\sigma} = 1 to calculate performance measures for comparison. We can see that the performance of the proposed method remains stable, while the performance of MLDPE fluctuates slightly. In summary, the proposed method outperforms MLDPE in both the sparse setting and the approximately sparse setting.
5. Application to TCGA-OV data
The Cancer Genome Atlas (TCGA) is a cancer genomics program incorporating clinical data on human cancers and tumor subtypes, including aberrations in gene expression, epigenetics (miRNAs, methylation), and protein expression. In this section, we apply our methodology to the ovarian serous cystadenocarcinoma (TCGA-OV) data downloaded from the TCGA website①. We regard the miRNA expression quantification obtained by the miRNA-seq experimental strategy as predictors and the protein expression quantification obtained by the reversed-phase protein array technique as responses. Our goal is to explore the regulatory effect of miRNAs on protein expression in ovarian cancer.
After preliminary data processing, the number of miRNAs was reduced to 1530 and the number of proteins to 216. We then utilized the DC-SIS method to perform feature screening on the predictors, selected the top 400 important miRNAs as predictors, and chose the 50 proteins with the largest variances as response variables. Finally, we obtained n = 300 common samples with p = 400 miRNAs as predictors and q = 50 proteins as responses. It is worth noting that both predictors and responses are centered and normalized to have zero mean and a common \ell_{2} -norm \sqrt{n} .
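The centering and normalization step amounts to the following transformation of each column (an illustrative sketch assuming no constant columns):

```python
import numpy as np

def center_and_scale(M):
    # Center each column and rescale it to have Euclidean norm sqrt(n).
    M = M - M.mean(axis=0)
    return np.sqrt(len(M)) * M / np.linalg.norm(M, axis=0)
```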
By applying the proposed method, we obtain 95% confidence intervals for all unknown coefficients. The top 45 important miRNAs thus selected are listed in Table 3; the 32 highlighted in bold are also chosen by MLDPE. Here, importance is measured by the magnitude of the sum of the absolute values of the estimated coefficients over all 50 proteins. These selected miRNAs play important regulatory roles in cancer. For example, compared with normal ovaries, several of them are dysregulated in ovarian cancer: among the selected miRNAs, hsa-mir-182, hsa-mir-200a, hsa-mir-223, and hsa-mir-16 are upregulated, whereas hsa-mir-432, hsa-mir-493, hsa-mir-9, and hsa-mir-377 are downregulated[32]. Owing to this dysregulation, some of them, including hsa-mir-214 and hsa-mir-21, could potentially be used as diagnostic biomarkers[33].
Table 3. The 45 miRNAs selected by TPE; those also selected by MLDPE are highlighted in bold.
Experimental results show that the average length of the confidence intervals obtained by the TPE method is 0.4372, while the MLDPE method yields an average length of 0.4136; the average lengths of the two methods are thus at about the same level. The top-ranked miRNA chosen by both TPE and MLDPE is hsa-mir-486-2, which has also been identified as a potential biomarker for lung adenocarcinoma[34]. For the unknown coefficients of hsa-mir-486-2, Fig. 1 displays their estimates and the corresponding 95% confidence intervals over all 50 proteins, which further supports the important regulatory role of hsa-mir-486-2.
Figure 1. Estimates of the unknown coefficients of miRNA hsa-mir-486-2 (red squares for TPE and black dots for MLDPE) and the corresponding 95% confidence intervals (obtained by TPE) over all 50 proteins.
In this paper, we have developed a new inference methodology based on the two-step projection estimator (TPE) for high-dimensional multi-task regression. The proposed estimator is constructed in a row-wise manner, which reduces the estimation bias induced by important signals. In addition, we provide rigorous theoretical guarantees for our method, including asymptotic normality and the corresponding confidence intervals. Moreover, the numerical results indicate that the proposed method works quite well. In particular, we apply this approach to an ovarian cancer dataset and identify several miRNAs that are closely associated with protein expression levels; these miRNAs can potentially serve as biomarkers in disease research, aiding in the diagnosis of ovarian cancer.
It would be interesting to extend our method to more general settings, such as multi-response linear regression models with measurement errors and generalized linear models. It is also of interest to establish a connection between the two-step projection technique and the partially penalized procedure in high-dimensional multi-response settings. These generalizations are left for future research.
Conflict of Interest
The authors declare that they have no conflict of interest.
[1]
Baselmans B M, Jansen R, Ip H F, et al. Multivariate genome-wide analyses of the well-being spectrum. Nature Genetics,2019, 51 (3): 445–451. DOI: 10.1038/s41588-018-0320-8
[2]
Yang K, Lee L F. Identification and QML estimation of multivariate and simultaneous equations spatial autoregressive models. Journal of Econometrics,2017, 196 (1): 196–214. DOI: 10.1016/j.jeconom.2016.04.019
[3]
Zhu X, Huang D, Pan R, et al. Multivariate spatial autoregressive model for large scale social networks. Journal of Econometrics,2020, 215 (2): 591–606. DOI: 10.1016/j.jeconom.2018.11.018
[4]
Han F, Liu H. Optimal rates of convergence for latent generalized correlation matrix estimation in transelliptical distribution. arXiv: 1305.6916, 2013.
[5]
Rubinstein M. Markowitz’s “portfolio selection”: A fifty-year retrospective. The Journal of Finance,2002, 57 (3): 1041–1045. DOI: 10.1111/1540-6261.00453
[6]
Wegkamp M, Zhao Y. Adaptive estimation of the copula correlation matrix for semiparametric elliptical copulas. Bernoulli,2016, 22 (2): 1184–1226. DOI: 10.3150/14-BEJ690
[7]
Fan J, Han F, Liu H. Challenges of big data analysis. National Science Review,2014, 1 (2): 293–314. DOI: 10.1093/nsr/nwt032
[8]
Cai T, Liu W, Luo X. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association,2011, 106 (494): 594–607. DOI: 10.1198/jasa.2011.tm10155
[9]
Tokuda T, Goodrich B, van Mechelen I, et al. Visualizing distributions of covariance matrices. New York: Columbia University, 2011.
[10]
Fan J, Peng H. Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics,2004, 32 (3): 928–961. DOI: 10.1214/009053604000000256
[11]
Yuan M, Lin Y. Model selection and estimation in the Gaussian graphical model. Biometrika,2007, 94 (1): 19–35. DOI: 10.1093/biomet/asm018
[12]
Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical Lasso. Biostatistics,2008, 9 (3): 432–441. DOI: 10.1093/biostatistics/kxm045
[13]
Banerjee O, El Ghaoui L, d’Aspremont A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research,2008, 9: 485–516. DOI: 10.5555/1390681.1390696
[14]
Meinshausen N, Bühlmann P. High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics,2006, 34 (3): 1436–1462. DOI: 10.1214/009053606000000281
[15]
Wille A, Zimmermann P, Vranova E, et al. Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biology,2004, 5 (11): R92. DOI: 10.1186/gb-2004-5-11-r92
[16]
Rothman A J, Bickel P J, Levina E, et al. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics,2008, 2: 494–515. DOI: 10.1214/08-EJS176
[17]
Lam C, Fan J. Sparsistency and rates of convergence in large covariance matrix estimation. The Annals of Statistics,2009, 37 (6B): 4254–4278. DOI: 10.1214/09-AOS720
[18]
Yuan M. High dimensional inverse covariance matrix estimation via linear programming. The Journal of Machine Learning Research,2010, 11: 2261–2286. DOI: 10.5555/1756006.1859930
[19]
Liu W, Luo X. High-dimensional sparse precision matrix estimation via sparse column inverse operator. arXiv: 1203.3896, 2012.
[20]
Sun T, Zhang C H. Sparse matrix inversion with scaled Lasso. The Journal of Machine Learning Research,2013, 14 (1): 3385–3418. DOI: 10.5555/2567709.2567771
[21]
Fan Y, Lv J. Innovated scalable efficient estimation in ultra-large Gaussian graphical models. The Annals of Statistics,2016, 44 (5): 2098–2126. DOI: 10.1214/15-AOS1416
[22]
Bickel P, Ritov Y. Efficient estimation in the errors in variables model. The Annals of Statistics,1987, 15 (2): 513–540. DOI: 10.1214/aos/1176350358
[23]
Ma Y, Li R. Variable selection in measurement error models. Bernoulli,2010, 16 (1): 274–300. DOI: 10.3150/09-bej205
[24]
Liang H, Li R. Variable selection for partially linear models with measurement errors. Journal of the American Statistical Association,2009, 104 (485): 234–248. DOI: 10.1198/jasa.2009.0127
[25]
Städler N, Bühlmann P. Missing values: Sparse inverse covariance estimation and an extension to sparse regression. Statistics and Computing,2012, 22 (1): 219–235. DOI: 10.1007/s11222-010-9219-7
[26]
Loh P L, Wainwright M J. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. Advances in Neural Information Processing Systems,2012, 40 (3): 1637–1664. DOI: 10.1214/12-AOS1018
[27]
Belloni A, Rosenbaum M, Tsybakov A B. Linear and conic programming estimators in high dimensional errors-in-variables models. Journal of the Royal Statistical Society: Series B (Statistical Methodology),2017, 79 (3): 939–956. DOI: 10.1111/rssb.12196
[28]
Datta A, Zou H. Cocolasso for high-dimensional error-in-variables regression. The Annals of Statistics,2017, 45 (6): 2400–2426. DOI: 10.1214/16-AOS1527
[29]
Tibshirani R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological),1996, 58 (1): 267–288. DOI: 10.1111/j.2517-6161.1996.tb02080.x
[30]
Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association,2001, 96 (456): 1348–1360. DOI: 10.1198/016214501753382273
[31]
Zou H, Hastie T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology),2005, 67 (2): 301–320. DOI: 10.1111/j.1467-9868.2005.00503.x
[32]
Zou H. The adaptive Lasso and its oracle properties. Journal of the American Statistical Association,2006, 101 (476): 1418–1429. DOI: 10.1198/016214506000000735
[33]
Candes E, Tao T. The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics,2007, 35 (6): 2313–2351. DOI: 10.1214/009053606000001523
[34]
Bickel P J, Ritov Y, Tsybakov A B. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics,2009, 37 (4): 1705–1732. DOI: 10.1214/08-AOS620
[35]
Zhao P, Yu B. On model selection consistency of Lasso. The Journal of Machine Learning Research,2006, 7: 2541–2563. DOI: 10.5555/1248547.1248637
[36]
Wainwright M J. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory,2009, 55 (5): 2183–2202. DOI: 10.1109/TIT.2009.2016018
[37]
Buldygin V V, Kozachenko Yu V. Metric Characterization of Random Variables and Random Processes. Providence, RI: American Mathematical Society, 2000.
[38]
Sun T, Zhang C H. Scaled sparse linear regression. Biometrika,2012, 99 (4): 879–898. DOI: 10.1093/biomet/ass043
[39]
Ren Z, Sun T, Zhang C H, et al. Asymptotic normality and optimalities in estimation of large Gaussian graphical models. The Annals of Statistics,2015, 43 (3): 991–1026. DOI: 10.1214/14-AOS1286
[40]
Bickel P J, Levina E. Regularized estimation of large covariance matrices. The Annals of Statistics,2008, 36 (1): 199–227. DOI: 10.1214/009053607000000758
[41]
Bickel P J, Levina E. Covariance regularization by thresholding. The Annals of Statistics,2008, 36 (6): 2577–2604. DOI: 10.1214/08-AOS600