Yu Wang is currently a graduate student at the University of Science and Technology of China. His research mainly focuses on federated learning and statistical machine learning.
Wenquan Cui is currently an Associate Professor at the University of Science and Technology of China (USTC). He received his Ph.D. degree in Statistics from USTC in 2004. His research mainly focuses on survival analysis, high-dimensional statistical inference, and statistical machine learning.
Jianjun Xu received his Ph.D. degree in Statistics from the University of Science and Technology of China in 2022. His research mainly focuses on functional data analysis.
Graphical Abstract
An online confidence interval estimation method called separated plug-in via rescaled federated averaging.
Abstract
From a statistical viewpoint, it is essential to perform statistical inference in federated learning to understand the underlying data distribution. Due to the heterogeneity in the number of local iterations and in local datasets, traditional statistical inference methods are not directly applicable in federated learning. This paper studies how to construct confidence intervals for federated heterogeneous optimization problems. We introduce the rescaled federated averaging estimate and prove its consistency. Focusing on confidence interval estimation, we establish the asymptotic normality of the parameter estimate produced by our algorithm and show that the asymptotic covariance is inversely proportional to the client participation rate. We propose an online confidence interval estimation method called separated plug-in via rescaled federated averaging, which constructs valid confidence intervals online even when the number of local iterations differs across clients. Since clients and local datasets vary, such heterogeneity in the number of local iterations is common, so confidence interval estimation for federated heterogeneous optimization problems is of great significance.
Public Summary
We develop online statistical inference in federated learning for heterogeneous optimization.
We propose an online confidence interval estimation method, separated plug-in via rescaled federated averaging.
We establish asymptotic normality and show that the asymptotic covariance is inversely proportional to the client participation rate.
1. Introduction
With the advent of massive data in recent years, great attention has been given to identifying relationships among multiple responses and predictors in various applications, including multi-task learning in machine learning[1–3], imaging genetics[4, 5], and genetic association[6, 7]. Specifically, in cancer genomic studies, some miRNAs are known to regulate the protein expression of various genes in cellular processes, and their dysregulation plays a crucial role in human cancer. Hence, investigating the relationship between miRNA expression and protein expression is of great importance for human cancer diagnosis[8, 9]. A standard approach for quantifying these relationships is to fit a univariate regression model for each response separately via least squares estimation. Although easy to implement, this method ignores the dependence among response variables and is not applicable to high-dimensional cases. Thus, it is necessary to develop statistical tools that account for the dependence structure of responses in multi-response regression under high-dimensional settings.
Enormous effort has been devoted to variable selection and coefficient estimation in high-dimensional multi-response regression. One popular approach is the reduced rank regression model, including some foundational works[10–12]. Based on this framework, several methods that take a latent factor point of view have been proposed to estimate the unknown coefficient matrix[13, 14]. Another class of methods relies on different kinds of regularization[15–18]. Specifically, a line of research on regularization methods exploits structural prior knowledge of the coefficient matrix, that is, the set of variables is assumed to be structured into several groups. Please refer to Refs. [19–21] for more details about this kind of method.
To further quantify uncertainty, some important progress has been made in high-dimensional multi-response settings. For instance, Greenlaw et al.[5] developed a hierarchical Bayesian model and constructed confidence intervals for the unknown coefficient matrix. More recently, by generalizing low-dimensional projection estimation (LDPE)[22] in the univariate response case, Chevalier et al.[23] proposed the desparsified multi-task lasso method and applied it to source imaging. However, when deriving the asymptotic distributions of estimators, this approach requires the number of nonzero rows of the coefficient matrix to be s = o(\sqrt{n}/\log (p)) for p covariates and n samples. To relax this requirement, we adopt the two-step projection technique proposed by Li et al.[24], which treats important variables and the other variables differently and can greatly improve inference efficiency.
In this paper, we aim to develop a new methodology for statistical inference in the setting of high-dimensional multi-task regression. By taking the group structure of the unknown coefficient matrix into consideration, the proposed estimator is constructed in a row-wise manner based on the two-step projection technique, which has the benefit of reducing the estimation bias induced by the important signals. Under suitable conditions, we establish the asymptotic normality of the proposed two-step projection estimator along with corresponding confidence intervals for all components of the unknown coefficient matrix. Moreover, the satisfactory numerical performance of the proposed method strongly supports the theoretical results.
The rest of the paper is organized as follows. Section 2 presents the model setting and the new inference procedure for high-dimensional multi-response regression. Theoretical properties are established in Section 3. Numerical results and a real data analysis are provided in Sections 4 and 5, respectively. We conclude this work and discuss possible future work in Section 6. The proofs of the main theoretical results are relegated to the Appendix.
Notations. For any matrix {\boldsymbol{B}} = ( {{\boldsymbol{b}}^1}^{\top},\cdots, {{\boldsymbol{b}}^p}^{\top})^{\top} = ( {\boldsymbol{b}}_1,\cdots, {\boldsymbol{b}}_q) = (b_{i,j}) \in {\mathbb{R}}^{p \times q}, denote by {\boldsymbol{b}}^i and {\boldsymbol{b}}_j its i th row and j th column, respectively. We use \| {\boldsymbol{B}}\|_{F} to denote the Frobenius norm of the matrix {\boldsymbol{B}}. Given any index set K , {\boldsymbol{B}}_{K} denotes the submatrix of {\boldsymbol{B}} consisting of the columns in K . We write \| {\boldsymbol{B}}\|_{\ell_{2,1}} = \displaystyle \sum\limits_{i = 1}^{p} \| {\boldsymbol{b}}^i\| and \| {\boldsymbol{B}}\|_{\ell_{2,\infty}} = \max \limits_{1\leqslant i \leqslant p} \| {\boldsymbol{b}}^i\|, where \| {\boldsymbol{b}}^i\| = \left( {\displaystyle \sum\limits_{j = 1}^{q}|b_{i,j}|^2} \right)^{1/2} denotes the Euclidean norm of the vector {\boldsymbol{b}}^i. Denote by \mathrm{supp}( {\boldsymbol{B}}) = \{ i \in \{1,\cdots,p \} : {\boldsymbol{b}}^i \neq {\bf{0}} \} the set of nonzero rows of {\boldsymbol{B}}, with size |\mathrm{supp}( {\boldsymbol{B}})|.
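To make the notation concrete, the following minimal numpy sketch (the function name is ours, purely for illustration) computes the row norms \| {\boldsymbol{b}}^i\|, the \ell_{2,1} and \ell_{2,\infty} norms, the Frobenius norm, and \mathrm{supp}( {\boldsymbol{B}}) for a given coefficient matrix.

```python
import numpy as np

def group_norms(B):
    """Row-wise norms and matrix norms used in the Notations paragraph."""
    row_norms = np.linalg.norm(B, axis=1)        # ||b^i|| for each row i
    l21 = row_norms.sum()                        # ||B||_{l_{2,1}}
    l2_inf = row_norms.max()                     # ||B||_{l_{2,infty}}
    fro = np.linalg.norm(B, "fro")               # Frobenius norm
    support = np.flatnonzero(row_norms > 0)      # supp(B): indices of nonzero rows
    return l21, l2_inf, fro, support
```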
2. Inference procedure via two-step projection estimator
2.1 Model setting
Consider the multi-task learning problem with the high-dimensional multi-response linear regression model
{\boldsymbol{y}} = {\boldsymbol{B}}^{\top} {\boldsymbol{x}} + {\boldsymbol{e}}, \qquad (1)
where {\boldsymbol{y}} = (y_{1}, \cdots, y_{q})^{\top} is a q -dimensional response vector, {\boldsymbol{B}} = ( {{\boldsymbol{b}}^1}^{\top},\cdots, {{\boldsymbol{b}}^p}^{\top})^{\top} = ( {\boldsymbol{b}}_1,\cdots, {\boldsymbol{b}}_q) = (b_{i,j}) \in {\mathbb{R}}^{p \times q} is the unknown coefficient matrix, {\boldsymbol{x}} = (x_{1}, ... , x_{p})^{\top} is the p -dimensional covariate vector, and {\boldsymbol{e}} is the q -dimensional error vector, which is independent of {\boldsymbol{x}}. Following Li et al.[25], the dimension p is allowed to be much larger than the sample size n , and the dimension q is regarded as a fixed number in this paper.
Suppose we have n independent observations ( {\boldsymbol{x}}^{i}, {\boldsymbol{y}}^{i})_{i = 1}^n from ( {\boldsymbol{x}}, {\boldsymbol{y}}) in model (1). Using matrix notation, the multi-response linear regression model (1) can be rewritten as
{\boldsymbol{Y}} = {\boldsymbol{X}} {\boldsymbol{B}} + {\boldsymbol{E}}, \qquad (2)
where {\boldsymbol{Y}} = ( {\boldsymbol{y}}_1,\cdots, {\boldsymbol{y}}_q) \in {\mathbb{R}}^{n \times q} is the response matrix, {\boldsymbol{X}} = ( {\boldsymbol{x}}_1,\cdots, {\boldsymbol{x}}_p) \in {\mathbb{R}}^{n \times p} is the design matrix, and {\boldsymbol{E}} = ( {\boldsymbol{e}}_1,\cdots, {\boldsymbol{e}}_q) = (e_{i,j})\in {\mathbb{R}}^{n \times q} is the random error matrix that is independent of {\boldsymbol{X}}. Without loss of generality, we assume that e_{i,j} s are independent and identically distributed random variables with mean zero and variance \sigma^2 .
We aim to construct confidence intervals for the coefficients in {\boldsymbol{B}}. Different from the classical assumption that {\boldsymbol{B}} is row-sparse, we allow {\boldsymbol{B}} to have a more complex sparsity structure. More specifically, we first establish the relationship between the distance correlation[25, 26] and sparsity structure. Define the population quantity
with 1\leqslant i \leqslant p for the effects caused by {\boldsymbol{b}}^i. Here, the distance correlation \mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) between two random vectors {\boldsymbol{u}} \in {\mathbb{R}}^{d_{u}} and {\boldsymbol{v}} \in {\mathbb{R}}^{d_{v}} is defined as
\mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) = \dfrac{\mathrm{dcov}( {\boldsymbol{u}}, {\boldsymbol{v}})}{\sqrt{\mathrm{dcov}( {\boldsymbol{u}}, {\boldsymbol{u}})\, \mathrm{dcov}( {\boldsymbol{v}}, {\boldsymbol{v}})}}, \quad \mathrm{dcov}^2( {\boldsymbol{u}}, {\boldsymbol{v}}) = \int_{ {\mathbb{R}}^{d_{u}+d_{v}}} \dfrac{\left| f_{ {\boldsymbol{u}}, {\boldsymbol{v}}}( {\boldsymbol{t}}, {\boldsymbol{s}}) - f_{ {\boldsymbol{u}}}( {\boldsymbol{t}}) f_{ {\boldsymbol{v}}}( {\boldsymbol{s}}) \right|^2}{c_{d_{u}} c_{d_{v}} \| {\boldsymbol{t}}\|^{1+d_{u}} \| {\boldsymbol{s}}\|^{1+d_{v}}} \, {\rm d} {\boldsymbol{t}} \, {\rm d} {\boldsymbol{s}},
where the constant c_{d} = \dfrac{\pi^{(1+d)/2}}{\Gamma((1+d)/2)} for d = d_{u}, d_{v} , and f_{ {\boldsymbol{u}}, {\boldsymbol{v}}}( {\boldsymbol{t}}, {\boldsymbol{s}}), f_{ {\boldsymbol{u}}}( {\boldsymbol{t}}) and f_{ {\boldsymbol{v}}}( {\boldsymbol{s}}) are the characteristic functions of ( {\boldsymbol{u}}, {\boldsymbol{v}}), {\boldsymbol{u}} and {\boldsymbol{v}}, respectively. Please refer to Ref. [26] for more details about the distance correlation.
Because \mathrm{dcorr}( {\boldsymbol{u}}, {\boldsymbol{v}}) = 0 if and only if {\boldsymbol{u}} and {\boldsymbol{v}} are independent, we establish the following relationship between the population quantity \omega_{i} and the structure of {\boldsymbol{B}}:
\omega_{i} = 0\Leftrightarrow {\boldsymbol{b}}^i = {\bf{0}}, \ \ \ \ 1\leqslant i \leqslant p.
Similar to Li et al.[25], we regard x_i as an important predictor if \omega_i\geqslant cn^{-\kappa}, where c>0 and 0\leqslant \kappa < 1/2 are constants. Moreover, we define the index set of all important predictors as
S_A = \{ 1 \leqslant i \leqslant p : \omega_{i} \geqslant c n^{-\kappa} \}.
The indices of the remaining unimportant predictors are collected in S_A^c . Correspondingly, any {\boldsymbol{b}}^i with i\in S_A is regarded as an important signal in the form of a vector, while any {\boldsymbol{b}}^i with i\in S_A^c is treated as a weak signal.
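For illustration, a minimal numpy sketch of such a screening step is given below. It computes the sample distance correlation of Ref. [26] and ranks predictors by the marginal utility \hat{\omega}_i = \mathrm{dcorr}(x_i, {\boldsymbol{y}}), in the spirit of DC-SIS; the paper's population quantity \omega_i is the analogue of such a utility, and the function names here are our own, not the authors' code.

```python
import numpy as np

def dcorr(u, v):
    """Sample distance correlation between samples u (n x d_u) and v (n x d_v)."""
    u = np.asarray(u, float).reshape(len(u), -1)
    v = np.asarray(v, float).reshape(len(v), -1)

    def centered(m):
        # Pairwise Euclidean distances, double-centered (Szekely-Rizzo).
        d = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)
        return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

    A, B = centered(u), centered(v)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def dc_screen(X, Y, size):
    """Rank predictors by the marginal utility dcorr(x_i, y) and keep the top `size`."""
    omega_hat = np.array([dcorr(X[:, i], Y) for i in range(X.shape[1])])
    return np.argsort(omega_hat)[::-1][:size], omega_hat
```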
2.2 Two-step projection estimator
Borrowing ideas from Li et al.[24], the proposed estimator is constructed in a row-wise manner, i.e.,
{ \widehat{\boldsymbol{b}}}^j = \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{Y}}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}, \qquad (3)
for any j \in \{1,\cdots,p\}, where the two-step projection residual vector {\boldsymbol{z}}_j is defined by the following two-step procedure:
Step 1: Given a prescreened set \hat{S} for the identifiable signals in {\boldsymbol{B}}, we obtain the following residual vectors by an exact orthogonalization of {\boldsymbol{x}}_k against {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}:
{\boldsymbol{\psi}}_{k}^{(j)} = ( {\boldsymbol{I}}_{n} - {\boldsymbol{P}}_{\hat{S}\backslash \{j\}}) {\boldsymbol{x}}_{k}, \quad k \in \{j\} \cup (\hat{S}^{c}\backslash \{j\}), \qquad (4)
where {\boldsymbol{P}}_{\hat{S}\backslash \{j\}} = {\boldsymbol{X}}_{\hat{S}\backslash \{j\}} ( {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}^{\top} {\boldsymbol{X}}_{\hat{S}\backslash \{j\}})^{-1} {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}^{\top} is the projection matrix of the column space of {\boldsymbol{X}}_{\hat{S}\backslash \{j\}}.
Step 2: Then, {\boldsymbol{z}}_{j} is constructed as the residual of the lasso regression of {\boldsymbol{\psi}}_{j}^{(j)} against {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} . That is,
{\boldsymbol{z}}_{j} = {\boldsymbol{\psi}}_{j}^{(j)} - {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} {\hat{\boldsymbol{\nu}}}_{j}(\lambda_{j}), \qquad (5)
where {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} is the matrix composed of column vectors {\boldsymbol{\psi}}_{k}^{(j)} for k \in \hat{S}^{c}\backslash \{j\} , and {\hat{\boldsymbol{\nu}}}_{j}(\lambda_{j}) is the lasso estimator depending on the regularization parameter \lambda_{j} .
Based on the above two-step strategy, the two-step projection residual vector {\boldsymbol{z}}_{j} satisfies the following two properties:
(a) It is strictly orthogonal to {\boldsymbol{X}}_{\hat{S} \backslash \{j\}}, which consists of the important covariates.
(b) It is relaxedly orthogonal to {\boldsymbol{X}}_{\hat{S}^{c} \backslash \{j\}}, which consists of the remaining covariates.
This kind of hybrid orthogonalization brings the benefit of reducing the estimation bias of { \widehat{\boldsymbol{b}}}^j. To see this, plugging model (2) into (3) yields
{ \widehat{\boldsymbol{b}}}^j = {\boldsymbol{b}}^j + \displaystyle \sum\limits_{k \neq j} \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{k}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}} {\boldsymbol{b}}^k + \dfrac{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{E}}}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}.
It is easy to see from the above that the influence of the identifiable signals in \hat{S} \backslash \{j\} is eliminated from the bias term of the estimation error, since {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{k} = 0 for every k \in \hat{S} \backslash \{j\}.
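As a computational illustration of Steps 1 and 2 and the row-wise estimate, we include the following sketch in Python (numpy and scikit-learn). It is our own reading of the procedure, not the authors' code: the lasso penalty lam is on scikit-learn's scale rather than the \lambda_j of Eq. (5), and the helper names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def tpe_residual(X, j, S_hat, lam):
    """Two-step projection residual z_j for column j, given the prescreened set S_hat."""
    p = X.shape[1]
    S = [k for k in S_hat if k != j]
    others = [k for k in range(p) if k not in S_hat and k != j]
    # Step 1: exact orthogonalization against the important covariates X_{S_hat \ {j}}.
    if S:
        XS = X[:, S]
        P = XS @ np.linalg.solve(XS.T @ XS, XS.T)     # projection onto col(X_S)
        psi = X - P @ X                               # psi_k^{(j)} for all columns k
    else:
        psi = X.copy()
    # Step 2: lasso regression of psi_j^{(j)} on the remaining (unimportant) columns.
    lasso = Lasso(alpha=lam, fit_intercept=False)
    lasso.fit(psi[:, others], psi[:, j])
    return psi[:, j] - lasso.predict(psi[:, others])  # relaxed orthogonalization residual

def tpe_row(X, Y, j, z_j):
    """Row-wise two-step projection estimate of b^j (Eq. (3) as we read it)."""
    return (z_j @ Y) / (z_j @ X[:, j])
```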
Denote \tau_{j} = \| {\boldsymbol{z}}_{j}\| / | {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}|. Then the confidence interval for b_{j,\,k} is constructed as
{\rm CI}_{j,k} = \left[ \hat{b}_{j,k} - \varPhi^{-1}(1-\alpha/2)\, \hat{\sigma} \tau_{j},\ \hat{b}_{j,k} + \varPhi^{-1}(1-\alpha/2)\, \hat{\sigma} \tau_{j} \right], \qquad (6)
for any j = 1,\cdots,p and k = 1,\cdots,q, where \varPhi denotes the standard normal distribution function, \alpha is the significance level, and \hat{\sigma} is a consistent estimator of \sigma . Following Chevalier et al.[23] and Reid et al.[27], in this paper, we suggest the cross-validation-based variance estimator
where \hat {\boldsymbol{e}}_t is the t th column of { \hat{\boldsymbol{E}}} = {\boldsymbol{Y}}- {\boldsymbol{X}} \hat {\boldsymbol{B}}_{\rm CV}, with \hat {\boldsymbol{B}}_{\rm CV} the multivariate group lasso estimator tuned by cross-validation, and \hat{s} = |{\rm supp}( \hat {\boldsymbol B}_{\rm CV})| is the number of nonzero rows of \hat {\boldsymbol{B}}_{\rm CV}.
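For completeness, a small sketch of the interval construction is given below. It follows the standard debiased-lasso form \hat{b}_{j,k} \pm \varPhi^{-1}(1-\alpha/2)\hat{\sigma}\tau_j implied by the definitions above, and sigma_hat is taken as an input (e.g., the cross-validation-based estimator described above) rather than recomputed here; the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def tpe_confidence_intervals(X, Y, j, z_j, sigma_hat, alpha=0.05):
    """95% (by default) confidence intervals for all q entries of the j-th row of B."""
    b_hat_j = (z_j @ Y) / (z_j @ X[:, j])              # row-wise TPE estimate
    tau_j = np.linalg.norm(z_j) / abs(z_j @ X[:, j])   # noise factor tau_j
    half_width = norm.ppf(1 - alpha / 2) * sigma_hat * tau_j
    return b_hat_j - half_width, b_hat_j + half_width  # lower and upper limits (length q)
```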
Note that a prescreened set \hat{S} for the identifiable signals in {\boldsymbol{B}} is necessary for the proposed method. In this paper, we suggest utilizing DC-SIS[25] to obtain a suitable prescreened set \hat{S}, and we provide the sure screening property of DC-SIS in Proposition 1. The proposed TPE method is summarized in Algorithm 1. However, we cannot guarantee that all the truly important signals are retained in practice. In view of this potential scenario, we also propose a variant two-step estimator based on the self-bias correction idea.
Given a preliminary estimate such as the multivariate group lasso estimate {\hat {\boldsymbol{B}}_0} = {\left( {\hat {\boldsymbol{b}}{{_0^1}^ \top }, \cdots ,\hat {\boldsymbol{b}}{{_0^p}^ \top }} \right)^ \top }, the variant two-step estimator {\hat {\boldsymbol{B}}_{\rm{V}}} = {\left( {\hat {\boldsymbol{b}}{{_{\rm{V}}^1}^ \top }, \cdots ,\hat {\boldsymbol{b}}{{_{\rm{V}}^p}^ \top }} \right)^ \top } can be defined row by row as
{ \hat{\boldsymbol{b}}}_{\rm V}^j = { \hat{\boldsymbol{b}}}_{0}^j + \dfrac{ {\boldsymbol{z}}_{j}^{\top} ( {\boldsymbol{Y}} - {\boldsymbol{X}} {\hat {\boldsymbol{B}}_0})}{ {\boldsymbol{z}}_{j}^{\top} {\boldsymbol{x}}_{j}}, \quad j = 1,\cdots,p.
If some truly important features are omitted in the prescreening step, the variant procedure can be a reliable choice since the magnitude of the bias term depends on \displaystyle \sum\nolimits_{k \in \hat{S}^{c}, k \neq j} \| {\boldsymbol{b}}^k - { \hat{\boldsymbol{b}}}_{0}^k\| instead of the diverging term \displaystyle \sum\nolimits_{k \in \hat{S}^{c}, k \neq j} \| {\boldsymbol{b}}^k\|.
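A sketch of the variant estimator, under our reading of the self-bias correction described above (the displayed definition is not reproduced here, so this form is an assumption consistent with the stated bias term):

```python
import numpy as np

def tpe_variant_row(X, Y, j, z_j, B0):
    """Self-bias-corrected variant estimate of the j-th row of B.

    B0 is a preliminary estimate such as the multivariate group lasso fit;
    the correction removes the contribution of X @ B0 from the residuals.
    """
    correction = z_j @ (Y - X @ B0) / (z_j @ X[:, j])
    return B0[j, :] + correction
```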
Algorithm 1 TPE algorithm
Require: {\boldsymbol{X}} \in {\mathbb{R}}^{n \times p}, {\boldsymbol{Y}} \in {\mathbb{R}}^{n \times q}, a prescreened set \hat{S} , a significance level \alpha ;
Ensure: {\rm CI}_{j,k} for j=1,\cdots,p and k=1,\cdots,q .
3. Theoretical properties
In this section, we provide statistical properties of the proposed TPE method. First, we state some conditions on the model.
Condition 1. The rows of {\boldsymbol{X}} are independent and identically distributed (i.i.d.) from N( {\bf{0}},{\boldsymbol{\varSigma}}_{X}), and the eigenvalues of {\boldsymbol{\varTheta}} = {\boldsymbol{\varSigma}}_{X}^{-1} = (\theta_{i,j}) are bounded within the interval [1/L,L] for some L \geqslant 1.
Condition 2. s^{*} = \max_{1 \leqslant j \leqslant p} s_{j} = o(n/\log (p)), where s_{j} = |\{ k \in \{1,\cdots,p\}: k \neq j, {\theta}_{j,k} \neq 0 \}| is the sparsity of the j th row of {\boldsymbol{\varTheta}}.
Condition 3. s = o(\max\{n/\log (p), n/s^{*} \}) with s = |S_{A}| , and \displaystyle \sum\nolimits_{k \in S_{A}^{c}} \| {\boldsymbol{b}}^k\| = o(1/\sqrt{\log (p)}).
Conditions 1 and 2 are the same as those in Ref. [24], which provide theoretical guarantees for estimation and prediction consistency in the two-step procedure. The first part of Condition 1 is a common Gaussian assumption, which can be relaxed to a general case such as sub-Gaussian. The second part of Condition 1 assumes that the eigenvalues of {\boldsymbol{\varTheta}} are well bounded from below and above, which is used for characterizing the identifiability of the design matrix {\boldsymbol{\psi}}_{\hat{S}^{c}\backslash \{j\}}^{(j)} in Eq. (5) based on the bounded sign-restricted cone invertibility factor[28]. Condition 2 imposes a typical constraint on the maximum column sparsity of {\boldsymbol{\varTheta}}, which is needed to guarantee consistent estimation in the two-step procedure.
Condition 3 reflects the main advantage of the proposed method. The first part of this condition allows the number of identifiable signals to be s = o(\max\{n/\log (p), n/s^{*} \}) , which is much weaker than the requirement o(\sqrt{n}/\log (p)) in Ref. [23]. Moreover, the order of s can be much larger than o(n/\log (p)) if s^{*} \ll \log(p) . The second part of Condition 3 imposes a constraint on the weak signals in S_{A}^{c} , which guarantees that the influence of the weak signals does not break the inference procedure. Next, the following definition characterizes the theoretical properties of a suitable prescreened set \hat{S}.
Definition 1 (suitable set). \hat{S} is called a suitable prescreened set if it satisfies: (a) \hat{S} is independent of ( {\boldsymbol{X}}, {\boldsymbol{Y}}); (b) with probability at least 1-\epsilon_{n,p} , S_{A} \subset \hat{S} and |\hat{S}| = O(s) , where \epsilon_{n,p} is asymptotically vanishing.
Definition 1 is similar to the definition of an acceptable set in Ref. [24]. The first part of this definition is applied to eliminate the dependence between \hat{S} and the proposed estimator, which can be achieved by the common sample splitting technique. The second part of this definition assumes the sure screening property of \hat{S} , which can be justified by the following proposition.
Proposition 1. Under Condition 1, there exists a constant c_{0}>0 such that
for any j = 1,\ldots, p , where {\boldsymbol{\varLambda}}_{j,\cdot} denotes the j th row of {\boldsymbol{\varLambda}}.
Part b: Further assume that Conditions 1–3 hold. For some constants \epsilon > 0 and \delta \geqslant 1, let \lambda_{j} = (1+\epsilon)\sqrt{2\delta \log(p)/(n\theta_{j,j})} with j = 1,\ldots, p , where \lambda_{j} is the regularization parameter in Eq. (5). Then, for any j = 1,\ldots, p , with probability at least 1-\epsilon_{n,p}-o(p^{1-\delta}) , we have
The first part of Theorem 1 shows that the error \sqrt{n}( \hat {\boldsymbol{B}}- {\boldsymbol{B}}) can be decomposed into a zero-mean Gaussian term {\boldsymbol{\varLambda}} and a bias term {\boldsymbol{\varDelta}}. The second part shows that the bias term {\boldsymbol{\varDelta}} is asymptotically negligible with high probability. In particular, we prove that \hat{\theta}_{j,j}^{1/2} converges to {\theta}_{j,j}^{-1/2} with asymptotic probability one, which ensures the effectiveness of the inference in terms of the length of the confidence interval. It is worth noting that the product of the noise factor \tau_{j} and n^{1/2} converges to the same constant as that of the estimator in Ref. [23] with asymptotic probability one. Since the noise factor is proportional to the standard deviation of the estimator, the lengths of the confidence intervals for the two estimators are theoretically equal. Based on this theorem, we immediately obtain the asymptotic properties of all elements of the proposed two-step projection estimator, as shown in the following corollary.
Corollary 1. Under the conditions in Theorem 1, with a given significance level \alpha , we further have
Note that Corollary 1 still holds if the noise level \sigma is replaced by a consistent estimator \hat{\sigma}. Therefore, this corollary provides theoretical guarantees for the confidence interval constructed in (6).
4. Simulation studies
In this section, we conduct simulation studies to investigate the performance of the proposed method compared with the generalization of LDPE[22] to multi-task regression[23] (denoted by MLDPE for simplicity). The implementation of \hat {\boldsymbol{B}}_{\rm CV} with \ell_{2,1} group regularization is carried out via the R package RMTL[29]. Based on \hat {\boldsymbol{B}}_{\rm CV}, we obtain a consistent estimator \hat{\sigma} of \sigma by applying the method described in Section 2.2. Moreover, DC-SIS[25] is utilized to obtain a suitable prescreened set \hat{S} of important predictors. To control the size of \hat{S} accurately, we compute least squares estimates on subsets of \hat{S} and then use a BIC-type criterion[30] to choose the best subset.
For the generation of the design matrix and the error matrix, we consider the following two models, each examined in both the sparse setting and the approximately sparse setting.
Model 1: The rows of the design matrix {\boldsymbol{X}} are sampled as independent and identically distributed copies from N( {\bf{0}},{\boldsymbol{\varSigma}}_{X}), where {\boldsymbol{\varSigma}}_{X} = (0.5^{|i-j|})_{p\times p}. The entries of {\boldsymbol{E}} are independent and identically distributed N(0,\sigma^{2}) with \sigma = 1 .
Model 2: The entries of the design matrix {\boldsymbol{X}} = (x_{i,j}) are Bernoulli random variables with a success probability of 0.8. All the columns of {\boldsymbol{X}} are centered to have zero mean. The entries of {\boldsymbol{E}} are generated from a t-distribution with 10 degrees of freedom.
We set the number of responses q = 200 . Then, the sample size n , the number of predictors p and the coefficient matrices {\boldsymbol{B}} in different settings are constructed as follows:
Sparse setting: We set (n,p) = (100,200), (150,400), (200,800), respectively. Similar to Ref. [31], the elements in the first five rows of {\boldsymbol{B}} are drawn from a uniform distribution on [-5,-1]\cup[1,5] , and the elements in all other rows are set to 0 (see the illustrative sketch after the two settings).
Approximately sparse setting: We set (n,p) = (100,400), (150,600), (200,1000), respectively. Similar to Refs. [22, 24], the j th important signal satisfies \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ} with \lambda_{\rm univ} = \sqrt{2\log(p)/n} for j = 40,\;80,\;120,\;160,\;200, and \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ}/j^{2} for all other j . More specifically, to generate the j th row of {\boldsymbol{B}} with \| {\boldsymbol{b}}^j\| = 3\lambda_{\rm univ}, we first generate a q -dimensional vector {\boldsymbol{v}} = (v_{1}, \cdots, v_{q}) with entries v_{k} \sim U[0,1],\, k = 1,\cdots,q. Then, we normalize {\boldsymbol{v}} so that \| {\boldsymbol{v}}\| = 1. Finally, we set {\boldsymbol{b}}^j = 3\lambda_{\rm univ} {\boldsymbol{v}}.
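As an illustration of the data-generating process, the following sketch (our own helper, for illustration only) combines Model 1 with the sparse setting above.

```python
import numpy as np

def generate_model1_sparse(n, p, q, seed=0):
    """Model 1 design (AR(0.5) covariance, N(0,1) errors) with the sparse-setting B."""
    rng = np.random.default_rng(seed)
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    B = np.zeros((p, q))
    # First five rows drawn uniformly from [-5, -1] U [1, 5]; all other rows are zero.
    B[:5, :] = rng.choice([-1.0, 1.0], size=(5, q)) * rng.uniform(1, 5, size=(5, q))
    E = rng.standard_normal((n, q))                 # sigma = 1
    Y = X @ B + E
    return X, Y, B
```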
The primary purpose of our simulation is to construct 95% confidence intervals for the regression coefficients in {\boldsymbol{B}}. In each setting, we run 100 replications and calculate the same three performance measures as in Ref. [24]: the average coverage probability for all regression coefficients (CPA), the average coverage probability for important coefficients (CPI), and the average length of the confidence intervals for all regression coefficients (Length).
Tables 1 and 2 summarize the results for the two methods in the different settings. Clearly, the results in the sparse setting and the approximately sparse setting are similar. In the Gaussian settings, the CPA (or CPI) values show that the average coverage probabilities for all regression coefficients (or important coefficients) of the proposed method are approximately 95%, while those of MLDPE deviate slightly from 95%. In terms of Length, the average lengths of the confidence intervals for the two methods are roughly the same. In the non-Gaussian settings, the performance of both TPE and MLDPE tends to worsen.
Table 1. Comparison of performance measures for the two methods in the sparse setting.
In each setting, we further set \hat{\sigma} = 1 to calculate performance measures for comparison. We can see that the performance of the proposed method remains stable, while the performance of MLDPE fluctuates slightly. In summary, the proposed method outperforms MLDPE in both the sparse setting and the approximately sparse setting.
5. Application to TCGA-OV data
The Cancer Genome Atlas (TCGA) is a cancer genomics program incorporating clinical data on human cancers and tumor subtypes, including aberrations in gene expression, epigenetics (miRNAs, methylation), and protein expression. In this section, we apply our methodology to the ovarian serous cystadenocarcinoma (TCGA-OV) data downloaded from the TCGA website. We regard the miRNA expression quantification obtained by the miRNA-seq experimental strategy as predictors and the protein expression quantification obtained by the reversed-phase protein array technique as responses. Our goal is to explore the regulatory effect of miRNAs on protein expression in ovarian cancer.
After preliminary data processing, the number of miRNAs was reduced to 1530 and the number of proteins to 216. We then utilize the DC-SIS method to perform feature screening on the predictors, selecting the top 400 miRNAs as predictors, and choose the 50 proteins with the largest variances as response variables. This yields n = 300 common samples with p = 400 miRNAs as predictors and q = 50 proteins as responses. Both predictors and responses are centered and normalized to have zero mean and a common \ell_{2} -norm \sqrt{n} .
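The centering and scaling step can be written compactly; the following minimal sketch (a hypothetical helper, not the authors' code) standardizes each column to mean zero and \ell_2 -norm \sqrt{n}, as described above.

```python
import numpy as np

def center_and_scale(M):
    """Center each column and rescale it to have l2-norm sqrt(n)."""
    n = M.shape[0]
    Mc = M - M.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(Mc, axis=0)
    norms[norms == 0] = 1.0          # guard against constant columns
    return np.sqrt(n) * Mc / norms
```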
By applying the proposed method, we obtain 95% confidence intervals for all unknown coefficients. The top 45 important miRNAs selected in this way are listed in Table 3; the 32 highlighted in bold are also chosen by MLDPE. These selected miRNAs play important regulatory roles in cancer; for example, compared with normal ovaries, some of them are dysregulated in ovarian cancer. Here, importance is measured by the magnitude of the sum of the absolute values of the estimated coefficients over all 50 proteins. Among the selected miRNAs, hsa-mir-182, hsa-mir-200a, hsa-mir-223, and hsa-mir-16 are upregulated, and hsa-mir-432, hsa-mir-493, hsa-mir-9, and hsa-mir-377 are downregulated[32]. Owing to their dysregulation, some of these miRNAs could potentially be used as diagnostic biomarkers, including hsa-mir-214 and hsa-mir-21[33].
Table 3. The 45 miRNAs selected by TPE. The miRNAs also selected by MLDPE are highlighted in bold.
Experimental results show that the average length of the confidence intervals obtained by the TPE method is 0.4372, while the MLDPE method yields an average length of 0.4136, so the two methods are at roughly the same level. The top-ranked miRNA chosen by both TPE and MLDPE is hsa-mir-486-2, which has also been identified as a potential biomarker for lung adenocarcinoma[34]. For the unknown coefficients of hsa-mir-486-2, Fig. 1 displays their estimates and the corresponding 95% confidence intervals over all 50 proteins, which further supports the important regulatory role of hsa-mir-486-2.
Figure 1. Estimates of the unknown coefficients of miRNA hsa-mir-486-2 (red squares for TPE and black dots for MLDPE) and the corresponding 95% confidence intervals (obtained by TPE) over all 50 proteins.
6. Conclusions
In this paper, we develop a new inference methodology based on the two-step projection estimator (TPE) in high-dimensional multi-task regression. The proposed estimator is constructed in a row-wise manner, which has the benefit of reducing the estimation bias induced by the important signals. In addition, we provide rigorous theoretical guarantees for our method, including asymptotic normality and the corresponding confidence intervals. Moreover, the numerical results indicate that the proposed method works quite well. Specifically, we apply this approach to an ovarian cancer dataset and identify several miRNAs that are closely associated with protein expression levels. The results suggest that these miRNAs can potentially serve as biomarkers in disease research, aiding in the diagnosis of ovarian cancer.
It would be interesting to extend our method to more general settings, such as multi-response linear regression models with measurement errors and generalized linear models. It is also of interest to establish a connection between the two-step projection technique and partially penalized procedures in high-dimensional multi-response settings. These generalizations are interesting topics for future research.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (12171451, 71873128).
Conflict of Interest
The authors declare that they have no conflict of interest.
Figure 1. Impacts of \tau based on 1000 replications. The x -axis and y -axis are the number of rounds and \left \| \bar{ {\theta}}_T- {\theta}^* \right \|, respectively. “Balance” means an identical number of local iterations, where E_i=4 for all i\in[N] and \tau=160 . “Small” means a small degree of heterogeneity in the number of local iterations, where E_i is i.i.d. from a discrete uniform distribution on \{3, 4, 5\} and \mathbb{E}\tau=500/3 . “Large” represents a large degree of heterogeneity in the number of local iterations, where E_i is i.i.d. from a discrete uniform distribution on \{1, 2, 3, 4, 5, 6, 7\} and \mathbb{E}\tau=200 .
References
[1]
Hastie T, Friedman J, Tibshirani R. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer, 2001.
[2]
Berk R A. Statistical Learning from a Regression Perspective. New York: Springer, 2008.
[3]
James G, Witten D, Hastie T, et al. An Introduction to Statistical Learning: With Applications in R. New York: Springer, 2013.
[4]
Li T, Sahu A K, Talwalkar A, et al. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine,2020, 37 (3): 50–60. DOI: 10.1109/MSP.2020.2975749
[5]
McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017. Fort Lauderdale, FL: PMLR, 2017: 1273–1282.
[6]
Preuveneers D, Rimmer V, Tsingenopoulos I, et al. Chained anomaly detection models for federated learning: An intrusion detection case study. Applied Sciences,2018, 8 (12): 2663. DOI: 10.3390/app8122663
[7]
Mothukuri V, Khare P, Parizi R M, et al. Federated-learning-based anomaly detection for IoT security attacks. IEEE Internet of Things Journal,2021, 9 (4): 2545–2554. DOI: 10.1109/JIOT.2021.3077803
[8]
Amiri M M, Gündüz D. Federated learning over wireless fading channels. IEEE Transactions on Wireless Communications,2020, 19: 3546–3557. DOI: 10.1109/TWC.2020.2974748
[9]
Bharadhwaj H. Meta-learning for user cold-start recommendation. In: 2019 International Joint Conference on Neural Networks (IJCNN). Budapest, Hungary: IEEE, 2019: 1–8.
[10]
Chen S, Xue D, Chuai G, et al. FL-QSAR: A federated learning-based QSAR prototype for collaborative drug discovery. Bioinformatics,2021, 36: 5492–5498. DOI: 10.1093/bioinformatics/btaa1006
[11]
Tarcar A K. Advancing healthcare solutions with federated learning. In: Federated Learning. Cham, Switzerland: Springer, 2022: 499–508.
[12]
Zhao Y, Li M, Lai L, et al. Federated learning with non-IID data. arXiv: 1806.00582, 2018.
[13]
Zhang W, Wang X, Zhou P, et al. Client selection for federated learning with non-IID data in mobile edge computing. IEEE Access,2021, 9: 24462–24474. DOI: 10.1109/ACCESS.2021.3056919
[14]
Briggs C, Fan Z, Andras P. Federated learning with hierarchical clustering of local updates to improve training on non-IID data. In: 2020 International Joint Conference on Neural Networks (IJCNN). Glasgow, UK: IEEE, 2020: 1–9.
[15]
Li X, Huang K, Yang W, et al. On the convergence of FedAvg on non-IID data. In: 2020 International Conference on Learning Representations. Appleton, WI: ICLR, 2020.
[16]
Stich S U. Local SGD converges fast and communicates little. arXiv: 1805.09767, 2018.
[17]
Zhou F, Cong G. On the convergence properties of a K-step averaging stochastic gradient descent algorithm for nonconvex optimization. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2018: 3219–3227.
[18]
Wang S, Tuor T, Salonidis T, et al. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications,2019, 37 (6): 1205–1221. DOI: 10.1109/JSAC.2019.2904348
[19]
Yu H, Jin R, Yang S. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, CA: PMLR, 2019: 7184–7193.
[20]
Wang J, Liu Q, Liang H, et al. Tackling the objective inconsistency problem in heterogeneous federated optimization. In: Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Red Hook, NY: Curran Associates, Inc., 2020, 33: 7611–7623.
[21]
Li T, Sahu A K, Zaheer M, et al. Federated optimization in heterogeneous networks. In: Proceedings of Machine Learning and Systems 2020 (MLSys 2020). Austin, TX: mlsys.org, 2020, 2: 429–450.
[22]
Karimireddy S P, Kale S, Mohri M, et al. SCAFFOLD: Stochastic controlled averaging for federated learning. In: Proceedings of the 37th International Conference on Machine Learning. Online: PMLR, 2020: 5132–5143.
[23]
Liang X, Shen S, Liu J, et al. Variance reduced local SGD with lower communication complexity. arXiv: 1912.12844, 2019.
[24]
Ruppert D. Efficient estimations from a slowly convergent Robbins–Monro process. Ithaca, New York: Cornell University, 1988.
[25]
Polyak B T, Juditsky A B. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization,1992, 30 (4): 838–855. DOI: 10.1137/0330046
[26]
Zhu W, Chen X, Wu W. Online covariance matrix estimation in stochastic gradient descent. Journal of the American Statistical Association,2021, 118: 393–404. DOI: 10.1080/01621459.2021.1933498
[27]
Fang Y, Xu J, Yang L. Online bootstrap confidence intervals for the stochastic gradient descent estimator. The Journal of Machine Learning Research,2018, 19 (1): 3053–3073. DOI: 10.5555/3291125.3309640
[28]
Li X, Liang J, Chang X, et al. Statistical estimation and online inference via local SGD. In: Proceedings of Thirty Fifth Conference on Learning Theory. London: PMLR, 2022: 1613–1661.
[29]
Su W J, Zhu Y. Uncertainty quantification for online learning and stochastic approximation via hierarchical incremental gradient descent. arXiv: 1802.04876, 2018.
[30]
Reisizadeh A, Mokhtari A, Hassani H, et al. FedPAQ: A communication-efficient federated learning method with periodic averaging and quantization. In: Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020. Palermo, Italy: PMLR, 2020, 108: 2021–2031.
[31]
Khaled A, Mishchenko K, Richtárik P. First analysis of local GD on heterogeneous data. arXiv: 1909.04715, 2019.
[32]
Nesterov Y. Introductory Lectures on Convex Optimization: A Basic Course. New York: Springer, 2003.
[33]
Toulis P, Airoldi E M. Asymptotic and finite-sample properties of estimators based on stochastic gradients. The Annals of Statistics,2017, 45 (4): 1694–1727. DOI: 10.1214/16-AOS1506
[34]
Chen X, Lee J D, Tong X T, et al. Statistical inference for model parameters in stochastic gradient descent. The Annals of Statistics,2020, 48 (1): 251–273. DOI: 10.1214/18-AOS1801
[35]
Liu R, Yuan M, Shang Z. Online statistical inference for parameters estimation with linear-equality constraints. Journal of Multivariate Analysis,2022, 191: 105017. DOI: 10.1016/j.jmva.2022.105017
[36]
Lee S, Liao Y, Seo M H, et al. Fast and robust online inference with stochastic gradient descent via random scaling. Proceedings of the AAAI Conference on Artificial Intelligence,2022, 36 (7): 7381–7389. DOI: 10.1609/aaai.v36i7.20701
[37]
Salam A, El Hibaoui A. Comparison of machine learning algorithms for the power consumption prediction: Case study of Tetouan city. In: 2018 6th International Renewable and Sustainable Energy Conference (IRSEC). Rabat, Morocco: IEEE, 2018.
[38]
Hall P, Heyde C C. Martingale Limit Theory and Its Application. New York: Academic Press, 2014.