Let $\Gamma = \{I_k \in \mathbb{R}^{N_1 \times N_2}\}_{k=1}^{K}$ denote a set of corrupted images from $K$ sensors, and let $\tilde{\Gamma} = \{\tilde{I}_k \in \mathbb{R}^{(N_1/2^L) \times (N_2/2^L)}\}_{k=1}^{K}$ be the corresponding set of low-frequency subimages computed using the LWT, where $L$ is the number of LWT decomposition layers. For simplicity, we assume square images so that $N_1/2^L = N_2/2^L \overset{\mathrm{def}}{=} N$. Stack all $N$ columns of each $\tilde{I}_k$ into a single vector of dimension $N^2$, and use these $K$ vectors as the columns of a matrix $\tilde{I}_D$. After normalizing the data, we denote by $i_{\ell k}$ the $(\ell,k)$ element of $\tilde{I}_D$,
$$\tilde{I}_D = \begin{pmatrix} i_{11} & i_{12} & \cdots & i_{1K} \\ i_{21} & i_{22} & \cdots & i_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ i_{N^2,1} & i_{N^2,2} & \cdots & i_{N^2,K} \end{pmatrix}. \tag{7}$$
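The column-stacking of Eq. (7) can be sketched in NumPy as follows. The function name and the min–max normalization scheme are illustrative assumptions (the text says only that the data are normalized); the low-frequency subimages themselves are represented here by plain arrays rather than an actual LWT decomposition.

```python
import numpy as np

def stack_subimages(subimages):
    """Stack K low-frequency subimages (each N x N) into the
    N^2 x K data matrix of Eq. (7), one subimage per column."""
    K = len(subimages)
    N = subimages[0].shape[0]
    I_D = np.empty((N * N, K))
    for k, sub in enumerate(subimages):
        # column-major flattening: concatenate the N columns of the subimage
        I_D[:, k] = sub.flatten(order="F")
    # per-column min-max normalization (an assumed scheme; the paper
    # does not specify how the data are normalized)
    I_D -= I_D.min(axis=0)
    I_D /= np.maximum(I_D.max(axis=0), 1e-12)
    return I_D

# toy example: K = 4 subimages of size 8 x 8
subs = [np.random.rand(8, 8) for _ in range(4)]
print(stack_subimages(subs).shape)  # (64, 4)
```

Column-major (`order="F"`) flattening matches the text, which stacks the $N$ columns of each subimage on top of one another.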
The cumulative low-frequency subimage matrix is modeled similarly to Eq. (3):
$$\tilde{I}_D = \tilde{I}_A + \tilde{I}_E, \tag{8}$$
in which $\tilde{I}_A \in \mathbb{R}^{N^2 \times K}$ denotes the noise-free, integrated low-frequency subimage sequence matrix, and $\tilde{I}_E \in \mathbb{R}^{N^2 \times K}$ denotes the sparse error matrix, from which high-frequency content has been attenuated by the selection of LWT coefficients. The low-frequency LWT coefficients are similar across multiple subimages of the same scene; since the model takes $\tilde{I}_A$ to be noise-free, it will therefore ideally consist of $K$ identical columns. Accordingly, $\tilde{I}_A$ is of low rank, as required by the matrix completion procedure. Thus, $\tilde{I}_A$ can be estimated via matrix completion and RPCA by solving
$$\min_{\tilde{I}_A, \tilde{I}_E} \|\tilde{I}_A\|_* + \lambda \|P_{\Omega}(\tilde{I}_E)\|_1 \quad \text{subject to} \quad \tilde{I}_A + \tilde{I}_E = \tilde{I}_D, \tag{9}$$
for which the augmented Lagrangian is
$$L(\tilde{I}_A, \tilde{I}_E, Y, \mu) = \|\tilde{I}_A\|_* + \lambda \|P_{\Omega}(\tilde{I}_E)\|_1 + \mathrm{Tr}\{Y, \tilde{I}_D - \tilde{I}_A - \tilde{I}_E\} + \frac{\mu}{2} \|\tilde{I}_D - \tilde{I}_A - \tilde{I}_E\|_F^2. \tag{10}$$
In this equation, $\lambda$ is a positive weighting parameter that balances the sparse term $\tilde{I}_E$ against the low-rank term $\tilde{I}_A$; its default value is $1/N$. $\mu$ is a positive tuning parameter balancing accuracy against computational effort, $\mathrm{Tr}\{A,B\}$ denotes the trace of the product $A^T B$, and $Y$ is the iterated Lagrange multiplier.
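Eq. (10) can be minimized with the inexact augmented Lagrange multiplier (IALM) method, alternating singular value thresholding for $\tilde{I}_A$ with entrywise soft thresholding for $\tilde{I}_E$ on the observed support $\Omega$. Below is a minimal NumPy sketch under standard IALM parameter choices ($\mu$ initialization, its growth factor `rho`, and the stopping rule are common defaults, not values from the paper; the paper specifies only $\lambda = 1/N$, i.e., $1/\sqrt{\#\text{rows}}$ since $\tilde{I}_D$ has $N^2$ rows):

```python
import numpy as np

def rpca_ialm(I_D, mask=None, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Sketch of RPCA for Eqs. (9)-(10) via inexact ALM.
    `mask` is a boolean array encoding the support Omega of P_Omega;
    by default all entries are treated as observed."""
    m, K = I_D.shape
    if mask is None:
        mask = np.ones_like(I_D, dtype=bool)
    if lam is None:
        lam = 1.0 / np.sqrt(m)              # equals 1/N when I_D has N^2 rows
    if mu is None:
        mu = 1.25 / np.linalg.norm(I_D, 2)  # common IALM initialization
    rho = 1.5                               # assumed growth factor for mu
    Y = np.zeros_like(I_D)
    I_A = np.zeros_like(I_D)
    I_E = np.zeros_like(I_D)
    for _ in range(max_iter):
        # I_A update: singular value thresholding of I_D - I_E + Y/mu
        U, s, Vt = np.linalg.svd(I_D - I_E + Y / mu, full_matrices=False)
        I_A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # I_E update: soft thresholding on Omega; off Omega the l1 term
        # does not apply, so the quadratic term gives I_E = G exactly
        G = I_D - I_A + Y / mu
        I_E = np.where(mask,
                       np.sign(G) * np.maximum(np.abs(G) - lam / mu, 0.0),
                       G)
        R = I_D - I_A - I_E
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R, "fro") / max(np.linalg.norm(I_D, "fro"), 1e-12) < tol:
            break
    return I_A, I_E
```

On synthetic data with identical columns plus sparse corruption (the paper's idealized model for $\tilde{I}_A$), this iteration separates the rank-one component from the sparse errors.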