Electrical capacitance tomography (ECT) has great application potential in multiphase process monitoring, and its visualization results are of great significance for studying the changes in two-phase flow in closed environments. In this paper, compressed sensing (CS) theory based on dictionary learning is introduced into the inverse problem of ECT, and the K-SVD algorithm is used to learn an overcomplete dictionary that establishes a nonlinear mapping between the observed capacitance and the sparse space. Because the trained overcomplete dictionary matches the few features of interest in the reconstructed ECT image, it is not necessary to rely on the sparsity of the coefficient vector to solve the nonlinear mapping, as most CS-based algorithms do. Two-phase flow distribution in a cylindrical pipe was modeled and simulated, and three variations without a sparse constraint, based on the Landweber, Tikhonov, and Newton-Raphson algorithms, were used to rapidly reconstruct 2-D images.
In industrial processes, it is often necessary to analyze information about two-phase flow in a pipeline or closed container. Traditional detection methods have been unable to provide accurate measurements because of the complexity of flow motion. In recent decades, with the development of modern measurement technology, electrical tomography (ET), with its non-invasive and non-destructive character, simple structure, and low cost, has attracted extensive attention from researchers. At present, ET mainly includes electrical resistance tomography (ERT) [1], electrical impedance tomography (EIT) [2], electromagnetic tomography (EMT) [3], and electrical capacitance tomography (ECT) [4]. ECT, the technique investigated in this study, visualizes a two-phase medium with phases of different permittivity in a pipe or a closed container. The system is suitable for imaging non-conducting multiphase flows, such as sand. ECT technology has shown great potential in many fields, such as multiphase flow measurement [5], combustion imaging [6], and solid particle monitoring in fluidized beds [7,8].
The objective of ECT is to obtain projection data through a sensor array fixed on the outer wall of a pipe or container, and then use an algorithm to recover the internal permittivity distribution, which is presented as a 2-D or 3-D image. The sensor array is composed of multiple electrodes, typically 8, 12, or 16. The mutual capacitances between electrodes are collected to form a capacitance sequence, which is the projection data reflecting the internal information. Computing the permittivity distribution from the capacitance sequence is an inverse problem. Many algorithms have been proposed to solve it, such as convolutional neural networks (CNN), the linear back-projection (LBP) algorithm, the Landweber algorithm, and the Tikhonov regularization algorithm. Deep learning has shown great advantages in many fields, especially in image processing [9,10], with strong learning ability, wide coverage, and good portability [11–13]. However, deep learning models are complex and computationally expensive, which conflicts with the real-time requirement of electrical capacitance tomography. The LBP algorithm is fast but produces images of low accuracy. The Landweber algorithm is a simple iterative algorithm, and many scholars have improved its convergence rate [14–16]. The inverse problem is ill-posed; the Tikhonov regularization algorithm transforms it into a well-posed problem by adding an l_{2}-norm regularization term to the target function and then obtains an effective solution. The reconstructed image is smooth but insensitive to edge contours.
Most algorithms establish an approximate linear mapping to replace the non-linear mapping caused by the “soft field” effect in ECT, resulting in low accuracy of the reconstructed image. In 2006, Donoho proposed compressive sensing (CS) theory [17], which pointed out that when a signal is sparse or compressible, the original signal can be accurately reconstructed from far fewer samples than required by the Shannon sampling theorem. In recent years, Figueiredo, Candès, and others have extended CS theory [18–21]. CS theory has been widely applied in the field of imaging, and it can be used for image reconstruction in ECT [22–24].
In CS theory, the transform basis for sparse representation is crucial. For image reconstruction systems in different application fields, the transform basis is generally a specified orthogonal basis, such as the discrete Fourier transform (DFT) [25], discrete cosine transform (DCT) [26], or discrete wavelet transform (DWT) [27]. A classical orthonormal basis represents some image features well but is poorly suited to two-phase distributions, because two-phase flow varies constantly in complex industrial processes; a specified orthonormal basis may reflect the permittivity distribution of only one flow pattern, such as stratified flow. To solve this problem, this paper introduces the ideas of CS and dictionary learning to ECT [28,29], which is called D-CS-ECT. Different flow patterns are used to train the transform basis so that it matches the different features of the permittivity distribution as closely as possible to improve the accuracy of the reconstructed image [30].
In this paper, the transform basis is an overcomplete dictionary. The overcomplete dictionary is flexible and adaptive. It captures different features of the signal through multiple atoms and improves the redundancy of the transform system to approximate the original signal. There are some features of interest in the image reconstructed by ECT system, which focus on the position and boundary of the two phases. The trained overcomplete dictionary is able to match the features.
This paper is organized as follows: in Section 2, the basic principles of ECT and CS theory based on overcomplete dictionaries are introduced. In Section 3, the process of dictionary learning is described, which is mainly the construction of a training sample set and the application of the K-SVD algorithm. The reconstruction results are analyzed and evaluated through simulation in Section 4. Concluding remarks are presented in Section 5.
ECT under the Framework of D-CS Theory

Basic Principles of ECT
The hardware architecture of an ECT system is composed of a sensor array, a data acquisition module, and an image reconstruction unit. In this study, a sensor array with 12 electrodes is driven by an excitation voltage, as shown in Fig. 1b, and the electrodes are numbered from 1 to 12. First, Electrode 1 is excited with a 1 V voltage while the other electrodes are grounded. The capacitance values between Electrode 1 and Electrodes 2, 3, 4…, 12 are collected and numbered 1–2, 1–3, 1–4…, 1–12. Second, Electrode 2 is excited with a 1 V voltage while the other electrodes are grounded. The capacitance values between Electrode 2 and Electrodes 3, 4, 5…, 12 are collected and numbered 2–3, 2–4, 2–5…, 2–12. By analogy, capacitance values 3–4, 3–5, 3–6…, 3–12 can be obtained between Electrode 3 and Electrodes 4, 5, 6…, 12. Finally, a total of 66 capacitance values can be collected, and the number of capacitance measurements is given by [31]
N = M(M − 1)/2
where N is the number of capacitor sequences, and M is the number of electrodes.
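The counting formula above can be sketched in a few lines (the function name is illustrative):

```python
def num_capacitance_pairs(m: int) -> int:
    """Number of independent electrode-pair capacitance measurements, N = M(M-1)/2."""
    return m * (m - 1) // 2
```

For the 12-electrode sensor used in this study, this gives the 66 measurements described above; 8- and 16-electrode arrays give 28 and 120, respectively.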
ECT sensor with 12 electrodes for two-phase flow in the pipeline. (a) 3-D structure; (b) 2-D section
For a complex and changeable two-phase flow, the distribution in the pipeline includes seven flow patterns: central bubble flow, eccentric bubble flow, two-bubble flow, three-bubble flow, four-bubble flow, stratified flow, and annular flow, as shown in Fig. 2.
In ECT, the approximate linear relationship between the capacitance sequence and the permittivity sequence is described by the following equation:
λ=Sg
where λ ∈ R^{m×1} is the capacitance vector reflecting the projection of the mediums in the imaging area, g ∈ R^{n×1} is the permittivity vector, which is also the pixel vector because the variation trend of the permittivity distribution is reflected by the pixels, and S ∈ R^{m×n} is the sensitivity matrix, which reflects how the capacitance vector is affected by the permittivity distribution.
In Fig. 1b, ε1 and ε2 (ε1 > ε2) are the permittivity of medium 1 and medium 2. The state in which the imaging area is filled with medium 1 is called “full field,” and C_{H} denotes the measured capacitance vector under the full field state. The state in which the imaging area is filled with medium 2 is called “empty field,” and C_{L} denotes the measured capacitance vector under the empty field state. In industrial processes, ε1 and ε2 are generally known, so C_{H} and C_{L} can be obtained in advance. The capacitance vector measured by the ECT system at a certain moment during process monitoring is C_{M}, and the normalization methods for the capacitance vector at this moment are given in (3) and (4).
λ = (C_M − C_L)/(C_H − C_L)
λ = (1/C_M − 1/C_L)/(1/C_H − 1/C_L)
where (3) is the parallel normalization method, and (4) is the series normalization method. It can be seen from (3) or (4) that the λ from (2) is actually the difference in capacitance. Both normalization methods have their advantages and disadvantages, and their differences are discussed in [32]. In this study, (3) is selected as the normalization method for the capacitance vector.
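Both normalization methods can be sketched directly from (3) and (4) (a minimal NumPy sketch; the function names are illustrative):

```python
import numpy as np

def normalize_parallel(c_m, c_h, c_l):
    """Parallel normalization (3): elementwise (C_M - C_L)/(C_H - C_L)."""
    c_m, c_h, c_l = map(np.asarray, (c_m, c_h, c_l))
    return (c_m - c_l) / (c_h - c_l)

def normalize_series(c_m, c_h, c_l):
    """Series normalization (4): based on reciprocals of the capacitances."""
    c_m, c_h, c_l = map(np.asarray, (c_m, c_h, c_l))
    return (1.0 / c_m - 1.0 / c_l) / (1.0 / c_h - 1.0 / c_l)
```

Either way, a full-field measurement maps to 1 and an empty-field measurement maps to 0, so λ expresses the measured capacitance relative to the two known reference states.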
S from (2) is the sensitivity matrix of the empty field state. According to the potential distribution in the imaging area [33], the discrete derivation result for S is

S_{i,j}^{k} = ∂C_{i,j}/∂ε_k ≈ −(1/U^2) ∇ϕ_i^k · ∇ϕ_j^k

where the number of electrodes is M, the number of pixels distributed in the imaging area is n, and the excitation voltage of the electrode is U = 1 V. C_{i,j} is the capacitance scalar between the i-th electrode and the j-th electrode. At the spatial position of the k-th pixel, S_{i,j}^k is the sensitivity scalar relating to the interaction between the i-th electrode and the j-th electrode, ε_k is the permittivity scalar, and ϕ_i^k, ϕ_j^k are the potential scalars relating to the excitation of the i-th and j-th electrodes.
The normalization method for the sensitivity matrix is given by
S_norm = S · diag( 1 / ∑_{j=1}^{n} S_{ij}^2 )
where diag(⋅) is the diagonal matrix operator, and Snorm is the normalized sensitivity matrix.
Obtaining the image amounts to solving for g. Because the number of capacitance measurements obtained from the sensor is far lower than the number of pixels in the reconstructed image, S is ill-conditioned, and (2) is ill-posed with a non-unique solution. The inverse problem of ECT minimizes the error function, that is,
min||λ−Sg||2
Many existing image reconstruction algorithms for ECT can be improved on the basis of (7), such as adding constraint terms based on the l_{2}-norm to the target function.
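For instance, the classic Landweber iteration minimizes (7) by gradient descent. The sketch below (NumPy; illustrative, not the paper's exact code) also clips pixel values to [0, 1], a common convention in ECT rather than something specified here:

```python
import numpy as np

def landweber(S, lam, alpha, n_iter=500):
    """Classic Landweber iteration: gradient descent on ||lam - S g||^2.

    S     : (m, n) sensitivity matrix
    lam   : (m,) normalized capacitance vector
    alpha : step size (tunable)
    """
    g = np.zeros(S.shape[1])
    for _ in range(n_iter):
        g = g + alpha * S.T @ (lam - S @ g)
        np.clip(g, 0.0, 1.0, out=g)  # keep normalized pixel values in [0, 1]
    return g
```

Tikhonov regularization instead adds an l_2 penalty to the objective and solves the resulting well-posed normal equations directly.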
D-CS-ECT
CS theory mainly includes three parts: the sparse representation of the signal, the design of the observation matrix, and signal reconstruction. D-CS, which uses a dictionary rather than an orthogonal basis, is a branch of CS theory. The authors of [34] proposed the dictionary-restricted isometry property (D-RIP), a natural generalization of the restricted isometry property (RIP) [35], proving that signal reconstruction with a redundant and coherent dictionary is feasible. The mathematical model for D-CS-ECT is given below.
The image signal is represented as a small number of values in sparse space through an overcomplete dictionary. Sparse representation can be expressed as
g=Dx
where g ∈ R^{n×1} is the pixel vector. D is an n×p matrix, with n smaller than p, that represents the redundant overcomplete dictionary, and its column vectors are the atoms. x ∈ R^{p×1} is the coefficient vector; if x has K non-zero elements, it is called K-sparse. Dictionary learning defines D as an “atom library” matching the features of interest. A non-zero element in x is the weight of the corresponding atom in the atom library, and different images are represented by selecting different weight sets.
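A toy numerical illustration of (8), with a random unit-norm dictionary standing in for a learned one (all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 20                    # signal dimension and atom count (n < p: overcomplete)
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms, as in dictionary learning

x = np.zeros(p)
x[[3, 11]] = [2.0, -1.5]        # a K = 2 sparse coefficient vector
g = D @ x                       # pixel vector as a weighted sum of two atoms

assert np.count_nonzero(x) == 2
```

The pixel vector g is exactly the weighted combination of the two selected atoms; a different image would simply select a different (small) weight set.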
The nonlinear mapping relation between the observed capacitance and the coefficient vector can be written as
λ = Sg = SDx = A^{D−CS} x
where λ∈Rm×1 is the capacitance vector. A^{D−CS} = SD, and A^{D−CS} is an m×p sensing matrix. In the case that x is approximately sparse, it can be considered that A^{D−CS} has an approximately sparse property for λ. S∈Rm×n is the observation matrix and also the sensitivity matrix.
The procedure of the reconstruction algorithm is to first solve x according to (9), and then solve g according to (8). Classical signal reconstruction algorithms based on CS theory mainly include greedy algorithms and convex optimization algorithms. In this study, the sparsity adaptive matching pursuit (SAMP) algorithm [36] and the gradient projection for the sparse reconstruction (GPSR) algorithm [37] are used as examples to present the optimization equation.
A mathematics model of an l_{0}-analysis optimization problem for the SAMP algorithm is given by
x_opt = arg min ||x||_0  s.t. ||λ − SDx||_2 ≤ α
g_opt = D x_opt
The SAMP algorithm, which is an improvement on the orthogonal matching pursuit (OMP) algorithm, is a greedy algorithm. It avoids requiring the number of non-zero elements in x as prior knowledge, and iterates toward an approximate solution for x within the allowable error between λ and SDx (α is the upper error limit).
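As a reference point, plain OMP (which SAMP extends with adaptive sparsity) can be sketched as follows; this is a generic textbook implementation, not the paper's exact code:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily pick up to k atoms of A to fit y."""
    x = np.zeros(A.shape[1])
    support = []
    residual = y.astype(float).copy()
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        x[support] = coef
    return x
```

SAMP runs the same selection/refit loop but grows the support in stages, stopping when the residual falls below the error limit α instead of taking k as given.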
A mathematical model for the GPSR algorithm can be described as
x_opt = arg min { ||λ − SDx||_2^2 + β||x||_1 }
g_opt = D x_opt
The convex optimization problem takes β as the penalty parameter, measures the sparsity of x with the l_{1}-norm, and approximates the real solution for x by minimizing the objective function.
The above algorithms solve x according to the sparse constraint on x itself, resulting in complex calculation. In D-CS-ECT, x can be solved without the sparse constraint, for two reasons:
In (9), A^{D−CS} has the sparse property for reconstructed images (the sparse property comes from D), which can replace the sparse constraint.
The detection of two-phase flow in an industrial process requires a noiseless binary image, which makes the contact contour of the two mediums obvious and shows the two-phase distribution accurately. The few features of interest in the image reconstructed by the ECT system are an indirect reason why solving for x does not need to be constrained by sparsity.
According to the above analysis, three variations of the Landweber algorithm, the Tikhonov regularization algorithm, and the improved Newton–Raphson algorithm are used to achieve image reconstruction, which are called Non-Sparse-Landweber, Non-Sparse-Tikhonov, and Non-Sparse Newton-Raphson, and their derivation results are given below.
NS-Landweber: x^{(t+1)} = x^{(t)} + θ(SD)^T(λ − SDx^{(t)})   (12)

NS-Tikhonov: x_opt = ((SD)^T SD + μI)^{−1}(SD)^T λ   (13)

NS-Newton-Raphson: x^{(t+1)} = x^{(t)} − ((SD)^T SD + γI)^{−1}(SD)^T(SDx^{(t)} − λ)   (14)

where (12) and (14) are iterative algorithms and (13) is a direct algorithm. θ in (12), μ in (13), and γ in (14) are tunable parameters, and e in (12) and (14) denotes the error between adjacent iterations, used as the stopping criterion. In x_opt, the elements associated with the features of interest are retained prominently, while the elements associated with features of non-interest are suppressed (close to zero). Fig. 3 shows the overall idea of the image reconstruction algorithm in D-CS-ECT.
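A minimal sketch of two of the non-sparse variants, assuming the standard Landweber and Tikhonov forms applied to the sensing matrix A = SD (the paper's exact update rules may include additional details such as the iteration-error stopping test):

```python
import numpy as np

def ns_landweber(S, D, lam, theta, n_iter=300):
    """Non-Sparse-Landweber sketch: iterate on x with no sparsity constraint;
    the reconstructed image is g = D x as in (8)."""
    A = S @ D
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + theta * A.T @ (lam - A @ x)
    return D @ x

def ns_tikhonov(S, D, lam, mu):
    """Non-Sparse-Tikhonov sketch: direct l2-regularized solve for x."""
    A = S @ D
    x = np.linalg.solve(A.T @ A + mu * np.eye(A.shape[1]), A.T @ lam)
    return D @ x
```

Because the trained dictionary already concentrates the image on a few meaningful atoms, neither routine needs an explicit l_0 or l_1 constraint on x.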
The process of the image reconstruction algorithm in ECT

Dictionary Learning
Dictionary learning is the precondition of image reconstruction. It mainly includes the establishment of a training sample set and a learning algorithm. The training samples for this experiment were obtained by simulation in COMSOL 5.4 software. Dictionary learning has attracted much attention in the past few decades, and has been used in fields including image processing, signal restoration, and pattern recognition. When an input signal matrix Y is represented over a set of overcomplete bases, an approximate representation Y ≈ DX of the original signals can be obtained under a sparsity or reconstruction-error constraint. Each column of Y is a signal to be sparsely represented, each column of D is called an “atom,” and X is the sparse coefficient matrix of Y with respect to D, with dimensions (number of atoms) × (number of samples). The optimization goal is to represent each signal with as few atoms as possible from the given overcomplete dictionary, which can be described by the following formula:

min_{D,X} ∑_i ||x_i||_0  s.t. ||Y − DX||_F^2 ≤ ε

We need to minimize the restoration error while making X as sparse as possible, to obtain a concise representation of the signal and reduce the complexity of the model. Sparse dictionary learning includes two stages: the dictionary construction stage, and the stage in which the constructed dictionary is used to represent samples. As an effective tool for sparse representation, a dictionary provides a meaningful way to extract the essential hidden features of a signal, so obtaining a suitable dictionary is the key to the success of a sparse representation algorithm. The typical procedure is to initialize an overcomplete dictionary D, use a pursuit algorithm to sparsely code the training data and obtain the coefficient matrix X, and then use the K-Singular Value Decomposition (K-SVD) algorithm to update the dictionary D column by column along with the coefficient matrix X, computing the reconstruction error at each pass; after a fixed number of iterations or convergence to a specified error, the joint optimization of D and X is complete.
The pixel distribution of the imaging area determines how accurately the real situation inside the pipeline is reflected. The sampled number of pixels for the image should be determined before constructing the training sample set. Five different shaped objects that reflect different flow patterns are placed in the pipeline, as shown in Fig. 4a. The “circular” path method is used to form a 961- or 1681-pixel image, and the “square” path method is used to form a 561-, 1281- or 5097-pixel image. These different pixel distributions restore the images shown in Figs. 4b–4f.
Image with different number of pixels. (a) A two-phase distribution which contains object A, B, C, D, E with the same permittivity; The best restored image for this two-phase distribution. (b) The image with 961 pixels. (c) The image with 1681 pixels. (d) The image with 561 pixels. (e) The image with 1281 pixels. (f) The image with 5097 pixels
Selection of the image mainly depends on the restoration of the outline of the contact between the two mediums. In the “circular” path method, Objects B and D are restored well, but Objects A, C, and E are not. The “square” path method works for objects of any shape, but the quality of the image depends on the number of pixels. All objects are deformed in the 561-pixel image, and the restoration quality is poor. The 5097-pixel image better reflects the real situation inside the pipeline, but the high number of pixels leads to a time-consuming K-SVD program. The 1281-pixel distribution was selected because its degree of image restoration and the running speed of the K-SVD program fall between those of Figs. 4d and 4f.
Training samples were created based on the three model examples shown in Fig. 5. Except for the cross-section radius R_{1} inside the pipe, the parameters were set randomly. In Fig. 5a, c_{1}(x_{1}, y_{1}) and c_{2}(x_{2}, y_{2}) are the center coordinates of two bubbles, with R_{2} and R_{3} as their radii. In Fig. 5b, H_{1} is the height of the arc-shaped medium. In Fig. 5c, R_{3}, R_{4}, and L are the internal radius, external radius, and thickness of the annular medium.
(a) A model for creating samples with two-bubble distribution, similar models can be used for single-bubble distribution, three-bubble distribution, etc. (b) A model for creating samples with stratified distribution. (c) A model for creating samples with annular distribution
Six types of training samples (a central bubble and an eccentric bubble are collectively called a single bubble) were obtained according to the flow pattern. The distribution for single bubbles, two bubbles, three bubbles, and four bubbles each had 1000 images. 850 images showed stratified distribution, and 863 images showed annular distribution. The training sample set consisted of 5,713 binary images of the internal section of the cylindrical pipe, and some training samples are shown in Fig. 6.
Part of the training sample set
The training samples for the six different distributions were denoted as G_{1}, G_{2}, G_{3}, G_{4}, G_{5}, G_{6}, and the total training sample set was G = [G_{1}, G_{2}, G_{3}, G_{4}, G_{5}, G_{6}]. All samples were randomly shuffled, and slight noise was added to reflect natural conditions and increase the generalization ability of the overcomplete dictionary. G can be rewritten as G = [g_1, g_2, g_3, …, g_N].
The mathematical model of dictionary learning in D-CS-ECT is as follows:
G = DX,  G = [g_1, g_2, g_3, …, g_N],  X = [x_1, x_2, x_3, …, x_N]
where g_i ∈ R^{n×1} is the pixel vector, and x_i ∈ R^{p×1} is the sparse coefficient column vector corresponding to g_i. G ∈ R^{n×N} is the training sample set, each column of which represents an image. X ∈ R^{p×N} is the matrix of sparse coefficients.
In 2006, Aharon et al. [38] proposed the K-SVD algorithm for dictionary learning. The K-SVD algorithm is a greedy algorithm that realizes signal approximation by alternately optimizing the dictionary and the sparse coefficients. In this paper, the K-SVD algorithm is used to learn the overcomplete dictionary, and its objective function is
min_{D,X} ||G − DX||_F^2  s.t. ||x_i||_0 ≤ K
where x_{i} is a column vector in X and also the K-sparse sparse coefficient.
There are two stages used by the K-SVD algorithm in training the overcomplete dictionary, which are sparse coding and dictionary updating. In the sparse coding stage, in order to obtain the sparse matrix, the sparse coefficient vector corresponding to each sample is calculated [38–40]. In the dictionary updating stage, atoms in the overcomplete dictionary are updated according to the non-zero elements of the sparse matrix.
The K-SVD algorithm can be explained by the following four steps:
The training dataset G(G∈Rn×N) is normalized, and the error parameter for the SAMP algorithm and the number of iterations are set to α and T, respectively.
The number of atoms in the overcomplete dictionary D(D∈Rn×p) is p and p column vectors are randomly selected from G to form the initial overcomplete dictionary, which is denoted as D_{0}.
Sparse coding for G is achieved by using the SAMP algorithm to solve X(t)(X∈Rp×N).
The dictionary is updated column by column after the sparse matrix is obtained. When updating an atom Du(t), the residual error E^{(u)} between the samples that use Du(t) and their reconstruction by all atoms other than Du(t) is calculated, and is then decomposed with SVD to obtain [U(u),Δ(u),V(u)]. Du(t) is updated with the first column of U^{(u)}, and the corresponding coefficients XI(u) with the product of Δ(1,1)(u) and (V1col(u))T.
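Step 4 above can be sketched as a single dictionary-update sweep (NumPy; an illustrative implementation of the K-SVD rank-1 update, not the paper's exact code):

```python
import numpy as np

def ksvd_update(D, G, X):
    """One K-SVD dictionary-update sweep: refit each atom, and the coefficients
    that use it, via a rank-1 SVD of the residual."""
    for u in range(D.shape[1]):
        users = np.flatnonzero(X[u, :])          # samples that use atom u
        if users.size == 0:
            continue
        # Residual of those samples with atom u's contribution removed.
        E = G[:, users] - D @ X[:, users] + np.outer(D[:, u], X[u, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, u] = U[:, 0]                        # new atom: first left singular vector
        X[u, users] = s[0] * Vt[0, :]            # matching coefficient row
    return D, X
```

Alternating this sweep with a sparse-coding pass (SAMP in this paper) drives ||G − DX||_F down while keeping each column of X K-sparse.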
In this study, the training sample set was large, so the time consumed in the sparse coding stage was much greater than that consumed in the dictionary update stage. Sparse coding was performed by the SAMP algorithm, which is fast in calculating the sparse coefficient vector of a single image and slow in calculating the training sample set. To solve this problem, a parallel computing method was adopted. Considering the computing power and running memory of the computer, the training sample set was evenly divided, and input to the SAMP algorithm was done in batches and in parallel. Sparse coefficient vectors were calculated simultaneously using the CPU and GPU to accelerate the iterations of the K-SVD algorithm.
Simulation and Evaluation
COMSOL Multiphysics® software was used to build the simulation model, and image reconstruction was realized in the MATLAB® environment. The experimental platform was a computer with an Intel® CoreTM i7–6800k CPU @3.40 GHz, and an NVIDIA GeForce GTX 1080 Ti graphics card.
The measurement target was the two-phase flow in the pipeline, as shown in Fig. 7. R_{1} and R_{2} are the radius of the internal cross section and the thickness of the pipeline, respectively, L is the width of the electrode, θ1 is the angle of the electrode attached to the outer wall, and θ2 is the angle between the two adjacent electrodes. ε1 and ε2 are the relative permittivity values of the two mediums.
Parameters of the pipeline model with COMSOL Multiphysics®. R_1 = 40 mm, R_2 = 5 mm, L = 15 mm, θ_1 = 22.5°, θ_2 = 7.5°, ε_1 = 1, ε_2 = 10
The 66×1 capacitance vector and the 66×1281 sensitivity matrix were obtained from the simulation model. When solving for the 1281×1 pixel vector, the capacitance vector and the sensitivity matrix must be normalized. The number of atoms in the overcomplete dictionary was set to 3000 based on experience, and the dictionary converged after about ten iterations of training with the K-SVD algorithm; for a better imaging effect, the overcomplete dictionary after 30 iterations was selected. The image reconstruction algorithms were OMP, GPSR, NS-Landweber, NS-Tikhonov, and NS-Newton-Raphson, and their parameters are shown in Table 1.
Parameters of algorithms for different distributions

Distribution     | OMP    | GPSR           | NS-Landweber   | NS-Tikhonov   | NS-Newton-Raphson
Central bubble   | K = 25 | β = 5.01×10^−6 | θ = 1.14×10^−7 | μ = 2.23×10^2 | γ = 9.55×10^2
Eccentric bubble | K = 13 | β = 7.41×10^−6 | θ = 8.05×10^−7 | μ = 1.12×10^3 | γ = 8.42×10^7
Two bubbles      | K = 18 | β = 9.25×10^−3 | θ = 8.13×10^−6 | μ = 2.86×10^2 | γ = 4.15×10^6
Three bubbles    | K = 22 | β = 4.93×10^−2 | θ = 3.27×10^−7 | μ = 4.47×10^2 | γ = 1.08×10^7
Stratified       | K = 2  | β = 6.12×10^−1 | θ = 5.11×10^−6 | μ = 5.84×10^3 | γ = 1.09×10^8
Annular          | K = 6  | β = 1.00×10^−2 | θ = 1.02×10^−6 | μ = 9.51×10^1 | γ = 4.78×10^6
In order to study the noise resistance of the different algorithms, 40 dB of noise was added to the capacitance vector. The imaging results are shown in Fig. 8. For the images reconstructed by the OMP algorithm, almost all distributions show many artifacts: the bubbles are blurry, and their outlines diverge in multi-bubble distributions. With the GPSR algorithm, the image of the stratified distribution is clear, but the target bubbles are deformed in the three-bubble distribution. The central bubble distribution by the NS–Landweber algorithm and the annular distribution by the NS–Tikhonov algorithm differ from their real distributions. Images obtained by the NS–Newton–Raphson algorithm have slight artifacts and slightly deformed bubbles, but are still able to reflect the real distribution of the two-phase flow. A reasonable explanation for the poor images is the interference of noise and the soft-field effect, together with an incomplete training sample set that cannot cover the majority of the distributions in the two-phase flow.
Reconstruction results based on simulation data of noise of SNR = 40 dB
Image error (IE) and the correlation coefficient (CC) were used to evaluate the quality of the reconstructed images. For ECT, the CC is given higher priority than the IE. IE and CC are given by

IE = ||g′ − g|| / ||g|| × 100%

CC = ∑_i (g′_i − ḡ′)(g_i − ḡ) / √( ∑_i (g′_i − ḡ′)^2 ∑_i (g_i − ḡ)^2 )

where g is the normalized pixel vector of the real image and g′ is the normalized pixel vector of the reconstructed image. ḡ and ḡ′ are the average values of g and g′. The maximum value of CC is 1. A small IE and a large CC indicate good image reconstruction quality. The results for IE and CC are shown in Tables 2 and 3.
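The two metrics can be sketched directly from their standard definitions (NumPy; function names are illustrative):

```python
import numpy as np

def image_error(g_true, g_rec):
    """IE (%): relative l2 error between the true and reconstructed pixel vectors."""
    return 100.0 * np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true)

def correlation_coefficient(g_true, g_rec):
    """CC: Pearson correlation between the true and reconstructed pixel vectors."""
    a = g_true - g_true.mean()
    b = g_rec - g_rec.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

Note that CC is insensitive to a uniform offset or scaling of the reconstruction, which is why it is paired with IE when ranking algorithms.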
IE is smallest and CC is largest overall for the stratified distribution, indicating that ECT is sensitive for imaging stratified distributions. For most single-bubble distributions, the contrast between the bubble and the background is obvious. As the number of bubbles in multi-bubble distributions increases, IE grows and CC shrinks, and the imaging quality worsens. Sensitivity is non-linear in the imaging area, and the location and size of the bubbles are highly random, which increases the difficulty of imaging multi-bubble distributions. The annular distribution is the most difficult to reconstruct, as its IE and CC are the worst of all distributions, but the NS–Newton–Raphson algorithm, whose IE and CC show no obvious defects, can be selected for it.
Image error (IE) of reconstructed images

Distribution     | OMP     | GPSR    | NS-Landweber | NS-Tikhonov | NS-Newton-Raphson
Center bubble    | 57.5931 | 41.4506 | 58.7963      | 40.7606     | 43.3536
Eccentric bubble | 64.6041 | 47.329  | 44.8629      | 41.8379     | 34.6281
Two bubbles      | 73.8069 | 58.5772 | 63.5528      | 57.5421     | 59.7735
Three bubbles    | 91.0715 | 62.7431 | 59.2693      | 62.5968     | 63.0554
Stratified       | 53.9199 | 45.7729 | 47.0209      | 39.6289     | 47.3087
Annular          | 67.7863 | 73.5442 | 62.0886      | 75.2278     | 57.0617
Correlation coefficient (CC) of reconstructed images

Distribution     | OMP     | GPSR    | NS-Landweber | NS-Tikhonov | NS-Newton-Raphson
Center bubble    | 0.87988 | 0.90277 | 0.83647      | 0.927       | 0.89319
Eccentric bubble | 0.8524  | 0.92237 | 0.89062      | 0.92095     | 0.91154
Two bubbles      | 0.82934 | 0.86045 | 0.83665      | 0.86518     | 0.8556
Three bubbles    | 0.64996 | 0.87236 | 0.92549      | 0.87906     | 0.84343
Stratified       | 0.93447 | 0.95555 | 0.98557      | 0.96981     | 0.95551
Annular          | 0.82293 | 0.85193 | 0.90564      | 0.85206     | 0.93776
In order to evaluate the reconstruction quality and stability of the five algorithms more intuitively, the mean and variance of IE and CC are given.
As shown in Fig. 9, the left plot is the mean IE (higher values indicate worse imaging) and the right plot is the mean CC (higher values indicate better image quality). The OMP algorithm has the highest mean IE and the lowest mean CC, indicating that the quality of the images reconstructed by the OMP algorithm is low. The mean values for the other four algorithms are similar, and their quality is better than that of OMP. Fig. 10 shows the variance of each method; the larger the variance, the less stable the algorithm. The two large variances for the OMP algorithm reflect its poor stability. For the GPSR, NS–Landweber, and NS–Tikhonov algorithms, the variances are relatively large because of the errors between three reconstructed distributions (the three bubbles by GPSR, the center bubble by NS–Landweber, and the annular by NS–Tikhonov) and their true distributions, as shown in Fig. 8. For the NS–Newton–Raphson algorithm, the variance of CC is small, so the algorithm is stable and not easily disturbed by noise.
Fig. 9. Mean of image error (IE) and correlation coefficient (CC) using different algorithms
Fig. 10. Variance of image error (IE) and correlation coefficient (CC) using different algorithms

Conclusion
In this paper, an image reconstruction method based on D-CS-ECT was discussed. A sparse representation of the image signal was obtained by training an overcomplete dictionary with K-SVD, and the nonlinear relationship between the observed capacitance and the approximately sparse coefficients was solved without a sparsity constraint. The two-phase flow in the pipeline was simulated in a noisy environment. Images were reconstructed by the NS–Landweber, NS–Tikhonov, and NS–Newton–Raphson algorithms and compared with the OMP and GPSR algorithms, which are classical sparsity-constrained algorithms. The NS–Landweber and NS–Tikhonov algorithms reconstructed images clearly on the whole, but their unstable reconstructions of the center-bubble and annular distributions indicated that two-phase flow in the central region is not easy to image. The NS–Newton–Raphson algorithm was superior to the other four algorithms in overall image quality and stability, and its highly correlated reconstructed images were closer to the real two-phase flow distribution.
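To make the reconstruction scheme summarized above concrete, the NS–Landweber variant can be sketched as a plain Landweber iteration on the dictionary-domain coefficient vector, with the sensitivity matrix S and the learned overcomplete dictionary D combined into a single operator A = S·D. This is a minimal sketch under those assumptions; the function name, step-size rule, and iteration count are illustrative and not the paper's exact implementation.

```python
import numpy as np

def ns_landweber(S, D, c, alpha=1.0, n_iter=200):
    """Landweber iteration for the coefficient vector x, with no sparsity constraint:
        x_{k+1} = x_k + alpha * A^T (c - A x_k),  where A = S @ D.
    The reconstructed image is then g = D @ x."""
    A = S @ D
    x = np.zeros(A.shape[1])
    # Clamp the step size: any alpha below 2 / ||A||_2^2 guarantees convergence
    alpha = min(alpha, 1.0 / np.linalg.norm(A, 2) ** 2)
    for _ in range(n_iter):
        x += alpha * A.T @ (c - A @ x)
    return D @ x
```

The same skeleton yields the Tikhonov and Newton–Raphson variants by replacing the gradient step with the corresponding regularized or second-order update; in all three cases no soft-thresholding or support-selection step is applied to x, which is what distinguishes the NS family from OMP and GPSR.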
Acknowledgement: I would like to acknowledge Professor Xuebin Qin for inspiring my interest in the development of innovative technologies.
Funding Statement: This research was supported by the National Natural Science Foundation of China (No. 51704229) and the Outstanding Youth Science Fund of Xi'an University of Science and Technology (No. 2018YQ2-01).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References

1. Sharifi, M., Young, B. (2013). Electrical resistance tomography (ERT) applications to chemical engineering.
2. Cheney, M., Newell, I. (2011). Electrical impedance tomography.
3. Yu, Z. Z., Peyton, A. T. (1993). Imaging system based on electromagnetic tomography (EMT).
4. Xie, C. G., Plaskowski, A. (1989). 8-electrode capacitance system for two-component flow identification. Part 1: Tomographic flow imaging.
5. Li, Y., Yang, W., Xie, C. G., Huang, S., Wu, Z. et al. (2011). Gas/oil/water flow measurement by electrical capacitance tomography.
6. Gut, Z., Wolanski, P. (2010). Flame imaging using 3D electrical capacitance tomography.
7. Zhang, W., Cheng, Y., Wang, C., Yang, W., Wang, C. H. (2013). Investigation on hydrodynamics of triple-bed combined circulating fluidized bed using electrostatic sensor and electrical capacitance tomography.
8. Qiang, G., Meng, S., Wang, D., Zhao, Y., Liu, Z. (2017). Investigation of gas-solid bubbling fluidized beds using ECT with a modified Tikhonov regularization technique.
9. Zheng, Q., Yang, M., Yang, J., Zhang, Q., Zhang, X. (2018). Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process.
10. Zheng, Q., Tian, X., Jiang, N., Yang, M. (2019). Layer-wise learning based stochastic gradient descent method for the optimization of deep convolutional neural network.
11. Jang, J. D., Lee, S. H., Kim, K. Y., Choi, B. Y. (2006). Modified iterative Landweber method in electrical capacitance tomography.
12. Long, X., Mao, M., Lu, C., Li, R., Jia, F. (2021). Modeling of heterogeneous materials at high strain rates with machine learning algorithms trained by finite element simulations.
13. Liu, F., Wang, Z., Wang, Z., Qin, Z., Li, Z. et al. (2021). Evaluating yield strength of Ni-based superalloys via high throughput experiment and machine learning.
14. Saxena, S., Animesh, S., Fullwood, M., Mu, Y. (2020). OnionMHC: A deep learning model for peptide-HLA-A*02:01 binding predictions using both structure and sequence feature sets. DOI 10.21203/rs.3.rs-124695/v1.
15. Chen, Y., Li, H. Y., Xia, Z. J. (2018). Electrical capacitance tomography image reconstruction algorithm based on modified implicit formula Landweber method.
16. Tian, W., Ramli, M. F., Yang, W., Sun, J. (2017). Investigation of relaxation factor in Landweber iterative algorithm for electrical capacitance tomography. IEEE International Conference on Imaging Systems and Techniques. DOI 10.1109/IST.2017.8261455.
17. Donoho, D. L. (2006). Compressed sensing.
18. Lustig, M., Donoho, D., Pauly, J. M. (2007). Sparse MRI: The application of compressed sensing for rapid MR imaging.
19. Candès, E. J. (2008). The restricted isometry property and its implications for compressed sensing.
20. Jafarpour, S., Molina, R., Katsaggelos, A. K. (2008). Model-based compressive sensing.
21. Babacan, S. D., Molina, R., Katsaggelos, A. K. (2010). Bayesian compressive sensing using Laplace priors.
22. Wang, H., Fedchenia, I., Shishkin, S., Finn, A., Colket, M. (2012). Electrical capacitance tomography: A compressive sensing approach. IEEE International Conference on Imaging Systems & Techniques.
23. Wu, X., Huang, G., Wang, J., Xu, C. (2013). Image reconstruction method of electrical capacitance tomography based on compressed sensing principle.
24. Wu, X. J., Huang, G. X., Wang, J. W. (2013). Application of compressed sensing to flow pattern identification of ECT.
25. Zhang, L. F., Liu, Z. L., Tian, P. (2017). Image reconstruction algorithm for electrical capacitance tomography based on compressed sensing.
26. Almurib, H., Kumar, N., Lombardi, F. (2018). Approximate DCT image compression using inexact computing.
27. Liu, Z., Yin, H., Fang, B., Chai, Y. (2015). A novel fusion scheme for visible and infrared images based on compressive sensing.
28. You, Z., Raich, R., Fern, X., Kim, J. (2018). Weakly-supervised dictionary learning.
29. Dou, P., Wu, Y., Shah, S. K., Kakadiaris, I. A. (2018). Monocular 3D facial shape reconstruction from a single 2D image with coupled-dictionary learning and sparse coding.
30. Carrera, D., Boracchi, G., Foi, A., Wohlberg, B. (2018). Sparse overcomplete denoising: Aggregation versus global optimization.
31. Meribout, M., Saiedmran, M. (2017). Real-time two-dimensional imaging of solid contaminants in gas pipelines using an electrical capacitance tomography system.
32. Zhang, L., Yin, W. (2018). Image reconstruction method along electrical field centre lines using a modified mixed normalization model for electrical capacitance tomography.
33. Guo, Z., Shao, F., Lv, D. (2009). New calculation method of sensitivity distribution for ECT.
34. Candès, E. J., Eldar, Y. C., Needell, D. (2010). Compressed sensing with coherent and redundant dictionaries.
35. Parker, P. A., Bliss, D., Tarokh, V. (2015). CISS 2008.
36. Thong, T. D., Gan, L., Nguyen, N., Tran, T. D. (2010). Sparsity adaptive matching pursuit algorithm for practical compressed sensing. Conference on Signals, Systems & Computers. IEEE.
37. Figueiredo, M., Nowak, R. D., Wright, S. J. (2008). Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems.
38. Aharon, M., Elad, M., Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation.
39. Long, X., Mao, M., Lu, C., Li, R., Jia, F. (2021). Modeling of heterogeneous materials at high strain rates with machine learning algorithms trained by finite element simulations.
40. Long, X., Mao, M. H., Lu, C. H., Lu, T. X., Jia, F. R. (2021). Prediction of dynamic compressive performance of concrete-like materials subjected to SHPB based on artificial neural network.