This research proposes a highly effective soft computing paradigm for estimating the compressive strength (CS) of metakaolin-contained cemented materials. The proposed approach combines an enhanced grey wolf optimizer (EGWO) with an extreme learning machine (ELM). EGWO is an augmented form of the classic grey wolf optimizer (GWO): compared to standard GWO, it employs an improved hunting mechanism and delivers superior performance. The EGWO was used to optimize the ELM structure, yielding a hybrid model, ELM-EGWO. To train and validate the proposed ELM-EGWO model, a total of 361 experimental results featuring five influencing factors was collected. Based on sensitivity analysis, three distinct cases of influencing parameters were considered to investigate the effect of influencing factors on predictive precision. The experimental results show that the constructed ELM-EGWO achieved the highest precision in both the training (RMSE = 0.0959) and testing (RMSE = 0.0912) phases. The outcomes of the ELM-EGWO are significantly superior to those of deep neural networks (DNN), k-nearest neighbors (KNN), long short-term memory (LSTM), and other hybrid ELMs constructed with GWO, particle swarm optimization (PSO), Harris hawks optimization (HHO), the salp swarm algorithm (SSA), the marine predators algorithm (MPA), and the colony predation algorithm (CPA). The overall results demonstrate that the newly suggested ELM-EGWO has the potential to estimate the CS of metakaolin-contained cemented materials with a high degree of precision and robustness.
Concrete is a broadly utilized material in the construction industry. In the field of civil engineering, composite concrete technology is used to make concrete and cement-based materials, which are made up of cement, reinforcement, filler materials, admixtures, and water [
Extensive research on the production, characterization, and effect of MK on the physico-chemical characteristics of cement, mortars, and concrete has been carried out since the mid-20th century [
Although MK may increase the susceptibility of concrete to carbonation by consuming portlandite, there are cases where MK enhances carbonation resistance. Overall, replacing cement with MK increases the durability of mortar and concrete. According to earlier research, the partial replacement of cement with MK has a beneficial effect on the strength and durability of mortar and concrete; therefore, MK can serve as a carbon-neutral substitute for pure Portland cement concrete/mortar. In the past, numerous experimental works have been conducted to examine the impact of various factors, such as the proportion of cement replaced by MK, the age of the specimen at testing, and the ratio of aggregates to binder, on the CS of concrete and cement containing MK. Furthermore, statistical models have been developed to predict their CS. However, the vast number of variables and the complexity of the subject make it difficult to incorporate all parameters into these formulas, rendering them specific to certain materials and not generally applicable. Additionally, researchers have measured the CS of cement in the laboratory, but laboratory tests can be time-consuming, expensive, and labour-intensive. Therefore, there is a need for a more efficient and cost-effective method to estimate the CS of concrete and cement-based materials containing MK.
Machine learning (ML) is a multi-disciplinary area of research concerned with the development of intelligent systems that can learn and draw inferences from patterns in data. Its main goal is to simulate and implement human learning behavior, enabling the acquisition of new skills and the continual improvement of existing ones [
To overcome the aforementioned limitations identified in the literature and to enhance the application of hybrid ML paradigms, this research suggests a high-performance ML solution for predicting the CS of cement containing MK. To this end, the enhanced GWO (EGWO), proposed by Joshi et al. [
The subsequent part of this work is ordered as follows:
Owing to the non-linear association between the parameters and the concrete qualities, it is challenging to construct optimal concrete mixes incorporating metakaolin [
The existing literature across several technical and scientific disciplines provides evidence that soft computing techniques, known for their proficiency in non-linear modelling, can establish correlations between desired outcomes and a range of influencing parameters, whether they have direct or indirect impacts [
This section discusses the working concept of ELM, followed by the classification of MHs. Following that, details of GWO and EGWO are presented. At the end, the methodological details for hybrid ELM development are provided. Notably, detailed information on the utilized MHs is not offered because they are well-established and the works of PSO [
ELM [
A set of predictors can be defined as:
The target matrix in the training phase can be defined as:
An optimal response is provided through
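The ELM training procedure outlined above (randomly assigned hidden-layer parameters, output weights obtained in closed form via the Moore-Penrose pseudo-inverse) can be sketched as follows. This is a minimal illustration: the function names, the sigmoid activation, and the uniform weight initialization are assumptions, not the paper's exact implementation.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Single-hidden-layer ELM: random input weights/biases, output
    weights solved in one step by the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ y                    # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

In the hybrid models of this study, the meta-heuristic replaces the random draw of `W` and `b` with an optimized search, while `beta` is still computed in closed form.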
To date, several MHs have been introduced. Based on their source of inspiration, MHs can be classified into various groups, including (a) evolutionary algorithms (EAs), (b) swarm intelligence (SI) algorithms, (c) natural phenomena-based algorithms, and (d) human inspiration algorithms [
GWO [
In GWO, E&E are handled using parameters
In standard GWO, exploration is performed during the first phase of the iterations and exploitation during the second phase. However, this approach ignores the importance of striking an appropriate balance between the two activities in order to arrive at a close approximation of the global optimum. Thus, to solve this problem, Joshi and Arora [
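For reference, the standard GWO update can be sketched as below; positions are pulled toward the three best wolves (alpha, beta, delta) while the control parameter a decays from 2 to 0, shifting the search from exploration to exploitation. This sketch reproduces only the classic GWO; the EGWO's modified hunting mechanism is not detailed here, and the function name and defaults are illustrative assumptions.

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=30, t_max=200, lb=-1.0, ub=1.0, seed=0):
    """Minimal standard grey wolf optimizer for minimizing f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(t_max):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]  # three best wolves
        a = 2.0 * (1.0 - t / t_max)                  # decays linearly from 2 to 0
        guided = []
        for leader in (alpha, beta, delta):
            A = 2.0 * a * rng.random(X.shape) - a    # encircling coefficient
            C = 2.0 * rng.random(X.shape)
            D = np.abs(C * leader - X)               # distance to the leader
            guided.append(leader - A * D)
        X = np.clip(sum(guided) / 3.0, lb, ub)       # average of the three pulls
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], float(fit.min())
```

In the hybrid ELMs of this paper, each wolf's position vector encodes a candidate set of ELM input weights and biases, and f is the training error.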
In the last decade, numerous MHs have been utilized extensively to adjust the learning parameters of many ML models in order to improve their performance [
In past studies, researchers focused more on building models that were easy to use and could predict the properties of concrete and cement mortar. However, they often overlooked the importance of a good database for making predictions. A model’s accuracy can only be verified against a database containing a sufficient amount of high-quality data. In this study, a large amount of data was acquired from Huang et al. [
Particulars | M | MK/B | W/B | S_{P} | B/S | CS |
---|---|---|---|---|---|---|
Minimum | 32.00 | 0.00 | 0.30 | 0.00 | 0.33 | 6.06 |
Mean | 40.70 | 9.89 | 0.46 | 0.37 | 0.43 | 44.42 |
Median | 42.50 | 10.00 | 0.49 | 0.00 | 0.44 | 43.40 |
Maximum | 53.00 | 30.00 | 0.60 | 5.00 | 0.76 | 115.25 |
Skewness | 0.27 | 0.52 | −0.56 | 3.05 | 0.76 | 1.15 |
Kurtosis | −1.36 | −1.00 | −0.06 | 13.70 | 1.23 | 1.79 |
Variance | 66.90 | 101.01 | 0.01 | 0.51 | 0.01 | 428.29 |
Standard deviation | 8.18 | 10.05 | 0.07 | 0.72 | 0.09 | 20.70 |
Standard error | 0.43 | 0.53 | 0.00 | 0.04 | 0.00 | 1.09 |
According to
To model the CS of MK-contained materials, three different combinations of input parameters, viz., Case-1 (C1), Case-2 (C2), and Case-3 (C3), were explored. For this purpose, sensitivity analysis (SA) based on the Cosine Amplitude method [
Input parameters (abbreviations and names) | R_{S} | Rank | Case-1 | Case-2 | Case-3 | |
---|---|---|---|---|---|---|
M | Cement grade | 0.7462 | 1 | ✓ | ✓ | ✓ |
MK/B | Metakaolin to binder ratio | 0.6265 | 4 | × | ✓ | ✓ |
W/B | Water to binder ratio | 0.6887 | 3 | × | ✓ | ✓ |
S_{P} | Superplasticizer | 0.4391 | 5 | × | × | ✓ |
B/S | Binder to sand ratio | 0.7186 | 2 | ✓ | ✓ | ✓ |
Input combination | Input parameters | Input data dimension | Output |
---|---|---|---|
Case-1 | M and B/S | 361 × 2 | CS |
Case-2 | M, B/S, W/B, and MK/B | 361 × 4 | CS |
Case-3 | M, B/S, W/B, MK/B, and S_{P} | 361 × 5 | CS |
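The strength-of-relation values R_S produced by the Cosine Amplitude method can be computed as follows; a minimal sketch in which the function name is an illustrative assumption.

```python
import numpy as np

def cosine_amplitude(x, y):
    """Strength of relation R_S between an input series x and the output y:
    R_S = sum(x_i * y_i) / sqrt(sum(x_i^2) * sum(y_i^2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))
```

Applied to each input column against the CS column, this yields the rankings used to form Case-1 (top inputs only), Case-2, and Case-3 (all inputs).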
Data normalization is a vital step in data-driven modelling and is frequently employed to mitigate the effects of multidimensionality. Thus, the current research uses the min-max approach of normalization. The acquired records were then separated into training (TR) and testing (TS) subsets; the latter was held out in order to assess the trained models against unseen samples. Accordingly, 80% (289 observations) of the collected records were used for model construction and the remaining 20% (72 observations) for validation. Notably, while there are no predefined guidelines for determining the number of samples to include in a data-driven model, the decision is mostly determined by the nature of the task. A model constructed from a large sample is more likely to be accepted than one constructed from a small dataset; thus, a 20% sample was reserved to validate the developed models. Following model development, their effectiveness was measured using multiple indices, namely Adj.R^{2}, NS, PFI, R^{2}, RMSE, RSR, and VAF, expressed in
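The min-max normalization and the 80/20 partition described above can be sketched as follows; the function names and the random-permutation split are illustrative assumptions (the paper does not state how the split was drawn).

```python
import numpy as np

def min_max_scale(X):
    """Column-wise min-max normalization to the [0, 1] range."""
    X = np.asarray(X, float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def split_80_20(X, y, seed=0):
    """Random 80/20 split; 361 samples give 289 training and 72 testing
    observations, matching the counts reported in the paper."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr = round(0.8 * len(X))
    return X[idx[:n_tr]], y[idx[:n_tr]], X[idx[n_tr:]], y[idx[n_tr:]]
```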
This section reports the performance of the developed/employed paradigms used to evaluate the CS of MK-contained cemented materials. As stated above, each dataset was split into TR and TS subsets before developing the models. All of the models were constructed and validated with the same TR and TS subsets, and their results were evaluated via multiple parameters. It is noteworthy that the deterministic parameters of the MHs (viz., N_{S}, t_{max},
To construct an optimum hybrid ELM, values of N_{H} ranging between 5 and 20 were investigated. Using
Parameters | ELM-EGWO | ELM-GWO | ELM-PSO | ELM-HHO | ELM-SSA | ELM-MPA | ELM-CPA |
---|---|---|---|---|---|---|---|
N_{H} (Case-1) | 12 | 16 | 9 | 13 | 11 | 18 | 11 |
N_{H} (Case-2) | 14 | 13 | 11 | 12 | 12 | 17 | 17 |
N_{H} (Case-3) | 15 | 18 | 13 | 11 | 12 | 14 | 14 |
N_{S} | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
t_{max} | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
c_{1}, c_{2} | – | – | 1, 2 | – | – | – | – |
ub, lb | ±1 | ±1 | ±1 | ±1 | ±1 | ±1 | ±1 |
O_{W+B} (Case-1) | 36 | 48 | 27 | 39 | 33 | 54 | 33 |
O_{W+B} (Case-2) | 70 | 65 | 55 | 60 | 60 | 85 | 85 |
O_{W+B} (Case-3) | 90 | 108 | 78 | 66 | 72 | 84 | 84 |
During ELM-GWO modelling, the ELM was configured first, and the GWO was then incorporated to optimize the biases and weights of the ELM. Once training was completed, the final ELM-GWO structure was fixed. The GWO-optimized ELM contains 16, 13, and 18 hidden neurons for the Case-1, 2, and 3 combinations, respectively. The numbers of input neurons were 2, 4, and 5 for input combinations Case-1, 2, and 3, respectively, while the number of output neurons was 1 in all three cases. Therefore, the number of O_{W+B} can be determined as 48 (2 × 16 + 16) for Case-1, 65 (4 × 13 + 13) for Case-2, and 108 (5 × 18 + 18) for Case-3. With N_{H} between 13 and 18 and a total of 289 training samples, the search for the global optimum was carried out at t_{max} = 500 with N_{S} = 50, and the resulting O_{W+B} were used to validate the developed ELM-GWO models for each input combination.
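The parameter counts above follow directly from the network dimensions; the helper below (an illustrative name) reproduces the O_{W+B} arithmetic for the three cases.

```python
def n_optimized_params(n_inputs, n_hidden):
    """Number of ELM parameters tuned by the meta-heuristic:
    input-to-hidden weights plus hidden biases, O_{W+B} = n_inputs * N_H + N_H."""
    return n_inputs * n_hidden + n_hidden
```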
Analogous to the ELM-GWO model, all other hybrid ELMs were developed using the same training dataset. For each hybrid ELM, the value of N_{H} was determined using a trial-and-error technique. However, the values of N_{S},
Similar to the hybrid ELMs, the parameters of DNN, KNN, and LSTM were selected using a trial-and-error approach. The DNN structure was finalized using mean squared error as the loss function and the Adam optimizer. The final structure comprises 5 input neurons, one output neuron, and four hidden layers with N_{H} = 10 in each layer. Notably, in each dense layer, ReLU was used as the activation function. For KNN, the number of neighbors was set to 5. The LSTM model was finalized using the same loss function and optimizer that were utilized during the creation of the DNN model. The
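For illustration, the k = 5 nearest-neighbour regression used as the KNN baseline can be sketched in a few lines of NumPy; the paper's exact KNN implementation and distance metric are not stated, so Euclidean distance and target averaging are assumptions here.

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_ts, k=5):
    """k-nearest-neighbour regression (k = 5, as in the paper): predict each
    test sample as the mean target of its k closest training samples."""
    preds = []
    for x in np.atleast_2d(X_ts):
        d = np.linalg.norm(X_tr - x, axis=1)       # Euclidean distances
        preds.append(y_tr[np.argsort(d)[:k]].mean())
    return np.array(preds)
```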
The following sub-sections present and discuss the performance of all the constructed/employed models in the evaluation of the CS of MK-contained cemented materials. Also, a comprehensive comparison of the accuracies of models via various performance criteria is provided. In addition, the final results are visually presented and analyzed.
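Several of the performance criteria used in the comparison tables have standard definitions; the sketch below implements the common forms of RMSE, NS, VAF, and RSR (the paper's exact equations are given in its referenced expressions, so these should be read as conventional definitions, not a verbatim reproduction).

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def nash_sutcliffe(y, yhat):
    """NS efficiency: 1 - SS_res / SS_tot (perfect model gives NS = 1)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))

def vaf(y, yhat):
    """Variance account factor, in percent (perfect model gives 100)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float((1 - np.var(y - yhat) / np.var(y)) * 100)

def rsr(y, yhat):
    """Ratio of RMSE to the standard deviation of the observations."""
    y = np.asarray(y, float)
    return rmse(y, yhat) / float(np.std(y))
```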
The results for all the constructed/employed paradigms for the estimation of CS of MK-contained cemented materials are given in
Models/Particulars | Adj.R^{2} | NS | PFI | R^{2} | RMSE | RSR | VAF | Total score | |
---|---|---|---|---|---|---|---|---|---|
ELM-EGWO | V | 0.5760 | 0.5834 | 1.0416 | 0.5834 | 0.1178 | 0.6455 | 58.3365 | 70 |
R | 10 | 10 | 10 | 10 | 10 | 10 | 10 | ||
ELM-GWO | V | 0.5595 | 0.5672 | 1.0066 | 0.5672 | 0.1201 | 0.6579 | 56.7162 | 28 |
R | 4 | 4 | 4 | 4 | 4 | 4 | 4 | ||
ELM-PSO | V | 0.5740 | 0.5815 | 1.0374 | 0.5814 | 0.1181 | 0.6470 | 58.1433 | 63 |
R | 9 | 9 | 9 | 9 | 9 | 9 | 9 | ||
ELM-HHO | V | 0.5205 | 0.5289 | 0.9241 | 0.5288 | 0.1253 | 0.6864 | 52.8838 | 21 |
R | 3 | 3 | 3 | 3 | 3 | 3 | 3 | ||
ELM-SSA | V | 0.5598 | 0.5675 | 1.0072 | 0.5674 | 0.1200 | 0.6577 | 56.7431 | 35 |
R | 5 | 5 | 5 | 5 | 5 | 5 | 5 | ||
ELM-MPA | V | 0.5698 | 0.5773 | 1.0284 | 0.5773 | 0.1187 | 0.6502 | 57.7265 | 56 |
R | 8 | 8 | 8 | 8 | 8 | 8 | 8 | ||
ELM-CPA | V | 0.5693 | 0.5768 | 1.0274 | 0.5768 | 0.1187 | 0.6505 | 57.6789 | 49 |
R | 7 | 7 | 7 | 7 | 7 | 7 | 7 | ||
DNN | V | 0.5688 | 0.5760 | 1.0259 | 0.5763 | 0.1188 | 0.6512 | 57.5957 | 42 |
R | 6 | 6 | 6 | 6 | 6 | 6 | 6 | ||
KNN | V | 0.4345 | 0.4432 | 0.7418 | 0.4443 | 0.1362 | 0.7462 | 44.3490 | 7 |
R | 1 | 1 | 1 | 1 | 1 | 1 | 1 | ||
LSTM | V | 0.5018 | 0.5104 | 0.8845 | 0.5105 | 0.1277 | 0.6997 | 51.0411 | 14 |
R | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
Note: V, Value; R, Rank.
Models/Particulars | Adj.R^{2} | NS | PFI | R^{2} | RMSE | RSR | VAF | Total score | |
---|---|---|---|---|---|---|---|---|---|
ELM-EGWO | V | 0.6353 | 0.6416 | 1.1676 | 0.6416 | 0.1093 | 0.5987 | 64.1594 | 70 |
R | 10 | 10 | 10 | 10 | 10 | 10 | 10 | ||
ELM-GWO | V | 0.5945 | 0.6016 | 1.0809 | 0.6015 | 0.1152 | 0.6312 | 60.1548 | 42 |
R | 6 | 6 | 6 | 6 | 6 | 6 | 6 | ||
ELM-PSO | V | 0.5877 | 0.5949 | 1.0664 | 0.5948 | 0.1162 | 0.6365 | 59.4844 | 35 |
R | 5 | 5 | 5 | 5 | 5 | 5 | 5 | ||
ELM-HHO | V | 0.5740 | 0.5814 | 1.0372 | 0.5813 | 0.1181 | 0.6470 | 58.1347 | 28 |
R | 4 | 4 | 4 | 4 | 4 | 4 | 4 | ||
ELM-SSA | V | 0.6237 | 0.6303 | 1.1430 | 0.6302 | 0.1110 | 0.6081 | 63.0244 | 56 |
R | 8 | 8 | 8 | 8 | 8 | 8 | 8 | ||
ELM-MPA | V | 0.6154 | 0.6221 | 1.1252 | 0.6220 | 0.1122 | 0.6148 | 62.2046 | 49 |
R | 7 | 7 | 7 | 7 | 7 | 7 | 7 | ||
ELM-CPA | V | 0.6311 | 0.6376 | 1.1588 | 0.6375 | 0.1099 | 0.6020 | 63.7535 | 63 |
R | 9 | 9 | 9 | 9 | 9 | 9 | 9 | ||
DNN | V | 0.5584 | 0.5639 | 1.0019 | 0.5661 | 0.1205 | 0.6604 | 56.4023 | 21 |
R | 3 | 3 | 3 | 3 | 3 | 3 | 3 | ||
KNN | V | 0.5386 | 0.5428 | 0.9580 | 0.5467 | 0.1234 | 0.6762 | 54.2794 | 14 |
R | 2 | 2 | 2 | 2 | 2 | 2 | 2 | ||
LSTM | V | 0.4991 | 0.5075 | 0.8788 | 0.5078 | 0.1281 | 0.7018 | 50.7743 | 7 |
R | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Note: V, Value; R, Rank.
Models/Particulars | Adj.R^{2} | NS | PFI | R^{2} | RMSE | RSR | VAF | Total score | |
---|---|---|---|---|---|---|---|---|---|
ELM-EGWO | V | 0.7191 | 0.7240 | 1.3473 | 0.7240 | 0.0959 | 0.5254 | 72.4024 | 70 |
R | 10 | 10 | 10 | 10 | 10 | 10 | 10 | ||
ELM-GWO | V | 0.6560 | 0.6620 | 1.2120 | 0.6620 | 0.1061 | 0.5813 | 66.2018 | 63 |
R | 9 | 9 | 9 | 9 | 9 | 9 | 9 | ||
ELM-PSO | V | 0.6049 | 0.6116 | 1.1028 | 0.6117 | 0.1137 | 0.6232 | 61.1650 | 21 |
R | 3 | 3 | 3 | 3 | 3 | 3 | 3 | ||
ELM-HHO | V | 0.4546 | 0.4641 | 0.7851 | 0.4641 | 0.1336 | 0.7320 | 46.4089 | 7 |
R | 1 | 1 | 1 | 1 | 1 | 1 | 1 | ||
ELM-SSA | V | 0.6366 | 0.6430 | 1.1705 | 0.6429 | 0.1090 | 0.5975 | 64.2945 | 49 |
R | 7 | 7 | 7 | 7 | 7 | 7 | 7 | ||
ELM-MPA | V | 0.6494 | 0.6555 | 1.1978 | 0.6555 | 0.1071 | 0.5869 | 65.5492 | 56 |
R | 8 | 8 | 8 | 8 | 8 | 8 | 8 | ||
ELM-CPA | V | 0.6280 | 0.6345 | 1.1522 | 0.6345 | 0.1103 | 0.6046 | 63.4486 | 42 |
R | 6 | 6 | 6 | 6 | 6 | 6 | 6 | ||
DNN | V | 0.5696 | 0.5770 | 1.0279 | 0.5771 | 0.1187 | 0.6504 | 57.6971 | 14 |
R | 2 | 2 | 2 | 2 | 2 | 2 | 2 | ||
KNN | V | 0.6172 | 0.6234 | 1.1289 | 0.6238 | 0.1120 | 0.6137 | 62.3789 | 31 |
R | 4 | 5 | 4 | 4 | 5 | 5 | 4 | ||
LSTM | V | 0.6179 | 0.6208 | 1.1295 | 0.6245 | 0.1124 | 0.6158 | 62.3987 | 32 |
R | 5 | 4 | 5 | 5 | 4 | 4 | 5 |
Note: V, Value; R, Rank.
For a comparative assessment, all the developed/employed paradigms were ranked (as per R^{2} and RMSE criteria) and the details of the best three models are presented in
Case | Particulars | R^{2} | RMSE | ||||
---|---|---|---|---|---|---|---|
Rank-1 | Rank-2 | Rank-3 | Rank-1 | Rank-2 | Rank-3 | ||
Case-1 | Model | ELM-EGWO | ELM-PSO | ELM-MPA | ELM-EGWO | ELM-PSO | ELM-MPA |
Value | 0.5834 | 0.5814 | 0.5773 | 0.1178 | 0.1181 | 0.1187 | |
Case-2 | Model | ELM-EGWO | ELM-CPA | ELM-SSA | ELM-EGWO | ELM-CPA | ELM-SSA |
Value | 0.6416 | 0.6375 | 0.6302 | 0.1093 | 0.1099 | 0.1110 | |
Case-3 | Model | ELM-EGWO | ELM-GWO | ELM-MPA | ELM-EGWO | ELM-GWO | ELM-MPA |
Value | 0.7240 | 0.6620 | 0.6555 | 0.0959 | 0.1061 | 0.1071 |
After training the models on the same dataset, the testing subset was employed to validate them for all three input combinations in CS prediction. The model performance on the testing subset with normalized output values is given in
Models/Particulars | Adj.R^{2} | NS | PFI | R^{2} | RMSE | RSR | VAF | Total score | |
---|---|---|---|---|---|---|---|---|---|
ELM-EGWO | V | 0.6505 | 0.6305 | 1.1680 | 0.6751 | 0.1284 | 0.6079 | 64.5864 | 70 |
R | 10 | 10 | 10 | 10 | 10 | 10 | 10 | ||
ELM-GWO | V | 0.6388 | 0.6216 | 1.1453 | 0.6642 | 0.1299 | 0.6151 | 63.6439 | 40 |
R | 5 | 6 | 6 | 5 | 6 | 6 | 6 | ||
ELM-PSO | V | 0.6455 | 0.6266 | 1.1592 | 0.6704 | 0.1290 | 0.6111 | 64.2747 | 49 |
R | 7 | 7 | 7 | 7 | 7 | 7 | 7 | ||
ELM-HHO | V | 0.6388 | 0.6204 | 1.1441 | 0.6642 | 0.1301 | 0.6161 | 63.5480 | 28 |
R | 4 | 4 | 4 | 4 | 4 | 4 | 4 | ||
ELM-SSA | V | 0.6389 | 0.6211 | 1.1453 | 0.6643 | 0.1300 | 0.6155 | 63.6337 | 37 |
R | 6 | 5 | 5 | 6 | 5 | 5 | 5 | ||
ELM-MPA | V | 0.6497 | 0.6302 | 1.1667 | 0.6743 | 0.1284 | 0.6081 | 64.5496 | 63 |
R | 9 | 9 | 9 | 9 | 9 | 9 | 9 | ||
ELM-CPA | V | 0.6493 | 0.6300 | 1.1661 | 0.6740 | 0.1285 | 0.6083 | 64.5240 | 56 |
R | 8 | 8 | 8 | 8 | 8 | 8 | 8 | ||
DNN | V | 0.5920 | 0.5992 | 1.0704 | 0.6207 | 0.1337 | 0.6331 | 61.2148 | 21 |
R | 3 | 3 | 3 | 3 | 3 | 3 | 3 | ||
KNN | V | 0.5445 | 0.5550 | 0.9633 | 0.5766 | 0.1409 | 0.6670 | 55.9646 | 14 |
R | 2 | 2 | 2 | 2 | 2 | 2 | 2 | ||
LSTM | V | 0.5044 | 0.5078 | 0.8791 | 0.5393 | 0.1481 | 0.7016 | 52.2813 | 7 |
R | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Note: V, Value; R, Rank.
Models/Particulars | Adj.R^{2} | NS | PFI | R^{2} | RMSE | RSR | VAF | Total score | |
---|---|---|---|---|---|---|---|---|---|
ELM-EGWO | V | 0.7810 | 0.7742 | 1.4663 | 0.7964 | 0.1003 | 0.4752 | 78.5611 | 70 |
R | 10 | 10 | 10 | 10 | 10 | 10 | 10 | ||
ELM-GWO | V | 0.7259 | 0.7221 | 1.3462 | 0.7452 | 0.1113 | 0.5271 | 73.1664 | 52 |
R | 7 | 8 | 7 | 7 | 8 | 8 | 7 | ||
ELM-PSO | V | 0.6966 | 0.6988 | 1.2857 | 0.7180 | 0.1159 | 0.5488 | 70.4956 | 35 |
R | 5 | 5 | 5 | 5 | 5 | 5 | 5 | ||
ELM-HHO | V | 0.6488 | 0.6592 | 1.1892 | 0.6735 | 0.1233 | 0.5838 | 66.3618 | 25 |
R | 3 | 4 | 3 | 3 | 4 | 4 | 4 | ||
ELM-SSA | V | 0.7237 | 0.7146 | 1.3421 | 0.7432 | 0.1128 | 0.5343 | 73.1185 | 42 |
R | 6 | 6 | 6 | 6 | 6 | 6 | 6 | ||
ELM-MPA | V | 0.7264 | 0.7190 | 1.3507 | 0.7456 | 0.1119 | 0.5301 | 73.6219 | 53 |
R | 8 | 7 | 8 | 8 | 7 | 7 | 8 | ||
ELM-CPA | V | 0.7501 | 0.7381 | 1.3938 | 0.7677 | 0.1081 | 0.5118 | 75.1719 | 63 |
R | 9 | 9 | 9 | 9 | 9 | 9 | 9 | ||
DNN | V | 0.6827 | 0.6447 | 1.2179 | 0.7050 | 0.1259 | 0.5961 | 66.1105 | 24 |
R | 4 | 3 | 4 | 4 | 3 | 3 | 3 | ||
KNN | V | 0.6429 | 0.6105 | 1.1342 | 0.6681 | 0.1318 | 0.6241 | 62.3066 | 14 |
R | 2 | 2 | 2 | 2 | 2 | 2 | 2 | ||
LSTM | V | 0.4964 | 0.4968 | 0.8626 | 0.5319 | 0.1498 | 0.7094 | 51.6029 | 7 |
R | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Note: V, Value; R, Rank.
Models/Particulars | Adj.R^{2} | NS | PFI | R^{2} | RMSE | RSR | VAF | Total score | |
---|---|---|---|---|---|---|---|---|---|
ELM-EGWO | V | 0.8100 | 0.8135 | 1.5369 | 0.8233 | 0.0912 | 0.4319 | 81.8112 | 70 |
R | 10 | 10 | 10 | 10 | 10 | 10 | 10 | ||
ELM-GWO | V | 0.6950 | 0.6913 | 1.2813 | 0.7165 | 0.1173 | 0.5556 | 70.3607 | 63 |
R | 9 | 9 | 9 | 9 | 9 | 9 | 9 | ||
ELM-PSO | V | 0.6074 | 0.6174 | 1.1026 | 0.6350 | 0.1306 | 0.6185 | 62.5858 | 14 |
R | 2 | 2 | 2 | 2 | 2 | 2 | 2 | ||
ELM-HHO | V | 0.6582 | 0.6611 | 1.2050 | 0.6822 | 0.1229 | 0.5822 | 66.9826 | 28 |
R | 4 | 4 | 4 | 4 | 4 | 4 | 4 | ||
ELM-SSA | V | 0.6748 | 0.6720 | 1.2371 | 0.6977 | 0.1209 | 0.5727 | 68.3185 | 37 |
R | 6 | 5 | 5 | 6 | 5 | 5 | 5 | ||
ELM-MPA | V | 0.6849 | 0.6857 | 1.2674 | 0.7071 | 0.1184 | 0.5606 | 70.0899 | 56 |
R | 8 | 8 | 8 | 8 | 8 | 8 | 8 | ||
ELM-CPA | V | 0.5928 | 0.6075 | 1.0764 | 0.6214 | 0.1323 | 0.6265 | 61.5896 | 7 |
R | 1 | 1 | 1 | 1 | 1 | 1 | 1 | ||
DNN | V | 0.6555 | 0.6397 | 1.1831 | 0.6798 | 0.1268 | 0.6002 | 65.4321 | 21 |
R | 3 | 3 | 3 | 3 | 3 | 3 | 3 | ||
KNN | V | 0.6727 | 0.6759 | 1.2387 | 0.6957 | 0.1202 | 0.5693 | 68.6176 | 40 |
R | 5 | 6 | 6 | 5 | 6 | 6 | 6 | ||
LSTM | V | 0.6755 | 0.6838 | 1.2478 | 0.6983 | 0.1188 | 0.5624 | 69.1022 | 49 |
R | 7 | 7 | 7 | 7 | 7 | 7 | 7 |
Note: V, Value; R, Rank.
The second-best model for Case-1 combination is ELM-MPA with total score = 63, R^{2} = 0.6743 and RMSE = 0.1284, followed by ELM-CPA (total score = 56, R^{2} = 0.6740 and RMSE = 0.1285), ELM-PSO (total score = 49, R^{2} = 0.6704 and RMSE = 0.1290), and so on (see
Models | Particulars | R^{2} | RMSE | ||||
---|---|---|---|---|---|---|---|
Rank-1 | Rank-2 | Rank-3 | Rank-1 | Rank-2 | Rank-3 | ||
Case-1 | Model | ELM-EGWO | ELM-MPA | ELM-CPA | ELM-EGWO | ELM-MPA | ELM-CPA |
Value | 0.6751 | 0.6743 | 0.6740 | 0.1284 | 0.1284 | 0.1285 | |
Case-2 | Model | ELM-EGWO | ELM-CPA | ELM-MPA | ELM-EGWO | ELM-CPA | ELM-GWO |
Value | 0.7964 | 0.7677 | 0.7456 | 0.1003 | 0.1081 | 0.1113 | |
Case-3 | Model | ELM-EGWO | ELM-GWO | ELM-MPA | ELM-EGWO | ELM-GWO | ELM-MPA |
Value | 0.8233 | 0.7165 | 0.7071 | 0.0912 | 0.1173 | 0.1184 |
To spot trends, patterns, and other insights, it is often preferable to offer a visual description of the gathered information rather than reading raw reports/data. Graphical representations of raw data allow users to explore the data easily and reveal deeper insights, enabling information to be grasped quickly and effectively. Thus, to investigate the generalization ability of the generated/employed paradigms, visual interpretations of the results are presented and discussed in this sub-section.
In this work, the performance of the developed/employed paradigms was examined using index scoring and Taylor diagram [
Notably, quantitative estimation of a data-driven model’s results is an important criterion for assessing its reliability in forecasting the desired output [
Note that WCB represents the range of errors within which approximately 95% of the data reside.
Case | Models | MAE | SD | SE | ME | LB | UB | WCB | Rank |
---|---|---|---|---|---|---|---|---|---|
Case-1 | ELM-EGWO | 0.0891 | 0.0924 | 0.0109 | 0.0217 | 0.0674 | 0.1108 | 0.0434 | 1 |
ELM-GWO | 0.0910 | 0.0927 | 0.0109 | 0.0218 | 0.0692 | 0.1128 | 0.0436 | 3 | |
ELM-PSO | 0.0895 | 0.0930 | 0.0110 | 0.0219 | 0.0676 | 0.1114 | 0.0438 | 8 | |
ELM-HHO | 0.0914 | 0.0926 | 0.0109 | 0.0218 | 0.0696 | 0.1132 | 0.0436 | 5 | |
ELM-SSA | 0.0910 | 0.0928 | 0.0109 | 0.0218 | 0.0692 | 0.1128 | 0.0436 | 4 | |
ELM-MPA | 0.0886 | 0.0930 | 0.0110 | 0.0219 | 0.0667 | 0.1105 | 0.0438 | 7 | |
ELM-CPA | 0.0887 | 0.0929 | 0.0109 | 0.0218 | 0.0669 | 0.1105 | 0.0436 | 2 | |
DNN | 0.0957 | 0.0934 | 0.0110 | 0.0219 | 0.0738 | 0.1176 | 0.0438 | 6 | |
KNN | 0.1028 | 0.0963 | 0.0113 | 0.0226 | 0.0802 | 0.1254 | 0.0452 | 9 | |
LSTM | 0.1063 | 0.1032 | 0.0122 | 0.0243 | 0.0820 | 0.1306 | 0.0486 | 10 | |
Case-2 | ELM-EGWO | 0.0762 | 0.0653 | 0.0077 | 0.0153 | 0.0609 | 0.0915 | 0.0306 | 1 |
ELM-GWO | 0.0853 | 0.0715 | 0.0084 | 0.0168 | 0.0685 | 0.1021 | 0.0336 | 4 | |
ELM-PSO | 0.0857 | 0.0780 | 0.0092 | 0.0183 | 0.0674 | 0.1040 | 0.0366 | 6 | |
ELM-HHO | 0.0937 | 0.0801 | 0.0094 | 0.0188 | 0.0749 | 0.1125 | 0.0376 | 7 | |
ELM-SSA | 0.0853 | 0.0738 | 0.0087 | 0.0173 | 0.0680 | 0.1026 | 0.0346 | 5 | |
ELM-MPA | 0.0872 | 0.0702 | 0.0083 | 0.0165 | 0.0707 | 0.1037 | 0.0330 | 3 | |
ELM-CPA | 0.0826 | 0.0697 | 0.0082 | 0.0164 | 0.0662 | 0.0990 | 0.0328 | 2 | |
DNN | 0.0897 | 0.0883 | 0.0104 | 0.0207 | 0.0690 | 0.1104 | 0.0414 | 8 | |
KNN | 0.0972 | 0.0890 | 0.0105 | 0.0209 | 0.0763 | 0.1181 | 0.0418 | 9 | |
LSTM | 0.1082 | 0.1036 | 0.0122 | 0.0243 | 0.0839 | 0.1325 | 0.0486 | 10 | |
Case-3 | ELM-EGWO | 0.0678 | 0.0610 | 0.0072 | 0.0143 | 0.0535 | 0.0821 | 0.0286 | 1 |
ELM-GWO | 0.0848 | 0.0811 | 0.0096 | 0.0191 | 0.0657 | 0.1039 | 0.0382 | 5 | |
ELM-PSO | 0.1037 | 0.0795 | 0.0094 | 0.0187 | 0.0850 | 0.1224 | 0.0374 | 2 | |
ELM-HHO | 0.0889 | 0.0849 | 0.0100 | 0.0200 | 0.0689 | 0.1089 | 0.0400 | 8 | |
ELM-SSA | 0.0881 | 0.0829 | 0.0098 | 0.0195 | 0.0686 | 0.1076 | 0.0390 | 7 | |
ELM-MPA | 0.0876 | 0.0796 | 0.0094 | 0.0187 | 0.0689 | 0.1063 | 0.0374 | 3 | |
ELM-CPA | 0.0975 | 0.0895 | 0.0105 | 0.0210 | 0.0765 | 0.1185 | 0.0420 | 9 | |
DNN | 0.0881 | 0.0911 | 0.0107 | 0.0214 | 0.0667 | 0.1095 | 0.0428 | 10 | |
KNN | 0.0874 | 0.0825 | 0.0097 | 0.0194 | 0.0680 | 0.1068 | 0.0388 | 6 | |
LSTM | 0.0883 | 0.0894 | 0.0094 | 0.0187 | 0.0696 | 0.1070 | 0.0374 | 4 |
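The band quantities in the table above are consistent with a standard confidence interval on the mean absolute error. The sketch below, assuming t ≈ 1.994 (the two-tailed 95% Student-t value for df = 71, i.e., the 72 testing samples), reproduces the Case-1 ELM-EGWO row; the function name is illustrative.

```python
import math

def uncertainty_bands(mae, sd, n, t_crit=1.994):
    """95% confidence band on the mean absolute error:
    SE = SD / sqrt(n), ME = t * SE, band = [MAE - ME, MAE + ME],
    WCB = band width = 2 * ME."""
    se = sd / math.sqrt(n)
    me = t_crit * se
    return se, me, mae - me, mae + me, 2 * me

# Case-1 ELM-EGWO (testing subset, n = 72):
se, me, lb, ub, wcb = uncertainty_bands(mae=0.0891, sd=0.0924, n=72)
```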
Furthermore, monotonicity analysis was used to test the viability of the proposed ELM-EGWO model for different input parameters of MK-contained cemented materials. Notably, overfitting is a common problem encountered during the mathematical simulation of datasets: a model may achieve an excellent simulation of the data used for its development and training, yet predict extremely unusual behavior for other datasets. Hence, it is worth assessing the overall behavior of the optimum models with regard to their expected behavior in terms of the estimated parameter. The purpose of this investigation was therefore to determine whether the proposed ELM-EGWO model is capable of reproducing the expected trend between the various inputs and CS. For this purpose, one input parameter was varied monotonically, while the other inputs were kept constant at their mean values (as presented in
Parameters | Range | Details of constant input parameters | Fig. ref. |
---|---|---|---|
M | Set A: 30–39 by 1 | MK/B = 9.89, W/B = 0.46, S_{P} = 0.37, and B/S = 0.43. | |
Set B: 40–49 by 1 | |||
Set C: 50–59 by 1 | |||
MK/B | Set A: 1–10 by 1 | M = 40.70, W/B = 0.46, S_{P} = 0.37, and B/S = 0.43. | |
Set B: 11–20 by 1 | |||
Set C: 21–30 by 1 | |||
W/B | Set A: 0.30–0.39 by 0.01 | M = 40.70, MK/B = 9.89, S_{P} = 0.37, and B/S = 0.43. | |
Set B: 0.40–0.49 by 0.01 | |||
Set C: 0.50–0.59 by 0.01 | |||
S_{P} | Set A: 0–1.35 by 0.15 | M = 40.70, MK/B = 9.89, W/B = 0.46, and B/S = 0.43. | |
Set B: 1.50–2.85 by 0.15 | |||
Set C: 3.00–4.35 by 0.15 | |||
B/S | Set A: 0.35–0.44 by 0.01 | M = 40.70, MK/B = 9.89, W/B = 0.46, and S_{P} = 0.37. | |
Set B: 0.45–0.54 by 0.01 | |||
Set C: 0.55–0.64 by 0.01 |
A thorough examination of the prediction outcomes of the proposed ELM-EGWO model in estimating the CS of cemented materials containing metakaolin is provided in the above sub-sections. In this work, three distinct input parameter combinations were chosen for model construction and validation. The ELM-EGWO model appears to be the most successful based on the RMSE, R^{2}, and other performance metrics, and it attained the requisite predictive accuracy in the Case-3 combination. Using UA, the reliability of the proposed/employed models was assessed; when evaluated by WCB value, the ELM-EGWO model was determined to be the most reliable in the Case-3 combination (WCB = 0.0286). Note that the uncertainty evaluation was based only on the testing dataset, because a model with higher accuracy during the validation phase is deemed more resilient and should be regarded with greater confidence. It can also be seen from the results that the Case-3 input combination produced a stronger prediction model, implying that the parameters considered in this study had a significant effect on the CS of MK-contained materials. Moreover, the results of the monotonicity analysis verify the correctness and validity of the proposed ELM-EGWO model. Based on these results, engineers and practitioners can attain a desired CS value by adjusting the proportions of the different influential parameters. However, it may be noted that the effects of different parameters presented in
This research proposes a novel hybrid model, ELM-EGWO, for estimating the CS of MK-contained cemented materials. To create and evaluate the constructed ELM-EGWO model, three distinct combinations of influencing parameters were investigated based on SA. Experimental results indicate that the developed ELM-EGWO obtained the most precise CS estimates when all the input parameters, i.e., the Case-3 combination, were considered as influencing factors. Specifically, the ELM-EGWO outperformed the other hybrid ELMs and standalone models in the testing phase, with R^{2} = 0.6751 in Case-1, R^{2} = 0.7964 in Case-2, and R^{2} = 0.8233 in Case-3 for CS prediction. Overall, the experimental findings show that the newly suggested ELM-EGWO model has strong potential to estimate the CS of cemented materials containing metakaolin with a high degree of accuracy and robustness.
One of the key advantages of the proposed ELM-EGWO is its highest prediction accuracy, demonstrating EGWO’s superiority over the standard GWO and the other MHs employed in this work. The proposed ELM-EGWO model also offers a faster convergence rate, which is a significant benefit. In addition, the findings of the monotonicity analysis will enable researchers/practitioners to design concrete by changing the mix proportions of MK and superplasticizers as per their requirements. This will allow engineers/practitioners to implement a sustainable concept without affecting the desired strength of concrete. However, the future scope might include (a) a thorough evaluation of the ELM-EGWO and other hybrid ANN, SVM, and ANFIS models constructed with EGWO in predicting the CS of MK-contained concrete and cemented materials; (b) a comparison of the proposed ELM-EGWO to other hybrid models constructed with evolutionary and physics-based MHs; and (c) implementation of Shapley additive explanations (SHAP) or partial dependence plots to obtain optimized results of the parametric analysis. Nonetheless, to the authors’ knowledge, this is the first research to estimate the CS of cemented materials containing metakaolin using a hybrid ELM paradigm created with an enhanced version of an SI algorithm.
Adjusted coefficient of determination
Artificial neural network
Binder to sand ratio
Biogeography-based optimization
Carbon dioxide
Colony predation algorithm
Compressive strength
Charged system search
Differential evolution
Deep neural network
Exploration and exploitation
Enhanced grey wolf optimizer
Extreme learning machine
Hybrid model of ELM and CPA
Hybrid model of ELM and EGWO
Hybrid model of ELM and GWO
Hybrid model of ELM and HHO
Hybrid model of ELM and MPA
Hybrid model of ELM and PSO
Hybrid model of ELM and SSA
Evolutionary programming
Field of forces algorithm
Genetic algorithm
Ground granulated blast furnace slag
Genetic programming
Gravitational search algorithm
Grey wolf optimizer
Harris hawks optimization
K-nearest neighbors algorithm
Lower bound
League championship algorithm
Long short-term memory
Cement grade
Meta-heuristic
Metakaolin
Metakaolin to binder ratio
Machine learning
Machine learning algorithm
Marine predators algorithm
Number of hidden neurons
Swarm size
Performance index
Particle swarm optimization
Coefficient of determination
Root mean square error
Ratio of RMSE to the standard deviation of actual observations
Soccer league competition
Single-layer feed-forward neural network
Social learning optimization
Spiral optimizer
Seeker optimization algorithm
Superplasticizer
Salp swarm algorithm
Hybrid model of SVR and PSO
Teaching-learning-based optimization
Maximum epoch count
Training
Testing
Upper bound
Variance account factor
Water-binder ratio
Water cycle algorithm
None.
This study is supported via funding from Prince Sattam Bin Abdulaziz University Project Number (PSAU/2023/R/1445).
The authors confirm contribution to the paper as follows: study conception and design: A. B.; data collection: A. B.; analysis and interpretation of results: A. B.; draft manuscript preparation: A. B., R. K. S., M. A., S. A. A. All authors reviewed the results and approved the final version of the manuscript.
The details of employed dataset are mentioned in the manuscript.
The authors declare that they have no conflicts of interest to report regarding the present study.