In practical scientific investigations, an inverse problem is the task of computing unknown parameters from observations that are recorded only indirectly, as in monitoring and controlling quality in industrial process control. Linear regression can be viewed as a linear inverse problem: the procedure of estimating unknown parameters can be expressed as an inverse problem. However, maximum likelihood can provide an unstable solution, and the problem becomes more complicated when unknown parameters are estimated from different samples; hence, researchers search for better estimators. We study two joint censoring schemes for lifetime products in industrial process monitoring. In practice, this type of data arises in fields such as the medical industry and industrial engineering. In this study, statistical inference for Chen lifetime products is considered, and the underlying parameters are estimated. Both maximum likelihood and Bayes' rule are studied for the model parameters. Interval estimators are built from the asymptotic distribution of the maximum likelihood estimators and from the empirical distributions obtained with Markov chain Monte Carlo algorithms. Theoretical results, reported in tables and figures, are assessed through simulation studies and verified in an analysis of lifetime data. We briefly describe the performance of the developed methods.

Several types of monitoring data are available. One is censored data, which arises routinely in life-testing experiments. The oldest censoring schemes are the so-called type-I and type-II schemes. In practice, two quantities are involved: the test time and the number of failed items. In a type-I censoring scheme, the test terminates at a predetermined time, so the number of observed failures is random; in a type-II censoring scheme, the number of failures is predetermined and the termination time is random. Under both schemes, no component can be removed from the experiment before the final stage, which means defective units can be detected only after the experiment has run. The mixture of these two types is the so-called hybrid censoring scheme. When products come from two production lines under the same conditions, samples of sizes n_1 and n_2 are chosen from these lines for experimental testing. The experiment runs under time and cost considerations, and the experimenter terminates it after a predetermined time or number of failures; this is called a joint censoring scheme. A sample of total size N = n_1 + n_2 is taken, with n_1 units from line A and n_2 units from line B. At the first observed failure, say W_1, a prescribed number of surviving units is removed from the experiment; at the second observed failure, say W_2, a further prescribed number of surviving units is removed; and so on for each observed failure W_j of the N = n_1 + n_2 units taken from the two production lines.

Assume two production lines, A and B, and a random sample of total size N = n_1 + n_2, where n_1 units come from line A and n_2 units from line B, all placed on a life test with censoring scheme R_1, R_2, ..., R_m. Suppose the first failure W_1 is observed from a unit taken from line A: then R_1 surviving components are removed from the remaining units of line A and R_1 + 1 surviving components are removed from line B. When the second failure W_2 is observed, if W_2 is chosen from line B, then R_2 + 1 surviving components are removed from the n_1 − R_1 − 1 remaining units of line A, and R_2 surviving components are removed from the n_2 − R_1 − 1 remaining units of line B, so that the same total number of units leaves the test at each stage. The test continues in this manner until the m-th failure W_m is observed. Each failure W_j, j = 1, ..., m, carries an indicator of whether it comes from line A or line B, and N = n_1 + n_2, where m_1 is the number of failed units from line A and m_2 is the number of failed units from line B.
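The sampling mechanism above can be sketched in code. The following is a minimal sketch of the simpler, non-progressive joint type-II variant, in which all n_1 + n_2 units stay on test until m failures are observed; the function names and the use of inverse-transform sampling for the Chen distribution are our own illustrative choices, not the paper's algorithm.

```python
import math
import random

def rchen(n, beta, lam, rng):
    # Inverse-transform sampling for the Chen distribution:
    # S(t) = exp(lam * (1 - exp(t**beta))) = u  solves to
    # t = (log(1 - log(u)/lam)) ** (1/beta), with u in (0, 1].
    return [(math.log(1.0 - math.log(1.0 - rng.random()) / lam)) ** (1.0 / beta)
            for _ in range(n)]

def joint_type2_sample(n1, n2, m, pars1, pars2, seed=7):
    """Put n1 units from line A and n2 units from line B on test together
    and record the first m failure times with line indicators
    (1 = line A, 0 = line B)."""
    rng = random.Random(seed)
    pool = [(t, 1) for t in rchen(n1, *pars1, rng)] + \
           [(t, 0) for t in rchen(n2, *pars2, rng)]
    pool.sort()  # joint ordering of all lifetimes
    times = [t for t, _ in pool[:m]]
    lines = [z for _, z in pool[:m]]
    return times, lines

# Illustrative run with the sample sizes used later in the paper's example
times, lines = joint_type2_sample(40, 40, 30, (1.5, 1.1), (1.8, 0.9))
```

The indicator list plays the role of the W_j line labels: m_1 is its sum and m_2 = m − m_1.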

The joint likelihood function under the two progressively type-II censored samples is formed from the density contributions of the m observed failures and the reliability contributions of the progressively removed units, where the line indicator of each observed failure W_j determines whether the density and reliability functions of line A or line B enter the j-th term.

The reliability and hazard rate functions of the Chen distribution are, respectively,

$S(t) = \exp\{\lambda(1 - e^{t^{\beta}})\}, \quad t > 0,$

and

$h(t) = \lambda \beta\, t^{\beta - 1} e^{t^{\beta}},$

where $\beta > 0$ is a shape parameter and $\lambda > 0$ is a scale parameter; the hazard rate is bathtub-shaped when $\beta < 1$ and increasing when $\beta \geq 1$.
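As a quick numerical check, the reliability and hazard rate functions can be coded directly, together with the density obtained from the standard identity f(t) = h(t)S(t); the function names below are illustrative.

```python
import math

def chen_sf(t, beta, lam):
    # Reliability: S(t) = exp(lam * (1 - exp(t**beta)))
    return math.exp(lam * (1.0 - math.exp(t ** beta)))

def chen_hazard(t, beta, lam):
    # Hazard rate: h(t) = lam * beta * t**(beta-1) * exp(t**beta)
    # (bathtub-shaped for beta < 1, increasing for beta >= 1)
    return lam * beta * t ** (beta - 1) * math.exp(t ** beta)

def chen_pdf(t, beta, lam):
    # Density from the identity f(t) = h(t) * S(t)
    return chen_hazard(t, beta, lam) * chen_sf(t, beta, lam)
```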

Taking logarithms of both sides of the joint likelihood function gives the log-likelihood function, which is used to derive the point and interval estimators of the underlying parameters.

The likelihood equations are obtained by equating to zero the first partial derivatives of the log-likelihood function with respect to each of the four parameters β_1, λ_1, β_2, and λ_2. After replacing each scale parameter by its conditional maximizer, the system reduces to equations in the shape parameters alone. These nonlinear equations admit no closed-form solution, so the maximum likelihood estimates must be computed numerically. Note that if m_1 = 0 or m_2 = 0, all observed failures come from a single line, and the parameter values of the other line cannot be estimated.
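For a single complete (uncensored) Chen sample, the same idea can be illustrated compactly: for fixed β the likelihood equation for λ has the closed-form solution λ̂(β) = n / Σ(e^{t_i^β} − 1), so β can be found by a one-dimensional search on the profile log-likelihood. The sketch below (the demo data, function names, and golden-section search are our own illustrative choices, not the paper's joint-sample estimator) shows this:

```python
import math

def chen_loglik(ts, beta, lam):
    # Complete-sample Chen log-likelihood:
    # n*log(lam*beta) + (beta-1)*sum(log t) + sum(t^beta) + lam*sum(1 - e^{t^beta})
    n = len(ts)
    return (n * math.log(lam * beta)
            + (beta - 1.0) * sum(math.log(t) for t in ts)
            + sum(t ** beta for t in ts)
            + lam * sum(1.0 - math.exp(t ** beta) for t in ts))

def lam_hat(ts, beta):
    # Closed-form conditional maximizer of lam for fixed beta
    return len(ts) / sum(math.exp(t ** beta) - 1.0 for t in ts)

def chen_mle(ts, lo=0.05, hi=10.0, iters=200):
    # Golden-section search on the profile log-likelihood in beta
    profile = lambda b: chen_loglik(ts, b, lam_hat(ts, b))
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if profile(c) > profile(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    beta = 0.5 * (a + b)
    return beta, lam_hat(ts, beta)

demo = [0.12, 0.35, 0.41, 0.58, 0.77, 0.91, 1.02, 1.20]  # illustrative data
beta_hat, l_hat = chen_mle(demo)
```

The joint two-line case in the paper requires solving the four likelihood equations simultaneously, but the profile idea above is the one-sample building block.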

Obtaining interval estimates of the unknown parameters requires the Fisher information matrix, defined as the negative expectation of the matrix of second partial derivatives of the log-likelihood function.

Under the asymptotic normality of the maximum likelihood estimators, approximate confidence intervals for the four parameters are constructed using the diagonal elements σ_{11}, σ_{22}, σ_{33}, and σ_{44} of the approximate variance-covariance matrix, obtained by inverting the information matrix evaluated at the maximum likelihood estimates.
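The Wald-type interval just described can be sketched for a single Chen sample by approximating the observed information matrix with finite differences; the dataset and plug-in estimates below are purely illustrative, and the 2×2 inversion stands in for the paper's 4×4 case.

```python
import math

def loglik(ts, beta, lam):
    # Complete-sample Chen log-likelihood
    n = len(ts)
    return (n * math.log(lam * beta)
            + (beta - 1.0) * sum(math.log(t) for t in ts)
            + sum(t ** beta for t in ts)
            + lam * sum(1.0 - math.exp(t ** beta) for t in ts))

def wald_intervals(ts, beta, lam, h=1e-4, z=1.96):
    """Approximate the observed information (negative Hessian of the
    log-likelihood) by central differences, invert the 2x2 matrix,
    and return Wald intervals estimate +/- z * standard error."""
    f = lambda b, l: loglik(ts, b, l)
    h11 = (f(beta + h, lam) - 2.0 * f(beta, lam) + f(beta - h, lam)) / h ** 2
    h22 = (f(beta, lam + h) - 2.0 * f(beta, lam) + f(beta, lam - h)) / h ** 2
    h12 = (f(beta + h, lam + h) - f(beta + h, lam - h)
           - f(beta - h, lam + h) + f(beta - h, lam - h)) / (4.0 * h ** 2)
    i11, i22, i12 = -h11, -h22, -h12        # observed information entries
    det = i11 * i22 - i12 ** 2              # 2x2 inverse gives the variances
    var_beta, var_lam = i22 / det, i11 / det
    return ((beta - z * math.sqrt(var_beta), beta + z * math.sqrt(var_beta)),
            (lam - z * math.sqrt(var_lam), lam + z * math.sqrt(var_lam)))

demo = [0.3, 0.5, 0.7, 0.9, 1.1, 1.3]                     # illustrative data
lam0 = len(demo) / sum(math.exp(t) - 1.0 for t in demo)   # lam_hat at beta = 1
ci_beta, ci_lam = wald_intervals(demo, 1.0, lam0)
```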

Because of the dimensionality of the model, we adopt the Bayesian approach with the MCMC method. Bayesian estimation requires prior information about the model parameters, which in this study are assigned independent gamma priors. The available prior information is then modeled as

where

Combining this prior information with the data then yields the posterior distribution of the model parameters as

where the denominator of the fraction can be dropped, since it does not depend on the parameters and serves only as a normalizing constant.

The Bayes estimators are computed with respect to the adopted loss function; the Bayes estimate of any function of the parameters is then given by

The integrals involved in the posterior expectations cannot be evaluated in closed form, so the MCMC method is used to approximate them. The full conditional distributions of the scale parameters reduce to gamma distributions, represented by

Step 1: Start with an initial vector of parameter values.

Step 2: Generate new values of the scale parameters λ_1 and λ_2 from their gamma full conditional distributions.

Step 3: Generate new values of the shape parameters β_1 and β_2 from their full conditional distributions, for example with Metropolis–Hastings steps.

Step 4: Update the parameter vector with the newly generated values.

Step 5: Repeat Steps 2 to 4 S times.

Step 6: Discard the first S* iterations, the number needed for the chain to reach its equilibrium distribution (the so-called burn-in); the Bayes estimators of the model parameters are then represented by

with posterior variance of

Step 7: The credible intervals of the model parameters are obtained from the quantiles of the ordered MCMC draws.
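For a single Chen sample, the Gibbs scheme above can be sketched concretely: under a Gamma(a, b) prior, λ has a Gamma(a + n, b + Σ(e^{t_i^β} − 1)) full conditional, while β is updated with a random-walk Metropolis step on log β. The priors, step size, and demo data below are illustrative assumptions, not the paper's settings.

```python
import math
import random

def log_post_beta(ts, beta, lam, a_b=2.0, b_b=1.0):
    # Log full conditional of beta (up to a constant), Gamma(a_b, b_b) prior
    n = len(ts)
    return (n * math.log(beta) + (beta - 1.0) * sum(math.log(t) for t in ts)
            + sum(t ** beta for t in ts)
            + lam * sum(1.0 - math.exp(t ** beta) for t in ts)
            + (a_b - 1.0) * math.log(beta) - b_b * beta)

def gibbs_chen(ts, S=2000, burn=500, a_l=2.0, b_l=1.0, step=0.3, seed=3):
    rng = random.Random(seed)
    n = len(ts)
    beta, lam = 1.0, 1.0                  # Step 1: initial values
    betas, lams = [], []
    for _ in range(S):
        # Step 2: lam | beta, data ~ Gamma(a_l + n, rate = b_l + sum(e^{t^beta} - 1))
        rate = b_l + sum(math.exp(t ** beta) - 1.0 for t in ts)
        lam = rng.gammavariate(a_l + n, 1.0 / rate)   # gammavariate takes scale
        # Step 3: beta | lam, data via random-walk Metropolis on log(beta)
        prop = beta * math.exp(rng.gauss(0.0, step))
        log_ratio = (log_post_beta(ts, prop, lam) - log_post_beta(ts, beta, lam)
                     + math.log(prop) - math.log(beta))  # Jacobian of log transform
        if math.log(1.0 - rng.random()) < log_ratio:
            beta = prop
        # Steps 4-5: record the updated vector and repeat
        betas.append(beta)
        lams.append(lam)
    # Step 6: discard burn-in; posterior means give the Bayes estimates
    return betas[burn:], lams[burn:]

demo = [0.12, 0.35, 0.41, 0.58, 0.77, 0.91, 1.02, 1.20]  # illustrative data
bs, ls = gibbs_chen(demo)
```

Step 7 then takes the empirical quantiles of `bs` and `ls` as credible intervals.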

Two estimation methods, classical maximum likelihood and Bayesian estimation under the Chen lifetime distribution, are developed in this study. We compare and assess these methods using the MCMC algorithms, reporting results for various sample sizes (n_1, n_2) and several numbers of observed failure units.

| Pa. | ML: MVs | ML: MSEs | BMCMC prior 0: MVs | BMCMC prior 0: MSEs | BMCMC prior 1: MVs | BMCMC prior 1: MSEs |
|---|---|---|---|---|---|---|
| | 1.311 | 0.325 | 1.352 | 0.321 | 1.241 | 0.241 |
| | 1.712 | 0.521 | 1.715 | 0.511 | 1.669 | 0.400 |
| | 0.311 | 0.100 | 0.321 | 0.099 | 0.217 | 0.074 |
| | 0.201 | 0.081 | 0.198 | 0.079 | 0.147 | 0.054 |
| | 1.332 | 0.375 | 1.372 | 0.381 | 1.255 | 0.262 |
| | 1.732 | 0.566 | 1.754 | 0.559 | 1.670 | 0.412 |
| | 0.325 | 0.113 | 0.344 | 0.110 | 0.242 | 0.076 |
| | 0.231 | 0.092 | 0.210 | 0.088 | 0.177 | 0.073 |
| | 1.340 | 0.382 | 1.379 | 0.390 | 1.266 | 0.257 |
| | 1.741 | 0.571 | 1.762 | 0.563 | 1.678 | 0.417 |
| | 0.331 | 0.116 | 0.340 | 0.114 | 0.249 | 0.081 |
| | 0.235 | 0.097 | 0.213 | 0.091 | 0.178 | 0.075 |
| | 1.209 | 0.201 | 1.214 | 0.199 | 1.174 | 0.124 |
| | 1.641 | 0.410 | 1.635 | 0.409 | 1.611 | 0.325 |
| | 0.287 | 0.082 | 0.289 | 0.081 | 0.216 | 0.066 |
| | 0.175 | 0.055 | 0.171 | 0.057 | 0.144 | 0.042 |
| | 1.225 | 0.214 | 1.227 | 0.212 | 1.179 | 0.131 |
| | 1.652 | 0.422 | 1.651 | 0.417 | 1.625 | 0.331 |
| | 0.292 | 0.087 | 0.290 | 0.089 | 0.222 | 0.071 |
| | 0.181 | 0.059 | 0.182 | 0.058 | 0.151 | 0.049 |
| | 1.231 | 0.217 | 1.235 | 0.216 | 1.181 | 0.136 |
| | 1.659 | 0.427 | 1.656 | 0.422 | 1.629 | 0.335 |
| | 0.290 | 0.095 | 0.291 | 0.093 | 0.227 | 0.076 |
| | 0.185 | 0.062 | 0.181 | 0.065 | 0.154 | 0.053 |
| (50,50) | 1.115 | 0.125 | 1.114 | 0.122 | 1.113 | 0.100 |
| | 1.574 | 0.214 | 1.569 | 0.217 | 1.552 | 0.158 |
| | 0.252 | 0.055 | 0.249 | 0.054 | 0.213 | 0.036 |
| | 0.136 | 0.041 | 0.129 | 0.039 | 0.121 | 0.018 |
| | 1.126 | 0.137 | 1.118 | 0.141 | 1.118 | 0.109 |
| | 1.582 | 0.221 | 1.575 | 0.223 | 1.561 | 0.166 |
| | 0.271 | 0.059 | 0.258 | 0.060 | 0.218 | 0.041 |
| | 0.143 | 0.048 | 0.145 | 0.051 | 0.127 | 0.026 |

| Pa. | ML: ALs | ML: PCs | BMCMC prior 0: ALs | BMCMC prior 0: PCs | BMCMC prior 1: ALs | BMCMC prior 1: PCs |
|---|---|---|---|---|---|---|
| (30,25) | 2.854 | (0.89) | 2.849 | (0.90) | 2.489 | (0.91) |
| | 3.752 | (0.89) | 3.762 | (0.89) | 3.089 | (0.90) |
| | 0.615 | (0.90) | 0.619 | (0.89) | 0.542 | (0.91) |
| | 0.401 | (0.90) | 0.409 | (0.90) | 0.396 | (0.90) |
| | 2.875 | (0.89) | 2.882 | (0.90) | 2.521 | (0.91) |
| | 3.791 | (0.90) | 3.799 | (0.89) | 3.214 | (0.91) |
| | 0.651 | (0.91) | 0.644 | (0.89) | 0.571 | (0.91) |
| | 0.434 | (0.89) | 0.418 | (0.90) | 0.399 | (0.92) |
| | 2.887 | (0.90) | 2.891 | (0.90) | 2.532 | (0.91) |
| | 3.798 | (0.90) | 3.794 | (0.90) | 3.218 | (0.90) |
| | 0.662 | (0.90) | 0.671 | (0.89) | 0.580 | (0.91) |
| | 0.441 | (0.90) | 0.417 | (0.90) | 0.400 | (0.91) |
| | 2.624 | (0.90) | 2.618 | (0.91) | 2.214 | (0.92) |
| | 3.521 | (0.91) | 3.524 | (0.94) | 3.000 | (0.92) |
| | 0.521 | (0.91) | 0.518 | (0.92) | 0.410 | (0.91) |
| | 0.328 | (0.90) | 0.333 | (0.90) | 0.301 | (0.91) |
| | 2.631 | (0.91) | 2.624 | (0.91) | 2.217 | (0.93) |
| | 3.528 | (0.90) | 3.529 | (0.93) | 3.021 | (0.92) |
| | 0.528 | (0.91) | 0.522 | (0.92) | 0.417 | (0.91) |
| | 0.341 | (0.92) | 0.338 | (0.91) | 0.311 | (0.92) |
| | 2.640 | (0.93) | 2.639 | (0.93) | 2.232 | (0.93) |
| | 3.524 | (0.90) | 3.531 | (0.93) | 3.024 | (0.91) |
| | 0.529 | (0.91) | 0.531 | (0.92) | 0.422 | (0.93) |
| | 0.345 | (0.92) | 0.341 | (0.92) | 0.310 | (0.92) |
| (50,50) | 2.542 | (0.94) | 2.555 | (0.92) | 2.198 | (0.95) |
| | 3.412 | (0.931) | 3.417 | (0.94) | 2.900 | (0.92) |
| | 0.410 | (0.91) | 0.415 | (0.92) | 0.397 | (0.94) |
| | 0.300 | (0.93) | 0.310 | (0.93) | 0.289 | (0.92) |
| | 2.551 | (0.91) | 2.554 | (0.92) | 2.201 | (0.93) |
| | 3.426 | (0.95) | 3.425 | (0.94) | 2.909 | (0.92) |
| | 0.417 | (0.94) | 0.421 | (0.95) | 0.400 | (0.94) |
| | 0.315 | (0.93) | 0.318 | (0.93) | 0.309 | (0.93) |

For the Bayes approaches, independent gamma priors on the Chen distribution parameters are used with hyperparameter values (a_1, b_1) = (4, 2), (a_3, b_3) = (2.0, 1.5), and (a_4, b_4) = (2, 2.5).

We consider two samples of sizes (n_1, n_2) = (40, 40) under the joint censoring scheme with m = 30 observed failures: the first sample is generated from a Chen distribution with parameters (1.5, 1.1) and the second from a Chen distribution with parameters (1.8, 0.9), using the algorithms cited in the literature.

The generated joint sample of 30 failure times, each paired with its line indicator (shown beneath):

| Times | 0.0274 | 0.0435 | 0.0519 | 0.0581 | 0.0740 | 0.1138 | 0.1387 | 0.1839 | 0.1859 | 0.1932 |
|---|---|---|---|---|---|---|---|---|---|---|
| Indicators | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 |
| Times | 0.1945 | 0.2545 | 0.2613 | 0.2791 | 0.2911 | 0.2973 | 0.3281 | 0.3577 | 0.3955 | 0.4163 |
| Indicators | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 |
| Times | 0.4671 | 0.4947 | 0.5935 | 0.5990 | 0.6411 | 0.6502 | 0.7318 | 0.7530 | 0.9014 | 1.0391 |
| Indicators | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |

| Parameter | ML | BMCMC | 95% ACIs | Length | 95% CIs | Length |
|---|---|---|---|---|---|---|
| | 1.117 | 1.241 | (0.6725, 1.5615) | 0.889 | (0.7206, 1.5733) | 0.853 |
| | 1.147 | 1.109 | (0.5478, 1.6028) | 1.055 | (0.6101, 1.6327) | 1.022 |
| | 1.075 | 1.321 | (0.5176, 1.9768) | 1.459 | (0.6186, 1.8626) | 1.244 |
| | 0.744 | 0.837 | (0.2432, 1.2438) | 1.001 | (0.3389, 1.3381) | 0.999 |

Products from different production lines were investigated using a joint censoring procedure under the same conditions. The balanced joint censoring procedure has received considerable attention over the last few years. In this study, we considered products that follow a Chen lifetime distribution and developed both ML and Bayes estimates for the underlying parameters of the two Chen lifetime distributions. Numerical results were obtained to assess the theoretical performance, and the main observations are summarized as follows.

From the results in the tables, we observe the following.

Estimation under the Bayes method with an informative prior distribution provides better estimates, in terms of MSE, than both ML and the Bayes method with a non-informative prior.

For non-informative priors, there are no significant differences between MLEs and Bayes estimates.

As the effective sample size increases, the MSEs and the lengths of the interval estimates decrease.

The researcher would like to thank LetPub for its assistance during the preparation of this manuscript.