This section outlines conditional inference, which converts the standard likelihood function into a function of pivotal quantities and ancillary statistics under the generalized progressive hybrid censoring scheme. For more details, see Lawless (1973, 1980, 1982) and Maswadah (2003, 2005), who used pivotal quantities as tools for constructing confidence intervals for the unknown parameters based on complete and censored samples.

For the inverse Weibull distribution, we can write the likelihood function of (2) based on the GPCS (3) as

$$L(\alpha,\beta;x)=C\left(\alpha\beta\right)^{N}\prod_{i=1}^{N}x_{i}^{-\alpha-1}e^{-\beta x_{i}^{-\alpha}}\left[1-e^{-\beta x_{i}^{-\alpha}}\right]^{R_{i}}\left[1-e^{-\beta T^{-\alpha}}\right]^{R_{T}^{*}\delta}$$

$$=C\left(\alpha\beta\right)^{N}\prod_{i=1}^{N}x_{i}^{-\alpha-1}\exp\left[-\beta\sum_{i=1}^{N}x_{i}^{-\alpha}+\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\beta x_{i}^{-\alpha}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\beta T^{-\alpha}}\right)\right]\quad (4)$$
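As a numerical illustration, the log of likelihood (4) can be evaluated directly. The sketch below is not from the source: the sample `x`, removal counts `R`, threshold `T`, and the quantities `RT_star` and `delta` are illustrative inputs, and the constant C is omitted since it does not affect maximization.

```python
import math

def log_likelihood(alpha, beta, x, R, T, RT_star, delta):
    """Log of likelihood (4), up to the additive constant log C.

    x  : observed failure times
    R  : progressive removal counts R_i
    T  : time threshold of the censoring scheme
    RT_star, delta : as in (4); all values here are user-supplied examples.
    """
    N = len(x)
    ll = N * (math.log(alpha) + math.log(beta))            # (alpha*beta)^N
    ll += sum((-alpha - 1.0) * math.log(xi) for xi in x)   # prod x_i^(-alpha-1)
    ll += -beta * sum(xi ** -alpha for xi in x)            # -beta * sum x_i^-alpha
    ll += sum(Ri * math.log1p(-math.exp(-beta * xi ** -alpha))
              for xi, Ri in zip(x, R))                     # sum R_i ln(1 - e^{-beta x_i^-alpha})
    ll += delta * RT_star * math.log1p(-math.exp(-beta * T ** -alpha))
    return ll
```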

Since

$$\beta x_{i}^{-\alpha}=\left(\beta^{\widehat{\alpha}/\alpha}\,\widehat{\beta}x_{i}^{-\widehat{\alpha}}/\widehat{\beta}\right)^{\frac{\alpha}{\widehat{\alpha}}}=\left(z_{2}a_{i}\right)^{z_{1}}$$

Thus, \(Z_{1}=\alpha/\widehat{\alpha}\) and \(Z_{2}=\beta^{1/z_{1}}/\widehat{\beta}\) are pivotal quantities, and \(a_{i}=\widehat{\beta}x_{i}^{-\widehat{\alpha}}\), for \(i=1,2,\dots,N-2\), are the ancillary statistics.
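The pivotal identity above is easy to verify numerically. In this sketch the values used for \(\alpha,\beta\) and for the MLEs \(\widehat{\alpha},\widehat{\beta}\) are arbitrary stand-ins, not estimates from any real sample.

```python
import math

# Check that beta * x**(-alpha) == (z2 * a_i)**z1 for
# z1 = alpha/alpha_hat, z2 = beta**(1/z1)/beta_hat, a_i = beta_hat * x_i**(-alpha_hat).
alpha, beta = 1.8, 2.5          # illustrative "true" parameters
alpha_hat, beta_hat = 1.6, 2.2  # illustrative stand-ins for the MLEs

z1 = alpha / alpha_hat
z2 = beta ** (1.0 / z1) / beta_hat

for x in [0.7, 1.1, 1.9, 3.4]:
    a = beta_hat * x ** (-alpha_hat)   # ancillary statistic a_i
    lhs = beta * x ** (-alpha)
    rhs = (z2 * a) ** z1
    assert math.isclose(lhs, rhs)      # the two sides agree
```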

**Theorem**

Let \(\widehat{\alpha}\) and \(\widehat{\beta}\) be the maximum likelihood estimators of \(\alpha\) and \(\beta\) based on the generalized progressive hybrid censored sample from the inverse Weibull distribution (1). Then the joint conditional PDF of

\(Z_{1}=\alpha/\widehat{\alpha}\) and \(Z_{2}=\beta^{1/z_{1}}/\widehat{\beta}\) given \(\underset{\_}{A}=\left(a_{1},a_{2},\dots,a_{N-2}\right)\) is of the form

$$f\left(z_{1},z_{2}\mid\underset{\_}{A}\right)=D\,z_{1}^{N-1}z_{2}^{Nz_{1}-1}\frac{\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]}{\exp\left[-z_{1}\sum_{i=1}^{N}\ln a_{i}+\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]}\quad (5)$$

where D is the normalizing constant and \(a_{T}=\widehat{\beta}T^{-\widehat{\alpha}}\).
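For computation, the density (5) is most conveniently handled on the log scale, up to the constant D. Below is a minimal sketch; the function name and all arguments are illustrative, with `aT` taken as \(\widehat{\beta}T^{-\widehat{\alpha}}\).

```python
import math

def log_joint_unnorm(z1, z2, a, R, RT_star, delta, aT):
    """Unnormalized log of the joint conditional density (5) of (Z1, Z2)
    given the ancillary statistics a = (a_1, ..., a_N)."""
    N = len(a)
    log_f = (N - 1) * math.log(z1) + (N * z1 - 1) * math.log(z2)
    # Denominator of (5): exp[-z1*sum(ln a_i) + sum((z2 a_i)^z1)],
    # so the log density gains +z1*sum(ln a_i) - sum((z2 a_i)^z1).
    log_f += z1 * sum(math.log(ai) for ai in a)
    for ai, Ri in zip(a, R):
        u = (z2 * ai) ** z1
        log_f += Ri * math.log1p(-math.exp(-u)) - u
    if delta:
        log_f += RT_star * math.log1p(-math.exp(-(z2 * aT) ** z1))
    return log_f
```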

**Proof**

Make the change of variables from \(\left(x_{1},x_{2},\dots,x_{N}\right)\), which has joint density function (4), to \(\left(\widehat{\alpha},\widehat{\beta},a_{1},a_{2},\dots,a_{N-2}\right)\). This transformation can be written as follows:

\(x_{i}=\left(a_{i}/\widehat{\beta}\right)^{-\frac{1}{\widehat{\alpha}}}\), \(i=1,2,\dots,N-2\), \(x_{N-1}=\left(a_{N-1}/\widehat{\beta}\right)^{-\frac{1}{\widehat{\alpha}}}\) and \(x_{N}=\left(a_{N}/\widehat{\beta}\right)^{-\frac{1}{\widehat{\alpha}}}\), where \(a_{N}\) and \(a_{N-1}\) can be expressed in terms of \(a_{1},a_{2},\dots,a_{N-2}\).

The Jacobian of this transformation, \(\frac{\partial\left(x_{1},x_{2},\dots,x_{N}\right)}{\partial\left(\widehat{\alpha},\widehat{\beta},a_{1},a_{2},\dots,a_{N-2}\right)}\), is of the form \(\left|J\right|=\prod_{i=1}^{N}a_{i}^{-\frac{1}{\widehat{\alpha}}}\,\widehat{\beta}^{\frac{N}{\widehat{\alpha}}-N}\widehat{\alpha}^{-N}K\left(\underset{\_}{A},N\right)\), which is independent of \(Z_{1}\) and \(Z_{2}\). Therefore, the joint PDF of \(\widehat{\alpha},\widehat{\beta},\underset{\_}{A}\) can be derived as

$$f\left(\widehat{\alpha},\widehat{\beta};\underset{\_}{A}\right)=C\left(\alpha\beta\right)^{N}\prod_{i=1}^{N}\left(a_{i}/\widehat{\beta}\right)^{\frac{\alpha+1}{\widehat{\alpha}}}\exp\left[-\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]$$

$$\times\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]\left|J\right|$$

Making a further change of variables from \(\left(\widehat{\alpha},\widehat{\beta};\underset{\_}{A}\right)\) to \(\left(Z_{1},Z_{2};\underset{\_}{A}\right)\), the Jacobian of this transformation can be derived as follows:

$$\left|J\right|=\left|\frac{\partial\left(\widehat{\alpha},\widehat{\beta}\right)}{\partial\left(z_{1},z_{2}\right)}\right|=\left|\begin{array}{cc}-\frac{\alpha}{z_{1}^{2}}&0\\ *&-\frac{\beta^{1/z_{1}}}{z_{2}^{2}}\end{array}\right|=\frac{\alpha\beta^{1/z_{1}}}{z_{1}^{2}z_{2}^{2}}=\frac{\widehat{\alpha}\widehat{\beta}}{z_{1}z_{2}}\propto\frac{1}{z_{1}z_{2}}$$

Thus

$$f\left(z_{1},z_{2};\underset{\_}{A}\right)=D\,z_{1}^{N-1}z_{2}^{Nz_{1}-1}\frac{\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]}{\exp\left[-z_{1}\sum_{i=1}^{N}\ln a_{i}+\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]}$$

Finally, the joint conditional density of \(\left(Z_{1},Z_{2}\right)\) given \(\underset{\_}{A}\) follows as in (5). \(∎\)

Thus, based on (5), we can derive the marginal conditional densities of \(Z_{1}\) and \(Z_{2}\) given \(\underset{\_}{A}\), respectively, as follows:

$$f\left(z_{1}\mid\underset{\_}{A}\right)=D\int_{0}^{\infty}z_{1}^{N-1}z_{2}^{Nz_{1}-1}\frac{\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]}{\exp\left[-z_{1}\sum_{i=1}^{N}\ln a_{i}+\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]}\,dz_{2}\quad (6)$$

$$f\left(z_{2}\mid\underset{\_}{A}\right)=D\int_{0}^{\infty}z_{1}^{N-1}z_{2}^{Nz_{1}-1}\frac{\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]}{\exp\left[-z_{1}\sum_{i=1}^{N}\ln a_{i}+\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]}\,dz_{1}\quad (7)$$
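The marginal densities (6) and (7) have no closed form, so in practice they are evaluated by numerical quadrature. The rough sketch below handles (6), truncating the \(z_{2}\) integral at an assumed upper limit; the ancillary statistics `a`, removal counts `R`, and the grid settings are illustrative (and the \(R_{T}^{*}\) term is dropped, i.e. \(\delta=0\)).

```python
import math

a = [0.5, 0.9, 1.4, 2.1]   # illustrative ancillary statistics
R = [1, 0, 2, 0]           # illustrative removal counts
N = len(a)

def joint_unnorm(z1, z2):
    # Unnormalized integrand of (6), with delta = 0 (no R_T* term).
    val = z1 ** (N - 1) * z2 ** (N * z1 - 1)
    log_e = z1 * sum(math.log(ai) for ai in a)
    for ai, Ri in zip(a, R):
        u = (z2 * ai) ** z1
        log_e += Ri * math.log1p(-math.exp(-u)) - u
    return val * math.exp(log_e)

def marginal_z1(z1, z2_max=20.0, n=2000):
    # f(z1 | A) up to the constant D, by the trapezoidal rule on [~0, z2_max];
    # z2_max and n are assumed sufficient for the decay of the integrand.
    h = z2_max / n
    total = 0.5 * (joint_unnorm(z1, 1e-9) + joint_unnorm(z1, z2_max))
    total += sum(joint_unnorm(z1, k * h) for k in range(1, n))
    return h * total
```

A finer grid or an adaptive routine (e.g. quadrature from a scientific library) would be the natural replacement for the fixed trapezoidal grid used here.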

The conditional estimators for the pivotals \(Z_{1}\) and \(Z_{2}\) can be derived from (6) and (7), respectively, and transformed fiducially to the parameters \(\alpha\) and \(\beta\) separately, as follows:

Lawless (1973) derived conditional inference for constructing confidence intervals for the unknown parameters based on pivotal quantities; the statistical significance of this work is therefore in finding point estimators for the unknown parameters. Thus, to find a point estimator for the parameter \(\alpha\), say, we integrate (6) with respect to \(z_{1}\) from \(z_{10}\) to \(z_{11}\), obtaining

$$F\left(z_{11}\mid\underset{\_}{A}\right)-F\left(z_{10}\mid\underset{\_}{A}\right)=U-D\int_{0}^{z_{10}}\int_{0}^{\infty}z_{1}^{N-1}z_{2}^{Nz_{1}-1}\frac{\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]}{\exp\left[-z_{1}\sum_{i=1}^{N}\ln a_{i}+\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]}\,dz_{2}\,dz_{1}$$

$$F\left(z_{11}\mid\underset{\_}{A}\right)-F\left(z_{10}\mid\underset{\_}{A}\right)=U-W\left(z_{10}\right)\quad (8)$$

where \(W\left(z_{10}\right)=D\int_{0}^{z_{10}}\int_{0}^{\infty}z_{1}^{N-1}z_{2}^{Nz_{1}-1}\frac{\exp\left[\sum_{i=1}^{N}R_{i}\ln\left(1-e^{-\left(z_{2}a_{i}\right)^{z_{1}}}\right)+\delta R_{T}^{*}\ln\left(1-e^{-\left(z_{2}a_{T}\right)^{z_{1}}}\right)\right]}{\exp\left[-z_{1}\sum_{i=1}^{N}\ln a_{i}+\sum_{i=1}^{N}\left(z_{2}a_{i}\right)^{z_{1}}\right]}\,dz_{2}\,dz_{1}\)

and U is a uniform random number from the uniform distribution \(\text{U}\left(\text{0,1}\right)\).

It is known that the second-order approximation of the first derivative \(\frac{dF\left(z_{1}\mid\underset{\_}{A}\right)}{dz_{1}}\), defined by centered differencing, can be written as

\(\frac{dF\left(z_{1}^{*}\mid\underset{\_}{A}\right)}{dz_{1}}=\frac{F\left(z_{11}\mid\underset{\_}{A}\right)-F\left(z_{10}\mid\underset{\_}{A}\right)}{z_{11}-z_{10}}=f\left(z_{1}^{*}\mid\underset{\_}{A}\right),\qquad z_{10}<z_{1}^{*}<z_{11}\) (9)

From (8) and (9) we can derive the conditional estimator for \({Z}_{1}\) as:

$$z_{i+1}=z_{i}+C\left[U-W\left(z_{i}\right)\right]\quad (10)$$

The convergence of (10) is guaranteed by the condition \(\text{C}<\frac{\text{D}}{\text{f}\left({\text{z}}_{1}^{\text{*}}|\underset{\_}{\text{A}}\right)}\), where D is the normalizing constant.

The iterative process is continued for \(i=1,2,3,\dots\) until two consecutive numerical solutions are almost the same, that is, until \(\left|z_{i+1}-z_{i}\right|<10^{-5}\). Thus, we obtain the successive approximation for the pivotal \(Z_{1}\) from (10) and transform it fiducially to \(\alpha\) as \(\alpha^{*}=\widehat{\alpha}\,z_{i+1}\). Similarly, the estimator for \(\beta\) can be derived from (7).
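The iteration above can be sketched as follows. As a stand-in for the double-integral CDF \(W(\cdot)\), which in the paper would be evaluated by quadrature of (6), this sketch uses an exponential CDF so that the fixed point is known in closed form; the starting value and damping constant C are illustrative.

```python
import math
import random

# Solve W(z) = U for z by the damped fixed-point scheme of (10):
#   z_{i+1} = z_i + C * (U - W(z_i)).
def W(z):
    # Stand-in CDF (exponential); replace with the quadrature of (6) in practice.
    return 1.0 - math.exp(-z)

random.seed(0)
U = random.random()      # uniform random number from U(0,1)
z, C = 1.0, 0.9          # illustrative starting value and damping constant

for _ in range(10_000):
    z_next = z + C * (U - W(z))
    if abs(z_next - z) < 1e-5:   # stop when consecutive solutions agree
        break
    z = z_next

# For this stand-in, the exact solution is z = -ln(1 - U).
assert abs(W(z) - U) < 1e-4
```

With a small enough C the map is a contraction near the solution, which is what the convergence condition on C in the text is guarding.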