## A.4 Exercises

**Exercise A.1** The sequence of random numbers generated from a given seed is called a random number (a)\(\underline{\hspace{3cm}}\).

**Exercise A.2** State three major methods of generating random variables from any distribution. (a)\(\underline{\hspace{3cm}}\). (b)\(\underline{\hspace{3cm}}\). (c)\(\underline{\hspace{3cm}}\).

**Exercise A.3** Consider the multiplicative congruential generator with \(a = 13\), \(m = 64\), \(c = 0\), and seeds \(X_0 = 1, 2, 3, 4\). a) Using Theorem A.1, does this generator achieve its maximum period for these parameters? b) Generate one period’s worth of uniform random variables from each of the supplied seeds.

**Exercise A.4** Consider the multiplicative congruential generator with \(a = 11\), \(m = 64\), \(c = 0\), and seeds \(X_0 = 1, 2, 3, 4\). a) Using Theorem A.1, does this generator achieve its maximum period for these parameters? b) Generate one period’s worth of uniform random variables from each of the supplied seeds.

**Exercise A.5** Consider the linear congruential generator with \(a = 11\), \(m = 16\), \(c = 5\), and seed \(X_0 = 1\). a) Using Theorem A.1, does this generator achieve its maximum period for these parameters? b) Generate 2 pseudo-random uniform numbers for this generator.

**Exercise A.6** Consider the linear congruential generator with \(a = 13\), \(m = 16\), \(c = 13\), and seed \(X_0 = 37\). a) Using Theorem A.1, does this generator achieve its maximum period for these parameters? b) Generate 2 pseudo-random uniform numbers for this generator.

**Exercise A.7** Consider the linear congruential generator with \(a = 8\), \(m = 10\), \(c = 1\), and seed \(X_0 = 3\). a) Using Theorem A.1, does this generator achieve its maximum period for these parameters? b) Generate 2 pseudo-random uniform numbers for this generator.
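The congruential recursions in Exercises A.3–A.7 are easy to check by machine. The following Python sketch (the `lcg` helper and its parameter names are ours, not from the text) implements \(X_{i+1} = (aX_i + c) \bmod m\) with \(U_i = X_i/m\):

```python
def lcg(a, c, m, seed, n):
    """Return n pseudo-random uniforms U_i = X_i/m from the
    congruential recursion X_{i+1} = (a*X_i + c) mod m."""
    x = seed
    us = []
    for _ in range(n):
        x = (a * x + c) % m
        us.append(x / m)
    return us

# Exercise A.3's generator (a=13, m=64, c=0) started from seed X_0 = 1:
print(lcg(13, 0, 64, 1, 4))  # [0.203125, 0.640625, 0.328125, 0.265625]
```

Generating a full cycle from each seed makes it easy to confirm the period that Theorem A.1 predicts.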

**Exercise A.8** Consider the following discrete distribution of the random variable \(X\) whose probability mass function is \(p(x)\).

\(x\) | 0 | 1 | 2 | 3 | 4 |
---|---|---|---|---|---|
\(p(x)\) | 0.3 | 0.2 | 0.2 | 0.1 | 0.2 |

- Determine the CDF \(F(x)\) for the random variable, \(X\).
- Create a graphical summary of the CDF. See Example A.4.
- Create the inverse CDF that can be used to determine a sample from the discrete distribution, \(p(x)\). See Example A.4.
- Generate 3 values of \(X\) using the following pseudo-random numbers: \(u_1 = 0.943\), \(u_2 = 0.398\), \(u_3 = 0.372\).
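A table-lookup version of the inverse-CDF step can be sketched in Python (the names `values`, `pmf`, and `inv_cdf` are ours): the generated value is the smallest \(x\) with \(F(x) \geq u\).

```python
from bisect import bisect_left
from itertools import accumulate

values = [0, 1, 2, 3, 4]
pmf = [0.3, 0.2, 0.2, 0.1, 0.2]
cdf = list(accumulate(pmf))   # running totals: 0.3, 0.5, 0.7, 0.8, 1.0

def inv_cdf(u):
    """Return the smallest value x with F(x) >= u."""
    return values[bisect_left(cdf, u)]

print([inv_cdf(u) for u in (0.943, 0.398, 0.372)])
```

`bisect_left` locates the first cumulative probability at or above \(u\), which is exactly the lookup performed by hand in Example A.4.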

**Exercise A.9** Consider the following uniformly distributed random numbers:

\(U_1\) | \(U_2\) | \(U_3\) | \(U_4\) | \(U_5\) | \(U_6\) | \(U_7\) | \(U_8\) |
---|---|---|---|---|---|---|---|
0.9396 | 0.1694 | 0.7487 | 0.3830 | 0.5137 | 0.0083 | 0.6028 | 0.8727 |

- Generate an exponentially distributed random number with a mean of 10 using the 1st random number.
- Generate a random variate from a (12, 22) discrete uniform distribution using the 2nd random number.
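Both items follow the inverse-transform pattern. A Python sketch (helper names ours; we use the \(X = -\theta\ln(1-u)\) convention, which is equivalent in distribution to \(-\theta\ln(u)\)):

```python
import math

def exp_rv(u, mean):
    """Exponential(mean) via inverse transform: X = -mean * ln(1 - u)."""
    return -mean * math.log(1.0 - u)

def dunif_rv(u, a, b):
    """Discrete uniform on {a, ..., b}: X = a + floor((b - a + 1) * u)."""
    return a + math.floor((b - a + 1) * u)

print(exp_rv(0.9396, 10.0))      # exponential with mean 10, using U_1
print(dunif_rv(0.1694, 12, 22))  # discrete uniform on 12..22, using U_2 -> 13
```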

**Exercise A.10** Consider the following uniformly distributed random numbers:

\(U_1\) | \(U_2\) | \(U_3\) | \(U_4\) | \(U_5\) | \(U_6\) | \(U_7\) | \(U_8\) |
---|---|---|---|---|---|---|---|
0.9559 | 0.5814 | 0.6534 | 0.5548 | 0.5330 | 0.5219 | 0.2839 | 0.3734 |

- Generate a uniformly distributed random number with a minimum of 12 and a maximum of 22 using \(U_8\).
- Generate 1 random variate from an Erlang(\(r=2\), \(\beta=3\)) distribution using \(U_1\) and \(U_2\).
- The demand for magazines on a given day has the following probability mass function:

\(x\) | 40 | 50 | 60 | 70 | 80 |
---|---|---|---|---|---|
\(P(X=x)\) | 0.44 | 0.22 | 0.16 | 0.12 | 0.06 |

Using the supplied random numbers for this problem starting at \(U_1\), generate 4 random variates from the probability mass function.

**Exercise A.11** Suppose that customers arrive at an ATM via a Poisson process with mean 7 per hour. Determine the arrival time of the first 6 customers using the following pseudo-random numbers via the inverse transformation method. Start with the first row and read across the table.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |
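Because the inter-arrival times of a Poisson process with rate \(\lambda\) are exponential with mean \(1/\lambda\), the arrival times are a running sum of inverse-transformed exponentials. A Python sketch (names ours; the \(\ln(1-u)\) convention is assumed):

```python
import math
from itertools import accumulate

def arrival_times(us, lam):
    """Cumulative arrival times of a Poisson process with rate lam:
    one exponential inter-arrival time -ln(1-u)/lam per uniform."""
    gaps = (-math.log(1.0 - u) / lam for u in us)
    return list(accumulate(gaps))

times = arrival_times([0.943, 0.398, 0.372, 0.943, 0.204, 0.794], 7.0)
print([round(t, 3) for t in times])  # arrival times in hours
```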

**Exercise A.12** The demand, \(D\), for parts at a repair bench per day can be described by the following discrete probability mass function:

\(D\) | 0 | 1 | 2 |
---|---|---|---|
\(p(D)\) | 0.3 | 0.2 | 0.5 |

Generate the demand for the first 4 days using the following sequence of U(0,1) random numbers: 0.943, 0.398, 0.372, 0.943.

**Exercise A.13** The service times for an automated storage and retrieval system have a shifted exponential distribution. It is known that it takes a minimum of 15 seconds for any retrieval. The parameter of the exponential distribution is \(\lambda = 45\). Generate two service times for this distribution using the following sequence of U(0,1) random numbers: 0.943, 0.398, 0.372, 0.943.

**Exercise A.14** The time to failure for a computer printer fan has a Weibull distribution with shape parameter \(\alpha = 2\) and scale parameter \(\beta = 3\). Testing has indicated that the distribution is limited to the range from 1.5 to 4.5. Generate two random variates from this distribution using the following sequence of U(0,1) random numbers: 0.943, 0.398, 0.372, 0.943.
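A range-limited distribution can be sampled by inverse transform after first mapping \(u\) into \([F(a), F(b)]\). A Python sketch for the truncated Weibull (helper names ours; the shape/scale CDF \(F(x) = 1 - e^{-(x/\beta)^\alpha}\) is assumed):

```python
import math

def weib_cdf(x, alpha, beta):
    """Weibull CDF with shape alpha and scale beta."""
    return 1.0 - math.exp(-((x / beta) ** alpha))

def trunc_weib(u, alpha, beta, lo, hi):
    """Inverse transform restricted to [lo, hi]: map u into
    [F(lo), F(hi)], then invert the Weibull CDF."""
    p = weib_cdf(lo, alpha, beta) + u * (weib_cdf(hi, alpha, beta) - weib_cdf(lo, alpha, beta))
    return beta * (-math.log(1.0 - p)) ** (1.0 / alpha)

x = trunc_weib(0.943, 2.0, 3.0, 1.5, 4.5)
print(round(x, 4))  # always lands inside [1.5, 4.5]
```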

**Exercise A.15** The interest rate for a capital project is unknown. An accountant has estimated that the interest rate will be between 2% and 5% within the next year. The accountant believes that any interest rate in this range is equally likely. You are tasked with generating interest rates for a cash flow analysis of the project. Generate two random variates from this distribution using the following sequence of U(0,1) random numbers: 0.943, 0.398, 0.372, 0.943.

**Exercise A.16** Customers arrive at a service location according to a Poisson distribution with mean 10 per hour. The installation has two servers. Experience shows that 60% of the arriving customers prefer the first server. Starting with the first row and reading across the table, determine the arrival times of the first three customers at each server.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |

**Exercise A.17** Consider the triangular distribution:

- Derive an inverse transform algorithm for this distribution.
- Using 0.943, 0.398, 0.372, 0.943, 0.204, generate 5 random variates from the triangular distribution with \(a = 2\), \(c = 5\), \(b = 10\).
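The derivation can be checked against a Python sketch (function name ours) that inverts each branch of the triangular CDF, with \(c\) taken as the mode:

```python
import math

def tri_rv(u, a, c, b):
    """Inverse transform for triangular(a, c, b) with mode c."""
    fc = (c - a) / (b - a)                               # CDF value at the mode
    if u <= fc:
        return a + math.sqrt(u * (b - a) * (c - a))      # left branch
    return b - math.sqrt((1.0 - u) * (b - a) * (b - c))  # right branch

print([round(tri_rv(u, 2, 5, 10), 4) for u in (0.943, 0.398, 0.372, 0.943, 0.204)])
```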

**Exercise A.18** Consider the following probability density function:

- Derive an inverse transform algorithm for this distribution.
- Using 0.943 and 0.398, generate two random variates from this distribution.

**Exercise A.19** Consider the following probability density function:

- Derive an inverse transform algorithm for this distribution.
- Using 0.943 and 0.398, generate two random variates from this distribution.

**Exercise A.20** Consider the following probability density function:

- Derive an inverse transform algorithm for this distribution.
- Using 0.943 and 0.398, generate two random variates from this distribution.

**Exercise A.21** Consider the following probability density function:

- Derive an inverse transform algorithm for this distribution.
- Using 0.943 and 0.398, generate two random variates from this distribution.

**Exercise A.22** The times to failure for an automated production process have been found to be randomly distributed according to a Rayleigh distribution:

- Derive an inverse transform algorithm for this distribution.
- Using 0.943 and 0.398, generate two random variates from this distribution with \(\beta = 2.0\).

**Exercise A.23** Starting with the first row, first column, and reading by rows, use the random numbers from the following table to generate 2 random variates from the negative binomial distribution with parameters \((r = 4, p = 0.4)\) using the convolution method.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |

**Exercise A.24** Starting with the first row, first column, and reading by rows, use the random numbers from the following table to generate 2 random variates from the negative binomial distribution with parameters \((r = 4, p = 0.4)\) using a sequence of Bernoulli trials to get 4 successes.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |
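The Bernoulli-trials approach can be sketched in Python (names ours). Each uniform with \(u < p\) counts as a success; under the failures-before-the-\(r\)-th-success convention, the variate is the failure count at the moment the \(r\)-th success occurs:

```python
def neg_binomial(us, r, p):
    """Count failures before the r-th success, where each u < p is a
    Bernoulli success; also report how many uniforms were consumed."""
    successes = failures = used = 0
    for u in us:
        used += 1
        if u < p:
            successes += 1
            if successes == r:
                return failures, used
        else:
            failures += 1
    raise ValueError("not enough random numbers")

us = [0.943, 0.398, 0.372, 0.943, 0.204, 0.794,
      0.498, 0.528, 0.272, 0.899, 0.294, 0.156]
print(neg_binomial(us, r=4, p=0.4))  # (variate, uniforms consumed)
```

Note that negative binomial conventions vary (failures versus total trials); the count of uniforms consumed gives the total-trials version.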

**Exercise A.25** Suppose that the processing time for a job consists of two distributions. There is a 30% chance that the processing time is lognormally distributed with a mean of 20 minutes and a standard deviation of 2 minutes, and a 70% chance that the time is uniformly distributed between 10 and 20 minutes. Using the first row of random numbers from the following table, generate two job processing times. Hint: \(X \sim LN(\mu, \sigma^2)\) if and only if \(\ln(X) \sim N(\mu, \sigma^2)\). Also, note that:

\[\begin{aligned} E[X] & = e^{\mu + \sigma^{2}/2}\\ Var[X] & = e^{2\mu + \sigma^{2}}\left(e^{\sigma^{2}} - 1\right)\end{aligned}\]

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |
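This is a composition (mixture) setup: one uniform selects the component, and further random numbers generate from it. The moment formulas in the hint invert to \(\sigma^2 = \ln(1 + (s/m)^2)\) and \(\mu = \ln(m) - \sigma^2/2\) for a lognormal with mean \(m\) and standard deviation \(s\). A Python sketch (names ours; the lognormal branch assumes a \(N(0,1)\) variate is supplied):

```python
import math

mean, sd = 20.0, 2.0
sigma2 = math.log(1.0 + (sd / mean) ** 2)   # Var[ln X]
mu = math.log(mean) - sigma2 / 2.0          # E[ln X]

def job_time(u_mix, second):
    """Composition: u_mix selects the component; `second` is a N(0,1)
    variate for the lognormal branch or a U(0,1) for the uniform branch."""
    if u_mix < 0.3:
        return math.exp(mu + math.sqrt(sigma2) * second)  # LN(mu, sigma2)
    return 10.0 + 10.0 * second                           # U(10, 20)

print(round(mu, 4), round(sigma2, 5))  # normal parameters behind the lognormal
```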

**Exercise A.26** Suppose that the service time for a patient consists of two distributions. There is a 25% chance that the service time is uniformly distributed with a minimum of 20 minutes and a maximum of 25 minutes, and a 75% chance that the time is distributed according to a Weibull distribution with a shape of 2 and a scale of 4.5. Using the first row of random numbers from the following table, generate the service time for two patients.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |

**Exercise A.27** If \(Z_1, Z_2, \ldots, Z_k\) are independent \(N(0,1)\) random variables and \(Y = \sum_{i=1}^k Z_i^2\), then \(Y \sim \chi_k^2\), where \(\chi_k^2\) is a chi-squared random variable with \(k\) degrees of freedom. Using the first two rows of random numbers from the following table, generate two \(\chi_5^2\) random variates.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |
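One way to carry this out with a table of uniforms is to turn pairs of uniforms into standard normals via the Box-Muller transform (our choice here; any normal generator works), then square and sum \(k\) of them:

```python
import math

def box_muller(u1, u2):
    """Two independent N(0,1) variates from two U(0,1) variates."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

def chi_square(us, k):
    """Chi-square(k) variate as the sum of k squared standard normals.
    Box-Muller consumes uniforms in pairs, so k = 5 needs three pairs."""
    zs = []
    for u1, u2 in zip(us[0::2], us[1::2]):
        zs.extend(box_muller(u1, u2))
    return sum(z * z for z in zs[:k])

us = [0.943, 0.398, 0.372, 0.943, 0.204, 0.794]
print(round(chi_square(us, 5), 4))
```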

**Exercise A.28** In the (a)\(\underline{\hspace{3cm}}\) technique for generating random variates, you want the (b)\(\underline{\hspace{3cm}}\) function to be as close as possible to the distribution function that you want to generate from in order to ensure that the (c)\(\underline{\hspace{3cm}}\) is as high as possible, thereby improving the efficiency of the algorithm.

**Exercise A.29** Prove that the acceptance-rejection method for continuous random variables is valid by showing that for any \(x\),

\[P\lbrace X \leq x \rbrace = \int_{-\infty}^x f(y)dy\]

Hint: Let \(E\) be the event that acceptance occurs and use conditional probability.

**Exercise A.30** Consider the following probability density function:

\[f(x) = \begin{cases} \dfrac{3x^2}{2} & -1 \leq x \leq 1\\ 0 & \text{otherwise} \\ \end{cases}\]

- Derive an acceptance-rejection algorithm for this distribution.
- Using the first row of random numbers from the following table, generate 2 random variates using your algorithm.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |
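As a concreteness check on the derivation: one valid choice (ours, not necessarily the intended one) majorizes \(f\) with the uniform density \(g(x) = 1/2\) on \([-1, 1]\), giving \(c = \max f/g = 3\) and the acceptance test \(u_2 \leq f(x)/(c\,g(x)) = x^2\):

```python
def accept_reject(us):
    """Acceptance-rejection for f(x) = 3x^2/2 on [-1, 1] with a uniform
    majorizer; uniforms are consumed in pairs (candidate, test).
    Returns the accepted variate and the number of uniforms used."""
    it = iter(us)
    used = 0
    for u1, u2 in zip(it, it):
        used += 2
        x = -1.0 + 2.0 * u1      # candidate from g: uniform on [-1, 1]
        if u2 <= x * x:          # accept with probability f(x)/(c*g(x))
            return x, used
    raise ValueError("not enough random numbers")

print(accept_reject([0.943, 0.398, 0.372, 0.943, 0.204, 0.794]))
```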

**Exercise A.31** This problem is based on (Cheng 1977); see also (Ahrens and Dieter 1972). Consider the gamma distribution:

\[f(x) = \beta^{-\alpha} x^{\alpha-1} \dfrac{e^{-x/\beta}}{\Gamma(\alpha)}\]

where \(x > 0\), and \(\alpha > 0\) is the shape parameter and \(\beta > 0\) is the scale parameter. In the case where \(\alpha\) is a positive integer, the distribution reduces to the Erlang distribution, and \(\alpha = 1\) produces the negative exponential distribution.

Acceptance-rejection techniques can be applied to the cases of \(0 < \alpha < 1\) and \(\alpha > 1\). For the case of \(0 < \alpha < 1\) see Ahrens and Dieter (1972). For the case of \(\alpha > 1\), Cheng (1977) proposed the following majorizing function:

\[g(x) = \biggl[\dfrac{4 \alpha^\alpha e^{-\alpha}}{a \Gamma (\alpha)}\biggr] h(x)\]

where \(a = \sqrt{2 \alpha - 1}\), \(b = \alpha^a\), and \(h(x)\) is the resulting probability distribution function when converting \(g(x)\) to a density function:

\[h(x) = ab \dfrac{x^{a-1}}{(b + x^a)^2} \quad \text{for } x > 0\]

- Develop an inverse transform algorithm for generating from \(h(x)\).
- Using the first two rows of random numbers from the following table, generate two random variates from a gamma distribution with parameters \(\alpha = 2\) and \(\beta = 10\) via the acceptance/rejection method.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |

**Exercise A.32** Parts arrive at a machine center with three drill presses according to a Poisson distribution with mean \(\lambda\). The arriving parts are assigned to one of the three drill presses randomly according to the respective probabilities \(p_1\), \(p_2\), and \(p_3\), where \(p_1 + p_2 + p_3 = 1\) and \(p_i > 0\) for \(i = 1, 2, 3\). What is the distribution of the inter-arrival times to each drill press? Specify the parameters of the distribution. Suppose that \(p_1\), \(p_2\), and \(p_3\) equal 0.25, 0.45, and 0.3, respectively, and that \(\lambda\) equals 12 per minute.

Using the first row of random numbers from the following table, generate the first three arrival times.

0.943 | 0.398 | 0.372 | 0.943 | 0.204 | 0.794 |

0.498 | 0.528 | 0.272 | 0.899 | 0.294 | 0.156 |

0.102 | 0.057 | 0.409 | 0.398 | 0.400 | 0.997 |
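The key fact here is that randomly splitting a Poisson process with rate \(\lambda\) by probabilities \(p_i\) yields independent Poisson processes with rates \(\lambda p_i\), so each press sees exponential inter-arrival times with mean \(1/(\lambda p_i)\). A Python sketch (names ours; \(\ln(1-u)\) convention assumed):

```python
import math
from itertools import accumulate

def arrivals(us, rate):
    """Arrival times as the running sum of exponential inter-arrival
    times with the given rate, one per uniform."""
    return list(accumulate(-math.log(1.0 - u) / rate for u in us))

lam, p1 = 12.0, 0.25
times = arrivals([0.943, 0.398, 0.372], lam * p1)  # first press: rate 3 per minute
print([round(t, 4) for t in times])
```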

**Exercise A.33** Consider the following function:

- Determine the value of \(c\) that will turn \(g(x)\) into a probability density function. The resulting probability density function is called a parabolic distribution.
- Denote the probability density function found in part (a), \(f(x)\). Let \(X\) be a random variable from \(f(x)\). Derive the inverse cumulative distribution function for \(f(x)\).

**Exercise A.34** Consider the following probability density function:

\[f(x) = \begin{cases} \frac{3(c - x)^{2}}{c^{3}} & 0 \leq x \leq c\\ 0 & \text{otherwise} \\ \end{cases} \]

Derive an inverse cumulative distribution algorithm for generating from \(f(x)\).

### References

Ahrens, J. H., and U. Dieter. 1972. “Computer Methods for Sampling from the Exponential and Normal Distributions.” *Communications of the Association for Computing Machinery* 15: 873–82.

Cheng, R. C. H. 1977. “The Generation of Gamma Variables with Non-Integral Shape Parameter.” *Applied Statistics* 26 (1): 71–75.