3.2 Batch Statistics
In simulation, we often collect data that are correlated; that is, the data have a dependence structure. This causes difficulty in developing valid confidence intervals for estimators and invalidates a number of other statistical procedures that require independent observations. Grouping the data into batches and computing the average of each batch is one methodology for mitigating the effect of dependence within the data on statistical inference procedures. The idea is that the average associated with each batch will tend to be less dependent, especially as the batch size increases. The method of batch means provides a mechanism for developing an estimator for \(Var\lbrack \bar{X} \rbrack\).
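One standard way to see why dependence matters is to recall that, for a covariance-stationary process with variance \(\sigma^2\) and lag-\(k\) autocorrelation \(\rho_k\), the variance of the sample mean is not simply \(\sigma^2/n\):
\[
Var\lbrack \bar{X} \rbrack = \frac{\sigma^2}{n}\left[1 + 2\sum_{k=1}^{n-1}\left(1 - \frac{k}{n}\right)\rho_k \right]
\]
When the \(\rho_k\) are positive, the usual estimator \(S^2/n\) underestimates \(Var\lbrack \bar{X} \rbrack\), and confidence intervals built from it will be too narrow.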
The method of batch means is based on observations \((X_{1}, X_{2}, X_{3}, \dots, X_{n})\). The idea is to group the output into batches of size \(b\) such that the averages of the data within a batch are more nearly independent and possibly normally distributed.
\[\begin{multline*} \underbrace{X_1, X_2, \ldots, X_b}_{batch 1} \cdots \underbrace{X_{b+1}, X_{b+2}, \ldots, X_{2b}}_{batch 2} \cdots \\ \underbrace{X_{(j-1)b+1}, X_{(j-1)b+2}, \ldots, X_{jb}}_{batch j} \cdots \underbrace{X_{(k-1)b+1}, X_{(k-1)b+2}, \ldots, X_{kb}}_{batch k} \end{multline*}\]
Let \(k\) be the number of batches, each of size \(b\), where \(b = \lfloor \frac{n}{k}\rfloor\). Define the \(j^{th}\) batch mean (average) as:
\[
\bar{X}_j(b) = \dfrac{1}{b} \sum_{i=1}^b X_{(j-1)b+i}
\]
Each of the batch means is treated like an observation in the batch means series. For example, if the batch means are re-labeled as \(Y_j = \bar{X}_j(b)\), the batching process simply produces another series of data, (\(Y_1, Y_2, Y_3, \ldots, Y_k\)), which may be more like a random sample. For example, if \(n = 1000\) and \(k = 20\), then \(b = 50\) and the third batch mean averages observations \(X_{101}\) through \(X_{150}\). Why should the batch means be more independent? Typically, in auto-correlated processes the autocorrelation between observations decays rapidly as the lag between them increases. Since the batch means are formed from batches of size \(b\), provided that \(b\) is large enough, the data within one batch are conceptually far from the data in other batches. Thus, larger batch sizes are good for ensuring independence; however, as the batch size increases the number of batches decreases, and thus the variance of the resulting estimator will increase.
To form an approximate \(100(1 - \alpha)\%\) confidence interval, we simply treat this new series like a random sample and compute confidence intervals using the sample average and sample variance of the batch means series. The sample average of the batch means is:
\[
\bar{Y}(k) = \dfrac{1}{k} \sum_{j=1}^k Y_j
\]
The sample variance of the batch process is based on the \(k\) batches:
\[
S_b^2 (k) = \dfrac{1}{k - 1} \sum_{j=1}^k (Y_j - \bar{Y}(k))^2
\]
Finally, if the batch means can be considered independent and identically distributed, the \(1-\alpha\) level confidence interval can be written as follows:
\[
\bar{Y}(k) \pm t_{\alpha/2, k-1} \dfrac{S_b (k)}{\sqrt{k}}
\]
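As a concrete illustration of these formulas, the following sketch (plain Kotlin, not using the KSL's `BatchStatistic` class) forms batch means from an array of observations and computes the interval above. The batch size and the t-quantile (hard-coded here for \(k = 20\) batches at the 95% level) are assumptions made for the example.

```kotlin
import kotlin.math.sqrt

// A minimal sketch of the batch means calculation, independent of the KSL classes.
// The t-quantile is hard-coded for a 95% interval with k = 20 batches (19 df).
fun batchMeansCI(data: DoubleArray, b: Int): Pair<Double, Double> {
    val k = data.size / b                        // number of complete batches
    val y = DoubleArray(k) { j ->                // batch means Y_j, j = 0..k-1
        (0 until b).sumOf { i -> data[j * b + i] } / b
    }
    val ybar = y.average()                       // grand average of the batch means
    val s2 = y.sumOf { (it - ybar) * (it - ybar) } / (k - 1) // sample variance
    val t = 2.093                                // t_{0.025, 19}, assumed for k = 20
    val halfWidth = t * sqrt(s2 / k)
    return Pair(ybar - halfWidth, ybar + halfWidth)
}
```

For example, with \(n = 1000\) observations and \(b = 50\), this produces \(k = 20\) batch means and an approximate 95% confidence interval on the mean.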
The `BatchStatistic` class within the `statistic` package implements a basic batching process. The `BatchStatistic` class works with data as it is presented to its `collect` method. Since we do not know in advance how much data we have, the `BatchStatistic` class has rules about the minimum number of batches and the size of the batches that can be formed. Theory indicates that we do not need a large number of batches and that it is better to have a relatively small number of batches that are large in size.

Three important properties of the `BatchStatistic` class are listed next (a small sketch following the list illustrates how they interact):

- `minNumBatches` – This represents the minimum number of batches required. The default value for this attribute is determined by `BatchStatistic.MIN_NUM_BATCHES`, which is set to 20.
- `minBatchSize` – This represents the minimum size for forming the initial batches. The default value for this attribute is determined by `BatchStatistic.MIN_NUM_OBS_PER_BATCH`, which is set to 16.
- `maxNumBatchesMultiple` – This represents a multiple of the minimum number of batches, which is used to determine the upper limit (maximum) on the number of batches. For example, if `maxNumBatchesMultiple = 2` and `minNumBatches = 20`, then the maximum number of batches we can have is 40 (2*20). The default value for this property is determined by `BatchStatistic.MAX_BATCH_MULTIPLE`, which is set to 2.
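Putting the default values together, the following small calculation (illustrative only, not code taken from the KSL source) shows how the three properties determine when the first batches can form and when rebatching is triggered:

```kotlin
fun main() {
    val minNumBatches = 20         // BatchStatistic.MIN_NUM_BATCHES
    val minBatchSize = 16          // BatchStatistic.MIN_NUM_OBS_PER_BATCH
    val maxNumBatchesMultiple = 2  // BatchStatistic.MAX_BATCH_MULTIPLE
    val maxNumBatches = maxNumBatchesMultiple * minNumBatches
    println("At least ${minNumBatches * minBatchSize} observations are needed " +
            "to form the first $minNumBatches batches of size $minBatchSize.")
    println("Rebatching is triggered when $maxNumBatches batches have been formed.")
}
```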
The `BatchStatistic` class uses instances of the `Statistic` class to do its calculations. The bulk of the processing is done in two methods, `collect()` and `collectBatch()`. The `collect()` method simply uses an instance of the `Statistic` class (`myStatistic`) to collect statistics. When the amount of data collected (`myStatistic.count`) equals the current batch size (`currentBatchSize`), the `collectBatch()` method is called to form a batch.
```kotlin
override fun collect(obs: Double) {
    super.collect(obs)
    myTotNumObs = myTotNumObs + 1.0
    myValue = obs
    myStatistic.collect(myValue)
    if (myStatistic.count == currentBatchSize.toDouble()) {
        collectBatch()
    }
}
```
Referring to the `collectBatch()` method in the following code, the batches that are formed are recorded in an array called `bm`. After recording the batch average, the statistic is reset for collecting the next batch of data. The number of batches is recorded and, if this has reached the maximum number of batches (as determined by the batch multiple calculation), we rebatch the batches back down to the minimum number of batches by combining adjacent batches according to the batch multiple. For example, with `minNumBatchesMultiple = 2`, batches 1 and 2 are averaged to form the new batch 1, batches 3 and 4 the new batch 2, and so on, which doubles the batch size.
```kotlin
private fun collectBatch() {
    // increment the current number of batches
    numBatches = numBatches + 1
    // record the average of the batch
    bm[numBatches] = myStatistic.average
    // collect running statistics on the batches
    myBMStatistic.collect(bm[numBatches])
    // reset the within batch statistic for next batch
    myStatistic.reset()
    // if the number of batches has reached the maximum then rebatch down to
    // min number of batches
    if (numBatches == maxNumBatches) {
        numRebatches++
        currentBatchSize = currentBatchSize * minNumBatchesMultiple
        var j = 0 // within batch counter
        var k = 0 // batch counter
        myBMStatistic.reset() // clear for collection across new batches
        // loop through all the batches
        for (i in 1..numBatches) {
            myStatistic.collect(bm[i]) // collect across the old batches
            j++
            if (j == minNumBatchesMultiple) { // have enough for a batch
                // collect new batch average
                myBMStatistic.collect(myStatistic.average)
                k++ // count the batches
                bm[k] = myStatistic.average // save the new batch average
                myStatistic.reset() // reset for next batch
                j = 0
            }
        }
        numBatches = k // k should be minNumBatches
        myStatistic.reset() // reset for use with new data
    }
}
```
There are a variety of procedures that have been developed that will automatically batch the data as it is collected. The KSL has a batching algorithm based on the procedure implemented within the Arena simulation language. When a sufficient amount of data has been collected, batches are formed. As more data is collected, additional batches are formed until \(k=40\) batches have been collected. When 40 batches are formed, the algorithm collapses the number of batches back to 20 by averaging each pair of batches. This has the net effect of doubling the batch size. This process is repeated as more data is collected, thereby ensuring that the number of batches is between 20 and 39. In addition, the procedure also computes the lag-1 correlation so that independence of the batches can be tested.
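The net effect of this scheme can be traced with a short sketch of the bookkeeping. This is an illustration of the doubling logic only, not the KSL implementation; the initial batch size of 16 corresponds to the default quoted earlier.

```kotlin
// Illustration of the 20/40 batching scheme: whenever 40 batches fill up, adjacent
// pairs are averaged into 20 batches and the batch size doubles. Not KSL code.
fun traceBatching(totalObs: Int, initialBatchSize: Int = 16) {
    var batchSize = initialBatchSize
    var numBatches = 0
    var inBatch = 0
    repeat(totalObs) {
        inBatch++
        if (inBatch == batchSize) {   // a batch has been completed
            numBatches++
            inBatch = 0
        }
        if (numBatches == 40) {       // collapse pairs: 40 batches -> 20
            numBatches = 20
            batchSize *= 2
        }
    }
    println("$totalObs observations -> $numBatches batches of size $batchSize")
}

fun main() {
    traceBatching(10_000)  // ends with between 20 and 39 completed batches
}
```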
The `BatchStatistic` class also provides a public `reformBatches()` method to allow the user to rebatch the batches into a user-supplied number of batches. Since the `BatchStatistic` class implements the `StatisticalAccessorIfc` interface, it can return the sample average, sample variance, minimum, maximum, etc. of the batches. Within the discrete-event modeling constructs of the KSL, batching can be turned on to collect batch statistics during a replication. The use of these constructs will be discussed when the discrete-event modeling elements of the KSL are presented.

The following code illustrates how to create and use a `BatchStatistic`.
```kotlin
fun main() {
    val d = ExponentialRV(2.0)
    // number of observations
    val n = 1000
    // minimum number of batches permitted
    // there will not be less than this number of batches
    val minNumBatches = 40
    // minimum batch size permitted
    // the batch size can be no smaller than this amount
    val minBatchSize = 25
    // maximum number of batches multiple
    // The multiple of the minimum number of batches
    // that determines the maximum number of batches
    // e.g. if the min. number of batches is 20
    // and the max number of batches multiple is 2,
    // then we can have at most 40 batches
    val maxNBMultiple = 2
    // In this example, since 40*25 = 1000, the batch multiple does not matter
    val bm = BatchStatistic(minNumBatches, minBatchSize, maxNBMultiple)
    for (i in 1..n) {
        bm.collect(d.value)
    }
    println(bm)
    val bma = bm.batchMeans
    var i = 0
    for (x in bma) {
        println("bm($i) = $x")
        i++
    }
    // this re-batches the 40 batches down to 10
    val reformed = bm.reformBatches(10)
    println(Statistic(reformed))
}
```
The `ksl.utilities.statistic` package defines a lot of functionality. Here is a summary of some of the useful classes and interfaces.

- `CollectorIfc` defines a set of `collect()` methods for collecting data. The method is overridden to permit the collection of a wide variety of data types. The `collect()` method is designed to collect values and a weight associated with each value. This allows the collection of weighted statistics.
- `Collector` is an abstract base class for building concrete sub-classes.
- `DoubleArraySaver` defines methods for saving observed data to arrays.
- `WeightedStatisticIfc` defines statistics that are computed on weighted data values. `WeightedStatistic` is a concrete implementation of the interface.
- `AbstractStatistic` is an abstract base class for defining statistics. Sub-classes of `AbstractStatistic` compute summary statistics of some kind.
- `Histogram` defines a class to collect statistics and tabulate data into bins.
- `Statistic` is a concrete implementation of `AbstractStatistic` allowing for a multitude of summary statistics.
- `BatchStatistic` is also a concrete implementation of `AbstractStatistic` that provides for summarizing data via a batching process.
- `IntegerFrequency` tabulates integer values into frequencies by observed value, similar to a histogram.
- `StateFrequency` facilitates defining labeled states and tabulating visitation and transition statistics.
- `StatisticXY` collects statistics on \((x,y)\) pairs, computing statistics on the \(x\) and \(y\) values separately, as well as the covariance and correlation between the observations within a pair.
The most important class within the statistics package is probably the `Statistic` class. This class summarizes the observed data into summary statistics such as: minimum, maximum, average, variance, standard deviation, lag-1 correlation, and count. In addition, confidence intervals can be formed on the observations based on the Student-t distribution. Finally, there are useful companion object methods for computing statistics on arrays and for estimating sample sizes. The reader is encouraged to review the KSL documentation for all of the functionality, including the ability to write nicely printed statistical results.
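As a simple illustration of collecting summary statistics, the following sketch uses only the `Statistic` facilities already shown in this section (collection of individual values and construction from an array); the zero-argument constructor is assumed here, so consult the KSL documentation for the available constructors.

```kotlin
fun main() {
    val rv = ExponentialRV(2.0)
    // collect observations one at a time
    val stat = Statistic()
    repeat(100) { stat.collect(rv.value) }
    println(stat) // prints the summary statistics for the 100 observations
    // alternatively, summarize an array of data directly,
    // as was done with the reformed batches earlier
    val data = DoubleArray(100) { rv.value }
    println(Statistic(data))
}
```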
In the remaining sections of this chapter, we will illustrate the collection of statistics on simple Monte Carlo models. This begins in the next section by estimating the area of a simple one-dimensional function.