model structure and, thus, apply to a wider swath of target distributions or objective functions “out of the box”. Such generic algorithms typically require little cleverness or creativity to implement, limiting the amount of time data scientists must spend worrying about computational details. Moreover, they aid the development of flexible statistical software that adapts to complex model structure in a way that users easily understand. But it is not enough that software be flexible and easy to use: mapping computations to computer hardware for optimal implementations remains difficult. In Section 4.2, we argue that Core Challenge 5, effective use of computational resources such as central processing units (CPU), graphics processing units (GPU), and quantum computers, will become increasingly central to the work of the computational statistician as data grow in magnitude.
2 Core Challenges 1–3
Before providing two recent examples of twenty‐first century computational statistics (Section 3), we present three easily quantified Core Challenges within computational statistics that we believe will always exist: big , or inference from many observations; big , or inference with high‐dimensional models; and big , or inference with nonconvex objective – or multimodal density – functions. In twenty‐first century computational statistics, these challenges often co‐occur, but we consider them separately in this section.
2.1 Big N
Having a large number of observations makes different computational methods difficult in different ways. In a worst-case scenario, the exact permutation test requires the production of $N!$ permuted datasets. Cheaper alternatives, resampling methods such as the Monte Carlo permutation test or the bootstrap, may require anywhere from thousands to hundreds of thousands of randomly produced datasets [8, 10]. When, say, population means are of interest, each Monte Carlo iteration requires summations involving $N$ expensive memory accesses. Another example of a computationally intensive model is Gaussian process regression [16, 17]; it is a popular nonparametric approach, but the exact method for fitting the model and predicting future values requires matrix inversions that scale $\mathcal{O}(N^3)$. As the rest of the calculations require relatively negligible computational effort, we say that matrix inversions represent the computational bottleneck for Gaussian process regression.
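For concreteness, the Monte Carlo permutation test for a difference in population means can be sketched in a few lines of NumPy. This is an illustrative implementation, not one from the text; the function name, the difference-in-means statistic, and the add-one p-value correction are our own choices. Note that each of the `n_perm` iterations touches all $N$ pooled observations:

```python
import numpy as np

def mc_permutation_test(x, y, n_perm=10_000, rng=None):
    """Monte Carlo permutation test for a difference in means.

    Each of the n_perm iterations relabels the pooled data at random,
    so total cost grows linearly in both n_perm and the sample size N.
    """
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = perm[: len(x)].mean() - perm[len(x):].mean()
        if abs(stat) >= abs(observed):
            count += 1
    # Add-one correction keeps the estimated p-value strictly positive.
    return (count + 1) / (n_perm + 1)
```

With well-separated samples the estimated p-value approaches its floor of $1/(\texttt{n\_perm}+1)$, while identical samples yield a p-value of 1.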
To speed up a computationally intensive method, one only needs to speed up the method's computational bottleneck. We are interested in performing Bayesian inference [18] based on a large vector of observations $X = (x_1, \dots, x_N)$. We specify our model for the data with a likelihood function $p(X|\theta)$ and use a prior distribution with density function $p(\theta)$ to characterize our belief about the value of the $P$-dimensional parameter vector $\theta$ a priori. The target of Bayesian inference is the posterior distribution of $\theta$ conditioned on $X$:
$$\pi(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{\int p(X|\theta)\,p(\theta)\,\mathrm{d}\theta}\,. \qquad (1)$$
The denominator's multidimensional integral quickly becomes impractical as $P$ grows large, so we choose to use the Metropolis–Hastings (M–H) algorithm to generate a Markov chain with stationary distribution $\pi(\theta|X)$ [19, 20]. We begin at an arbitrary position $\theta^{(0)}$ and, for each iteration $s = 0, 1, \dots$, randomly generate the proposal state $\theta^*$ from the transition distribution with density $q(\theta^*|\theta^{(s)})$. We then accept proposal state $\theta^*$ with probability
$$a = \min\left\{1,\ \frac{\pi(\theta^*|X)\,q(\theta^{(s)}|\theta^*)}{\pi(\theta^{(s)}|X)\,q(\theta^*|\theta^{(s)})}\right\} = \min\left\{1,\ \frac{p(X|\theta^*)\,p(\theta^*)\,q(\theta^{(s)}|\theta^*)}{p(X|\theta^{(s)})\,p(\theta^{(s)})\,q(\theta^*|\theta^{(s)})}\right\}\,.$$
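A minimal sketch of the M–H iteration described above, under the simplifying assumption of a symmetric Gaussian random-walk proposal (so the $q$ terms cancel from the acceptance ratio); the function name and arguments are illustrative, not from the text:

```python
import numpy as np

def random_walk_mh(log_post, theta0, n_iter=5000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings sampler.

    log_post evaluates the log numerator of the posterior, i.e.
    log p(X|theta) + log p(theta); the intractable normalizing
    integral cancels in the acceptance ratio, and the symmetric
    Gaussian proposal makes the q terms cancel as well.
    """
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for s in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, pi(proposal)/pi(theta)),
        # computed on the log scale for numerical stability.
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain[s] = theta
    return chain
```

For example, passing `log_post = lambda t: -0.5 * float(t @ t)` targets a standard normal distribution, and the chain's long-run mean and standard deviation approach 0 and 1.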
The ratio on the right no longer depends on the denominator in Equation (1), but one must still compute the likelihood $p(X|\theta^*)$ and its $N$ terms $p(x_n|\theta^*)$.
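Under a common conditional-independence assumption, the log-likelihood is a sum over all $N$ observations, so each evaluation costs $\mathcal{O}(N)$ work. A hypothetical Gaussian sketch (the model and function name are our own, purely for illustration):

```python
import numpy as np

def gaussian_log_likelihood(X, mu, sigma):
    """Log-likelihood of N i.i.d. Gaussian observations.

    The sum runs over all N data points, so every evaluation of
    log p(X|theta) -- and hence every M-H iteration -- costs O(N).
    """
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - (X - mu) ** 2 / (2 * sigma ** 2))
```

Doubling the length of `X` roughly doubles the cost of each call, which is exactly the per-iteration burden the M–H sampler must pay.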
It is for this reason that likelihood evaluations are often the computational bottleneck for Bayesian inference. In the best case, these evaluations are