The testing of multiple hypotheses is an important consideration in many statistical analyses. A theme for multiple comparisons problems under a frequentist paradigm is the need for an adjustment to control the overall error probability for the false detection of null effects. Our review will focus on Bayesian approaches to multiple comparisons problems. Under a Bayesian paradigm, multiplicity adjustments arise from a concern that many of the effects to be tested are null. We will discuss how Bayesian models provide a multiplicity adjustment through a prior placing increased probability on null effects, or through hierarchical modeling. We will also show how the Bayesian information criterion for model selection fits naturally into the study of multiple comparisons problems.
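As an illustrative sketch (not code from the article), the BIC-based route to multiplicity adjustment can be shown by converting BIC values into approximate posterior model probabilities, with a prior that places increased probability on the null model; the function name and the BIC values below are hypothetical.

```python
import math

def bic_posterior_probs(bics, prior=None):
    """Approximate posterior model probabilities from BIC values,
    using P(M_k | data) proportional to prior_k * exp(-BIC_k / 2)."""
    k = len(bics)
    prior = prior or [1.0 / k] * k
    b0 = min(bics)  # subtract the minimum for numerical stability
    weights = [p * math.exp(-(b - b0) / 2) for p, b in zip(prior, bics)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical BIC values for a null model and an alternative model.
# A prior favoring the null (0.8 vs 0.2) encodes the multiplicity
# concern that many tested effects are truly null.
probs = bic_posterior_probs([100.0, 98.0], prior=[0.8, 0.2])
```

Under a flat prior the same BIC values would favor the alternative; the null-weighted prior shifts the posterior back toward the null, which is the adjustment mechanism discussed above.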

For further resources related to this article, please visit the WIREs website.

Estimated mean differences in the log-transformed creatine kinase levels. (1) corresponds to empirical group means; (2) corresponds to estimated group means under Bayesian model averaging; (3) corresponds to estimated group means under the highest posterior probability model.

This article extends the WIREs publication Kolmogorov–Zurbenko filters (2010). It addresses computational aspects of multidimensional KZ filtering for unevenly spaced data, and real examples illustrate the details of such data analysis. The identification and separation of different spatial or temporal scales in spatiotemporal data can provide essential improvements to the accuracy of explanations. In particular, long-term scales can be made to display clear patterns that are indistinguishable using standard multivariate analysis methods.
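A KZ filter is an iterated moving average; the following minimal one-dimensional sketch (not the article's multidimensional, unevenly spaced implementation) shows how repeated passes of a centered moving average separate a long-term scale from noise:

```python
import numpy as np

def kz_filter(x, window, iterations):
    """Kolmogorov-Zurbenko filter: `iterations` passes of a centered
    moving average of odd width `window`. Zero-padding at the edges
    (via mode="same") introduces boundary effects; the multidimensional,
    unevenly spaced version reviewed in the article generalizes this."""
    kernel = np.ones(window) / window
    y = np.asarray(x, dtype=float)
    for _ in range(iterations):
        y = np.convolve(y, kernel, mode="same")
    return y

# Smooth a noisy sine: the long-term oscillation survives while
# the short-scale noise is strongly damped.
t = np.linspace(0, 4 * np.pi, 500)
rng = np.random.default_rng(0)
signal = np.sin(t) + 0.5 * rng.standard_normal(500)
smooth = kz_filter(signal, window=21, iterations=3)
```

Iterating the moving average sharpens the frequency cutoff relative to a single pass, which is what makes the long-term scale cleanly separable.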


A long-term KZ filter applied to specific humidity captures scales of more than 7 years in time and more than 1000 miles in space, revealing an explosively increasing concentration of vapor energy anomalies in the North Atlantic and increasing drying in the western Americas and Australia. These patterns contribute substantially to regional climate change and extreme weather effects.

The ‘*k*-sample problem’ aims to detect statistical differences among multiple populations. A statistical test, capable of detecting any departure from the null hypothesis of ‘statistical equality’ such as the equality in distribution, is typically referred to as an ‘omnibus test.’ A short overview of historic developments and a detailed discussion of the more prominent state-of-the-art techniques are presented with references to numerous sources and studies. Both classical and modern omnibus tests are systematically categorized in terms of seminal probabilistic and statistical concepts into tests that are based upon the empirical distribution, characteristic or kernel density function, etc. To demonstrate the strengths and weaknesses of each particular approach with regard to its statistical performance, applicability, computational complexity, and parameter tuning, eight representatively selected omnibus tests (along with the Kruskal–Wallis test) are numerically implemented and compared under various ‘archetypal’ scenarios. Recommendations are made accordingly along with a discussion of challenges and potential future research directions for this problem.
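As a concrete baseline, the Kruskal–Wallis test mentioned above is readily available in SciPy; this sketch applies it to three samples, one of which is shifted in location:

```python
import numpy as np
from scipy.stats import kruskal

# Three samples: two from the same distribution, one location-shifted.
rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 100)
b = rng.normal(0.0, 1.0, 100)
c = rng.normal(1.5, 1.0, 100)

stat, pvalue = kruskal(a, b, c)  # H-statistic and its p-value
```

Note that Kruskal–Wallis is sensitive to location shifts but is not a true omnibus test: samples differing only in scale or shape can escape detection, which is precisely the gap the omnibus tests surveyed above are designed to close.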


K-sample omnibus tests are used to detect statistical differences among multiple populations by comparing their empirical CDFs, PDFs, etc. A short overview of historic developments and a detailed discussion of the more prominent state-of-the-art techniques are presented along with simulation results and comparisons as well as numerous references.

High dimensional data play a key role in modern statistical analysis. A common objective in high dimensional data analysis is model selection, and penalized likelihood methods are among the most popular approaches. Typical penalty functions are symmetric about 0, continuous, and nondecreasing in (0, ∞). In this review article, we focus on a special type of penalty function, the so-called reciprocal Lasso (rLasso) penalty. The rLasso penalty functions are decreasing in (0, ∞), discontinuous at 0, and diverge to infinity as the coefficients approach zero. Although uncommon, this choice of penalty is intuitively appealing if one seeks a parsimonious model fit. In this article, we provide an overview of the motivation, theory, and computational challenges of the rLasso penalty, and we also compare its theoretical properties and empirical performance with other popular penalty choices.
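The contrast between the two penalty shapes can be made concrete with a short sketch (function names are illustrative, not from the article): the Lasso penalty grows with |β|, while the rLasso penalty decreases on (0, ∞) and blows up near zero.

```python
import numpy as np

def lasso_penalty(beta, lam=1.0):
    """Conventional Lasso penalty: nondecreasing in |beta|."""
    return lam * np.abs(beta)

def rlasso_penalty(beta, lam=1.0):
    """Reciprocal Lasso penalty: lam / |beta| for beta != 0, so it is
    decreasing on (0, inf) and diverges as |beta| -> 0; by convention
    the penalty at beta = 0 is taken to be 0 (the coefficient is
    excluded from the model)."""
    beta = np.asarray(beta, dtype=float)
    out = np.zeros_like(beta)
    nz = beta != 0
    out[nz] = lam / np.abs(beta[nz])
    return out

betas = np.array([0.5, 1.0, 2.0])
# lasso_penalty(betas) increases: [0.5, 1.0, 2.0]
# rlasso_penalty(betas) decreases: [2.0, 1.0, 0.5]
```

Because small nonzero coefficients are penalized very heavily, a fitted model is pushed to either drop a coefficient entirely or keep it well away from zero, which is the source of the parsimony noted above.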


Reciprocal regularization employs a decreasing penalty function (right plot). In contrast, conventional penalization methods use increasing penalties (left plot).

Precedence probabilities are important tools in a statistician's toolkit. A precedence probability is the probability of observing single samples from *K* populations in a particular order. Since there are *K*! possible orders of *K* populations, these *K*! parameters provide a useful way to measure the effectiveness of a classifier (AUC/VUS/HUM). The receiver operating characteristic (ROC) curve/surface/manifold generated by a classifier leads to the area under the curve (AUC)/volume under the surface (VUS)/hypervolume under the manifold (HUM), each of which can be expressed as a single precedence probability and estimated nonparametrically via a rank-based U-statistic. Precedence probabilities can also be used to test the equality of *K* > 2 distribution functions. Hypothesis tests related to both of these problems are discussed. When instead we are interested in testing whether the *K* distributions are stochastically ordered, we perform a precedence-type test. Different nonparametric tests are also discussed in relation to precedence-type tests.
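For *K* = 2, the rank-based U-statistic estimate of the precedence probability coincides with the familiar AUC estimator; a minimal sketch (illustrative names, not the article's code):

```python
import numpy as np

def precedence_prob(x, y):
    """Rank-based U-statistic estimate of P(X < Y), the two-sample
    precedence probability. Equivalently, the AUC of a classifier whose
    scores on negatives are x and on positives are y. Ties count 1/2."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Compare every pair (x_i, y_j) via broadcasting.
    less = (x[:, None] < y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (less + 0.5 * ties) / (len(x) * len(y))

# Fully separated samples give probability 1; identical samples give 1/2.
auc = precedence_prob([1, 2, 3], [4, 5, 6])
```

A value near 1/2 is consistent with the two distributions being equal, which is why tests of equality and classifier evaluation can both be phrased through this one parameter.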


We review Bayesian and classical approaches to nonparametric density and regression estimation and illustrate how these techniques can be used in economic applications. On the Bayesian side, density estimation is illustrated via finite Gaussian mixtures and a Dirichlet Process Mixture Model, while nonparametric regression is handled using priors that impose smoothness. From the frequentist perspective, kernel-based nonparametric regression techniques are presented for both density and regression problems. Both approaches are illustrated using a wage dataset from the Current Population Survey. *WIREs Comput Stat* 2017, 9:e1406. doi: 10.1002/wics.1406
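On the frequentist side, a kernel density estimate of the kind reviewed above can be obtained in a few lines; this sketch uses synthetic log-wage data (the real analysis uses the Current Population Survey, which is not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for log-wage data: a two-group Gaussian mixture.
rng = np.random.default_rng(1)
log_wages = np.concatenate([rng.normal(2.5, 0.4, 300),
                            rng.normal(3.5, 0.3, 200)])

# gaussian_kde uses a Gaussian kernel with a rule-of-thumb
# (Scott's) bandwidth by default.
kde = gaussian_kde(log_wages)
grid = np.linspace(log_wages.min(), log_wages.max(), 200)
density = kde(grid)
```

The bandwidth plays the same smoothing role that the prior plays on the Bayesian side: both encode how much local wiggliness the estimated density or regression function is allowed to have.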

Classical and Bayesian methods for flexibly modeling density and regression functions are reviewed and employed, highlighting their value in economic applications.

For regulatory review and approval of biosimilar products, the United States (US) Food and Drug Administration (FDA) recommended a stepwise approach for demonstrating biosimilarity between a proposed biosimilar product and an innovative (reference) biological product (e.g., a US-licensed product).^{1–3} The stepwise approach provides totality-of-the-evidence for demonstrating biosimilarity between the proposed biosimilar product and the reference product. It starts with analytical studies for functional and structural characterization of critical quality attributes (CQAs) at various stages of the manufacturing process. For the assessment of analytical similarity of CQAs, FDA suggests first identifying the CQAs that are relevant to clinical outcomes and then classifying the identified CQAs into several tiers depending upon their criticality or risk ranking. FDA also suggests that different methods be used to assess similarity for CQAs from different tiers: for example, an equivalence test for CQAs in Tier 1, a quality range approach for CQAs in Tier 2, and descriptive raw data and graphical comparison for CQAs in Tier 3. In this article, controversial issues regarding the FDA's recommended approaches are discussed, followed by alternative methods for assessing the similarity of CQAs in Tier 1. *WIREs Comput Stat* 2017, 9:e1407. doi: 10.1002/wics.1407
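A Tier 1 equivalence test is commonly carried out as two one-sided tests (TOST) on the mean difference; the following simplified sketch (illustrative, not the article's or FDA's reference implementation) declares equivalence when the (1 − 2α) confidence interval for the mean difference falls within ±margin, where FDA has suggested a margin of 1.5 × σ_R (the reference product's standard deviation):

```python
import numpy as np
from scipy import stats

def tost_equivalence(test, ref, margin, alpha=0.05):
    """Two one-sided tests (TOST) for mean equivalence: conclude
    similarity if the (1 - 2*alpha) Welch CI for mean(test) - mean(ref)
    lies entirely within (-margin, margin)."""
    t = np.asarray(test, dtype=float)
    r = np.asarray(ref, dtype=float)
    diff = t.mean() - r.mean()
    v1, v2 = t.var(ddof=1) / len(t), r.var(ddof=1) / len(r)
    se = np.sqrt(v1 + v2)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / (v1**2 / (len(t) - 1) + v2**2 / (len(r) - 1))
    tcrit = stats.t.ppf(1 - alpha, df)
    lower, upper = diff - tcrit * se, diff + tcrit * se
    return bool((-margin < lower) and (upper < margin))

rng = np.random.default_rng(7)
ref_lots = rng.normal(10.0, 1.0, 100)
test_lots = rng.normal(10.1, 1.0, 100)
margin = 1.5 * ref_lots.std(ddof=1)  # FDA-suggested 1.5 * sigma_R
similar = tost_equivalence(test_lots, ref_lots, margin)
```

The controversy discussed in the article centers in part on this construction, e.g., that σ_R must itself be estimated from a limited number of reference lots.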


A stepwise approach to demonstrate biosimilarity.

Data sampling methods have been investigated for decades in the context of machine learning and statistical algorithms, with significant progress made in the past few years driven by strong interest in big data and distributed computing. Most recently, progress has been made in methods that can be broadly categorized into random sampling, including density-biased and nonuniform sampling methods; active learning methods, which are a type of semi-supervised learning and an area of intense research; and progressive sampling methods, which can be viewed as a combination of the above two approaches. A unified view of scaling-down sampling methods is presented in this article and complemented with descriptions of relevant published literature. *WIREs Comput Stat* 2017, 9:e1414. doi: 10.1002/wics.1414
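The nonuniform-sampling family above can be sketched in a few lines (function name and weighting scheme are illustrative): draw a subsample without replacement, with inclusion probability proportional to a per-row weight such as a local density estimate.

```python
import numpy as np

def weighted_subsample(data, weights, k, rng=None):
    """Nonuniform (e.g., density-biased) scaling-down sampling:
    draw k rows without replacement, with probability proportional
    to `weights`."""
    rng = rng or np.random.default_rng()
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    idx = rng.choice(len(p), size=k, replace=False, p=p)
    return np.asarray(data)[idx]

# Heavily up-weight the second half of the data: the subsample
# is then dominated by those rows.
rng = np.random.default_rng(3)
data = np.arange(100)
weights = np.where(data >= 50, 100.0, 1.0)
sample = weighted_subsample(data, weights, 10, rng)
```

Progressive sampling wraps a loop around such a draw: the subsample is grown (e.g., doubled) and a learner retrained until its validation score plateaus, combining the random-sampling and active-learning ideas described above.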


Summary of scaling-down techniques found in the literature.

The covariance matrix and its inverse, known as the precision matrix, have many applications in multivariate analysis because their elements capture the variance, correlation, covariance, and conditional independence between variables. Estimating the precision matrix directly, without any matrix inversion, has received significant attention in the literature. We review the methods that have been implemented in R and their R packages, particularly when there are more variables than data samples, and discuss the ideas behind them. We describe how sparse precision matrix estimation methods can be used to infer network structure. Finally, we discuss methods that are suitable for gene coexpression network construction. *WIREs Comput Stat* 2017, 9:e1415. doi: 10.1002/wics.1415
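Although the article reviews R packages, the same idea can be sketched with scikit-learn's graphical lasso (an illustrative stand-in, not the article's code): an L1 penalty on the precision matrix zeroes out off-diagonal entries, and the surviving entries are read as edges of the Gaussian graphical model.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Synthetic p < n data with one induced conditional dependence.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]  # variables 0 and 1 become dependent

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_

# Nonzero off-diagonal entries of `precision` suggest edges in the
# graphical model (e.g., a gene coexpression network).
edges = np.abs(precision) > 1e-6
```

In the p > n regime emphasized above, the empirical covariance matrix is singular and cannot be inverted at all, which is exactly why direct, penalized estimation of the precision matrix is needed.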


Estimating a Gaussian graphical model from a high-dimensional dataset with a sparse precision matrix estimator.