The post Le Monde puzzle [#1029] appeared first on All About Statistics.

**A** convoluted counting Le Monde mathematical puzzle:

A film theatre has a waiting room and several projection rooms, with four films on display. A first set of 600 spectators enters the waiting room and each votes for a favourite film. The most popular film is projected to the spectators who voted for it, while the remaining spectators stay in the waiting room. They are joined by a new set of 600 spectators, who then also vote for their favourite film. The selected film (which may be the same as the first one) is then shown to those who voted for it, and the remaining spectators stay in the waiting room. This pattern is repeated for a total of 10 votes, after which the remaining spectators leave. What are the maximal possible numbers of waiting spectators and of spectators in a projection room?

**A** first attempt by random sampling does not produce extreme enough events to reach those maxima:

```r
V = 1e4                          # number of simulated replications (left undefined in the original)
wm = rm = 600                    # running maxima for waiting and watching
for (v in 1:V){
  film = rep(0, 4)               # votes for each film
  for (t in 1:9){
    film = film + rmultinom(1, 600, rep(1, 4))
    rm = max(rm, max(film))
    film[order(film)[4]] = 0     # most popular film leaves for projection
    wm = max(wm, sum(film) + 600)}
  rm = max(rm, max(film) + 600)}
```

where the last line adds the final batch of arriving spectators to the largest group of waiting ones. This code only returns 1605 for the maximal number of waiting spectators, and 1155 for the maximal number in a projection room, not far from the even separation of each batch of 600 into four groups of 150… I thus looked for an alternative deterministic allocation:

```r
wm = rm = 0
film = rep(0, 4)
for (t in 1:9){
  size = sum(film) + 600
  film = c(rep(ceiling(size/4), 3), size - 3*ceiling(size/4))
  film[order(film)[4]] = 0       # most popular film leaves for projection
  rm = max(rm, max(film) + 600)
  wm = max(wm, sum(film) + 600)}
```

which tries to preserve as many waiting spectators as possible for the last round (and always considers the scenario where all newcomers back the largest waiting group for the next film). The outcome of this sequence moves up to 1155 for the largest projection audience and 2264 for the largest waiting group. I however wonder whether splitting into two groups in the final round(s) could increase the size of the last projection even further. And indeed halving the last batch into two groups leads to 1709 spectators in the final projection, with the caveat that this split assumes the earlier spectators keep their votes fixed! (I did not think long enough about this puzzle to turn it into a more mathematical problem…)

While in Warwick, I reconsidered the problem from a dynamic programming perspective, always keeping the notion that it was optimal to allocate the votes evenly between some of the films (from 1 to 4). Using the recursive R code

```r
optiz = function(votz, t){
  if (t == 9){
    return(sort(votz)[3] + 600)
  }else{
    goal = optiz(sort(votz) + c(0, 0, 600, -max(votz)), t + 1)
    goal = rep(goal, 4)
    for (i in 2:4){
      film = sort(votz); film[4] = 0; film = sort(film)
      size = sum(film[(4-i+1):4]) + 600
      film[(4-i+1):4] = ceiling(size/i)
      while (sum(film[(4-i+1):4]) > size) film[4] = film[4] - 1
      goal[i] = optiz(sort(film), t + 1)}
    return(max(goal))}}
```

led to a maximal audience size of 1619. [Which is also the answer provided by Le Monde]

Filed under: Books, Kids, R Tagged: combinatorics, Le Monde, R

**Please comment on the article here:** **R – Xi'an's Og**


The post βCEA appeared first on All About Statistics.

Recently, I've been doing a lot of work on the beta version of BCEA (I was after all born in Agrigento $-$ in the picture to the left $-$, which is a Greek city, so a beta version sounds about right...).

The new version is only available as a beta release from our GitHub repository; the usual way to install it is through the devtools package.

There aren't very many changes from the current CRAN version, although the one thing I did change is kind of big: I've embedded the web-app functionality within the package. So, it is now possible to launch the web-app from the current R session using the new function BCEAweb. This takes three inputs: a matrix e containing the $S$ simulations for the measures of effectiveness computed for the $T$ interventions; a matrix c containing the simulations for the measures of cost; and a data frame or matrix containing simulations for the model parameters.

In fact, none of the inputs is required and the user can actually launch an empty web-app, in which the inputs can be uploaded, say, from a spreadsheet (there are in fact other formats available).
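As a sketch of what the inputs described above might look like, here is a made-up set-up in R (the sizes, distributions, and parameter names are invented for illustration; only the function name BCEAweb, the argument roles, and the devtools install route come from the description above, and the repository path is from memory):

```r
# Hypothetical BCEAweb inputs, following the description in the post.
set.seed(1)
S <- 1000                                         # number of simulations (illustrative)
e <- matrix(rnorm(S * 2, mean = 10), nrow = S)    # effectiveness: S simulations x T = 2 interventions
c <- matrix(rlnorm(S * 2, meanlog = 7), nrow = S) # costs: same dimensions as e
parameters <- data.frame(theta1 = rnorm(S),       # simulations of the model parameters
                         theta2 = rgamma(S, 2, 1))
# with the beta release installed, e.g. devtools::install_github("giabaio/BCEA"):
# BCEAweb(e, c, parameters)   # or BCEAweb() to launch the empty app
```

Since none of the arguments is required, the commented-out call could equally be replaced by a bare BCEAweb() and the inputs uploaded from a spreadsheet inside the app.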

I think the web-app facility is not

This means there are a few more packages "suggested" on installation and potentially a longer compilation time for the package $-$ but nothing major. The new version is under testing, but I may be able to release it on CRAN soon-ish... And there are other cool things we're playing around with (the links here give all the details!).

**Please comment on the article here:** **Gianluca Baio's blog**



The post We start by talking reproducible research, then we drift to a discussion of voter turnout appeared first on Statistical Modeling, Causal Inference, and Social Science.

The post We start by talking reproducible research, then we drift to a discussion of voter turnout appeared first on All About Statistics.

Emil Kirkegaard writes:

Regarding data sharing, you recently commented that “In future perhaps journals will require all data to be posted as a condition of publication and then this sort of thing won’t happen anymore.”

We went a step further. We require public data sharing at submission. This means that from the moment one submits, the data must be public. Two reasons for this setup. First, reviewers may need the data (+code) to review the paper. Some reviewers replicate all analyses in papers they review (i.e. me, hoping to start a trend) which frequently results in mistakes being found in the review. Second, if the data are first shared upon publication, this means that while the submission is in review, they are locked away. This results in a substantial slow-down of science because review times can be so long. A great example of this problem is GWASs which can take >1 year in review, while the manuscripts (usually without data) can be acquired thru networks if one knows the right person.

In your post, you note that you had to dig up the data from your hard drive for the student who requested it (good idea with that study; he should use lower-level administrative divisions too, though alas these have lower voter turnout, the opposite of rational voter theory!). Given that hard drives crash, computers get replaced, and humans are bad at backing up, this is a very error-prone way of storing scientific materials in perpetuity. Would it not be better for you to go thru your old publications, make a project for each of them on OSF, and put all the materials there?

My reply: Yes, I agree with you on the replication thing. But I think you’re wrong regarding rational choice theory and turnout as a function of jurisdiction size; see section 3.3 of this article.

Kirkegaard responds:

I can’t say I’m an expert on RCT or turnout, or that I’m interested enough in RCT for turnout to spend a lot of time understanding the math in that paper. A sort of meta-comeback.

However, I did read Section 3 and onwards. EU elections, by the way, have lower turnout than national elections in the EU, and sub-national elections have lower turnout than national ones as well (at least, that’s my impression; I did look up the Danish numbers but did not do a systematic review of turnout by EU country by level). Not sure how the RCTheoist will change up the equations to back-predict this non-linear result, but I’m sure it can be done with appropriate tricks.

Above is a figure of Danish turnout results 1970-2013 [oh no! Excel graphics! — ed.]. Source: http://magasineteuropa.dk/folkets-vilje/ The reason the kommunal (communal, second-level divisions, n≈100) and regional (first-level divisions; n=5/14, it changed in 2007; notice no change in the turnout) results are so closely tied is that the elections are held on the same day, so people almost always vote in both. The EU, by the way, has grown tremendously in power since the 1970s, but voter turnout is steady. As I recall, the reason for the spike in communal/regional turnout in the early 2000s was that the national election was put on the same day, so people voted in both while they were there anyway.

Regarding voter intentions: it’s easy to find out why they vote. I have been talking to them about this for years, and they never ever ever cite these fancy decision models. Of course, normal people don’t really understand this stuff. Instead, they say things like “if everybody thought like you, democracy wouldn’t work” (a failure to apply game theory) or “it’s a democratic duty” (not in the legal sense, and dubiously in the moral sense either). In my unscientific estimate of regular, non-scientist people, I’d say about 90% of the reasons given for why one should vote are one or both of these two.

This discussion reminds me of this one about RCT for voter ignorance. My agreement lies with Friedman.

Just in response to those last two paragraphs: I think these fancy decision models can give us insight into behavior, even if this is not the way people understand their voting decisions. Different explanations for voting are complementary, not competing. See section 5 of our paper for more on this point.


**Please comment on the article here:** **Statistical Modeling, Causal Inference, and Social Science**


The post Diverging paths for rich and poor, infographically appeared first on All About Statistics.

Ray Vella (link) asked me to comment on a chart about regional wealth distribution, which I wrote about here. He also asked students in his NYU infographics class to create their own versions.

This effort caught my eye:

This work is creative, and I like the concept of using two staircases to illustrate the diverging fortunes of the two groups. This is worlds away from the original Economist chart.

The infographic does have a serious problem. In one of my dataviz talks, I describe three qualifications that a work called "data visualization" should meet. The first qualification is that it has to display the data. This is an example of an infographic that is invariant to the data: the picture would look the same whatever the numbers were.

Is it possible to salvage the concept? I tried. Here is an idea:

I abandoned the time axis so the data plotted are only for 2015, and the countries are shown horizontally from most to least equal. I'm sure there are ways to do it even better.

Infographics can be done while respecting the data. Ray is one of the designers who appreciate this. And thanks Ray for letting me blog about this.

**Please comment on the article here:** **Junk Charts**



The post Custom Distribution Solutions appeared first on Statistical Modeling, Causal Inference, and Social Science.

The post Custom Distribution Solutions appeared first on All About Statistics.

I (Aki) recently made a case study that demonstrates how to implement user-defined probability functions in the Stan language (case study, git repo). As an example I use the generalized Pareto distribution (GPD) to model extreme values of geomagnetic storm data from the World Data Center for Geomagnetism. Stan has had support for user-defined functions for a long time, but there wasn’t a full practical example of how to implement all the functions that built-in distributions have (_lpdf (or _lpmf), _cdf, _lcdf, _lccdf, and _rng). Having the full set of functions makes it easy to implement models, censoring, posterior predictive checking, and loo. The most interesting things I learned while making the case study were:

- How to replicate the behavior of Stan’s internal distribution functions as closely as possible (given the lack of overloading for user-defined functions, we have to make some compromises).
- How to make tests for the user-defined distribution functions.

By using this case study as a template, it should be easier and faster to implement and test new custom distributions for your Stan models.
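To give a feel for what such a full function set looks like, here is a rough R analogue of the five functions for the GPD (this is my own sketch, not code from the case study; it uses the standard parameterisation with location ymin, scale sigma > 0, and shape k, and omits the k = 0 limiting case for brevity):

```r
# Sketch of the lpdf/cdf/lcdf/lccdf/rng set for the GPD, mirrored in R.
gpd_lpdf <- function(y, ymin, sigma, k)        # log density
  -log(sigma) - (1/k + 1) * log1p(k * (y - ymin) / sigma)
gpd_cdf <- function(y, ymin, sigma, k)         # distribution function
  1 - (1 + k * (y - ymin) / sigma)^(-1/k)
gpd_lcdf <- function(y, ymin, sigma, k)        # log CDF
  log(gpd_cdf(y, ymin, sigma, k))
gpd_lccdf <- function(y, ymin, sigma, k)       # log complementary CDF (useful for censoring)
  (-1/k) * log1p(k * (y - ymin) / sigma)
gpd_rng <- function(n, ymin, sigma, k)         # inverse-CDF sampler
  ymin + sigma * ((1 - runif(n))^(-k) - 1) / k
```

Because the rng uses the inverse CDF, plugging its deterministic form back into gpd_cdf recovers the uniform draw, which also gives a convenient self-test of the pair, in the spirit of the testing point above.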


**Please comment on the article here:** **Statistical Modeling, Causal Inference, and Social Science**


The post Excel and R appeared first on All About Statistics.

There are many applications and tools for working with Excel and R. R code can be run from Excel, and R can be used to output results to Excel sheets. Typically all these tools require some knowledge of using the command line or code. Data and results need to be converted to one format or […]

The post Excel and R appeared first on Sharp Statistics.

**Please comment on the article here:** **Sharp Statistics**


The post La lotteria dei rigori appeared first on All About Statistics.

Seems like my own country has kind of run out of luck... First we fail to qualify for the World Cup, then lose the right to host the relocated headquarters of the European Medicines Agency, post Brexit. If I were a cynical ex-pat, I'd probably think that the former will be felt as the worse defeat across Italy. Maybe it will.

As I've mentioned here, I'd been talking to Politico about how the whole process looked like the Eurovision. I think the actual thing did have some of those elements $-$ earlier today, on the eve of the vote, it appeared that Bratislava was the hot favourite. This reminded me of the days before the final of the Eurovision, when one of the acts is often touted as the sure thing, often over and above its musical quality. And I do believe that there's an element of "letting people know that we're up for hosting the next one" going on to pimp up the experts' opinions. Although sometimes, as it turns out, the favourites are not so keen in reality $-$ cue their poor performance come the actual thing...

In the event, Bratislava was eliminated in the first round. The contest went all the way to extra time, with Copenhagen dropping out at the semifinal and Amsterdam and Milan contesting the final head-to-head. As the two finalists got the same number of votes (with, I think, one abstention), the decision was made by luck $-$ basically on penalties or, as we say in Italian, la lotteria dei rigori.

I guess there must have been some thinking behind the set-up of the voting system: in case it came down to a tie at the final round, both remaining candidates would be "acceptable" (if not to everybody, at least to the main players), so they'd be happy for this to go 50:50. And so Amsterdam it is!

**Please comment on the article here:** **Gianluca Baio's blog**



The post “A Bias in the Evaluation of Bias Comparing Randomized Trials with Nonexperimental Studies” appeared first on Statistical Modeling, Causal Inference, and Social Science.

The post “A Bias in the Evaluation of Bias Comparing Randomized Trials with Nonexperimental Studies” appeared first on All About Statistics.

Jessica Franklin writes:

Given your interest in post-publication peer review, I thought you might be interested in our recent experience criticizing a paper published in BMJ last year by Hemkens et al.. I realized that the method used for the primary analysis was biased, so we published a criticism with mathematical proof of the bias (we tried to publish in BMJ, but it was a no go). Now there has been some back and forth between the Hemkens group and us on the BMJ rapid response page, and BMJ is considering a retraction, but no action yet. I don’t really want to comment too much on the specifics, as I don’t want to escalate the tension here, but this has all been pretty interesting, at least to me.

Interesting, in part because both sides in the dispute include well-known figures in epidemiology: John Ioannidis is a coauthor on the Hemkens et al. paper, and Kenneth Rothman is a coauthor on the Franklin et al. criticism.

**Background**

The story starts with the paper by Hemkens et al., who performed a meta-analysis on “16 eligible RCD studies [observational studies using ‘routinely collected data’], and 36 subsequent published randomized controlled trials investigating the same clinical questions (with 17 275 patients and 835 deaths),” and they found that the observational studies overestimated efficacy of treatments compared to the later randomized experiments.

Their message: be careful when interpreting observational studies.

One thing I wonder about, though, is how much of this is due to the time ordering of the studies. Forget for a moment about which studies are observational and which are experimental. In any case, I’d expect the first published study on a topic to show statistically significant results—otherwise it’s less likely to be published in the first place—whereas anything could happen in a follow-up. Thus, I’d expect to see earlier studies overestimate effect sizes relative to later studies, irrespective of which studies are observational and which are experimental. This is related to the time-reversal heuristic.
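A quick simulation, under assumptions entirely of my own choosing (a single true effect, normal estimates, and publication of the first study only when it is statistically significant), illustrates the point:

```r
# Illustration of the time-ordering point: with a significance filter on the
# first study, first estimates overstate later replications even though both
# are unbiased draws around the same true effect. All numbers are made up.
set.seed(42)
true_eff <- 0.1; se <- 0.1
first  <- rnorm(1e5, true_eff, se)   # estimate from the first study on each question
second <- rnorm(1e5, true_eff, se)   # estimate from an independent follow-up
pub <- first / se > 1.96             # first study published only if significant
mean(first[pub])                     # inflated, roughly 0.25 here
mean(second[pub])                    # close to the true 0.1
```

Nothing in this sketch distinguishes observational from experimental designs; the first-versus-second ordering alone produces the apparent overestimation.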

To put it another way: The Hemkens et al. project is itself an observational study, and in their study there is complete confounding between two predictors: (a) whether a result came from an observational study or an experiment, and (b) whether the result was published first or second. So I think it’s impossible to disentangle the predictive value of (a) and (b).

**The criticism and the controversy**

Here are the data from Hemkens et al.:

Franklin et al. expressed the following concern:

In a recent meta-analysis by Hemkens et al. (Hemkens et al. 2016), the authors compared published RCD [routinely collected data] studies and subsequent RCTs [randomized controlled trials] using the ROR, but inverted the clinical question and corresponding treatment effect estimates for all study questions where the RCD estimate was > 1, thereby ensuring that all RCD estimates indicated protective effects.

Here’s the relevant bit from Hemkens et al.:

For consistency, we inverted the RCD effect estimates where necessary so that each RCD study indicated an odds ratio less than 1 (that is, swapping the study groups so that the first study group has lower mortality risk than the second).

So, yeah, that’s what they did.

On one hand, I can see where Hemkens et al. were coming from. To the extent that the original studies purported to be definitive, it makes sense to code them in the same direction, so that you’re asking how the replications compared to what was expected.

On the other hand, Franklin et al. have a point, that in the absence of any differences, the procedure of flipping all initial estimates to have odds ratios less than 1 will bias the estimate of the difference.
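This bias is easy to reproduce in a toy simulation (my own construction, with made-up numbers and an unweighted pooled estimate rather than the papers' weighting): even when the two study types are exchangeable draws around a null effect, flipping each pair so that the first estimate shows an odds ratio below 1 pushes the pooled ratio of odds ratios above 1.

```r
# Toy reproduction of the inversion bias (not the papers' actual data).
set.seed(123)
n <- 1e5                            # many study pairs, to show the expectation
log_or_rcd <- rnorm(n, 0, 0.4)      # null: both estimates centred on log OR = 0
log_or_rct <- rnorm(n, 0, 0.4)
flip <- log_or_rcd > 0              # invert pairs where the RCD estimate is > 1
log_or_rcd[flip] <- -log_or_rcd[flip]
log_or_rct[flip] <- -log_or_rct[flip]
ror <- exp(mean(log_or_rct - log_or_rcd))  # unweighted pooled ratio of odds ratios
ror                                 # well above 1: spurious "RCD overestimates"
```

The flip forces the RCD estimates to be negative on the log scale while leaving the independent RCT estimates centred at zero, so the pooled ratio is biased upward by construction.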

Beyond this, the above graph shows a high level of noise in the comparisons, as some of the follow-up randomized trials have standard errors that are essentially infinite. (What do you say about an estimated odds ratio that can be anywhere from 0.2 to 5?) Hemkens et al. appear to be using some sort of weighting procedure, but the relevant point here is that only a few of these studies have enough data to tell us anything at all.

**My take on these papers**

The above figure tells the story: The 16 observational studies appear to show a strong correlation between standard error and estimated effect size. This makes sense. Go, for example, to the bottom of the graph: I don’t know anything about Hahn 2010, Fonoarow 2008, Moss 2003, Kim 2008, and Cabell 2005, but all these studies are estimated to cut mortality by 50% or more, which seems like a lot, especially considering the big standard errors. It’s no surprise that these big estimates fail to reappear under independent replication. Indeed, as noted above, I’d expect that big estimates from randomized experiments would also generally fail to reappear under independent replication.

Franklin et al. raise a valid criticism: Even if there is no effect at all, the method used by Hemkens et al. will create the appearance of an effect: in short, the Hemkens et al. estimate is indeed biased.

Put it all together, and I think that the sort of meta-analysis performed by Hemkens et al. is potentially valuable, but maybe it would’ve been enough for them to stop with the graph on the left in the above image. It’s not clear that anything is gained from their averaging; also there’s complete confounding in their data between timing (which of the two studies came first) and mode (observational or experimental).

**The discussion**

Here are some juicy bits from the online discussion at the BMJ site:

02 August 2017, José G Merino, US Research Editor, The BMJ:

Last August, a group led by Jessica Franklin submitted to us a criticism of the methods used by the authors of this paper, calling into question some of the assumptions and conclusions reached by Lars Hemkens and his team. We invited Franklin and colleagues to submit their comments as a rapid response rather than as a separate paper, but they declined and instead published the paper in Epidemiologic Methods (Epidemiol Methods 2017;20160018, DOI 10.1515/em-2016-0018). We would like to alert The BMJ’s readers to the paper, which can be found here: https://www.degruyter.com/view/j/em.ahead-of-print/em-2016-0018/em-2016-0018.xml?format=INT

We asked Hemkens and his colleagues to submit a response to the criticism. That report is undergoing statistical review at The BMJ. We will post the response shortly.

14 September 2017, Lars G Hemkens, senior researcher, Despina G Contopoulos-Ioannidis, John P A Ioannidis:

The arguments and analyses of Franklin et al. [1] are flawed and misleading. . . . It is trivial that the direction of comparisons is essential in meta-epidemiological research comparing analytic approaches. It is also essential that there must be a rule for consistent coining of the direction of comparisons. The fact that there are theoretically multiple ways to define such rules and apply the ratio-of-odds ratio method doesn’t invalidate the approach in any way. . . . We took in our study the perspective of clinicians facing new evidence, having no randomized trials, and having to decide whether they use a new promising treatment. In this situation, a treatment would be seen as promising when there are indications for beneficial effects in the RCD-study, which we defined as having better survival than the comparator (that is a OR < 1 for mortality in the RCD-study) . . . it is the only reasonable and useful selection rule in real life . . . The theoretical simulation of Franklin et al. to make all relative risk estimates <1 in RCTs makes no sense in real life and is without any relevance for patient care or health-care decision making. . . . Franklin et al. included in their analysis a clinical question where both subsequent trials were published simultaneously making it impossible to clearly determine which one is the first (Gnerlich 2007). Franklin et al. selected the data which better fit to their claim. . . .

21 September 2017, Susan Gruber, Biostatistician:

The rapid response of Hemkens, Contopoulos-Ioannidis, and Ioannidis overlooks the fact that a metric of comparison can be systematic, transparent, replicable, and also wrong. Franklin et. al. clearly explains and demonstrates that inverting the OR based on RCD study result (or on the RCT result) yields a misleading statistic. . . .

02 October 2017, Jessica M. Franklin, Assistant Professor of Medicine, Sara Dejene, Krista F. Huybrechts, Shirley V. Wang, Martin Kulldorff, and Kenneth J. Rothman:

In a recent paper [1], we provided mathematical proof that the inversion rule used in the analysis of Hemkens et al. [2] results in positive bias of the pooled relative odds ratio . . . In their response, Hemkens et al [3] do not address this core statistical problem with their analysis. . . .

We applaud the transparency with which Hemkens et al reported their analyses, which allowed us to replicate their findings independently as well as to illustrate the inherent bias in their statistical method. Our paper was originally submitted to BMJ, as recently revealed by a journal editor [4], and it was reviewed there by two prominent biostatisticians and an epidemiologist. All three reviewers recognized that we had described a fundamental flaw in the statistical approach invented and used by Hemkens et al. We believe that everyone makes mistakes, and acknowledging an honest mistake is a badge of honor. Thus, based on our paper and those three reviews, we expected Hemkens et al. and the journal editors simply to acknowledge the problem and to retract the paper. Their reaction to date is disappointing.

13 November 2017, José G Merino, US Research Editor, Elizabeth Loder, Head of Research, The BMJ:

We acknowledge receipt of this letter that includes a request for retraction of the paper. We take this request very seriously. Before we make a decision on this request, we -The BMJ’s editors and statisticians – are reviewing all the available information. We hope to reach a decision that will maintain the integrity of the scientific literature, acknowledge legitimate differences of opinion about the methods used in the analysis of data, and is fair to all the participants in the debate. We will post a rapid response once we make a decision on this issue.

The discussion also includes contributions from others on unrelated aspects of the problem; here I’m focusing about the Franklin et al. critique and the Hemkens et al. paper.

**Good on ya, BMJ**

I love how the BMJ is handling this. The discussion is completely open, and the journal editor is completely non-judgmental. All so much better than my recent experience with the Association for Psychological Science, where the journal editor brushed me off in a polite but content-free way, and then the chair of the journal’s publication board followed up with some gratuitous rudeness. The BMJ is doing it right, and the psychology society has a few things to learn from them.

Also, just to make my position on this clear: I don’t see why anyone would think the Hemkens et al. paper should be retracted; a link to the criticisms would seem to be enough.

**P.S.** Franklin adds:

Just last week I got an email from someone who thought that our conclusion in our Epi Methods paper, that use of the pooled ROR without inversion is “just as flawed,” was too strong. I think they are right, so we will now be preparing a correction to our paper to modify this statement. So the circle of post-publication peer review continues…

Yes, exactly!


**Please comment on the article here:** **Statistical Modeling, Causal Inference, and Social Science**



The post A pivotal episode in the unfolding of the replication crisis appeared first on Statistical Modeling, Causal Inference, and Social Science.

The post A pivotal episode in the unfolding of the replication crisis appeared first on All About Statistics.

Axel Cleeremans writes:

I appreciated your piece titled “What has happened down here is the winds have changed”. Your mini-history of what happened was truly enlightening — but you didn’t explicitly mention our failure to replicate Bargh’s slow walking effect. This was absolutely instrumental in triggering the replication crisis. As you know, the article was covered by the science journalist Ed Yong and came shortly after the Stapel affair. It was the first failure to replicate a classic priming effect that attracted so much attention. Yong’s blog post about it attracted a response from John Bargh and further replies from Yong, as you indirectly point to. But our article and the entire exchange between Yong and Bargh is also what triggered an extended email discussion involving many of the actors involved in this entire debate (including E. J. Wagenmakers, Hal Pashler, Fritz Strack and about 30 other people). That discussion was initiated by Daniel Kahneman after he and I discussed what to make of our failure to replicate Bargh’s findings. This email discussion continued for about two years and eventually resulted in further attempts to replicate, as they are unfolding now.

I was aware of the Bargh issue but I’d only read Wagenmakers (and Bargh’s own unfortunate writings) on the issue; I’d never followed up to read the original, so this is good to know. One thing I like about having these exchanges on a blog, rather than circulating emails, is that all the discussion is in one place and is open to all to read and participate.


**Please comment on the article here:** **Statistical Modeling, Causal Inference, and Social Science**


The post More on Path Forecasts appeared first on All About Statistics.

I blogged on path forecasts yesterday. A reader just forwarded this interesting paper, of which I was unaware. Lots of ideas and up-to-date references.

**Please comment on the article here:** **No Hesitations**

