It was at breakfast during a recent conference that Prof. X leaned toward me and quietly said, “We tried using weighted ensemble but it didn’t work.” I got the sense he was trying not to broadcast this to other conference attendees, as a courtesy.

#### Category: Statistical Uncertainty

Here’s a true story from a number of years ago. A postdoc in the group comes to me in frustration. He has built a cool “semi-atomistic” coarse-grained protein model that has generated disappointing results. An alpha helix that’s clearly resolved in the X-ray structure of his protein completely unravels. Disappointment. But playing the optimistic supervisor, I ask, “Are we sure you’re wrong? Could that helix be marginally stable?” Further digging revealed an isoform of the protein where the helix in question was not resolvable via X-ray. Relief! I was pretty pleased with myself, I must say.

But now I’m disappointed that I was pleased.

Here's some quick guidance for analyzing molecular dynamics (MD) or Markov-chain Monte Carlo (MC) data in hard-to-sample systems – e.g., biomolecules. I can summarize the advice this way: __Ask not how to compute error bars. Ask first whether error bars are even appropriate.__ A meaningless error bar is more dangerous (to you and the community) than no error bar at all. This guidance is essentially abstracted from our recent Best Practices paper, and I hope it will set in context some of the theory discussed in an earlier post.

I realized that I owe you something. In a prior post, I invoked some Bayesian ideas to contrast with bootstrapping analysis of high-variance data. (More precisely, it was high *log-*variance data for which there was a problem, as described in our preprint.) But the Bayesian discussion in my earlier post was pretty quick. Although there are a number of good, brief introductions to Bayesian statistics, many get quite technical.
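For readers who haven't seen it in code, here's a minimal sketch of ordinary (percentile) bootstrapping of the mean. The log-normal data here are a synthetic stand-in for high-variance observations – purely illustrative, not data from the preprint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for high-variance data: log-normal samples whose
# log has large variance (hypothetical, for illustration only).
data = rng.lognormal(mean=0.0, sigma=2.0, size=50)

def bootstrap_ci(samples, n_resamples=10_000, alpha=0.05, rng=rng):
    """Percentile-bootstrap confidence interval for the mean."""
    n = len(samples)
    means = np.empty(n_resamples)
    for i in range(n_resamples):
        # Resample the data with replacement and record the mean.
        resample = rng.choice(samples, size=n, replace=True)
        means[i] = resample.mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ci(data)
print(f"sample mean = {data.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The pitfall for heavy-tailed data is exactly what this sketch can't show by itself: with small samples, the bootstrap distribution is built only from values you happened to observe, so a rare-but-important tail event that never appeared in your run is invisible to the error bar.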

Here, I’d like to introduce Bayesian thinking in absolutely the simplest way possible. We want to understand the point of it, and get a better grip on those mysterious priors.
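To make the role of a prior concrete before we get to the discussion proper, here is a toy example of my own (a coin flip with a conjugate Beta prior, not an example from the post): the same data yield different conclusions under different priors, and the prior matters most when data are scarce.

```python
# Beta(a, b) prior for a coin's heads probability p; observing
# h heads and t tails gives a Beta(a + h, b + t) posterior.
def posterior_mean(a, b, h, t):
    """Posterior mean of p under a Beta(a, b) prior."""
    return (a + h) / (a + b + h + t)

data = (7, 3)  # 7 heads, 3 tails

# Uniform prior (a = b = 1): estimate pulled toward the data.
print(posterior_mean(1, 1, *data))    # -> 8/12 ~ 0.667

# Strong prior belief in a fair coin (a = b = 50): estimate stays near 0.5.
print(posterior_mean(50, 50, *data))  # -> 57/110 ~ 0.518
```

With 10 flips the strong prior dominates; with 10,000 flips the two answers would nearly coincide. That, in one line of algebra, is the "mysterious prior" at work.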

I want to talk again today about the essential topic of analyzing statistical uncertainty – i.e., making error bars – but I want to frame the discussion in terms of a larger theme: our community’s often insufficiently critical adoption of elegant and sophisticated ideas. I discussed this issue a bit previously in the context of PMF calculations. To save you the trouble of reading on, the technical problem to be addressed is statistical uncertainty for high-variance data with small(ish) sample sizes.

Let’s draw a line. Across the calendar, I mean. Let’s all pledge that from today on we’re going to give honest accounting of the uncertainty in our data. I mean ‘honest’ in the sense that if someone tried to reproduce our data in the future, their confidence interval and ours would overlap.
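That reproducibility criterion is easy to state operationally. As a sketch (with normal-approximation intervals and synthetic data standing in for two independent simulations – my own illustration, not the post's):

```python
import numpy as np

def mean_ci(samples, z=1.96):
    """Approximate 95% confidence interval for the mean (normal approx.,
    valid only for effectively independent samples)."""
    samples = np.asarray(samples, dtype=float)
    sem = samples.std(ddof=1) / np.sqrt(len(samples))
    m = samples.mean()
    return m - z * sem, m + z * sem

def intervals_overlap(ci_a, ci_b):
    """True if the two intervals (lo, hi) share any common value."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

rng = np.random.default_rng(1)
ours = rng.normal(0.0, 1.0, size=200)     # hypothetical: our run
theirs = rng.normal(0.05, 1.0, size=200)  # hypothetical: a reproduction
print(intervals_overlap(mean_ci(ours), mean_ci(theirs)))
```

The honesty problem hides in the caveat: for correlated MD data, `len(samples)` must be the *effective* number of independent samples, or the intervals will be far too narrow and the overlap test will fail even between two perfectly good runs.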

There are a few conceptual issues to address up front. Let's set up our discussion in terms of some variable x which we measure in a molecular dynamics (MD) simulation at successive configurations: x_1, x_2, x_3, and so on. Regardless of the length of our simulation, we can measure the average of all N values, x̄. We can also calculate the standard deviation σ of these values in the usual way, as the square root of the variance. Both of these quantities will approach their "true" values (based on the simulation protocol) with enough sampling – with large enough N.
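The convergence of x̄ and σ with growing N can be seen in a quick numerical sketch. An AR(1) process stands in for a correlated MD observable here (true mean 0 by construction; an assumed toy model, not actual simulation output):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for an MD time series x_1, ..., x_N:
# an AR(1) process, which is correlated like real simulation data.
N = 100_000
phi = 0.9  # autocorrelation parameter
noise = rng.normal(size=N)
x = np.empty(N)
x[0] = 0.0
for i in range(1, N):
    x[i] = phi * x[i - 1] + noise[i]

# Running estimates of the mean and standard deviation approach
# their "true" values as N grows.
for n in (100, 1_000, 100_000):
    print(f"N={n:>6}: mean = {x[:n].mean():+.3f}, sigma = {x[:n].std(ddof=1):.3f}")
```

Note what converges and what doesn't: σ itself settles down to a fixed value as N grows, which is precisely why σ alone is not an error bar – the uncertainty in x̄ shrinks with the number of *independent* samples, a distinction the correlated series above makes painfully relevant.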

© Daniel M. Zuckerman, 2015 - 2020
