Today we have a question about what they might have been doing with the R1 computer. It involves something (safe to say) I’d never expected to be writing about: Markov chain Monte Carlo sampling.
Many thanks to reader Bill Harris ’71 for raising this issue of the possible role of Zevi Salsburg (one of my heroes) and the R1 computer in the history of statistics. (Harris earlier sent in some great photographs of the R1, seen here.) He first asked it in an email to me. This is the relevant part, with the key question at the end:
When I was one of Dr. Salsburg’s student computer operators in the late 1960s, I recall being told that his programs did some sort of stochastic modeling of molecules, that they ran for 1,000 to 2,000 hours, and that we had to run each program twice to be sure we had the same (or was it consistent?) answers. Reading that abstract and Betancourt’s article made me wonder if Dr. Salsburg was doing early MCMC work on the R1, and whether the two runs weren’t done because the hardware might be flaky, as I thought people had said, but because you typically run a program multiple times (each run called a “chain”) to be sure the Monte Carlo algorithm had “converged.”
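For readers curious about what Harris means by running multiple “chains,” here is a minimal sketch of the modern practice he describes. This is purely illustrative, not Salsburg’s actual code: it runs two Metropolis chains (here targeting a simple standard normal distribution, an assumption made for the example) from different starting seeds and compares them with the Gelman-Rubin R-hat diagnostic, where values near 1.0 suggest the chains have converged to the same answer.

```python
import math
import random
import statistics

def metropolis_chain(n_steps, seed, step_size=1.0):
    """One Metropolis chain targeting a standard normal density (illustrative)."""
    rng = random.Random(seed)
    x = rng.uniform(-2.0, 2.0)  # dispersed starting point
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Log acceptance ratio for the target exp(-x^2 / 2)
        log_accept = 0.5 * (x * x - proposal * proposal)
        if log_accept >= 0 or rng.random() < math.exp(log_accept):
            x = proposal
        samples.append(x)
    return samples

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) across chains."""
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain
    B = n * statistics.variance(means)                            # between-chain
    var_hat = (n - 1) / n * W + B / n
    return (var_hat / W) ** 0.5

# Two independent runs, as Harris describes, with the early samples discarded
chains = [metropolis_chain(20_000, seed)[5_000:] for seed in (1, 2)]
rhat = gelman_rubin(chains)
print(f"R-hat = {rhat:.3f}")  # values near 1.0 suggest the chains agree
```

If the chains disagreed (an R-hat well above 1), that would be a sign the sampler had not yet settled down, which is exactly the kind of check that might motivate running a program twice.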
Last weekend the discussion was amplified, and new detail was added, here at Andrew Gelman’s blog, Statistical Modeling, Causal Inference, and Social Science. If this makes sense to you, or if you know someone who might be able to shed some light on the matter, please do let me know. I realize that this is a wee bit arcane, but it’s also a chance to learn something important about computers at Rice.