Curtis McDonald Personal Webpage
About
I am currently a postdoctoral researcher in the Machine Learning pod at the Simons Institute at UC Berkeley, where I am hosted by Peter Bartlett. My research focuses on sampling algorithms, optimization of neural networks, and Markov chain Monte Carlo (MCMC). I received my undergraduate and master’s degrees in Applied Mathematics and Engineering from Queen’s University, and my PhD in Statistics and Data Science from Yale University in 2025, advised by Andrew Barron.
The main theme of my research has been the convergence behaviour of stochastic processes. My early research focused on filter stability for hidden Markov models (HMMs) and applications to robust stochastic control. Namely, given bad prior information, can an agent still learn an accurate posterior on the hidden state of a system and use it to make good control decisions?
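The filter-stability question above can be illustrated with a small simulation: run the same Bayes filter on one observation sequence from two different priors and watch the posteriors merge. The two-state HMM below (its transition and emission matrices are illustrative choices, not taken from any particular paper) is a minimal sketch of the idea.

```python
import numpy as np

# Toy 2-state HMM; the matrices are illustrative, not from any paper.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])          # transition probabilities
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # B[x, y] = P(observe y | state x)

rng = np.random.default_rng(0)

# Simulate a hidden trajectory and its observations.
T = 200
x, obs = 0, []
for _ in range(T):
    x = rng.choice(2, p=P[x])
    obs.append(rng.choice(2, p=B[x]))

def filter_step(pi, y):
    """One Bayes filter update: predict forward, then condition on y."""
    pred = pi @ P
    post = pred * B[:, y]
    return post / post.sum()

# Run the identical filter from a reasonable prior and a badly wrong one.
pi_good = np.array([0.5, 0.5])
pi_bad = np.array([0.999, 0.001])
for y in obs:
    pi_good = filter_step(pi_good, y)
    pi_bad = filter_step(pi_bad, y)

# A stable filter "forgets" the bad prior: the posteriors merge.
gap = np.abs(pi_good - pi_bad).sum()
print(f"total variation gap after {T} steps: {gap:.2e}")
```

With an ergodic transition matrix and informative observations like these, the gap between the two posteriors shrinks geometrically, which is the sense in which the filter is stable to prior misspecification.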
More recently, I have been interested in mixing-time guarantees for sampling algorithms targeting multimodal and non-log-concave densities. What methods are most effective at producing samples from such difficult densities? Traditional MCMC methods based on time-invariant local likelihood updates can have difficulty exploring the full state space and can get trapped in local modes of the log-likelihood for long periods of time. Time-varying transition rules, annealing, auxiliary random variables, and score-based methods all present interesting alternatives for sampling from such densities beyond traditional MCMC.
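The trapping phenomenon is easy to reproduce. The sketch below runs random-walk Metropolis on a well-separated one-dimensional Gaussian mixture (modes at ±6, an illustrative target, not one from my work): started in one mode with a small local proposal, the chain essentially never visits the other mode, even though the stationary distribution puts half its mass there.

```python
import numpy as np

# Illustrative target: equal mixture of N(-6, 1) and N(+6, 1),
# known only up to a normalizing constant (all MH needs).
def log_density(x):
    return np.logaddexp(-0.5 * (x - 6.0) ** 2, -0.5 * (x + 6.0) ** 2)

rng = np.random.default_rng(1)
x = 6.0                      # start inside the right-hand mode
step = 0.5                   # small, purely local proposal scale
samples = []
for _ in range(20_000):
    prop = x + step * rng.standard_normal()
    # Metropolis accept/reject using the log density ratio.
    if np.log(rng.random()) < log_density(prop) - log_density(x):
        x = prop
    samples.append(x)

samples = np.asarray(samples)
# At stationarity this would be ~0.5; the trapped chain gives ~0.
frac_left = np.mean(samples < 0)
print(f"fraction of samples in the left mode: {frac_left:.3f}")
```

Crossing the low-density region between the modes requires accepting a long run of moves into exponentially unlikely territory, so the expected escape time grows rapidly with the mode separation; this is exactly the failure mode that annealing, auxiliary variables, and time-varying transition rules are designed to address.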
For more information, please see my Publications page, CV, or LinkedIn profile.