Curtis McDonald Personal Webpage

About

I am entering my fifth year as a PhD student in the Department of Statistics and Data Science at Yale University. I work with my advisor, Andrew Barron, on topics related to sampling problems, greedy optimization of neural networks, and Markov chain Monte Carlo (MCMC).

The main theme of my research, from my Master's education into my PhD, has been the convergence behaviour of stochastic processes. My initial work focused on filter stability for hidden Markov models (HMMs) and applications to robust stochastic control: namely, given a bad prior, can an agent still learn an accurate posterior on the hidden state of the system and then use it to make good control decisions?

More recently, I am interested in mixing-time guarantees for sampling methods applied to multimodal and non-log-concave target densities. What methods are most effective at producing samples from such difficult densities? Traditional MCMC methods can have difficulty exploring the full state space and can get trapped in local modes of the log likelihood. Time-varying transition rules, annealing, optimal transport, and score-based methods all present interesting alternatives to traditional MCMC for sampling from such densities.
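The mode-trapping behaviour described above is easy to see in a minimal sketch. The code below runs a standard random-walk Metropolis sampler on a toy bimodal target (an equal-weight mixture of two well-separated Gaussians); the specific mixture, step size, and chain length are illustrative choices, not taken from my research.

```python
import math
import random

def metropolis(log_density, x0, step, n_steps, seed=0):
    """Random-walk Metropolis: a baseline MCMC sampler."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, pi(proposal)/pi(x)),
        # computed on the log scale for numerical stability.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Toy bimodal target: equal-weight mixture of N(-4, 1) and N(+4, 1).
def log_density(x):
    return math.log(0.5 * math.exp(-0.5 * (x + 4) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 4) ** 2))

# Started in the left mode with a local proposal, the chain must pass
# through the low-density region near 0 to reach the right mode, so
# crossings are rare and the right mode tends to be under-sampled.
samples = metropolis(log_density, x0=-4.0, step=1.0, n_steps=5000)
frac_right = sum(s > 0 for s in samples) / len(samples)
```

A correct sampler would spend roughly half its time in each mode; the fraction `frac_right` typically falls far from 1/2 over a run of this length, which is exactly the slow-mixing phenomenon that annealing and related methods aim to overcome.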

I received my undergraduate degree in Applied Mathematics and Engineering from Queen’s University in 2017. I also received my Master’s in Applied Science, specializing in Mathematics and Engineering, from Queen’s University in 2019. In 2019 I moved to New Haven, CT where I am currently pursuing my PhD at Yale University.

For more information, please see my Publications page, CV, or LinkedIn profile.

Upcoming News

  • I will graduate in May 2024 and will be on the academic job market for positions beginning September 2024. I am interested in postdoc positions involving research in statistical learning theory, sampling methods, diffusion models, and Bayesian statistics.