I show that one can perform Bayesian and MLE estimation of dynamic models without a likelihood function, particle filter, or Kalman filter. Compared to conventional methods, this allows estimation of models solved with techniques such as projection and value function iteration, as well as models with large latent spaces such as HANK models. The simulation-based technique scales better than conventional Metropolis-Hastings, producing more accurate posteriors with less computational effort, especially on the largest problems, such as the Smets-Wouters (2007) model and a HANK model.
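The paper's estimator is not reproduced here, but the core idea of likelihood-free Bayesian estimation can be illustrated with approximate Bayesian computation (ABC) rejection sampling on a toy AR(1) model. Everything below (the model, the summary statistic, the tolerance) is an illustrative stand-in, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Stand-in structural model: an AR(1) process with persistence theta.
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + eps[t]
    return x

def summary(x):
    # Matching statistic: first-order autocorrelation.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# "Observed" data generated at theta = 0.7.
s_obs = summary(simulate(0.7))

# ABC rejection: keep prior draws whose simulated summary lands near the
# observed one. No likelihood is ever evaluated -- only forward simulation.
prior_draws = rng.uniform(0.0, 0.99, size=5000)
kept = [th for th in prior_draws if abs(summary(simulate(th)) - s_obs) < 0.05]
posterior_mean = float(np.mean(kept))
```

The accepted draws approximate the posterior of the persistence parameter; only the ability to simulate from the model is required, which is what makes such schemes applicable to projection and value-function-iteration solutions.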
This paper uses a transformer neural network to impute missing data. The method returns a distribution over the missing values rather than point estimates, which permits easy marginalization and, when combined with a model for inference, statistically efficient analysis.
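The marginalization step can be sketched without the transformer itself: given any imputation model that returns a conditional distribution over each missing value, downstream estimates are averaged over repeated imputation draws (multiple imputation) rather than computed from a single fill-in. The Gaussian conditional below is a hypothetical stand-in for the learned imputer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends linearly on x; some x values are missing.
x = rng.normal(0.0, 1.0, size=500)
y = 2.0 * x + rng.normal(0.0, 0.5, size=500)
missing = rng.random(500) < 0.3
x_obs = np.where(missing, np.nan, x)

# Stand-in for a learned imputation model: a Gaussian conditional of x
# given y, fit on the observed rows. A transformer imputer would supply
# these conditional distributions instead.
b = np.cov(x[~missing], y[~missing])[0, 1] / np.var(y[~missing])
mu = b * y[missing]
sigma = np.std(x[~missing] - b * y[~missing])

# Marginalize over the missing values by averaging the downstream
# estimate (here a regression slope) across M imputation draws.
M = 50
slopes = []
for _ in range(M):
    x_fill = x_obs.copy()
    x_fill[missing] = mu + sigma * rng.standard_normal(missing.sum())
    slopes.append(np.polyfit(x_fill, y, 1)[0])
slope_hat = float(np.mean(slopes))
```

Because each draw comes from a full conditional distribution, the averaged estimate avoids the attenuation that a single mean fill-in would introduce.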
This paper demonstrates how variational inference, together with a flexible variational family, enables fast estimation of DSGE models, reducing computation time by roughly a factor of 100 compared to Markov chain Monte Carlo.
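The mechanics of variational inference can be illustrated on a conjugate toy problem where the exact posterior is known. The sketch below fits a Gaussian variational family by stochastic gradient ascent on the ELBO with the reparameterization trick; the model, prior, and learning rate are illustrative assumptions, not the paper's DSGE setup or its variational family:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.5, 1.0, size=100)  # data: unknown mean, known unit variance

# Exact conjugate posterior for the mean under a N(0, 10^2) prior,
# used here only as a reference point.
post_var = 1.0 / (len(y) + 1.0 / 100.0)
post_mean = post_var * y.sum()

# Variational family N(m, s^2); maximize the ELBO by stochastic gradient
# ascent using the reparameterization theta = m + s * eps.
m, log_s = float(y.mean()), -1.0
lr = 0.005
for _ in range(3000):
    eps = rng.standard_normal()
    s = np.exp(log_s)
    theta = m + s * eps
    # d/dtheta of log p(y, theta) = sum(y - theta) - theta / 100
    dlogp = (y - theta).sum() - theta / 100.0
    m += lr * dlogp
    log_s += lr * (dlogp * eps * s + 1.0)  # the +1 is the entropy gradient
```

Each iteration needs only one simulated draw and one gradient of the log joint, which is the source of the speedup over MCMC: there is no accept/reject chain to mix.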