The expectation–maximization (EM) algorithm is a well-known iterative algorithm for finding maximum likelihood estimates from incomplete data, and it is used in a wide range of statistical models with latent variables and missing data. The algorithm monotonically increases the likelihood function at each iteration, and its iterates automatically satisfy parameter constraints throughout convergence. The popularity of the EM algorithm can be attributed to its stable convergence, simple implementation, and flexibility in interpreting data incompleteness. Despite these computational advantages, the algorithm converges only linearly, and its convergence can be very slow when a statistical model has many parameters and a high proportion of missing data. Various algorithms have been proposed to accelerate the convergence of the EM algorithm. We introduce the acceleration of the EM algorithm using root-finding and vector extrapolation algorithms. The root-finding algorithms include Aitken's method and the Newton–Raphson, quasi-Newton, and conjugate gradient algorithms; their faster convergence rates allow the EM algorithm to be sped up. The vector extrapolation algorithms transform the sequence of EM estimates into a faster-converging sequence and can therefore accelerate convergence without modifying the EM algorithm itself. We describe the derivation of these acceleration algorithms and apply them to two examples.
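As a concrete illustration of accelerating a linearly convergent EM sequence (a minimal sketch, not one of the paper's own examples), the following applies Aitken's delta-squared method to the scalar EM iteration of the classic genetic-linkage data of Dempster, Laird and Rubin (1977); the function names and the starting value are our own choices.

```python
# Classic genetic-linkage EM example (Dempster, Laird & Rubin, 1977),
# used here only to illustrate Aitken's delta-squared acceleration of
# a linearly convergent scalar EM sequence.
y1, y2, y3, y4 = 125, 18, 20, 34  # observed multinomial counts

def em_step(theta):
    # E-step: expected count in the latent split of the first cell;
    # M-step: closed-form update of the linkage parameter.
    x = y1 * theta / (2 + theta)
    return (x + y4) / (x + y2 + y3 + y4)

def aitken(t0, t1, t2):
    # Aitken's delta-squared extrapolation of three successive iterates:
    # theta* = t0 - (t1 - t0)^2 / (t2 - 2*t1 + t0)
    d1, d2 = t1 - t0, t2 - 2 * t1 + t0
    return t0 - d1 * d1 / d2

theta = 0.5
for _ in range(3):
    t0 = theta
    t1 = em_step(t0)
    t2 = em_step(t1)
    theta = aitken(t0, t1, t2)  # jump ahead of the slow EM sequence
print(round(theta, 4))  # close to the MLE, approximately 0.6268
```

Each Aitken step uses two plain EM updates and then extrapolates, so the transformation sits on top of the unmodified EM iteration, which is the same idea the vector extrapolation algorithms apply to whole parameter vectors.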