HUIJZER.XYZ

Bayesian Latent Profile Analysis (mixture modeling)

2021-12-01

Updated on 2021-12-15: Include ordered constraint.

This post discusses some latent analysis techniques and runs a Bayesian analysis on example data with a continuous outcome, which is known as latent profile analysis (LPA). My aim is to clearly visualize the analysis so that it can easily be adjusted to different contexts.

In essence, latent analyses are about finding hidden groups in data (Oberski, 2016). Specifically, these models are called mixture models because the underlying distributions are mixed together.

For example, suppose we had data about dog weights and did not register the breed. If we assume that the sample consists only of Australian Shepherds and American Terriers, that American Terriers are larger on average, and that both groups are normally distributed, then we can take the observed data and estimate the latent distributions. To illustrate, we can generate some data:

begin
    using Bijectors: Bijectors, OrderedBijector, ordered
    using CairoMakie
    using DataFrames
    using StableRNGs: StableRNG
    using Statistics: mean, median, std
    using Turing
end
d1 = Normal(22, 2.4); # Australian Shepherd.
d2 = Normal(28, 2.8); # American Terrier.
combined = let
    rng = StableRNG(1)
    [i % 2 == 1 ? rand(rng, d1) : rand(rng, d2) for i in 1:200]
end;

and visualize it:
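A minimal sketch of such a plot with CairoMakie (the bin count and axis labels here are guesses, not the exact figure from the notebook):

let
    fig = Figure()
    ax = Axis(fig[1, 1]; xlabel="weight", ylabel="count")
    # Histogram of the combined sample of dog weights.
    hist!(ax, combined; bins=30)
    fig
end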

Next to LPA, this problem is also known as Gaussian (finite) mixture modeling. When the observed variables are discrete, the appropriate model is known as latent class analysis (LCA) or a binomial (finite) mixture model. In the machine learning literature, LCA is also known as latent Dirichlet allocation (LDA).

As a side note, latent Dirichlet allocation is one of the older methods used for natural language processing tasks (Blei et al., 2003). By older, I mean that it was in use before the advent of deep learning around 2013. A typical natural language processing task is to classify documents. In this setting, LDA can be used to interpret text by finding words with a similar "topic". For example, a topic can be "education", with words such as "school", "students", "schools", and "teachers" (Blei et al., 2003). This example also shows one of the main differences between LDA usage in machine learning and LCA usage in the social sciences. Both are essentially the same model, but where social science often sticks to a few latent classes to interpret manually, machine learning happily runs the model with 100 latent classes (Blei et al., 2003, p. 1007).

First attempt

Enough background; let's fit a latent profile model via Turing. Based on the Turing tutorial and a great Discourse post, we can fit a model which runs in less than a minute:

@model function exchangeable_mixture(k::Int, Y)
    w ~ Dirichlet(k, 1)
    
    μ ~ filldist(Normal(25, 4), k)
    
    n = length(Y)
    Y ~ filldist(MixtureModel(Normal, μ, w), n)
end;
exchangeable_model = exchangeable_mixture(2, combined);
n_samples = 1_000;

Just to be sure, let's sample from the prior to see how things look:

exchangeable_prior = sample(exchangeable_model, Prior(), n_samples);

This looks as expected. When looking at the observed distribution in the first figure in this blog, it is not unreasonable to set the means for the μ priors around the middle of the data. With that, the sampler has a nice place to start and should be able to estimate the parameters reasonably quickly.
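As a quick sanity check, the center of the observed data should indeed lie near the prior mean of 25:

# Mean and median of the combined sample; both should be close to 25.
(mean(combined), median(combined))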

The w specifies the weights of the latent classes. In the combined data, I've drawn half of the samples from d1 and half from d2, so the weights should be around 0.5 and 0.5. The Dirichlet prior places these weights on the so-called unit simplex. To me it just looks like a uniform prior, but the simplex term emphasizes that the weights are non-negative and sum to one. This prior is reasonable because it tells the sampler little about the weights other than that they lie between 0 and 1.
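To see the simplex constraint in action, here is a minimal check using the Dirichlet distribution that Turing re-exports from Distributions: every draw is a pair of non-negative weights that sums to one.

let
    rng = StableRNG(2)
    # Draw weight vectors from the same prior as used in the model above.
    draws = [rand(rng, Dirichlet(2, 1)) for _ in 1:1_000]
    # Each draw lies on the unit simplex: non-negative entries that sum to one.
    all(w -> sum(w) ≈ 1 && all(≥(0), w), draws)
end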

So, the priors look good. It's time to obtain the posterior. To do this, we use HMC and not NUTS. Normally, NUTS is the best sampler to use, but for latent models NUTS often has problems. Online, people seem to suggest that this is due to multimodality, that is, because there are two equally good solutions to this problem:

  1. μ1 estimates the mean of the Shepherds and μ2 estimates the mean of the Terriers, OR

  2. μ1 estimates the mean of the Terriers and μ2 estimates the mean of the Shepherds.

This is called the identifiability problem (Casella & Berger, 2002), the label switching problem (Oberski, 2016) or labeling degeneracies (Betancourt, 2017).
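To see why both labelings fit equally well, note that swapping the components together with their weights leaves the likelihood unchanged. A minimal sketch of this, with arbitrary example values:

let
    # The same mixture written down with the component labels swapped.
    m1 = MixtureModel(Normal, [22.0, 28.0], [0.4, 0.6])
    m2 = MixtureModel(Normal, [28.0, 22.0], [0.6, 0.4])
    # Both parameterizations assign the same density to any observation.
    logpdf(m1, 25.0) ≈ logpdf(m2, 25.0)
end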

So, let's sample 3 chains in parallel with HMC:

exchangable_posterior = let
    rng = StableRNG(1)
    sampler = HMC(0.2, 10)
    sample(rng, exchangeable_model, sampler, MCMCThreads(), n_samples, 3)
end;

Hmm. That didn't work. When one or more chains don't move at all for a mixture model (a horizontal line in the left plot), try reducing the leapfrog step size (the first argument to HMC).

exchangable_posterior_smaller_stepsize = let
    rng = StableRNG(1)
    sampler = HMC(0.01, 10)
    sample(rng, exchangeable_model, sampler, MCMCThreads(), n_samples, 4)
end;

That looks much better. However, now we're dealing with the label switching problem. Normally, to get a parameter estimate, we could just take the mean over all the chains. Here we can't do that; instead, we would have to take the mean over only one chain. That would work, but it isn't ideal either.
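For example, per-chain means can be computed via the Tables interface of the chains object (a sketch; the column names follow the μ[1] and μ[2] notation used in the summaries below):

let
    # Convert the chains to a DataFrame and summarize each chain separately.
    df = DataFrame(exchangable_posterior_smaller_stepsize)
    # With label switching, one chain reports μ[1] near 22 and another near 28.
    combine(groupby(df, :chain), "μ[1]" => mean, "μ[2]" => mean)
end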

Fixing the label switching

Betancourt (2017) suggests using an ordered prior for μ. Via the OrderedBijector in Bijectors.jl, this should be possible in Turing.jl too. Unfortunately, I wasn't able to figure it out at first. (It appears that the Stan model transforms things to the log scale, which works well together with the ordered prior. I'm too lazy to convert things to the log scale and back again, so I'm not doing that.)

As a workaround, I came up with the idea of enforcing the ordering in another way, namely by cutting off the range of possible values via two non-overlapping uniform distributions. This can be thought of as drawing a line through the middle of the two means, which locks both parameters in their own region.

@model function mm(k::Int, Y)
    w ~ Dirichlet(k, 1)
    
    μ1 = Uniform(10, 25)
    μ2 = Uniform(25, 40)
    μ ~ arraydist([μ1, μ2])
    
    n = length(Y)
    Y ~ filldist(MixtureModel(Normal, μ, w), n)
end;
mixture_model = mm(2, combined);
mixture_model_prior = sample(mixture_model, Prior(), n_samples);

After a bit of fiddling and increasing the number of leapfrog steps (the second argument to HMC), the chains show nice convergence and mixing:

mixture_posterior = let
    rng = StableRNG(1)
    sampler = HMC(0.01, 20)
    sample(rng, mixture_model, sampler, MCMCThreads(), n_samples, 3)
end;
parameters  mean      std        mcse       ess_bulk  ess_tail  rhat     ess_per_sec
w[1]        0.549693  0.0843669  0.0116713  77.4745   34.9351   1.07559  1.79705
w[2]        0.450307  0.0843669  0.0116713  77.4745   34.9351   1.07559  1.79705
μ[1]        22.1324   1.19495    0.347121   24.4974   18.9747   1.10219  0.568226
μ[2]        28.6765   1.36629    0.138542   148.005   30.5804   1.08689  3.43303

And we now have almost correct estimates for all parameter locations; see the mean column.

To make this even better, it is possible to tune the HMC parameters based on how well the chains converge, or to set a slightly more informative prior. In this case, I use two non-overlapping truncated normals:

@model function normals_mm(k::Int, Y)
    w ~ Dirichlet(k, 1)
    
    μ1 = truncated(Normal(24, 4), -Inf, 25)
    μ2 = truncated(Normal(26, 4), 25, Inf)
    μ ~ arraydist([μ1, μ2])

    n = length(Y)
    Y ~ filldist(MixtureModel(Normal, μ, w), n)
end;
normal_mixture_model = normals_mm(2, combined);
normal_mixture_posterior = let
    rng = StableRNG(1)
    sampler = HMC(0.02, 40)
    sample(rng, normal_mixture_model, sampler, MCMCThreads(), n_samples, 3)
end;
parameters  mean      std       mcse       ess_bulk  ess_tail  rhat     ess_per_sec
w[1]        0.508567  0.169839  0.0664015  7.33405   6.09653   2.52923  0.265698
w[2]        0.491433  0.169839  0.0664015  7.33405   6.22694   2.52923  0.265698
μ[1]        18.4868   2.32791   0.938956   6.65775   6.0       3.19946  0.241197
μ[2]        34.0229   3.97056   1.61304    6.64      NaN       3.20655  0.240553

Judging from the rhat and effective sample size values, this run did not converge properly. The model is now quite sensitive to the choice of prior and the choice of sampler parameters.

Variational inference

Some say that variational inference (VI) can deal much better with mixture models than Markov chain Monte Carlo. (I forgot the reference, but I read it in some paper while trying to debug these models.) Let's put that claim to the test.

VI doesn't have as nice an interface as the Monte Carlo-based samplers, but we can run multithreaded sampling with only a few lines of code. The outcomes are put in a DataFrame here to allow for easier plotting:

function sample_vi(model; samples_per_step=10, max_iters=1_000)
    n_chains = 3
    dfs = Vector{DataFrame}(undef, n_chains)
    colnames = names(mixture_model_prior, :parameters)
    Threads.@threads for i in 1:n_chains
        q = vi(model, ADVI(samples_per_step, max_iters))
        M = rand(q, n_samples)::Matrix{Float64}
        df = DataFrame(transpose(M), colnames)
        df[!, :chain] = fill(i, nrow(df))
        df[!, :iteration] = 1:nrow(df)
        dfs[i] = df
    end
    vcat(dfs...)
end;
vi_posterior = sample_vi(mixture_model);
parameters  mean      std
w[1]        0.532408  0.0358029
w[2]        0.467592  0.0358029
μ[1]        21.6605   0.107891
μ[2]        28.6232   0.118411

This outcome is very close to the parameter values that were used to generate the data.

One caveat though: don't run only one VI chain on the exchangeable model above! It will happily give a completely incorrect outcome without showing any sampling problems. To avoid that, run multiple VI chains as shown here.
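The same kind of per-chain check as before works here too (a sketch): if the VI runs land on very different values for μ, the result should not be trusted.

# Per-chain means of the VI draws; the three chains should roughly agree.
combine(groupby(vi_posterior, :chain), "μ[1]" => mean, "μ[2]" => mean)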

Ordered constraint

The drawback of the earlier solutions to the label switching problem is that an estimate of the locations of the distributions must be available beforehand. This isn't always the case, especially when the problem involves more latent distributions than presented here. A better solution is to enforce an ordering on the means μ, that is, to enforce that μ1 ≤ μ2 ≤ ... ≤ μn for n means. With such an ordering, it is impossible for the labels to switch.

After lots and lots of fiddling, I did manage to use an ordered prior in Turing.jl, thanks to help from Tor Fjelde in a GitHub issue. The trick is to use Bijectors.OrderedBijector, put the desired prior locations through the inverse of the bijector, and pass these outcomes to ordered(arraydist(...)).

Also, for technical reasons, the numbers put through the inverse cannot be identical, which is why the second number is slightly larger. I've fiddled a bit with the size of the gap between the numbers: a smaller difference gives a better-looking prior plot, but a worse HMC posterior. Very strange.

inv_ordered(X::Vector) = Bijectors.inverse(Bijectors.OrderedBijector())(X);
M = inv_ordered([25, 25.01])
2-element Vector{Float64}:
 25.0
 -4.6051701859879355
@model function ordered_mixture(k::Int, Y)
    w ~ Dirichlet(k, 1)

    μ ~ ordered(arraydist([Normal(m, 4) for m in M]))

    n = length(Y)
    Y ~ filldist(MixtureModel(Normal, μ, w), n)
end;
ordered_model = ordered_mixture(2, combined);
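Before sampling the posterior, it is worth verifying that the ordering really holds under the prior. A minimal check is that every prior draw satisfies μ[1] ≤ μ[2]:

let
    rng = StableRNG(1)
    # Sample from the prior of the ordered model.
    chn = sample(rng, ordered_model, Prior(), n_samples)
    df = DataFrame(chn)
    # The ordered prior should never produce a draw with μ[1] > μ[2].
    all(df[!, "μ[1]"] .<= df[!, "μ[2]"])
end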

In the end, the HMC sampler remains the most robust. I've tried NUTS(1_000, 20) too, and it did work, albeit taking minutes to finish and giving erratic w estimates. I've also tried VI, but that just didn't work with the ordering constraint, and no amount of sampler parameter tuning seemed to solve the problem.

So, there we go, let's see the best model for the data:

ordered_hmc_posterior = let
    rng = StableRNG(1)
    sampler = HMC(0.001, 100)
    sample(rng, ordered_model, sampler, MCMCThreads(), 2 * n_samples, 3)
end;

One last thing. Convergence takes a while, so let's throw away the first 1,000 samples as warmup.

ordered_warmed_posterior = let
    rng = StableRNG(1)
    sampler = HMC(0.001, 100)
    discard_initial = 1000
    sample(rng, ordered_model, sampler, MCMCThreads(), 2 * n_samples, 3; discard_initial)
end;
parameters  mean      std        mcse        ess_bulk  ess_tail  rhat     ess_per_sec
w[1]        0.536398  0.0356455  0.00129203  761.016   51.15     1.0021   3.41334
w[2]        0.463602  0.0356455  0.00129203  761.016   51.15     1.0021   3.41334
μ[1]        21.6523   0.109502   0.00282875  1498.73   2775.08   1.00107  6.72229
μ[2]        28.6743   0.110521   0.0117081   89.0494   230.993   1.0255   0.399416

Awesome.

Conclusion

It is very tricky to get mixture modeling right with Markov chain Monte Carlo. When sampling from the posterior, the sampler couldn't deal well with the label switching problem. Fiddling with Bayesian samplers like this has a benefit too: the problematic sampling did point to a problem in the model specification.

There were two solutions. One is to define very strong location priors so that μ[1] and μ[2] have difficulty switching places. This works with VI, which in turn reduces the running time and is less sensitive to the choice of sampler parameters. The drawback is that a lot of information is required to know where the locations should be or where a good cutoff point lies. The other solution is to use the ordered constraint. Unfortunately, I couldn't get this to work with VI or with NUTS, so some manual tuning of the sampler parameters is required to get a good outcome. The running time is reasonable, at about 20 seconds for the last model in this post.

Overall, it took about six days to write this post, which is a bit more than I would have guessed. The main reason it took so long is that Turing.jl provides a lot of flexibility when defining models. In other words, there are many ways in which you can shoot yourself in the foot. At the same time, it's really great to know that tuning models to specific use cases is possible, unlike with frequentist models. Hopefully, this post provides a helpful foundation for more complex Bayesian mixture models.

References

Betancourt, M. (2017). Identifying Bayesian Mixture Models. Stan case studies. https://mc-stan.org/users/documentation/case-studies/identifying_mixture_models.html.

Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022. http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf.

Casella, G., & Berger, R. L. (2002). Statistical Inference (2nd ed.). Cengage Learning.

Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (1995). Bayesian data analysis. Chapman and Hall/CRC. https://doi.org/10.1201/9780429258411.

Oberski D. L. (2016) Mixture Models: Latent Profile and Latent Class Analysis. In: Robertson J., Kaptein M. (eds) Modern Statistical Methods for HCI. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-26633-6_12.

Built with Julia 1.10.2 and

Bijectors 0.13.8
CairoMakie 0.11.9
DataFrames 1.6.1
StableRNGs 1.0.1
Statistics 1.10.0
Turing 0.30.5

To run this blog post locally, open this notebook with Pluto.jl.