
Increasing model accuracy by using foreknowledge

2021-06-16

Typically, when making predictions via a linear model, we fit the model on our data and make predictions from the fitted model. However, this doesn't take much foreknowledge into account. For example, when predicting a person's height given only their weight and gender, we already have an intuition about the effect size and direction. Bayesian analysis should be able to incorporate this prior information.

In this blog post, I aim to figure out whether foreknowledge can, in theory, increase model accuracy. To do this, I generate data, fit both a linear model and a Bayesian linear regression, and then compare how accurately each model recovers the true coefficients.

begin
    using AlgebraOfGraphics
    using CairoMakie
    using CategoricalArrays
    using DataFrames
    using GLM
    using MLDataUtils: rescale!
    using Random: seed!
    using Statistics
    using StatsFuns
    using Turing
end

We define the model as $g_i = a_e \cdot a_i + r_e \cdot r_i + \epsilon_i = 1.1 \cdot a_i + 1.05 \cdot r_i + \epsilon_i$, where $a_e$ is the coefficient for the age $a_i$, $r_e$ is the coefficient for the recent score $r_i$, and $\epsilon_i$ is some random noise for individual $i$.
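
The true coefficients are used as the constants aₑ and rₑ throughout the code below. They are presumably defined in a cell of their own; a minimal definition would be:

begin
    aₑ = 1.1
    rₑ = 1.05
end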

We generate data for $n$ individuals via:

function generate_data(i::Int)
  seed!(i)

  n = 120
  I = 1:n
  # Alternate the pass indicator over the individuals.
  P = [i % 2 == 0 for i in I]
  r_2(x) = round(x; digits=2)

  # Age, recent score, and noise; passing individuals get slightly shifted means.
  A = r_2.([p ? rand(Normal(aₑ * 18, 1)) : rand(Normal(18, 1)) for p in P])
  R = r_2.([p ? rand(Normal(rₑ * 6, 3)) : rand(Normal(6, 3)) for p in P])
  E = r_2.(rand(Normal(0, 1), n))
  # Grades follow the true model g = aₑ * a + rₑ * r + ε.
  G = aₑ .* A + rₑ .* R .+ E
  G = r_2.(G)

  df = DataFrame(age=A, recent=R, error=E, grade=G, pass=P)
end;
df = generate_data(1)
       age  recent  error  grade  pass
  1  17.93    7.49   0.71  28.3   false
  2  20.33    1.19   0.07  23.68  true
  3  17.19   10.25   0.6   30.27  false
  4  22.26    5.51  -0.01  30.26  true
  5  19.16    0.07  -0.09  21.06  false
  6  20.07   10.87   0.2   33.69  true
  7  19.75    6.37  -0.19  28.22  false
  8  18.97    4.25  -0.5   24.83  true
  9  16.96    5.22  -0.61  23.53  false
 10  19.47   -0.88  -0.48  20.01  true
 ...
120  21.3     3.17   0.36  27.12  true

Linear regression

First, we fit a linear model and verify that the coefficients are estimated reasonably well. Here, the only prior information that we give the model is the structure of the data, that is, a formula.

linear_model = lm(@formula(grade ~ age + recent), df)
StatsModels.TableRegressionModel{LinearModel{GLM.LmResp{Vector{Float64}}, GLM.DensePredChol{Float64, LinearAlgebra.CholeskyPivoted{Float64, Matrix{Float64}, Vector{Int64}}}}, Matrix{Float64}}

grade ~ 1 + age + recent

Coefficients:
─────────────────────────────────────────────────────────────────────────
                 Coef.  Std. Error      t  Pr(>|t|)  Lower 95%  Upper 95%
─────────────────────────────────────────────────────────────────────────
(Intercept)  -0.710802   1.30768    -0.54    0.5878  -3.3006      1.87899
age           1.1377     0.0676189  16.83    <1e-32   1.00379     1.27162
recent        1.04928    0.0282601  37.13    <1e-65   0.993317    1.10525
─────────────────────────────────────────────────────────────────────────
r5(x) = round(x; digits=5)
r5 (generic function with 1 method)
coefa = coef(linear_model)[2] |> r5
1.1377
coefr = coef(linear_model)[3] |> r5
1.04928

Notice how these estimated coefficients are close to the coefficients that we set for age and recent, namely $a_e = 1.1 \approx 1.1377$ and $r_e = 1.05 \approx 1.04928$, as expected.

Bayesian regression

For the Bayesian regression, we fit a model via Turing.jl. Now, we give the model information about the structure of the data as well as priors for the size of the coefficients. For demonstration purposes, I've centered the priors on the correct values; this is reasonable here because the question is whether finding a good prior can have a positive effect on model accuracy.

function rescale_data(df)
    out = DataFrame(df)
    rescale!(out, [:age, :recent, :grade])
    out
end;
rescaled = let
    rescaled = rescale_data(df)
    rescaled[!, :pass_num] = [p ? 1.0 : 0.0 for p in rescaled.pass]
    rescaled
end;
@model function bayesian_model(ages, recents, grades, n)
    # Priors: the coefficient priors are centered on the true values aₑ and rₑ,
    # which is where the foreknowledge enters the model.
    intercept ~ Normal(0, 5)
    βₐ ~ Normal(aₑ, 1)
    βᵣ ~ Normal(rₑ, 3)
    σ ~ truncated(Cauchy(0, 2), 0, Inf)

    # Linear predictor and likelihood.
    μ = intercept .+ βₐ * ages .+ βᵣ * recents
    grades ~ MvNormal(μ, σ)
end;
chns = let
    n = nrow(df)
    bm = bayesian_model(df.age, df.recent, df.grade, n)
    chns = Turing.sample(bm, NUTS(), MCMCThreads(), 10_000, 3)
end;

Let's plot the density for the coefficient estimates $\beta_a$ and $\beta_r$:
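
A minimal sketch of how such a density plot can be drawn with the packages loaded at the top (AlgebraOfGraphics and CairoMakie), assuming that chns can be converted to a DataFrame through MCMCChains' Tables.jl support:

let
    # Convert the posterior samples to long format: one row per draw per coefficient.
    post = DataFrame(chns)
    long = stack(post, [:βₐ, :βᵣ]; variable_name=:coefficient, value_name=:sample)

    # Overlay the posterior densities for βₐ and βᵣ.
    plt = data(long) * mapping(:sample; color=:coefficient) * AlgebraOfGraphics.density()
    draw(plt)
end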

and compare the outputs from both models:

coefficient  true value  linear estimate  linear error  bayesian estimate   bayesian error
aₑ                  1.1          1.1377         3.4 %   1.1349590704808533           3.2 %
rₑ                 1.05         1.04928         0.1 %   1.0489542214743897           0.1 %
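
These percentages can be reproduced from the objects defined above. A sketch, where relative_error is a helper introduced here for illustration and the Bayesian estimates are taken as posterior means of chns:

let
    # Percentage deviation from the true coefficient (illustrative helper).
    relative_error(estimate, truth) = abs(estimate - truth) / truth * 100

    bayesa = mean(chns[:βₐ])  # posterior mean of the age coefficient
    bayesr = mean(chns[:βᵣ])  # posterior mean of the recent coefficient

    (linear_a=relative_error(coefa, aₑ), bayes_a=relative_error(bayesa, aₑ),
     linear_r=relative_error(coefr, rₑ), bayes_r=relative_error(bayesr, rₑ))
end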

Conclusion

After giving the true coefficients to the Bayesian model in the form of priors, it does score better than the linear model. However, the differences aren't very big. This could be due to the particular random noise $E$ in this sample or to the relatively large sample size; the more samples, the more likely it is that the data will overrule the prior. In any case, there are real-world situations where gathering extra data is more expensive than gathering priors by reading papers. In those cases, the increased accuracy from using priors could have serious benefits.

Built with Julia 1.8.5 and

AlgebraOfGraphics 0.6.13
CairoMakie 0.10.1
CategoricalArrays 0.10.7
DataFrames 1.4.4
GLM 1.8.1
MLDataUtils 0.5.4
StatsFuns 1.1.1
Turing 0.23.3

To run this blog post locally, open this notebook with Pluto.jl.