
Increasing model accuracy by using foreknowledge

2021-06-16

Typically, when making predictions via a linear model, we fit the model on our data and make predictions from the fitted model. However, this doesn't take much foreknowledge into account. For example, when predicting a person's height given only their weight and gender, we already have an intuition about the size and direction of the effects. Bayesian analysis should be able to incorporate this prior information.

In this blog post, I aim to figure out whether foreknowledge can, in theory, increase model accuracy. To do this, I generate data and fit both a linear model and a Bayesian linear regression. Next, I compare the accuracy of the coefficient estimates from the two models.

begin
    using AlgebraOfGraphics
    using CairoMakie
    using CategoricalArrays
    using DataFrames
    using GLM
    using MLDataUtils: rescale!
    using Random: seed!
    using Statistics
    using StatsFuns
    using Turing
end

We define the model as $g_i = a_e \cdot a_i + r_e \cdot r_i + \epsilon_i = 1.1 \cdot a_i + 1.05 \cdot r_i + \epsilon_i$, where $g_i$ is the grade for individual $i$, $a_e$ is the coefficient for the age $a_i$, $r_e$ is the coefficient for the recent grade $r_i$ and $\epsilon_i$ is some random noise for individual $i$.
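The data generation code below uses the true coefficients aₑ and rₑ, which are defined in a notebook cell that isn't shown here. Based on the model definition above, that cell presumably contains something like:

aₑ = 1.1   # true coefficient for the age
rₑ = 1.05  # true coefficient for the recent grade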

We generate data for $n$ individuals via:

function generate_data(i::Int)
  seed!(i)

  n = 120
  I = 1:n
  # Individuals with an even index are marked as having passed.
  P = [i % 2 == 0 for i in I]
  r_2(x) = round(x; digits=2)

  # Sample the age and the recent grade; individuals who passed are drawn
  # from slightly shifted distributions.
  A = r_2.([p ? rand(Normal(aₑ * 18, 1)) : rand(Normal(18, 1)) for p in P])
  R = r_2.([p ? rand(Normal(rₑ * 6, 3)) : rand(Normal(6, 3)) for p in P])
  # Random noise and the resulting grade according to the model definition.
  E = r_2.(rand(Normal(0, 1), n))
  G = aₑ .* A + rₑ .* R .+ E
  G = r_2.(G)

  df = DataFrame(age=A, recent=R, error=E, grade=G, pass=P)
end;
df = generate_data(1)
      age    recent   error   grade   pass
  1   17.93    7.49    0.48   28.07   false
  2   20.33    1.19   -0.8    22.81   true
  3   17.19   10.25   -1.16   28.51   false
  4   22.26    5.51   -0.47   29.8    true
  5   19.16    0.07   -2.05   19.1    false
  6   20.07   10.87   -0.42   33.07   true
  7   19.75    6.37   -1.43   26.98   false
  8   18.97    4.25   -0.46   24.87   true
  9   16.96    5.22   -0.51   23.63   false
 10   19.47   -0.88   -0.61   19.88   true
...
120   21.3     3.17   -0.4    26.36   true

Linear regression

First, we fit a linear model and verify that the coefficients are estimated reasonably well. Here, the only prior information that we give the model is the structure of the data, that is, a formula.

linear_model = lm(@formula(grade ~ age + recent), df)
StatsModels.TableRegressionModel{LinearModel{GLM.LmResp{Vector{Float64}}, GLM.DensePredChol{Float64, LinearAlgebra.CholeskyPivoted{Float64, Matrix{Float64}, Vector{Int64}}}}, Matrix{Float64}}

grade ~ 1 + age + recent

Coefficients:
────────────────────────────────────────────────────────────────────────
                Coef.  Std. Error      t  Pr(>|t|)  Lower 95%  Upper 95%
────────────────────────────────────────────────────────────────────────
(Intercept)  0.693364   1.23564     0.56    0.5758  -1.75376     3.14049
age          1.05597    0.0638938  16.53    <1e-31   0.929434    1.18251
recent       1.05645    0.0267033  39.56    <1e-68   1.00357     1.10934
────────────────────────────────────────────────────────────────────────
r5(x) = round(x; digits=5)
r5 (generic function with 1 method)
coefa = coef(linear_model)[2] |> r5
1.05597
coefr = coef(linear_model)[3] |> r5
1.05645

Notice how these estimated coefficients are close to the coefficients that we set for age and recent, namely $a_e = 1.1$ (estimated as 1.05597) and $r_e = 1.05$ (estimated as 1.05645), as expected.

Bayesian regression

For the Bayesian regression, we fit a model via Turing.jl. Now, we give the model information about the structure of the data as well as priors for the size of the coefficients. For demonstration purposes, I've set the priors to the correct values. This is reasonable because the goal here is to find out whether a good prior can have a positive effect on the model accuracy.

function rescale_data(df)
    out = DataFrame(df)
    rescale!(out, [:age, :recent, :grade])
    out
end;
rescaled = let
    rescaled = rescale_data(df)
    rescaled[!, :pass_num] = [p ? 1.0 : 0.0 for p in rescaled.pass]
    # Return the DataFrame itself, not the newly assigned column.
    rescaled
end;
@model function bayesian_model(ages, recents, grades, n)
    intercept ~ Normal(0, 5)
    # Priors centered on the true coefficients aₑ and rₑ.
    βₐ ~ Normal(aₑ, 1)
    βᵣ ~ Normal(rₑ, 3)
    σ ~ truncated(Cauchy(0, 2), 0, Inf)

    μ = intercept .+ βₐ * ages .+ βᵣ * recents
    grades ~ MvNormal(μ, σ)
end;
chns = let
    n = nrow(df)
    bm = bayesian_model(df.age, df.recent, df.grade, n)
    chns = Turing.sample(bm, NUTS(), MCMCThreads(), 10_000, 3)
end;

Let's plot the density for the coefficient estimates $\beta_a$ and $\beta_r$:
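The plot is not reproduced here, but a minimal sketch of how such densities could be drawn with AlgebraOfGraphics and CairoMakie (working from the chns object above; the names post, long and plt are just illustrative) is:

# Turn the chains into a long-format DataFrame with one column holding the
# coefficient name and one holding the sampled value.
post = DataFrame(chns)
long = stack(post, [:βₐ, :βᵣ]; variable_name=:coefficient, value_name=:estimate)

# One density per coefficient, colored by the coefficient name.
plt = data(long) * mapping(:estimate; color=:coefficient) * AlgebraOfGraphics.density()
draw(plt)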

Next, let's compare the outputs from both models:

coefficient   true value   linear estimate   linear error   bayesian estimate    bayesian error
aₑ            1.1          1.05597           4.0 %          1.0585196543908324   3.8 %
rₑ            1.05         1.05645           0.6 %          1.0565732118709878   0.6 %
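These numbers can be computed from the objects defined above; a minimal sketch (the names bayesa, bayesr and percent_error are just illustrative) is:

# Posterior means for the Bayesian coefficient estimates.
bayesa = mean(chns[:βₐ])
bayesr = mean(chns[:βᵣ])

# Relative error of an estimate with respect to the true value, in percent.
percent_error(true_value, estimate) = round(100 * abs(true_value - estimate) / true_value; digits=1)

percent_error(aₑ, coefa)   # linear error for aₑ, roughly 4.0 %
percent_error(aₑ, bayesa)  # Bayesian error for aₑ, roughly 3.8 %
percent_error(rₑ, coefr)   # linear error for rₑ, roughly 0.6 %
percent_error(rₑ, bayesr)  # Bayesian error for rₑ, roughly 0.6 %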

Conclusion

After giving the true coefficients to the Bayesian model in the form of priors, it does score better than the linear model. However, the differences aren't very big. This could be due to the particular random noise $E$ in this sample or due to the relatively large sample size. The more samples there are, the more likely it is that the data will overrule the prior. In any case, there are real-world situations where gathering extra data is more expensive than gathering priors via reading papers. In those cases, the increased accuracy introduced by using priors could have serious benefits.

Built with Julia 1.10.2 and

AlgebraOfGraphics 0.6.18
CairoMakie 0.11.9
CategoricalArrays 0.10.8
DataFrames 1.6.1
GLM 1.9.0
MLDataUtils 0.5.4
Statistics 1.10.0
StatsFuns 1.3.1
Turing 0.30.5

To run this blog post locally, open this notebook with Pluto.jl.