
1 Bayesian Linear Regression

2 Introducing GLM
Important variable descriptors:
Independent vs. dependent
Cardinal vs. ordinal vs. categorical
Linear regression and ANOVA are closely related. In fact, they are both special cases of a more general family of models, the General Linear Model (GLM):
ANOVA: categorical predictor variable(s)
Linear regression: continuous predictor variable(s)
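Because both are special cases of the GLM, one fitting function can serve as either. Below is a minimal R sketch (simulated data, illustrative names) showing lm() acting as a regression with a continuous predictor and as an ANOVA with a categorical one:

```r
# Simulated data: one continuous predictor, one categorical predictor
set.seed(1)
x <- rnorm(50)                            # continuous -> linear regression
g <- factor(rep(c("a", "b"), each = 25))  # categorical -> ANOVA
y <- 2 + 0.5 * x + rnorm(50)

summary(lm(y ~ x))   # linear regression: continuous predictor
anova(lm(y ~ g))     # one-way ANOVA: same machinery, categorical predictor
```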

3 OLS Linear Regression Worked Example
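The worked example itself is not reproduced in the transcript; the following is a stand-in sketch on simulated data (all numbers are illustrative):

```r
# Simulate data from a known line, then recover the line with OLS
set.seed(42)
x <- runif(100, 0, 10)
y <- 3 + 1.5 * x + rnorm(100, sd = 2)

fit <- lm(y ~ x)
coef(fit)            # estimated beta0 (intercept) and beta1 (slope)
summary(fit)$sigma   # estimated residual standard deviation
```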

4 Against Causality
If a dependent variable depends on an independent variable, that still doesn't conclusively demonstrate causality or temporal priority. Height can depend on weight, but doesn't "happen first." Similarly, even if a linear regression model provides a lot of predictive power, Y might still be causally disconnected from X.

5 Constructing the Model
Once β0 and β1 are known, we construct a distribution around the central tendency at each xi:
μi = β0 + β1*xi
The normal distribution then varies around that mean:
yi ~ N(μi, σ)
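As a quick illustration of this generative story (parameter values here are made up), each yi is drawn from a normal whose mean slides along the regression line:

```r
beta0 <- 3; beta1 <- 1.5; sigma <- 2              # illustrative values
xi  <- seq(0, 10, length.out = 100)
mui <- beta0 + beta1 * xi                         # central tendency at each xi
yi  <- rnorm(length(xi), mean = mui, sd = sigma)  # yi ~ N(mui, sigma)
```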

6 Constructing the Model
OLS assumes homogeneity of variance (homoskedasticity), i.e., the absence of heteroskedasticity.
[Figure: three scatterplots: heteroskedastic; homoskedastic and evenly distributed; homoskedastic and bimodally distributed]
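The original plots are not reproduced here, but the contrast is easy to simulate (an illustrative sketch, not the presenter's figures):

```r
x <- runif(200, 0, 10)
y_homo   <- 1 + 2 * x + rnorm(200, sd = 2)        # constant spread: homoskedastic
y_hetero <- 1 + 2 * x + rnorm(200, sd = 0.5 * x)  # spread grows with x: heteroskedastic
plot(x, y_homo); plot(x, y_hetero)
```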

7 Using the Student-t Distribution
The Student-t distribution generalizes the normal distribution:
y ~ T(μ, σ, ν)
As ν (the degrees of freedom) approaches infinity, it converges to the normal:
T(μ, σ, ν = ∞) = N(μ, σ)
We will use the t distribution here instead of the normal distribution because its heavier tails more easily accommodate outliers.
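A quick way to see the heavier tails (a minimal sketch): compare the tail mass beyond three standard units under the normal and under t distributions with small and large ν.

```r
pnorm(-3)          # standard normal tail mass beyond -3
pt(-3, df = 3)     # t with nu = 3: roughly 20x more tail mass
pt(-3, df = 1e6)   # very large nu: effectively the normal again
```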

8 Bayesian Model
The core of OLS linear regression is setting μi = β0 + β1*xi. With this hierarchical model, we must smear probability mass across the space of four parameters: β0, β1, σ, and ν. To discover how these parameters update on the data, we use MCMC to approximate Bayesian inference.

9 Bayesian Model
To import our model into JAGS, we simply transcribe the graphical model into code. The "z" prefix denotes standardized data, explained later.
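The code itself is not in the transcript; what follows is a minimal sketch of a robust-regression JAGS model in the same spirit (the prior constants and the nuMinusOne reparameterization follow common practice, e.g. Kruschke, and are assumptions rather than the presenter's exact choices):

```r
# JAGS model as an R string; zx, zy are the standardized data (the "z" prefix)
model_string <- "
model {
  for (i in 1:N) {
    zy[i] ~ dt(zbeta0 + zbeta1 * zx[i], 1/zsigma^2, nu)  # t-distributed noise
  }
  zbeta0 ~ dnorm(0, 1/10^2)       # vague priors on the standardized scale
  zbeta1 ~ dnorm(0, 1/10^2)
  zsigma ~ dunif(1.0E-3, 1.0E+3)
  nu <- nuMinusOne + 1            # keep nu >= 1
  nuMinusOne ~ dexp(1/29.0)
}"
```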

10 Data Normalization
In frequentist OLS, it is considered best practice to mean-center your IVs:
X′ = X − μx
For Bayesian inference, we will go a bit further and standardize:
X′ = (X − μx) / σx
What's the point?
Mean centering: decorrelates the parameters.
Standardization: makes the priors less sensitive to the scale of the data.
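In R, standardizing is a one-liner (a sketch; x and y stand for whatever raw vectors the analysis uses):

```r
zx <- as.numeric(scale(x))   # (x - mean(x)) / sd(x)
zy <- as.numeric(scale(y))
```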

11 Where To Normalize?
Recall the division of labor between R and JAGS: R handles the graphics and diagnostics, while JAGS runs the MCMC.
Normalizing in R is trivial enough. However, JAGS also provides a data-manipulation interface (a data block that runs before the model).
Benefit of normalizing in JAGS: the diagnostics are easier to consume. An example follows.
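A sketch of that alternative: a JAGS data block that standardizes before the model block runs (this string would be prepended to the model string above; the names follow the same convention):

```r
data_block <- "
data {
  xm  <- mean(x)
  xsd <- sd(x)
  ym  <- mean(y)
  ysd <- sd(y)
  for (i in 1:N) {
    zx[i] <- (x[i] - xm) / xsd   # JAGS computes the standardized data itself
    zy[i] <- (y[i] - ym) / ysd
  }
}"
```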

12 Bayesian Linear Regression: JAGS Implementation
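The implementation itself is not reproduced in the transcript; the following is a hedged end-to-end sketch using the rjags package with the model string from slide 9 (data names and chain settings are illustrative):

```r
library(rjags)

# Bundle the standardized data for JAGS
data_list <- list(zx = zx, zy = zy, N = length(zx))

jm <- jags.model(textConnection(model_string), data = data_list,
                 n.chains = 3, n.adapt = 500)
update(jm, 1000)                         # burn-in
post <- coda.samples(jm,
                     variable.names = c("zbeta0", "zbeta1", "zsigma", "nu"),
                     n.iter = 5000)
summary(post)   # posterior summaries, on the standardized scale
```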

