Purpose of this document
Loading packages
1 The Golem of Prague
2 Small Worlds and Large Worlds
2.1 Bayesian Updating
2.2 Making the model
2.2.1 Grid approximation
2.2.2 Quadratic approximation
2.2.3 MCMC
2.3 Practice
2.3.1 2M1
2.3.2 2M2
2.3.3 2M3
2.3.4 2M4
2.3.5 2M5
2.3.6 2M7
2.3.7 2H1
2.3.8 2H2
2.3.9 2H3
2.3.10 2H4
3 Sampling the Imaginary
3.1 Sampling from a grid-approximate posterior
3.2 Sampling to summarize
3.2.1 Intervals of defined mass
3.2.2 Highly skewed example
3.2.3 Point estimate
3.3 Sampling to simulate prediction
3.3.1 Model checking
3.3.2 Practice with brms
3.4 Practice
3.4.1 3E
3.4.2 3M
3.4.3 3H
4 Geocentric Model
4.1 Why normal distributions are normal
4.2 Normal by multiplication
4.3 Gaussian model of height
4.4 Drawing the posterior distribution with grid approximation
4.4.1 Sampling from the posterior distribution
4.4.2 rethinking
4.4.3 Finding the posterior with brms
4.4.4 Sampling from the posterior distribution
4.5 Linear prediction
4.5.1 Finding the posterior distribution
4.5.2 Plotting posterior inference against the data
4.5.3 Plotting regression intervals and contours
4.5.4 Prediction intervals
4.6 Curves from lines
4.6.1 Polynomial regression
4.6.2 Overthinking
4.6.3 Splines
4.7 Practice
4.7.1 4M1
4.7.2 4M2
4.7.3 4M4
4.7.4 4M5
4.7.5 4M6
4.7.6 4M7
4.7.7 4M8
4.7.8 4H1
4.7.9 4H2
4.7.10 4H3
4.7.11 4H4
4.7.12 4H5
4.7.13 4H6
5 The many variables & the spurious waffles
5.1 Spurious association
5.1.1 Thinking about causal models with graphs
5.1.2 Predictor residual plots
5.1.3 Posterior prediction plots
5.1.4 Counterfactual plot
5.2 Masked relationship
5.2.1 The K ~ N model
5.2.2 The K ~ M model
5.2.3 The K ~ N + M model
5.2.4 Drawing a counterfactual plot
5.2.5 Overthinking
5.3 Categorical variables
5.3.1 Two categories
5.3.2 Many categories
5.4 Practice
5.4.1 5M1
5.4.2 5M2
5.4.3 5M4
5.4.4 5H1
5.4.5 5H2
5.4.6 5H3
5.4.7 5H4
6 The Haunted DAG & the Causal Terror
6.1 Multicollinearity
6.1.1 Multicollinear milk
6.2 Post-treatment bias
6.2.1 Blocked by consequence
6.2.2 Fungus and d-separation
6.3 Collider bias
6.3.1 Collider of false sorrow
6.3.2 The haunted DAG
6.4 Confronting confounding
6.4.1 Two roads
6.4.2 Backdoor Waffles
6.5 Practice
6.5.1 6M1
6.5.2 6M2
6.5.3 6H1
6.5.4 6H2
6.5.5 6H3
6.5.6 6H4
6.5.7 6H5
6.6 Bonus: adding random effects
7 Ulysses’ Compass
7.1 The problems with parameters
7.1.1 More parameters always improve fit
7.1.2 Too few parameters hurts, too
7.2 Entropy and accuracy
7.2.1 Information and uncertainty
7.2.2 From entropy to accuracy
7.2.3 Estimating divergence
7.2.4 Scoring the right data
7.3 Golem taming: regularization
7.4 Predicting predictive accuracy
7.4.1 Cross validation
7.4.2 Information criteria
7.4.3 Overthinking: WAIC calculation
7.4.4 Comparing CV, PSIS, and WAIC
7.5 Model comparison
7.5.1 Model mis-selection
7.5.2 Outliers and other illusions
7.6 Practice
7.6.1 7E2
7.6.2 7E3
7.6.3 7E4
7.6.4 7M4
7.6.5 7H1
7.6.6 7H2
7.6.7 7H3
7.6.8 7H4
7.6.9 7H5
8 Conditional Manatees
8.1 Building an interaction
8.1.1 Making a rugged model
8.1.2 Adding an indicator variable isn’t enough
8.1.3 Adding an interaction does work
8.1.4 Plotting the interaction
8.2 Symmetry of interaction
8.3 Continuous interaction
8.3.1 A winter flower
8.3.2 The models
8.3.3 Plotting posterior predictions
8.3.4 Plotting prior predictions
8.4 Practice
8.4.1 8M4
8.4.2 8H1
8.4.3 8H2
8.4.4 8H3
8.4.5 8H4
8.4.6 8H5
8.4.7 8H6
8.4.8 8H7
9 Markov Chain Monte Carlo
9.1 Good King Markov and his island kingdom
9.2 Metropolis algorithm
9.2.1 Gibbs sampling
9.2.2 High-dimensional problems
9.3 Hamiltonian Monte Carlo
9.3.1 Another parable
9.3.2 Limitations
9.4 Easy HMC: ulam (brm())
9.4.1 Preparation
9.4.2 Sampling from the posterior
9.4.3 Sampling again in parallel
9.4.4 Visualization
9.4.5 Checking the chain
9.5 Care and feeding of your Markov chain
9.5.1 How many samples do you need?
9.5.2 How many chains do we need?
9.5.3 Taming a wild chain
9.5.4 Non-identifiable parameters
9.6 Practice
9.6.1 9M1
9.6.2 9M2
9.6.3 9M3
9.6.4 9H1
9.6.5 9H2
9.6.6 9H3
9.6.7 9H4
9.6.8 9H5
9.6.9 9H6
10 Big Entropy and the Generalized Linear Model
10.1 Maximum entropy
10.1.1 Gaussian
10.1.2 Binomial
10.2 Generalized linear models
10.2.1 Meet the family
10.2.2 Linking linear models to distributions
10.2.3 Overthinking
10.2.4 Omitted variable bias again
10.2.5 Absolute and relative differences
10.2.6 GLMs and information criteria
11 God Spiked the Integers
11.1 Binomial regression
11.1.1 Logistic regression
11.1.2 Relative shark and absolute deer
11.1.3 Aggregated binomial: Chimpanzees again, condensed
11.2 Aggregated binomial: Graduate school admissions
11.3 Poisson regression
11.3.1 Oceanic tool complexity
11.3.2 Negative binomial (Gamma-Poisson) models
11.3.3 Example: Exposure and the offset
11.4 Multinomial and categorical models
11.4.1 Predictors matched to outcomes
11.4.2 Predictors matched to observations
11.4.3 Multinomial in disguise as Poisson
11.5 Practice
11.5.1 11E1
11.5.2 11E2
11.5.3 11E3
11.5.4 11E4
11.5.5 11M7
11.5.6 11M8
11.5.7 11H1
11.5.8 11H2
11.5.9 11H3
11.5.10 11H4
11.5.11 11H5
11.5.12 11H6
12 Monsters and Mixtures
12.1 Over-dispersed counts
12.1.1 Beta binomial
12.1.2 Negative binomial or gamma-Poisson
12.1.3 Over-dispersion, entropy, and information criteria
12.2 Zero-inflated outcomes
12.2.1 Example: Zero-inflated Poisson
12.3 Ordered categorical outcomes
12.3.1 Example: Moral intuition
12.3.2 Adding predictor variables
12.4 Ordered categorical predictors
12.5 Practice
12.5.1 12M1
12.5.2 12M2
12.5.3 12M3
12.5.4 12H1
12.5.5 12H2
12.5.6 12H3
12.5.7 12H4
12.5.8 12H5
12.5.9 12H6
12.5.10 12H7
13 Models with Memory
13.1 Multilevel tadpoles
13.2 Varying effects and the underfitting/overfitting trade-off
13.2.1 The model
13.2.2 Simulate survivors
13.3 More than one type of cluster
13.3.1 Even more clusters
13.4 Divergent transitions and non-centered priors
13.4.1 The Devil’s Funnel
13.4.2 Non-centered chimpanzees
13.5 Multilevel posterior predictions
13.5.1 Posterior prediction for same clusters
13.5.2 Posterior prediction for new clusters
13.6 Post-stratification
13.6.1 Meet the data
13.6.2 Settle the MR part of MRP
13.6.3 Post-stratify to put the P in MRP
13.7 Practice
13.7.1 13M1
13.7.2 13M2
13.7.3 13M3
13.7.4 13M4
13.7.5 13M5
13.7.6 13M6
13.7.7 13H1
13.7.8 13H2
13.7.9 13H3
13.7.10 13H4
14 Adventures in Covariance
14.1 Varying slopes by construction
14.1.1 Simulate the population
14.1.2 Simulate observations
14.1.3 The varying slope model
14.2 Advanced varying slopes
14.3 Instruments and causal designs
14.3.1 Instrumental variables
14.3.2 Other designs
14.4 Social relations as correlated varying effects
14.5 Continuous categories and the Gaussian process
14.5.1 Example: Spatial autocorrelation in Oceanic tools
14.5.2 Example: Phylogenetic distance
14.6 Practice
14.6.1 14M1
14.6.2 14M2
Execution environment
Statistical Rethinking Second Edition
1 The Golem of Prague
This chapter is the introduction to the whole book, so it is omitted here.