Custom statespace models¶
The true power of the state space model is that it allows the creation and estimation of custom models. This notebook shows various statespace models that subclass sm.tsa.statespace.MLEModel.
Remember that a general state space model can be written in the following way:

\(y_t = Z_t \alpha_{t} + d_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H_t)\)

\(\alpha_{t+1} = T_t \alpha_{t} + c_t + R_t \eta_{t}, \quad \eta_t \sim N(0, Q_t)\)

You can check the details and the dimensions of these objects in this link.
Most models will not include all of these elements. For example, the design matrix \(Z_t\) might not depend on time (\(\forall t \; Z_t = Z\)), or the model might not have an observation intercept \(d_t\).
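Each of these matrices can be inspected by name on a model's ssm representation. As a quick orientation (a minimal sketch, assuming mod is any constructed MLEModel instance such as the ones built below):

print(mod.ssm["design"].shape)      # Z_t; gains a third (nobs) axis when time-varying
print(mod.ssm["transition"].shape)  # T_t
print(mod.ssm["obs_cov"].shape)     # H_t
print(mod.ssm["state_cov"].shape)   # Q_t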
We'll start with something relatively simple and then show how to extend the model bit by bit to include more elements.
Model 1: time-varying coefficients. One observation equation and two state equations
Model 2: time-varying parameters with a non-identity transition matrix
Model 3: multiple observation and multiple state equations
Bonus: pymc3 for Bayesian estimation
[ ]:
%matplotlib inline
from collections import OrderedDict
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=15)
Model 1: time-varying coefficients¶
The observed data are \(y_t, x_t, w_t\), where \(x_t\) and \(w_t\) are exogenous:

\(y_t = d + \beta_{x,t} x_t + \beta_{w,t} w_t + e_t, \quad e_t \sim N(0, \sigma_e^2)\)

Note that the design matrix is time-varying, so it will have three dimensions (k_endog x k_states x nobs).

The states are \(\beta_{x,t}\) and \(\beta_{w,t}\), and the state equations tell us these states evolve as random walks:

\(\beta_{x,t+1} = \beta_{x,t} + v_{x,t}, \quad v_{x,t} \sim N(0, \sigma_{v,x}^2)\)

\(\beta_{w,t+1} = \beta_{w,t} + v_{w,t}, \quad v_{w,t} \sim N(0, \sigma_{v,w}^2)\)

The transition matrix is therefore a 2 x 2 identity matrix.
We'll first simulate the data, then construct the model, and finally estimate it.
[ ]:
def gen_data_for_model1():
    nobs = 1000

    rs = np.random.RandomState(seed=93572)
    d = 5
    var_y = 5
    var_coeff_x = 0.01
    var_coeff_w = 0.5

    x_t = rs.uniform(size=nobs)
    w_t = rs.uniform(size=nobs)
    eps = rs.normal(scale=var_y ** 0.5, size=nobs)

    beta_x = np.cumsum(rs.normal(size=nobs, scale=var_coeff_x ** 0.5))
    beta_w = np.cumsum(rs.normal(size=nobs, scale=var_coeff_w ** 0.5))

    y_t = d + beta_x * x_t + beta_w * w_t + eps
    return y_t, x_t, w_t, beta_x, beta_w
y_t, x_t, w_t, beta_x, beta_w = gen_data_for_model1()
_ = plt.plot(y_t)
[ ]:
class TVRegression(sm.tsa.statespace.MLEModel):
    def __init__(self, y_t, x_t, w_t):
        exog = np.c_[x_t, w_t]  # shaped nobs x 2

        super(TVRegression, self).__init__(
            endog=y_t, exog=exog, k_states=2, initialization="diffuse"
        )

        # Since the design matrix is time-varying, it must be
        # shaped k_endog x k_states x nobs
        # Notice that exog.T is shaped k_states x nobs, so we
        # just need to add a new first axis with shape 1
        self.ssm["design"] = exog.T[np.newaxis, :, :]  # shaped 1 x 2 x nobs
        self.ssm["selection"] = np.eye(self.k_states)
        self.ssm["transition"] = np.eye(self.k_states)

        # Which parameters need to be positive?
        self.positive_parameters = slice(1, 4)

    @property
    def param_names(self):
        return ["intercept", "var.e", "var.x.coeff", "var.w.coeff"]

    @property
    def start_params(self):
        """
        Defines the starting values for the parameters
        The linear regression gives us reasonable starting values for the constant
        d and the variance of the epsilon error
        """
        exog = sm.add_constant(self.exog)
        res = sm.OLS(self.endog, exog).fit()
        params = np.r_[res.params[0], res.scale, 0.001, 0.001]
        return params

    def transform_params(self, unconstrained):
        """
        We constrain the last three parameters
        ('var.e', 'var.x.coeff', 'var.w.coeff') to be positive,
        because they are variances
        """
        constrained = unconstrained.copy()
        constrained[self.positive_parameters] = (
            constrained[self.positive_parameters] ** 2
        )
        return constrained

    def untransform_params(self, constrained):
        """
        Need to untransform all the parameters you transformed
        in the `transform_params` function
        """
        unconstrained = constrained.copy()
        unconstrained[self.positive_parameters] = (
            unconstrained[self.positive_parameters] ** 0.5
        )
        return unconstrained

    def update(self, params, **kwargs):
        params = super(TVRegression, self).update(params, **kwargs)

        self["obs_intercept", 0, 0] = params[0]
        self["obs_cov", 0, 0] = params[1]
        self["state_cov"] = np.diag(params[2:4])
And then estimate it with our custom model class¶
[ ]:
mod = TVRegression(y_t, x_t, w_t)
res = mod.fit()
print(res.summary())
The values that generated the data were:
intercept = 5
var.e = 5
var.x.coeff = 0.01
var.w.coeff = 0.5
As you can see, the estimates recover the true parameters pretty well.
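To make the comparison concrete, here is a small sketch (the true dict simply restates the generating values above):

true = {"intercept": 5, "var.e": 5, "var.x.coeff": 0.01, "var.w.coeff": 0.5}
for name, est in zip(res.model.param_names, res.params):
    print(f"{name}: estimated {est:.4f}, true {true[name]}")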
We can also recover the estimated evolution of the underlying coefficients (or states, in Kalman filter terminology).
[ ]:
fig, axes = plt.subplots(2, figsize=(16, 8))
ss = pd.DataFrame(res.smoothed_state.T, columns=["x", "w"])
axes[0].plot(beta_x, label="True")
axes[0].plot(ss["x"], label="Smoothed estimate")
axes[0].set(title="Time-varying coefficient on x_t")
axes[0].legend()
axes[1].plot(beta_w, label="True")
axes[1].plot(ss["w"], label="Smoothed estimate")
axes[1].set(title="Time-varying coefficient on w_t")
axes[1].legend()
fig.tight_layout();
Model 2: time-varying parameters with a non-identity transition matrix¶
This is a small extension of Model 1. Instead of an identity transition matrix, we'll have one with two parameters (\(\rho_1, \rho_2\)) that we need to estimate; the state equations become:

\(\beta_{x,t+1} = \rho_1 \beta_{x,t} + v_{x,t}\)

\(\beta_{w,t+1} = \rho_2 \beta_{w,t} + v_{w,t}\)
What should we modify in our previous class to make things work?
Good news: not a lot!
Bad news: we need to be careful about a couple of things
1) Change the starting parameters function¶
We need to add names for the new parameters \(\rho_1, \rho_2\) and we need to provide corresponding starting values for them.
We change the param_names function from:
def param_names(self):
    return ['intercept', 'var.e', 'var.x.coeff', 'var.w.coeff']
to
def param_names(self):
    return ['intercept', 'var.e', 'var.x.coeff', 'var.w.coeff',
            'rho1', 'rho2']
and we change the start_params function from:
def start_params(self):
    exog = sm.add_constant(self.exog)
    res = sm.OLS(self.endog, exog).fit()
    params = np.r_[res.params[0], res.scale, 0.001, 0.001]
    return params
to
def start_params(self):
    exog = sm.add_constant(self.exog)
    res = sm.OLS(self.endog, exog).fit()
    params = np.r_[res.params[0], res.scale, 0.001, 0.001, 0.8, 0.8]
    return params
2) Change the update function¶
It changes from:
def update(self, params, **kwargs):
    params = super(TVRegression, self).update(params, **kwargs)

    self['obs_intercept', 0, 0] = params[0]
    self['obs_cov', 0, 0] = params[1]
    self['state_cov'] = np.diag(params[2:4])
to
def update(self, params, **kwargs):
    params = super(TVRegression, self).update(params, **kwargs)

    self['obs_intercept', 0, 0] = params[0]
    self['obs_cov', 0, 0] = params[1]
    self['state_cov'] = np.diag(params[2:4])
    self['transition', 0, 0] = params[4]
    self['transition', 1, 1] = params[5]
3) (optional) Change transform_params and untransform_params¶
This is not required, but you might want to restrict \(\rho_1, \rho_2\) to lie between -1 and 1. In that case, we first import two utility functions from statsmodels:
from statsmodels.tsa.statespace.tools import (
    constrain_stationary_univariate, unconstrain_stationary_univariate)
constrain_stationary_univariate constrains a value to be between -1 and 1, and unconstrain_stationary_univariate provides the inverse function. The transform and untransform parameters functions would then look like this (remember that \(\rho_1, \rho_2\) are in the 4th and 5th index):
def transform_params(self, unconstrained):
    constrained = unconstrained.copy()
    constrained[self.positive_parameters] = constrained[self.positive_parameters] ** 2
    constrained[4] = constrain_stationary_univariate(constrained[4:5])
    constrained[5] = constrain_stationary_univariate(constrained[5:6])
    return constrained

def untransform_params(self, constrained):
    unconstrained = constrained.copy()
    unconstrained[self.positive_parameters] = unconstrained[self.positive_parameters] ** 0.5
    unconstrained[4] = unconstrain_stationary_univariate(constrained[4:5])
    unconstrained[5] = unconstrain_stationary_univariate(constrained[5:6])
    return unconstrained
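As a quick illustration of these utilities (a minimal sketch, assuming the import shown above has been run; the value 2.5 is arbitrary), the two functions are inverses of each other:

x = np.array([2.5])
c = constrain_stationary_univariate(x)
print(c)                                     # mapped into (-1, 1)
print(unconstrain_stationary_univariate(c))  # recovers the original 2.5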
I'll write the full class below (without the optional changes I have just discussed).
[ ]:
class TVRegressionExtended(sm.tsa.statespace.MLEModel):
    def __init__(self, y_t, x_t, w_t):
        exog = np.c_[x_t, w_t]  # shaped nobs x 2

        super(TVRegressionExtended, self).__init__(
            endog=y_t, exog=exog, k_states=2, initialization="diffuse"
        )

        # Since the design matrix is time-varying, it must be
        # shaped k_endog x k_states x nobs
        # Notice that exog.T is shaped k_states x nobs, so we
        # just need to add a new first axis with shape 1
        self.ssm["design"] = exog.T[np.newaxis, :, :]  # shaped 1 x 2 x nobs
        self.ssm["selection"] = np.eye(self.k_states)
        self.ssm["transition"] = np.eye(self.k_states)

        # Which parameters need to be positive?
        self.positive_parameters = slice(1, 4)

    @property
    def param_names(self):
        return ["intercept", "var.e", "var.x.coeff", "var.w.coeff", "rho1", "rho2"]

    @property
    def start_params(self):
        """
        Defines the starting values for the parameters
        The linear regression gives us reasonable starting values for the constant
        d and the variance of the epsilon error
        """
        exog = sm.add_constant(self.exog)
        res = sm.OLS(self.endog, exog).fit()
        params = np.r_[res.params[0], res.scale, 0.001, 0.001, 0.7, 0.8]
        return params

    def transform_params(self, unconstrained):
        """
        We constrain the three variance parameters
        ('var.e', 'var.x.coeff', 'var.w.coeff') to be positive,
        because they are variances
        """
        constrained = unconstrained.copy()
        constrained[self.positive_parameters] = (
            constrained[self.positive_parameters] ** 2
        )
        return constrained

    def untransform_params(self, constrained):
        """
        Need to untransform all the parameters you transformed
        in the `transform_params` function
        """
        unconstrained = constrained.copy()
        unconstrained[self.positive_parameters] = (
            unconstrained[self.positive_parameters] ** 0.5
        )
        return unconstrained

    def update(self, params, **kwargs):
        params = super(TVRegressionExtended, self).update(params, **kwargs)

        self["obs_intercept", 0, 0] = params[0]
        self["obs_cov", 0, 0] = params[1]
        self["state_cov"] = np.diag(params[2:4])
        self["transition", 0, 0] = params[4]
        self["transition", 1, 1] = params[5]
To estimate this, we'll use the same data as in Model 1 and expect \(\rho_1, \rho_2\) to be near 1.

The results look pretty good! Note that this estimation can be quite sensitive to the starting values of \(\rho_1, \rho_2\). If you try lower values, you'll see it fails to converge.
[ ]:
mod = TVRegressionExtended(y_t, x_t, w_t)
res = mod.fit(maxiter=2000) # it doesn't converge with 50 iters
print(res.summary())
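If you want to experiment with that sensitivity, one option (a sketch; the particular values below are arbitrary) is to override the starting values via the start_params argument of fit, which expects them in the constrained parameterization:

res_alt = mod.fit(
    start_params=np.r_[5.0, 5.0, 0.001, 0.5, 0.9, 0.9], maxiter=2000, disp=False
)
print(res_alt.summary())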
Model 3: multiple observation and state equations¶
We'll keep the time-varying parameters, but this time we'll also have two observation equations.
Observation equations¶
\(\hat{i_t}, \hat{M_t}, \hat{s_t}\) are observed each period.

The model for the observation equations has two equations:

\(\hat{i_t} = \alpha_{1,t} \hat{s_t} + \varepsilon_{1,t}\)

\(\hat{M_t} = \alpha_{2,t} + \varepsilon_{2,t}\)

Following the general notation of state space models, the endogenous part of the observation equation is \(y_t = (\hat{i_t}, \hat{M_t})\), and we only have one exogenous variable, \(\hat{s_t}\).
State equations¶
\(\alpha_{1,t+1} = \delta_1 \alpha_{1,t} + \delta_2 \alpha_{2,t} + W_{1,t}\)

\(\alpha_{2,t+1} = \delta_3 \alpha_{2,t} + W_{2,t}\)

Matrix notation for the state space model¶
\(\begin{bmatrix} \hat{i_t} \\ \hat{M_t} \end{bmatrix} = \begin{bmatrix} \hat{s_t} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \alpha_{1,t} \\ \alpha_{2,t} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1,t} \\ \varepsilon_{2,t} \end{bmatrix}, \quad \varepsilon_{i,t} \sim N(0, \sigma_{\varepsilon_i}^2)\)

\(\begin{bmatrix} \alpha_{1,t+1} \\ \alpha_{2,t+1} \end{bmatrix} = \begin{bmatrix} \delta_1 & \delta_2 \\ 0 & \delta_3 \end{bmatrix} \begin{bmatrix} \alpha_{1,t} \\ \alpha_{2,t} \end{bmatrix} + \begin{bmatrix} W_{1,t} \\ W_{2,t} \end{bmatrix}, \quad W_{i,t} \sim N(0, \sigma_{W_i}^2)\)
I'll simulate some data, talk about what we need to modify, and finally estimate the model to see if we're recovering something reasonable.
[ ]:
true_values = {
    "var_e1": 0.01,
    "var_e2": 0.01,
    "var_w1": 0.01,
    "var_w2": 0.01,
    "delta1": 0.8,
    "delta2": 0.5,
    "delta3": 0.7,
}


def gen_data_for_model3():
    # Starting values
    alpha1_0 = 2.1
    alpha2_0 = 1.1

    t_max = 500

    def gen_i(alpha1, s):
        return alpha1 * s + np.sqrt(true_values["var_e1"]) * np.random.randn()

    def gen_m_hat(alpha2):
        return 1 * alpha2 + np.sqrt(true_values["var_e2"]) * np.random.randn()

    def gen_alpha1(alpha1, alpha2):
        w1 = np.sqrt(true_values["var_w1"]) * np.random.randn()
        return true_values["delta1"] * alpha1 + true_values["delta2"] * alpha2 + w1

    def gen_alpha2(alpha2):
        w2 = np.sqrt(true_values["var_w2"]) * np.random.randn()
        return true_values["delta3"] * alpha2 + w2

    s_t = 0.3 + np.sqrt(1.4) * np.random.randn(t_max)
    i_hat = np.empty(t_max)
    m_hat = np.empty(t_max)

    current_alpha1 = alpha1_0
    current_alpha2 = alpha2_0
    for t in range(t_max):
        # Obs eqns
        i_hat[t] = gen_i(current_alpha1, s_t[t])
        m_hat[t] = gen_m_hat(current_alpha2)

        # State eqns
        new_alpha1 = gen_alpha1(current_alpha1, current_alpha2)
        new_alpha2 = gen_alpha2(current_alpha2)

        # Update states for next period
        current_alpha1 = new_alpha1
        current_alpha2 = new_alpha2

    return i_hat, m_hat, s_t
i_hat, m_hat, s_t = gen_data_for_model3()
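Before building the model, it can help to eyeball the simulated series (an optional sketch):

fig, ax = plt.subplots()
ax.plot(i_hat, label="i_hat")
ax.plot(m_hat, label="m_hat")
ax.legend()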
What do we need to modify?¶
Once again, we don't need to change much, but we do need to be careful with the dimensions.
1) The __init__ function changes from:¶
def __init__(self, y_t, x_t, w_t):
    exog = np.c_[x_t, w_t]

    super(TVRegressionExtended, self).__init__(
        endog=y_t, exog=exog, k_states=2,
        initialization='diffuse')

    self.ssm['design'] = exog.T[np.newaxis, :, :]  # shaped 1 x 2 x nobs
    self.ssm['selection'] = np.eye(self.k_states)
    self.ssm['transition'] = np.eye(self.k_states)
to
def __init__(self, i_t: np.ndarray, s_t: np.ndarray, m_t: np.ndarray):
    exog = np.c_[s_t, np.repeat(1, len(s_t))]  # exog.shape => (nobs, 2)

    super(MultipleYsModel, self).__init__(
        endog=np.c_[i_t, m_t], exog=exog, k_states=2,
        initialization='diffuse')

    self.ssm['design'] = np.zeros((self.k_endog, self.k_states, self.nobs))
    self.ssm['design', 0, 0, :] = s_t
    self.ssm['design', 1, 1, :] = 1
Note that we did not have to specify k_endog anywhere. The initialization does this for us after checking the dimensions of the endog matrix (here endog=np.c_[i_t, m_t] has two columns, so k_endog will be 2).
2) The update() function¶
It changes from:
def update(self, params, **kwargs):
    params = super(TVRegressionExtended, self).update(params, **kwargs)

    self['obs_intercept', 0, 0] = params[0]
    self['obs_cov', 0, 0] = params[1]
    self['state_cov'] = np.diag(params[2:4])
    self['transition', 0, 0] = params[4]
    self['transition', 1, 1] = params[5]
to
def update(self, params, **kwargs):
    params = super(MultipleYsModel, self).update(params, **kwargs)

    # The following line is not needed (by default, this matrix is initialized with zeros),
    # but I leave it here so the dimensions are clearer
    self['obs_intercept'] = np.repeat([np.array([0, 0])], self.nobs, axis=0).T
    self['obs_cov', 0, 0] = params[0]
    self['obs_cov', 1, 1] = params[1]

    self['state_cov'] = np.diag(params[2:4])

    # delta1, delta2, delta3
    self['transition', 0, 0] = params[4]
    self['transition', 0, 1] = params[5]
    self['transition', 1, 1] = params[6]
The rest of the methods change in pretty obvious ways (new parameter names, making sure the indexes work, etc.). The full code for the class is right below.
[ ]:
starting_values = {
    "var_e1": 0.2,
    "var_e2": 0.1,
    "var_w1": 0.15,
    "var_w2": 0.18,
    "delta1": 0.7,
    "delta2": 0.1,
    "delta3": 0.85,
}


class MultipleYsModel(sm.tsa.statespace.MLEModel):
    def __init__(self, i_t: np.ndarray, s_t: np.ndarray, m_t: np.ndarray):
        exog = np.c_[s_t, np.repeat(1, len(s_t))]  # exog.shape => (nobs, 2)

        super(MultipleYsModel, self).__init__(
            endog=np.c_[i_t, m_t], exog=exog, k_states=2, initialization="diffuse"
        )

        self.ssm["design"] = np.zeros((self.k_endog, self.k_states, self.nobs))
        self.ssm["design", 0, 0, :] = s_t
        self.ssm["design", 1, 1, :] = 1

        # These have ok shape. Placeholders since I'm changing them
        # in the update() function
        self.ssm["selection"] = np.eye(self.k_states)
        self.ssm["transition"] = np.eye(self.k_states)

        # Dictionary of positions to names
        self.position_dict = OrderedDict(
            var_e1=1, var_e2=2, var_w1=3, var_w2=4, delta1=5, delta2=6, delta3=7
        )
        self.initial_values = starting_values
        self.positive_parameters = slice(0, 4)

    @property
    def param_names(self):
        return list(self.position_dict.keys())

    @property
    def start_params(self):
        """
        Initial values
        """
        # (optional) Use scale for var_e1 and var_e2 starting values
        params = np.r_[
            self.initial_values["var_e1"],
            self.initial_values["var_e2"],
            self.initial_values["var_w1"],
            self.initial_values["var_w2"],
            self.initial_values["delta1"],
            self.initial_values["delta2"],
            self.initial_values["delta3"],
        ]
        return params

    def transform_params(self, unconstrained):
        """
        If you need to restrict parameters.
        For example, variances should be > 0 and some
        parameters may need to lie between -1 and 1
        """
        constrained = unconstrained.copy()
        constrained[self.positive_parameters] = (
            constrained[self.positive_parameters] ** 2
        )
        return constrained

    def untransform_params(self, constrained):
        """
        Need to reverse what you did in transform_params()
        """
        unconstrained = constrained.copy()
        unconstrained[self.positive_parameters] = (
            unconstrained[self.positive_parameters] ** 0.5
        )
        return unconstrained

    def update(self, params, **kwargs):
        params = super(MultipleYsModel, self).update(params, **kwargs)

        # The following line is not needed (by default, this matrix is initialized with zeros),
        # but I leave it here so the dimensions are clearer
        self["obs_intercept"] = np.repeat([np.array([0, 0])], self.nobs, axis=0).T
        self["obs_cov", 0, 0] = params[0]
        self["obs_cov", 1, 1] = params[1]

        self["state_cov"] = np.diag(params[2:4])

        # delta1, delta2, delta3
        self["transition", 0, 0] = params[4]
        self["transition", 0, 1] = params[5]
        self["transition", 1, 1] = params[6]
[ ]:
mod = MultipleYsModel(i_hat, s_t, m_hat)
res = mod.fit()
print(res.summary())
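As a quick sanity check (a minimal sketch), we can line the point estimates up against the values in true_values that generated the data:

for name, est in zip(res.model.param_names, res.params):
    print(f"{name}: estimated {est:.4f}, true {true_values[name]}")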
Bonus: pymc3 for fast Bayesian estimation¶
In this section I'll show how you can take your custom state space model and easily plug it into pymc3 and estimate it with Bayesian methods. In particular, this example will show you an estimation with a version of Hamiltonian Monte Carlo called the No-U-Turn Sampler (NUTS).
I'm basically copying the ideas contained in this notebook, so make sure to check it out for more details.
[ ]:
# Extra requirements
import pymc3 as pm
import theano
import theano.tensor as tt
We need to define some helper functions to connect Theano to the likelihood function that is implied by our model.
[ ]:
class Loglike(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)

    def __init__(self, model):
        self.model = model
        # NUTS requires gradients, so we also wrap the score function
        self.score = Score(self.model)

    def perform(self, node, inputs, outputs):
        (theta,) = inputs  # contains the vector of parameters
        llf = self.model.loglike(theta)
        outputs[0][0] = np.array(llf)  # output the log-likelihood

    def grad(self, inputs, g):
        # the method that calculates the gradients - it actually returns the
        # vector-Jacobian product - g[0] is a vector of parameter values
        (theta,) = inputs  # our parameters
        out = [g[0] * self.score(theta)]
        return out


class Score(tt.Op):
    itypes = [tt.dvector]
    otypes = [tt.dvector]

    def __init__(self, model):
        self.model = model

    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        outputs[0][0] = self.model.score(theta)
We'll simulate again the data we used for Model 1. We'll also fit it again and save the results to compare them to the Bayesian posterior we get.
[ ]:
y_t, x_t, w_t, beta_x, beta_w = gen_data_for_model1()
plt.plot(y_t)
[ ]:
mod = TVRegression(y_t, x_t, w_t)
res_mle = mod.fit(disp=False)
print(res_mle.summary())
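Before plugging the wrapper into pymc3, it is reassuring to check that it reproduces the model's log-likelihood (a minimal sketch; calling the Op builds a symbolic node and .eval() forces the computation):

loglike_check = Loglike(mod)
theta = tt.as_tensor_variable(res_mle.params)
print(loglike_check(theta).eval())  # should match res_mle.llf
print(res_mle.llf)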
Bayesian estimation¶
We need to define a prior distribution for each parameter, as well as the number of draws and burn-in points.
[ ]:
# Set sampling params
ndraws = 3000  # number of draws from the distribution
nburn = 600  # number of "burn-in points" (which will be discarded)
[ ]:
# Construct an instance of the Theano wrapper defined above, which
# will allow PyMC3 to compute the likelihood and Jacobian in a way
# that it can make use of. Here we are using the same model instance
# created earlier for MLE analysis (we could also create a new model
# instance if we preferred)
loglike = Loglike(mod)
with pm.Model():
    # Priors
    intercept = pm.Uniform("intercept", 1, 10)
    var_e = pm.InverseGamma("var.e", 2.3, 0.5)
    var_x_coeff = pm.InverseGamma("var.x.coeff", 2.3, 0.1)
    var_w_coeff = pm.InverseGamma("var.w.coeff", 2.3, 0.1)

    # convert variables to tensor vectors
    theta = tt.as_tensor_variable([intercept, var_e, var_x_coeff, var_w_coeff])

    # use a DensityDist (the Loglike Op provides the log-likelihood)
    pm.DensityDist("likelihood", loglike, observed=theta)

    # Draw samples
    trace = pm.sample(
        ndraws,
        tune=nburn,
        return_inferencedata=True,
        cores=1,
        compute_convergence_checks=False,
    )
How does the posterior distribution compare with the MLE estimation?¶
The posterior peaks are clearly located close to the MLE estimates.
[ ]:
results_dict = {
    "intercept": res_mle.params[0],
    "var.e": res_mle.params[1],
    "var.x.coeff": res_mle.params[2],
    "var.w.coeff": res_mle.params[3],
}
plt.tight_layout()
_ = pm.plot_trace(
    trace,
    lines=[(k, {}, [v]) for k, v in dict(results_dict).items()],
    combined=True,
    figsize=(12, 12),
)
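If you prefer a numerical comparison over a visual one, a short sketch (pm.summary delegates to arviz when the trace is an InferenceData object):

print(pm.summary(trace))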