Commit 2f3f856

Use \widehat for Colab
Colab may be using an old MathJax version; see mathjax/MathJax#1913.
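
For reference, the substitution is purely notational in standard LaTeX; the difference (per the commit's conjecture about Colab's MathJax version) is only in how the accent renders. A minimal comparison:

```latex
% \hat draws a fixed-size accent; \widehat stretches over its argument.
$\hat{x}$   \quad $\widehat{x}$    % visually similar for a single letter
$\hat{x+y}$ \quad $\widehat{x+y}$  % only \widehat widens to cover the sum
```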
1 parent 244f2b0 commit 2f3f856

12 files changed: 63 additions & 63 deletions

notebooks/T3 - Bayesian inference.ipynb

Lines changed: 3 additions & 3 deletions
@@ -588,10 +588,10 @@
 "As such, barring any approximations (such as using `Bayes_rule_LG1` outside the linear-Gaussian case),\n",
 "the (full) posterior will be **optimal** from the perspective of any [proper scoring rule](https://en.wikipedia.org/wiki/Scoring_rule#Propriety_and_consistency).\n",
 "\n",
-"*But if you must* pick a single point value estimate $\\hat{x}$\n",
-"then you should **decide** on it by optimising (with respect to $\\hat{x}$)\n",
+"*But if you must* pick a single point value estimate $\\widehat{x}$\n",
+"then you should **decide** on it by optimising (with respect to $\\widehat{x}$)\n",
 "the expectation (with respect to $x$) of some utility/loss function,\n",
-"i.e. $\\Expect\\, \\text{Loss}(x - \\hat{x})$.\n",
+"i.e. $\\Expect\\, \\text{Loss}(x - \\widehat{x})$.\n",
 "For instance, if the posterior pdf happens to be symmetric\n",
 "(as in the linear-Gaussian context above),\n",
 "and your loss function is convex and symmetric,\n",
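
The decision rule in this hunk is easy to verify numerically. Below is a minimal sketch (not from the notebook; the skewed posterior is made up for illustration) that picks $\widehat{x}$ by minimizing expected loss over a grid, recovering the posterior mean under quadratic loss:

```python
import numpy as np

# Hypothetical skewed posterior discretized on a grid (illustrative only).
grid = np.linspace(-5, 10, 2001)
dx = grid[1] - grid[0]
pdf = np.exp(-0.5 * (grid - 1) ** 2) + 0.5 * np.exp(-0.5 * (grid - 4) ** 2 / 4)
pdf /= pdf.sum() * dx  # normalize to a proper density

def decide(loss):
    """Return the x_hat minimizing E[ Loss(x - x_hat) ] under the posterior."""
    expected = [(loss(grid - xh) * pdf).sum() * dx for xh in grid]
    return grid[np.argmin(expected)]

posterior_mean = (grid * pdf).sum() * dx
x_hat_quadratic = decide(np.square)  # quadratic loss -> posterior mean
x_hat_absolute = decide(np.abs)      # absolute loss  -> posterior median
```

For asymmetric posteriors like this one, the two decisions differ, which is exactly why the text insists the loss function must be chosen before the point estimate.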

notebooks/T4 - Time series filtering.ipynb

Lines changed: 8 additions & 8 deletions
@@ -237,15 +237,15 @@
 " The following relies only on the (\"hidden Markov model\") assumptions.\n",
 "\n",
 " - The analysis \"assimilates\" $y_k$ according to Bayes' rule to compute $p(x_k | y_{1:k})$,\n",
-" where $y_{1:k} = y_1, \\ldots, y_k$ is shorthand notation.\n",
-" $$\n",
-" p(x_k | y_{1:k}) \\propto p(y_k | x_k) \\, p(x_k | y_{1:k-1}) \\,.\n",
-" $$\n",
+" where $y_{1:k} = y_1, \\ldots, y_k$ is shorthand notation.\n",
+" $$\n",
+" p(x_k | y_{1:k}) \\propto p(y_k | x_k) \\, p(x_k | y_{1:k-1}) \\,.\n",
+" $$\n",
 " - The forecast \"propagates\" the uncertainty (i.e. density) according to the Chapman-Kolmogorov equation\n",
-" to produce $p(x_{k+1}| y_{1:k})$.\n",
-" $$\n",
-" p(x_{k+1} | y_{1:k}) = \\int p(x_{k+1} | x_k) \\, p(x_k | y_{1:k}) \\, d x_k \\,.\n",
-" $$\n",
+" to produce $p(x_{k+1}| y_{1:k})$.\n",
+" $$\n",
+" p(x_{k+1} | y_{1:k}) = \\int p(x_{k+1} | x_k) \\, p(x_k | y_{1:k}) \\, d x_k \\,.\n",
+" $$\n",
 "\n",
 " It is important to appreciate the benefits of the recursive form of these computations:\n",
 " It reflects the recursiveness (Markov property) of nature:\n",
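
As a concrete (hypothetical) illustration of the two recursions in this hunk, here is a brute-force point-mass filter for a scalar linear-Gaussian model; the parameters are made up and unrelated to the notebook's `exprmt`:

```python
import numpy as np

def gauss(z, var):
    """Gaussian pdf with mean 0 and variance var, evaluated at z."""
    return np.exp(-0.5 * z**2 / var) / np.sqrt(2 * np.pi * var)

rng = np.random.default_rng(4)
grid = np.linspace(-10, 10, 401)  # discretization of the state space
dx = grid[1] - grid[0]
M, Q, R = 0.9, 1.0, 1.0           # made-up dynamics and noise variances
trans = gauss(grid[:, None] - M * grid[None, :], Q)  # p(x_{k+1} | x_k)

prior = gauss(grid, 10.0)         # broad initial belief
x, errors = 0.0, []
for k in range(50):
    x = M * x + rng.normal(scale=np.sqrt(Q))   # hidden truth
    y = x + rng.normal(scale=np.sqrt(R))       # observation
    # Analysis: p(x_k | y_{1:k}) is proportional to p(y_k | x_k) p(x_k | y_{1:k-1})
    post = gauss(y - grid, R) * prior
    post /= post.sum() * dx
    errors.append((grid * post).sum() * dx - x)  # posterior-mean error
    # Forecast (Chapman-Kolmogorov): integrate the transition density
    # against the analysis density to get p(x_{k+1} | y_{1:k}).
    prior = trans @ post * dx
    prior /= prior.sum() * dx  # re-normalize (grid truncation)

rmse = np.sqrt(np.mean(np.square(errors)))
```

Note how each step reuses only the previous density, never the full history of observations, which is the recursive benefit the text highlights.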

notebooks/T7 - Geostats & Kriging [optional].ipynb

Lines changed: 9 additions & 9 deletions
@@ -715,8 +715,8 @@
 "source": [
 "#### Kriging\n",
 "\n",
-"Kriging finds the best (minimum) mean square error (MSE), $\\Expect (\\hat{x} - x)^2$, among all linear \"predictors\",\n",
-"$\\hat{x} = \\vect{w}\\tr \\vect{y}$, that are unbiased (BLUP).\n",
+"Kriging finds the best (minimum) mean square error (MSE), $\\Expect (\\widehat{x} - x)^2$, among all linear \"predictors\",\n",
+"$\\widehat{x} = \\vect{w}\\tr \\vect{y}$, that are unbiased (BLUP).\n",
 "\n",
 "<a name='Exc-–-\"simple\"-kriging-(SK)'></a>\n",
 "\n",
@@ -725,13 +725,13 @@
 "Suppose $X(s)$ has a mean that is constant in space, $\\mu$, and known.\n",
 "Since it is easy to subtract (and later re-include) $\\mu$\n",
 "from both $x$ and the data $\\vect{y}$, we simply assume $\\mu = 0$.\n",
-"Thus, $\\Expect \\vect{y} = \\vect{0}$ and $\\Expect \\hat{x} = 0$ for any weights $\\vect{w}$,\n",
-"and so $\\hat{x}$ is already unbiased.\n",
+"Thus, $\\Expect \\vect{y} = \\vect{0}$ and $\\Expect \\widehat{x} = 0$ for any weights $\\vect{w}$,\n",
+"and so $\\widehat{x}$ is already unbiased.\n",
 "Meanwhile,\n",
 "$$\n",
 "\\begin{align}\n",
 " \\text{MSE}\n",
-" % \\Expect \\big( \\hat{x} - x \\big)^2\n",
+" % \\Expect \\big( \\widehat{x} - x \\big)^2\n",
 " &= \\Expect \\big( \\vect{w}\\tr \\vect{y} - x \\big)^2 \\\\\n",
 " % &= \\Expect \\big( \\vect{w}\\tr \\vect{y} \\vect{y}\\tr \\vect{w} - 2 x \\vect{y}\\tr \\vect{w} + x^2 \\big) \\\\\n",
 " &= \\vect{w}\\tr \\Expect \\big( \\vect{y} \\vect{y}\\tr \\big) \\vect{w}\n",
@@ -767,7 +767,7 @@
 " which then requires that the weights sum to one, i.e. $\\vect{w}\\tr\\vect{1} = 1$.\n",
 " Then, if $\\mat{C}_{\\vect{y}\\vect{y}}$ is the covariance of $\\vect{r}$,\n",
 " the BLUE weights become $(\\vect{1}\\tr \\mat{C}_{\\vect{y}\\vect{y}}^{-1})/(\\vect{1}\\tr \\mat{C}_{\\vect{y}\\vect{y}}^{-1} \\vect{1})$,\n",
-" and so $\\hat{x}(s)$ is constant in $s$, i.e. flat.\n",
+" and so $\\widehat{x}(s)$ is constant in $s$, i.e. flat.\n",
 " Thus we see the need to include the randomness of $x$ along with that of $\\vect{y}$.\n",
 " This seems contrary to the classical BLUE framing,\n",
 " but we have already seen it done for the Kalman gain (by augmenting the observations by the prior mean).\n",
@@ -777,7 +777,7 @@
 "\n",
 " Now for the \"dual\" perspective of kriging, which is held by **radial basis function (RBF) interpolation**.\n",
 " Ultimately, kriging (simple, ordinary, and universal) provides an estimate of the form\n",
-" $\\hat{x} = \\vect{y}\\tr (\\mat{A}_\\vect{y}^{-1} \\vect{b}_{\\vect{y} x})$,\n",
+" $\\widehat{x} = \\vect{y}\\tr (\\mat{A}_\\vect{y}^{-1} \\vect{b}_{\\vect{y} x})$,\n",
 " where $\\vect{y}$ is the observations,\n",
 " and the subscripts indicate that $\\mat{A}_\\vect{y}$ depends on the locations of $\\vect{y}$,\n",
 " while $\\vect{b}_{\\vect{y} x}$ depends on the locations of $\\vect{y}$ and $x$ both.\n",
@@ -807,7 +807,7 @@
 " - - -\n",
 "</details>\n",
 "\n",
-"However, we previously derived this $\\hat{x}$ as the posterior/conditional mean of a Gaussian distribution.\n",
+"However, we previously derived this $\\widehat{x}$ as the posterior/conditional mean of a Gaussian distribution.\n",
 "This perspective is that of GP regression,\n",
 "but was also highlighted by [Krige (1951)](#References),\n",
 "and makes it evident that kriging also provides an uncertainty estimate\n",
@@ -986,7 +986,7 @@
 "Let us do away with the assumption that the mean of the field is known,\n",
 "all the while retaining the assumption that it is constant in space.\n",
 "The resulting method is called ordinary kriging.\n",
-"In this case, unbiasedness of $\\hat{x} = \\vect{w}\\tr \\vect{y}$ requires that the weights sum to one.\n",
+"In this case, unbiasedness of $\\widehat{x} = \\vect{w}\\tr \\vect{y}$ requires that the weights sum to one.\n",
 "This can be imposed on the MSE minimization using a Lagrange multiplier $\\lambda$,\n",
 "yielding the augmented system to solve:\n",
 "$$ \\begin{pmatrix} \\mat{C}_{\\vect{y} \\vect{y}} & \\vect{1} \\\\ \\vect{1}\\tr & 0 \\end{pmatrix} \\begin{pmatrix} \\vect{w} \\\\ \\lambda \\end{pmatrix}\n",
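
To make the weights in these hunks concrete, here is a small self-contained sketch (the 1D locations and the exponential covariance are assumed for illustration, not taken from the notebook) computing the simple-kriging weights and solving the ordinary-kriging augmented system:

```python
import numpy as np

s_obs = np.array([0.1, 0.3, 0.4, 0.8])  # made-up observation locations
s_tgt = 0.55                            # made-up prediction location

def cov(a, b, Range=0.3):
    """Exponential covariance model (an assumed choice), unit-variance field."""
    return np.exp(-np.abs(np.subtract.outer(a, b)) / Range)

C_yy = cov(s_obs, s_obs)  # C_{yy}: covariance among observations
c_xy = cov(s_tgt, s_obs)  # c_{xy}: covariance of target with observations

# Simple kriging (known mean, taken as 0): w_SK = C_yy^{-1} c_xy
w_sk = np.linalg.solve(C_yy, c_xy)
mse_sk = 1.0 - c_xy @ w_sk  # minimized MSE = Var(x) - c' C^{-1} c

# Ordinary kriging: enforce sum(w) = 1 via a Lagrange multiplier lambda,
# i.e. solve [[C_yy, 1], [1', 0]] [w; lambda] = [c_xy; 1].
n = len(s_obs)
A = np.block([[C_yy, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
w_ok = np.linalg.solve(A, np.append(c_xy, 1.0))[:n]
```

The simple-kriging MSE line restates the minimized value of the quadratic form derived in the second hunk, and `w_ok` sums to one by construction, as ordinary kriging requires.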

notebooks/T8 - Monte-Carlo & cov estimation.ipynb

Lines changed: 1 addition & 1 deletion
@@ -350,7 +350,7 @@
 "- (a). What's the difference between error and residual?\n",
 "- (b). What's the difference between error and bias?\n",
 "- (c). Show that mean-square-error (MSE) = Bias${}^2$ + Var. \n",
-" *Hint: start by writing down the definitions of error, bias, and variance (of $\\hat{\\theta}$).*"
+" *Hint: start by writing down the definitions of error, bias, and variance (of $\\widehat{\\theta}$).*"
 ]
 },
 {
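
Exercise (c) in this hunk can also be checked by Monte Carlo. The sketch below uses a deliberately biased estimator (a shrunk sample mean; the setup is invented for illustration) and confirms that the sample MSE equals Bias${}^2$ + Var:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 100_000  # true parameter, sample size, repetitions

samples = rng.normal(theta, 1.0, size=(reps, n))
theta_hat = 0.9 * samples.mean(axis=1)  # deliberately biased estimator

mse = np.mean((theta_hat - theta) ** 2)
bias = theta_hat.mean() - theta  # approx. 0.9*theta - theta = -0.2
var = theta_hat.var()            # approx. 0.81/n = 0.081
# The identity MSE = Bias^2 + Var holds exactly for these sample statistics,
# since mean((z - c)^2) = var(z) + (mean(z) - c)^2 for any sample z.
```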

notebooks/nb_mirrors/T3 - Bayesian inference.md

Lines changed: 3 additions & 3 deletions
@@ -389,10 +389,10 @@ It merely states how to update our quantitative belief (weighted possibilities)
 As such, barring any approximations (such as using `Bayes_rule_LG1` outside the linear-Gaussian case),
 the (full) posterior will be **optimal** from the perspective of any [proper scoring rule](https://en.wikipedia.org/wiki/Scoring_rule#Propriety_and_consistency).
 
-*But if you must* pick a single point value estimate $\hat{x}$
-then you should **decide** on it by optimising (with respect to $\hat{x}$)
+*But if you must* pick a single point value estimate $\widehat{x}$
+then you should **decide** on it by optimising (with respect to $\widehat{x}$)
 the expectation (with respect to $x$) of some utility/loss function,
-i.e. $\Expect\, \text{Loss}(x - \hat{x})$.
+i.e. $\Expect\, \text{Loss}(x - \widehat{x})$.
 For instance, if the posterior pdf happens to be symmetric
 (as in the linear-Gaussian context above),
 and your loss function is convex and symmetric,

notebooks/nb_mirrors/T3 - Bayesian inference.py

Lines changed: 3 additions & 3 deletions
@@ -379,10 +379,10 @@ def pdf_fitted(x, mu, sigma2, dist=dist):
 # As such, barring any approximations (such as using `Bayes_rule_LG1` outside the linear-Gaussian case),
 # the (full) posterior will be **optimal** from the perspective of any [proper scoring rule](https://en.wikipedia.org/wiki/Scoring_rule#Propriety_and_consistency).
 #
-# *But if you must* pick a single point value estimate $\hat{x}$
-# then you should **decide** on it by optimising (with respect to $\hat{x}$)
+# *But if you must* pick a single point value estimate $\widehat{x}$
+# then you should **decide** on it by optimising (with respect to $\widehat{x}$)
 # the expectation (with respect to $x$) of some utility/loss function,
-# i.e. $\Expect\, \text{Loss}(x - \hat{x})$.
+# i.e. $\Expect\, \text{Loss}(x - \widehat{x})$.
 # For instance, if the posterior pdf happens to be symmetric
 # (as in the linear-Gaussian context above),
 # and your loss function is convex and symmetric,

notebooks/nb_mirrors/T4 - Time series filtering.md

Lines changed: 8 additions & 8 deletions
@@ -197,15 +197,15 @@ the KF of eqns. (5)-(8) computes the *exact* Bayesian pdfs for $x_k$.
 The following relies only on the ("hidden Markov model") assumptions.
 
 - The analysis "assimilates" $y_k$ according to Bayes' rule to compute $p(x_k | y_{1:k})$,
-where $y_{1:k} = y_1, \ldots, y_k$ is shorthand notation.
-$$
-p(x_k | y_{1:k}) \propto p(y_k | x_k) \, p(x_k | y_{1:k-1}) \,.
-$$
+where $y_{1:k} = y_1, \ldots, y_k$ is shorthand notation.
+$$
+p(x_k | y_{1:k}) \propto p(y_k | x_k) \, p(x_k | y_{1:k-1}) \,.
+$$
 - The forecast "propagates" the uncertainty (i.e. density) according to the Chapman-Kolmogorov equation
-to produce $p(x_{k+1}| y_{1:k})$.
-$$
-p(x_{k+1} | y_{1:k}) = \int p(x_{k+1} | x_k) \, p(x_k | y_{1:k}) \, d x_k \,.
-$$
+to produce $p(x_{k+1}| y_{1:k})$.
+$$
+p(x_{k+1} | y_{1:k}) = \int p(x_{k+1} | x_k) \, p(x_k | y_{1:k}) \, d x_k \,.
+$$
 
 It is important to appreciate the benefits of the recursive form of these computations:
 It reflects the recursiveness (Markov property) of nature:

notebooks/nb_mirrors/T4 - Time series filtering.py

Lines changed: 8 additions & 8 deletions
@@ -194,15 +194,15 @@ def exprmt(seed=4, nTime=50, M=0.97, logR=1, logQ=1, analyses_only=False, logR_b
 # The following relies only on the ("hidden Markov model") assumptions.
 #
 # - The analysis "assimilates" $y_k$ according to Bayes' rule to compute $p(x_k | y_{1:k})$,
-# where $y_{1:k} = y_1, \ldots, y_k$ is shorthand notation.
-# $$
-# p(x_k | y_{1:k}) \propto p(y_k | x_k) \, p(x_k | y_{1:k-1}) \,.
-# $$
+# where $y_{1:k} = y_1, \ldots, y_k$ is shorthand notation.
+# $$
+# p(x_k | y_{1:k}) \propto p(y_k | x_k) \, p(x_k | y_{1:k-1}) \,.
+# $$
 # - The forecast "propagates" the uncertainty (i.e. density) according to the Chapman-Kolmogorov equation
-# to produce $p(x_{k+1}| y_{1:k})$.
-# $$
-# p(x_{k+1} | y_{1:k}) = \int p(x_{k+1} | x_k) \, p(x_k | y_{1:k}) \, d x_k \,.
-# $$
+# to produce $p(x_{k+1}| y_{1:k})$.
+# $$
+# p(x_{k+1} | y_{1:k}) = \int p(x_{k+1} | x_k) \, p(x_k | y_{1:k}) \, d x_k \,.
+# $$
 #
 # It is important to appreciate the benefits of the recursive form of these computations:
 # It reflects the recursiveness (Markov property) of nature:

notebooks/nb_mirrors/T7 - Geostats & Kriging [optional].md

Lines changed: 9 additions & 9 deletions
@@ -424,8 +424,8 @@ estims["Inv-dist."][obs_indices] = observations # Fix singularities
 
 #### Kriging
 
-Kriging finds the best (minimum) mean square error (MSE), $\Expect (\hat{x} - x)^2$, among all linear "predictors",
-$\hat{x} = \vect{w}\tr \vect{y}$, that are unbiased (BLUP).
+Kriging finds the best (minimum) mean square error (MSE), $\Expect (\widehat{x} - x)^2$, among all linear "predictors",
+$\widehat{x} = \vect{w}\tr \vect{y}$, that are unbiased (BLUP).
 
 <a name='Exc-–-"simple"-kriging-(SK)'></a>
 
@@ -434,13 +434,13 @@ $\hat{x} = \vect{w}\tr \vect{y}$, that are unbiased (BLUP).
 Suppose $X(s)$ has a mean that is constant in space, $\mu$, and known.
 Since it is easy to subtract (and later re-include) $\mu$
 from both $x$ and the data $\vect{y}$, we simply assume $\mu = 0$.
-Thus, $\Expect \vect{y} = \vect{0}$ and $\Expect \hat{x} = 0$ for any weights $\vect{w}$,
-and so $\hat{x}$ is already unbiased.
+Thus, $\Expect \vect{y} = \vect{0}$ and $\Expect \widehat{x} = 0$ for any weights $\vect{w}$,
+and so $\widehat{x}$ is already unbiased.
 Meanwhile,
 $$
 \begin{align}
 \text{MSE}
-% \Expect \big( \hat{x} - x \big)^2
+% \Expect \big( \widehat{x} - x \big)^2
 &= \Expect \big( \vect{w}\tr \vect{y} - x \big)^2 \\
 % &= \Expect \big( \vect{w}\tr \vect{y} \vect{y}\tr \vect{w} - 2 x \vect{y}\tr \vect{w} + x^2 \big) \\
 &= \vect{w}\tr \Expect \big( \vect{y} \vect{y}\tr \big) \vect{w}
@@ -476,7 +476,7 @@ yielding $\vect{w}_{\text{SK}} = \mat{C}_{\vect{y} \vect{y}}^{-1} \, \vect{c}_{x
 which then requires that the weights sum to one, i.e. $\vect{w}\tr\vect{1} = 1$.
 Then, if $\mat{C}_{\vect{y}\vect{y}}$ is the covariance of $\vect{r}$,
 the BLUE weights become $(\vect{1}\tr \mat{C}_{\vect{y}\vect{y}}^{-1})/(\vect{1}\tr \mat{C}_{\vect{y}\vect{y}}^{-1} \vect{1})$,
-and so $\hat{x}(s)$ is constant in $s$, i.e. flat.
+and so $\widehat{x}(s)$ is constant in $s$, i.e. flat.
 Thus we see the need to include the randomness of $x$ along with that of $\vect{y}$.
 This seems contrary to the classical BLUE framing,
 but we have already seen it done for the Kalman gain (by augmenting the observations by the prior mean).
@@ -486,7 +486,7 @@ yielding $\vect{w}_{\text{SK}} = \mat{C}_{\vect{y} \vect{y}}^{-1} \, \vect{c}_{x
 
 Now for the "dual" perspective of kriging, which is held by **radial basis function (RBF) interpolation**.
 Ultimately, kriging (simple, ordinary, and universal) provides an estimate of the form
-$\hat{x} = \vect{y}\tr (\mat{A}_\vect{y}^{-1} \vect{b}_{\vect{y} x})$,
+$\widehat{x} = \vect{y}\tr (\mat{A}_\vect{y}^{-1} \vect{b}_{\vect{y} x})$,
 where $\vect{y}$ is the observations,
 and the subscripts indicate that $\mat{A}_\vect{y}$ depends on the locations of $\vect{y}$,
 while $\vect{b}_{\vect{y} x}$ depends on the locations of $\vect{y}$ and $x$ both.
@@ -516,7 +516,7 @@ yielding $\vect{w}_{\text{SK}} = \mat{C}_{\vect{y} \vect{y}}^{-1} \, \vect{c}_{x
 - - -
 </details>
 
-However, we previously derived this $\hat{x}$ as the posterior/conditional mean of a Gaussian distribution.
+However, we previously derived this $\widehat{x}$ as the posterior/conditional mean of a Gaussian distribution.
 This perspective is that of GP regression,
 but was also highlighted by [Krige (1951)](#References),
 and makes it evident that kriging also provides an uncertainty estimate
@@ -617,7 +617,7 @@ def simple_kriging(vg, dists_xy, dists_yy, observations, mu):
 Let us do away with the assumption that the mean of the field is known,
 all the while retaining the assumption that it is constant in space.
 The resulting method is called ordinary kriging.
-In this case, unbiasedness of $\hat{x} = \vect{w}\tr \vect{y}$ requires that the weights sum to one.
+In this case, unbiasedness of $\widehat{x} = \vect{w}\tr \vect{y}$ requires that the weights sum to one.
 This can be imposed on the MSE minimization using a Lagrange multiplier $\lambda$,
 yielding the augmented system to solve:
 $$ \begin{pmatrix} \mat{C}_{\vect{y} \vect{y}} & \vect{1} \\ \vect{1}\tr & 0 \end{pmatrix} \begin{pmatrix} \vect{w} \\ \lambda \end{pmatrix}

notebooks/nb_mirrors/T7 - Geostats & Kriging [optional].py

Lines changed: 9 additions & 9 deletions
@@ -389,8 +389,8 @@ def sample_2D(power=1.5, transf01="expo", Range=0.3, nugget=1e-2):
 
 # #### Kriging
 #
-# Kriging finds the best (minimum) mean square error (MSE), $\Expect (\hat{x} - x)^2$, among all linear "predictors",
-# $\hat{x} = \vect{w}\tr \vect{y}$, that are unbiased (BLUP).
+# Kriging finds the best (minimum) mean square error (MSE), $\Expect (\widehat{x} - x)^2$, among all linear "predictors",
+# $\widehat{x} = \vect{w}\tr \vect{y}$, that are unbiased (BLUP).
 #
 # <a name='Exc-–-"simple"-kriging-(SK)'></a>
 #
@@ -399,13 +399,13 @@ def sample_2D(power=1.5, transf01="expo", Range=0.3, nugget=1e-2):
 # Suppose $X(s)$ has a mean that is constant in space, $\mu$, and known.
 # Since it is easy to subtract (and later re-include) $\mu$
 # from both $x$ and the data $\vect{y}$, we simply assume $\mu = 0$.
-# Thus, $\Expect \vect{y} = \vect{0}$ and $\Expect \hat{x} = 0$ for any weights $\vect{w}$,
-# and so $\hat{x}$ is already unbiased.
+# Thus, $\Expect \vect{y} = \vect{0}$ and $\Expect \widehat{x} = 0$ for any weights $\vect{w}$,
+# and so $\widehat{x}$ is already unbiased.
 # Meanwhile,
 # $$
 # \begin{align}
 # \text{MSE}
-# % \Expect \big( \hat{x} - x \big)^2
+# % \Expect \big( \widehat{x} - x \big)^2
 # &= \Expect \big( \vect{w}\tr \vect{y} - x \big)^2 \\
 # % &= \Expect \big( \vect{w}\tr \vect{y} \vect{y}\tr \vect{w} - 2 x \vect{y}\tr \vect{w} + x^2 \big) \\
 # &= \vect{w}\tr \Expect \big( \vect{y} \vect{y}\tr \big) \vect{w}
@@ -441,7 +441,7 @@ def sample_2D(power=1.5, transf01="expo", Range=0.3, nugget=1e-2):
 # which then requires that the weights sum to one, i.e. $\vect{w}\tr\vect{1} = 1$.
 # Then, if $\mat{C}_{\vect{y}\vect{y}}$ is the covariance of $\vect{r}$,
 # the BLUE weights become $(\vect{1}\tr \mat{C}_{\vect{y}\vect{y}}^{-1})/(\vect{1}\tr \mat{C}_{\vect{y}\vect{y}}^{-1} \vect{1})$,
-# and so $\hat{x}(s)$ is constant in $s$, i.e. flat.
+# and so $\widehat{x}(s)$ is constant in $s$, i.e. flat.
 # Thus we see the need to include the randomness of $x$ along with that of $\vect{y}$.
 # This seems contrary to the classical BLUE framing,
 # but we have already seen it done for the Kalman gain (by augmenting the observations by the prior mean).
@@ -451,7 +451,7 @@ def sample_2D(power=1.5, transf01="expo", Range=0.3, nugget=1e-2):
 #
 # Now for the "dual" perspective of kriging, which is held by **radial basis function (RBF) interpolation**.
 # Ultimately, kriging (simple, ordinary, and universal) provides an estimate of the form
-# $\hat{x} = \vect{y}\tr (\mat{A}_\vect{y}^{-1} \vect{b}_{\vect{y} x})$,
+# $\widehat{x} = \vect{y}\tr (\mat{A}_\vect{y}^{-1} \vect{b}_{\vect{y} x})$,
 # where $\vect{y}$ is the observations,
 # and the subscripts indicate that $\mat{A}_\vect{y}$ depends on the locations of $\vect{y}$,
 # while $\vect{b}_{\vect{y} x}$ depends on the locations of $\vect{y}$ and $x$ both.
@@ -481,7 +481,7 @@ def sample_2D(power=1.5, transf01="expo", Range=0.3, nugget=1e-2):
 # - - -
 # </details>
 #
-# However, we previously derived this $\hat{x}$ as the posterior/conditional mean of a Gaussian distribution.
+# However, we previously derived this $\widehat{x}$ as the posterior/conditional mean of a Gaussian distribution.
 # This perspective is that of GP regression,
 # but was also highlighted by [Krige (1951)](#References),
 # and makes it evident that kriging also provides an uncertainty estimate
@@ -577,7 +577,7 @@ def simple_kriging(vg, dists_xy, dists_yy, observations, mu):
 # Let us do away with the assumption that the mean of the field is known,
 # all the while retaining the assumption that it is constant in space.
 # The resulting method is called ordinary kriging.
-# In this case, unbiasedness of $\hat{x} = \vect{w}\tr \vect{y}$ requires that the weights sum to one.
+# In this case, unbiasedness of $\widehat{x} = \vect{w}\tr \vect{y}$ requires that the weights sum to one.
 # This can be imposed on the MSE minimization using a Lagrange multiplier $\lambda$,
 # yielding the augmented system to solve:
 # $$ \begin{pmatrix} \mat{C}_{\vect{y} \vect{y}} & \vect{1} \\ \vect{1}\tr & 0 \end{pmatrix} \begin{pmatrix} \vect{w} \\ \lambda \end{pmatrix}
