|
715 | 715 | "source": [ |
716 | 716 | "#### Kriging\n", |
717 | 717 | "\n", |
718 | | - "Kriging finds the best (minimum) mean square error (MSE), $\\Expect (\\hat{x} - x)^2$, among all linear \"predictors\",\n", |
719 | | - "$\\hat{x} = \\vect{w}\\tr \\vect{y}$, that are unbiased (BLUP).\n", |
| 718 | + "Kriging finds the best (minimum) mean square error (MSE), $\\Expect (\\widehat{x} - x)^2$, among all linear \"predictors\",\n", |
| 719 | + "$\\widehat{x} = \\vect{w}\\tr \\vect{y}$, that are unbiased (BLUP).\n", |
720 | 720 | "\n", |
721 | 721 | "<a name='Exc-–-\"simple\"-kriging-(SK)'></a>\n", |
722 | 722 | "\n", |
|
725 | 725 | "Suppose $X(s)$ has a known mean, $\\mu$, that is constant in space.\n", |
726 | 726 | "Since it is easy to subtract (and later re-include) $\\mu$\n", |
727 | 727 | "from both $x$ and the data $\\vect{y}$, we simply assume $\\mu = 0$.\n", |
728 | | - "Thus, $\\Expect \\vect{y} = \\vect{0}$ and $\\Expect \\hat{x} = 0$ for any weights $\\vect{w}$,\n", |
729 | | - "and so $\\hat{x}$ is already unbiased.\n", |
| 728 | + "Thus, $\\Expect \\vect{y} = \\vect{0}$ and $\\Expect \\widehat{x} = 0$ for any weights $\\vect{w}$,\n", |
| 729 | + "and so $\\widehat{x}$ is already unbiased.\n", |
730 | 730 | "Meanwhile,\n", |
731 | 731 | "$$\n", |
732 | 732 | "\\begin{align}\n", |
733 | 733 | " \\text{MSE}\n", |
734 | | - " % \\Expect \\big( \\hat{x} - x \\big)^2\n", |
| 734 | + " % \\Expect \\big( \\widehat{x} - x \\big)^2\n", |
735 | 735 | " &= \\Expect \\big( \\vect{w}\\tr \\vect{y} - x \\big)^2 \\\\\n", |
736 | 736 | " % &= \\Expect \\big( \\vect{w}\\tr \\vect{y} \\vect{y}\\tr \\vect{w} - 2 x \\vect{y}\\tr \\vect{w} + x^2 \\big) \\\\\n", |
737 | 737 | " &= \\vect{w}\\tr \\Expect \\big( \\vect{y} \\vect{y}\\tr \\big) \\vect{w}\n", |
|
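Minimizing this quadratic in $\vect{w}$ yields the simple-kriging weights $\vect{w} = \mat{C}_{\vect{y}\vect{y}}^{-1} \vect{b}_{\vect{y}x}$. A minimal numerical sketch of the resulting predictor; the exponential covariance model, the locations `s_obs`/`s_new`, and the data `y` are all invented here for illustration:

```python
import numpy as np

def cov(s1, s2, r=1.0):
    """Exponential covariance between 1-D location arrays s1 and s2 (illustrative model)."""
    return np.exp(-np.abs(s1[:, None] - s2[None, :]) / r)

# Observation locations and (zero-mean) data -- made-up values
s_obs = np.array([0.0, 1.0, 2.5])
y = np.array([0.3, -0.1, 0.8])

# Target (prediction) location(s)
s_new = np.array([1.5])

C_yy = cov(s_obs, s_obs)   # C_{yy}: covariance among observations
b_yx = cov(s_obs, s_new)   # b_{yx}: covariance between observations and target

# Simple-kriging weights and prediction: w = C_yy^{-1} b_yx,  x_hat = w' y
w = np.linalg.solve(C_yy, b_yx)
x_hat = w.T @ y
```

Note that at an observed location the weights reduce to a unit vector, so the predictor interpolates the data exactly.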
767 | 767 | " which then requires that the weights sum to one, i.e. $\\vect{w}\\tr\\vect{1} = 1$.\n", |
768 | 768 | " Then, if $\\mat{C}_{\\vect{y}\\vect{y}}$ is the covariance of $\\vect{r}$,\n", |
769 | 769 | " the BLUE weights become $(\\vect{1}\\tr \\mat{C}_{\\vect{y}\\vect{y}}^{-1})/(\\vect{1}\\tr \\mat{C}_{\\vect{y}\\vect{y}}^{-1} \\vect{1})$,\n", |
770 | | - " and so $\\hat{x}(s)$ is constant in $s$, i.e. flat.\n", |
| 770 | + " and so $\\widehat{x}(s)$ is constant in $s$, i.e. flat.\n", |
771 | 771 | " Thus we see the need to include the randomness of $x$ along with that of $\\vect{y}$.\n", |
772 | 772 | " This seems contrary to the classical BLUE framing,\n", |
773 | 773 | " but we have already seen it done for the Kalman gain (by augmenting the observations by the prior mean).\n", |
|
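The flatness claim above is easy to check numerically: the BLUE weights $(\vect{1}\tr \mat{C}_{\vect{y}\vect{y}}^{-1})/(\vect{1}\tr \mat{C}_{\vect{y}\vect{y}}^{-1} \vect{1})$ contain no reference to the target location $s$, so the estimate is the same number everywhere. The covariance matrix and data below are invented for illustration:

```python
import numpy as np

# Covariance of the residual r -- any SPD matrix serves for this illustration
C_yy = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])
y = np.array([2.0, 2.4, 1.9])

ones = np.ones(3)
Ci1 = np.linalg.solve(C_yy, ones)   # C_yy^{-1} 1
w = Ci1 / (ones @ Ci1)              # BLUE weights; they sum to one

# No dependence on s anywhere above, so x_hat(s) = w' y is flat in s.
x_hat = w @ y
```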
777 | 777 | "\n", |
778 | 778 | " Now for the \"dual\" perspective of kriging, which is held by **radial basis function (RBF) interpolation**.\n", |
779 | 779 | " Ultimately, kriging (simple, ordinary, and universal) provides an estimate of the form\n", |
780 | | - " $\\hat{x} = \\vect{y}\\tr (\\mat{A}_\\vect{y}^{-1} \\vect{b}_{\\vect{y} x})$,\n", |
| 780 | + " $\\widehat{x} = \\vect{y}\\tr (\\mat{A}_\\vect{y}^{-1} \\vect{b}_{\\vect{y} x})$,\n", |
781 | 781 | " where $\\vect{y}$ is the vector of observations,\n", |
782 | 782 | " and the subscripts indicate that $\\mat{A}_\\vect{y}$ depends on the locations of $\\vect{y}$,\n", |
783 | 783 | " while $\\vect{b}_{\\vect{y} x}$ depends on the locations of $\\vect{y}$ and $x$ both.\n", |
|
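A small sketch of this dual reading: solving once for coefficients $\vect{c} = \mat{A}_\vect{y}^{-1}\vect{y}$ turns every subsequent prediction into an RBF expansion $\sum_i c_i\, \phi(|s - s_i|)$, numerically identical to the kriging form $\vect{y}\tr (\mat{A}_\vect{y}^{-1} \vect{b}_{\vect{y}x})$. The Gaussian RBF, its scale, and the locations/data below are arbitrary choices for illustration:

```python
import numpy as np

phi = lambda d: np.exp(-(d / 0.8) ** 2)   # Gaussian RBF (illustrative kernel choice)

s_obs = np.array([0.0, 1.0, 2.0, 3.5])
y = np.array([0.0, 0.9, 0.1, -0.4])

A = phi(np.abs(s_obs[:, None] - s_obs[None, :]))   # A_y: kernel matrix at data sites
c = np.linalg.solve(A, y)                          # dual coefficients, computed once

def predict(s):
    """RBF expansion: sum_i c_i phi(|s - s_i|)."""
    b = phi(np.abs(s_obs - s))    # b_{yx} for target location s
    return c @ b

def predict_primal(s):
    """Kriging form y' A^{-1} b, for comparison (one solve per target)."""
    b = phi(np.abs(s_obs - s))
    return y @ np.linalg.solve(A, b)
```

The dual form amortizes the linear solve across targets, which is the practical appeal of the RBF viewpoint.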
807 | 807 | " - - -\n", |
808 | 808 | "</details>\n", |
809 | 809 | "\n", |
810 | | - "However, we previously derived this $\\hat{x}$ as the posterior/conditional mean of Gaussian distribution.\n", |
| 810 | + "However, we previously derived this $\\widehat{x}$ as the posterior/conditional mean of a Gaussian distribution.\n", |
811 | 811 | "This perspective is that of GP regression,\n", |
812 | 812 | "but was also highlighted by [Krige (1951)](#References),\n", |
813 | 813 | "and makes it evident that kriging also provides an uncertainty estimate\n", |
|
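As a sketch of that uncertainty estimate: under the GP reading, the (simple) kriging variance at a target site is $\sigma^2(s) = C(s,s) - \vect{b}_{\vect{y}x}\tr \mat{C}_{\vect{y}\vect{y}}^{-1} \vect{b}_{\vect{y}x}$, which shrinks to zero at observed sites and grows away from them. The exponential covariance model and locations below are invented for illustration:

```python
import numpy as np

def cov(s1, s2, r=1.0):
    """Exponential covariance, unit variance (illustrative model)."""
    return np.exp(-np.abs(np.subtract.outer(s1, s2)) / r)

s_obs = np.array([0.0, 1.0, 2.5])
C_yy = cov(s_obs, s_obs)

def kriging_var(s):
    """Simple-kriging variance: C(s,s) - b' C_yy^{-1} b, with C(s,s) = 1 here."""
    b = cov(s_obs, np.atleast_1d(s))[:, 0]
    return 1.0 - b @ np.linalg.solve(C_yy, b)
```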
986 | 986 | "Let us do away with the assumption that the mean of the field is known,\n", |
987 | 987 | "all the while retaining the assumption that it is constant in space.\n", |
988 | 988 | "The resulting method is called ordinary kriging.\n", |
989 | | - "In this case, unbiasedness of $\\hat{x} = \\vect{w}\\tr \\vect{y}$ requires that the weights sum to one.\n", |
| 989 | + "In this case, unbiasedness of $\\widehat{x} = \\vect{w}\\tr \\vect{y}$ requires that the weights sum to one.\n", |
990 | 990 | "This can be imposed on the MSE minimization using a Lagrange multiplier $\\lambda$,\n", |
991 | 991 | "yielding the augmented system to solve:\n", |
992 | 992 | "$$ \\begin{pmatrix} \\mat{C}_{\\vect{y} \\vect{y}} & \\vect{1} \\\\ \\vect{1}\\tr & 0 \\end{pmatrix} \\begin{pmatrix} \\vect{w} \\\\ \\lambda \\end{pmatrix}\n", |
|
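The augmented (KKT) system above can be solved directly; with the standard right-hand side $(\vect{b}_{\vect{y}x}, 1)$, the recovered weights sum to one by construction. The covariance model, locations, and data below are invented for illustration:

```python
import numpy as np

def cov(s1, s2, r=1.0):
    """Exponential covariance (illustrative model)."""
    return np.exp(-np.abs(np.subtract.outer(s1, s2)) / r)

s_obs = np.array([0.0, 1.0, 2.5])
y = np.array([3.2, 2.8, 3.5])   # mean unknown, assumed constant in space
s_new = 1.5

C_yy = cov(s_obs, s_obs)
b = cov(s_obs, np.atleast_1d(s_new))[:, 0]
ones = np.ones(len(s_obs))

# Augmented system: [[C_yy, 1], [1', 0]] (w; lambda) = (b; 1)
K = np.block([[C_yy, ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([b, [1.0]])
sol = np.linalg.solve(K, rhs)
w, lam = sol[:-1], sol[-1]

x_hat = w @ y   # ordinary-kriging prediction at s_new
```

Because the weights sum to one, the unknown constant mean cancels out of the bias, which is exactly what the Lagrange multiplier enforces.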