Commit 968e78c

Merge pull request kubernetes#12487 from bprashanth/haproxy_docs
Multi-cluster services documentation, take II
2 parents: 1e39eb0 + e27806d

File tree

6 files changed: +378 −8 lines


contrib/service-loadbalancer/Makefile

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@ all: push
 
 # 0.0 shouldn't clobber any released builds
 TAG = 0.0
-PREFIX = bprashanth/servicelb
+PREFIX = gcr.io/google_containers/servicelb
 
 server: service_loadbalancer.go
 	CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' -o service_loadbalancer ./service_loadbalancer.go
@@ -11,7 +11,7 @@ container: server
 	docker build -t $(PREFIX):$(TAG) .
 
 push: container
-	docker push $(PREFIX):$(TAG)
+	gcloud docker push $(PREFIX):$(TAG)
 
 clean:
 	rm -f service_loadbalancer

contrib/service-loadbalancer/README.md

Lines changed: 90 additions & 3 deletions
@@ -32,11 +32,9 @@ __L7 load balancing of Http services__: The load balancer controller automatical
 
 __L4 loadbalancing of Tcp services__: Since one needs to specify ports at pod creation time (kubernetes doesn't currently support port ranges), a single loadbalancer is tied to a set of preconfigured node ports, and hence a set of TCP services it can expose. The load balancer controller will dynamically add rules for each configured TCP service as it pops into existence. However, each "new" (unspecified in the tcpServices section of the loadbalancer.json) service will need you to open up a new container-host port pair for traffic. You can achieve this by creating a new loadbalancer pod with the `targetPort` set to the name of your service, and that service specified in the tcpServices map of the new loadbalancer.
 
-
 ### Cross-cluster loadbalancing
 
-Still trying this out.
-
+On cloud providers that offer a private ip range for all instances on a network, you can set up multiple clusters in different availability zones on the same network, and loadbalance services across these zones. On GCE, for example, every instance is a member of a single network. A network performs the same function that a router does: it defines the network range and gateway IP address, handles communication between instances, and serves as a gateway between instances and other networks. On such networks the endpoints of a service in one cluster are visible in all other clusters on the same network, so you can set up an edge loadbalancer that watches the kubernetes master of another cluster for services. Such a deployment lets you fall back to a different AZ during times of duress or planned downtime (e.g. a database update).
 
 ### Examples
 
@@ -188,6 +186,95 @@ $ mysql -u root -ppassword --host 104.197.63.17 --port 3306 -e 'show databases;'
 +--------------------+
 ```
 
+
+#### Cross-cluster loadbalancing
+
+First set up your 2 clusters and a kubeconfig secret as described in the [sharing clusters example](../../examples/sharing-clusters/README.md). We will create a loadbalancer in our first cluster (US) and have it publish the services from the second cluster (EU). This is the entire modified loadbalancer manifest:
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: service-loadbalancer
+  labels:
+    app: service-loadbalancer
+    version: v1
+spec:
+  replicas: 1
+  selector:
+    app: service-loadbalancer
+    version: v1
+  template:
+    metadata:
+      labels:
+        app: service-loadbalancer
+        version: v1
+    spec:
+      volumes:
+      # token from the eu cluster, must already exist
+      # and match the name of the volume used in the container
+      - name: eu-config
+        secret:
+          secretName: kubeconfig
+      nodeSelector:
+        role: loadbalancer
+      containers:
+      - image: gcr.io/google_containers/servicelb:0.1
+        imagePullPolicy: Always
+        livenessProbe:
+          httpGet:
+            path: /healthz
+            port: 8081
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
+        name: haproxy
+        ports:
+        # All http services
+        - containerPort: 80
+          hostPort: 80
+          protocol: TCP
+        # nginx https
+        - containerPort: 443
+          hostPort: 8080
+          protocol: TCP
+        # mysql
+        - containerPort: 3306
+          hostPort: 3306
+          protocol: TCP
+        # haproxy stats
+        - containerPort: 1936
+          hostPort: 1936
+          protocol: TCP
+        resources: {}
+        args:
+        - --tcp-services=mysql:3306,nginxsvc:443
+        - --use-kubernetes-cluster-service=false
+        # use-kubernetes-cluster-service=false in conjunction with the
+        # kube/config will force the service-loadbalancer to watch for
+        # services from the eu cluster.
+        volumeMounts:
+        - mountPath: /.kube
+          name: eu-config
+        env:
+        - name: KUBECONFIG
+          value: /.kube/config
+```
+
+Note that it is essentially the same as the rc.yaml checked into the service-loadbalancer directory, except that it consumes the kubeconfig secret and sets an extra KUBECONFIG environment variable.
+
+```shell
+$ kubectl config use-context <us-clustername>
+$ kubectl create -f rc.yaml
+$ kubectl get pods -o wide
+service-loadbalancer-5o2p4   1/1   Running   0   13m   kubernetes-minion-5jtd
+$ kubectl get node kubernetes-minion-5jtd -o json | grep -i externalip -A 2
+            "type": "ExternalIP",
+            "address": "104.197.81.116"
+$ curl http://104.197.81.116/nginxsvc
+Europe
+```
+
 ### Troubleshooting:
 - If you can curl or netcat the endpoint from the pod (with kubectl exec) and not from the node, you have not specified hostport and containerport.
 - If you can hit the ips from the node but not from your machine outside the cluster, you have not opened firewall rules for the right network.
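The cross-cluster setup above hinges on one mechanism: with `--use-kubernetes-cluster-service=false` and `KUBECONFIG` pointing at the mounted secret, the loadbalancer's client resolves the API server from that kubeconfig instead of the local cluster. As a rough illustration (not the actual Go client code), this Python sketch shows that resolution, assuming the kubeconfig structure shown in the sharing-clusters example; the cluster names and IPs are illustrative:

```python
def api_server_for_context(kubeconfig, context_name):
    """Resolve the API server URL a client would talk to for a context."""
    ctx = next(c for c in kubeconfig["contexts"] if c["name"] == context_name)
    cluster = next(c for c in kubeconfig["clusters"]
                   if c["name"] == ctx["context"]["cluster"])
    return cluster["cluster"]["server"]

# Structure mirrors the kubectl config excerpts in the sharing-clusters
# example (already parsed from yaml; values illustrative).
cfg = {
    "clusters": [
        {"name": "clustername_eu", "cluster": {"server": "https://146.148.25.221"}},
        {"name": "clustername_us", "cluster": {"server": "https://104.197.84.16"}},
    ],
    "contexts": [
        {"name": "clustername_eu", "context": {"cluster": "clustername_eu"}},
        {"name": "clustername_us", "context": {"cluster": "clustername_us"}},
    ],
}

print(api_server_for_context(cfg, "clustername_eu"))  # -> https://146.148.25.221
```

This is why the same servicelb image can front either cluster: only the mounted kubeconfig changes.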

contrib/service-loadbalancer/rc.yaml

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ spec:
       nodeSelector:
         role: loadbalancer
       containers:
-      - image: bprashanth/servicelb:0.0
+      - image: gcr.io/google_containers/servicelb:0.1
         imagePullPolicy: Always
         livenessProbe:
           httpGet:

contrib/service-loadbalancer/template.cfg

Lines changed: 2 additions & 2 deletions
@@ -48,7 +48,7 @@ backend {{$svc.Name}}
 	balance roundrobin
 	# TODO: Make the path used to access a service customizable.
 	reqrep ^([^\ :]*)\ /{{$svc.Name}}[/]?(.*) \1\ /\2
-	{{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}} check
+	{{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}}
 	{{end}}
 {{end}}
 
@@ -64,6 +64,6 @@ frontend {{$svc.Name}}
 backend {{$svc.Name}}
 	balance roundrobin
 	mode tcp
-	{{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}} check
+	{{range $j, $ep := $svc.Ep}}server {{$svcName}}_{{$j}} {{$ep}}
 	{{end}}
 {{end}}
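The template change above drops haproxy's `check` option from the generated `server` lines, so haproxy no longer health-checks each endpoint itself. For intuition, the expansion the Go template performs can be sketched in Python (service name and endpoints are illustrative):

```python
def render_backend(svc_name, endpoints, health_check=False):
    """Mimic the haproxy template: emit one `server` line per endpoint.

    With health_check=False this matches the updated template, which no
    longer appends haproxy's `check` option to each server line.
    """
    suffix = " check" if health_check else ""
    lines = [f"backend {svc_name}", "\tbalance roundrobin"]
    for j, ep in enumerate(endpoints):
        lines.append(f"\tserver {svc_name}_{j} {ep}{suffix}")
    return "\n".join(lines)

print(render_backend("nginxsvc", ["10.244.0.4:80", "10.244.1.7:80"]))
```

Since the controller already rewrites the config whenever kubernetes endpoints change, the endpoint list haproxy sees is kept fresh without per-server checks.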
examples/sharing-clusters/README.md (new file)

Lines changed: 220 additions & 0 deletions
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/examples/sharing-clusters/README.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>

--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Sharing Clusters

This example demonstrates how to access one kubernetes cluster from another. It only works if both clusters are running on the same network, on a cloud provider that provides a private ip range per network (e.g. GCE, GKE, AWS).

## Setup

Create a cluster in US (you don't need to do this if you already have a running kubernetes cluster):

```shell
$ cluster/kube-up.sh
```

Before creating our second cluster, let's have a look at the kubectl config:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: <clustername_us>
...
```

Now spin up the second cluster in Europe:

```shell
$ KUBE_GCE_ZONE=europe-west1-b KUBE_GCE_INSTANCE_PREFIX=eu ./cluster/kube-up.sh
```

Your kubectl config should contain both clusters:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://146.148.25.221
  name: <clustername_eu>
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.84.16
  name: <clustername_us>
...
current-context: kubernetesdev_eu
...
```

And kubectl get nodes should agree:

```
$ kubectl get nodes
NAME                     LABELS                                          STATUS
eu-minion-0n61           kubernetes.io/hostname=eu-minion-0n61           Ready
eu-minion-79ua           kubernetes.io/hostname=eu-minion-79ua           Ready
eu-minion-7wz7           kubernetes.io/hostname=eu-minion-7wz7           Ready
eu-minion-loh2           kubernetes.io/hostname=eu-minion-loh2          Ready

$ kubectl config use-context <clustername_us>
$ kubectl get nodes
NAME                     LABELS                                                  STATUS
kubernetes-minion-5jtd   kubernetes.io/hostname=kubernetes-minion-5jtd   Ready
kubernetes-minion-lqfc   kubernetes.io/hostname=kubernetes-minion-lqfc   Ready
kubernetes-minion-sjra   kubernetes.io/hostname=kubernetes-minion-sjra   Ready
kubernetes-minion-wul8   kubernetes.io/hostname=kubernetes-minion-wul8   Ready
```

## Testing reachability

For this test to work we'll need to create a service in Europe:

```
$ kubectl config use-context <clustername_eu>
$ kubectl create -f /tmp/secret.json
$ kubectl create -f examples/https-nginx/nginx-app.yaml
$ kubectl exec -it my-nginx-luiln -- sh -c 'echo "Europe nginx" >> /usr/share/nginx/html/index.html'
$ kubectl get ep
NAME         ENDPOINTS
kubernetes   10.240.249.92:443
nginxsvc     10.244.0.4:80,10.244.0.4:443
```

Just to test reachability, we'll try hitting the Europe nginx from our initial US central cluster. Create a basic curl pod in the US cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curlpod
spec:
  containers:
  - image: radial/busyboxplus:curl
    command:
    - sleep
    - "360000000"
    imagePullPolicy: IfNotPresent
    name: curlcontainer
  restartPolicy: Always
```

And test that you can actually reach the test nginx service across continents:

```
$ kubectl config use-context <clustername_us>
$ kubectl exec -it curlpod -- /bin/sh
[ root@curlpod:/ ]$ curl http://10.244.0.4:80
Europe nginx
```

## Granting access to the remote cluster

We will grant the US cluster access to the Europe cluster. Basically, we're going to set up a secret that allows kubectl to function in a pod running in the US cluster, just like it did on our local machine in the previous step. First create a secret with the contents of the current .kube/config:

```shell
$ kubectl config use-context <clustername_eu>
$ go run ./make_secret.go --kubeconfig=$HOME/.kube/config > /tmp/secret.json
$ kubectl config use-context <clustername_us>
$ kubectl create -f /tmp/secret.json
```
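make_secret.go itself is not shown in this diff. As a rough Python sketch of the transformation such a helper performs, the idea is to wrap the raw kubeconfig bytes in a v1 Secret manifest; the `config` data key follows from the mount above, where the secret mounted at `/.kube` must yield `/.kube/config`. The function name is hypothetical:

```python
import base64
import json

def make_kubeconfig_secret(kubeconfig_bytes, name="kubeconfig"):
    """Wrap raw kubeconfig contents in a v1 Secret manifest.

    The data key is "config" so that mounting the secret at /.kube
    produces /.kube/config, matching the KUBECONFIG env var used by
    the consuming pods.
    """
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "data": {
            # Secret data values must be base64-encoded.
            "config": base64.b64encode(kubeconfig_bytes).decode("ascii"),
        },
    }

if __name__ == "__main__":
    secret = make_kubeconfig_secret(b"apiVersion: v1\nclusters: []\n")
    print(json.dumps(secret, indent=2))
```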

Create a kubectl pod that uses the secret, in the US cluster:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubectl-tester"
  },
  "spec": {
    "volumes": [
      {
        "name": "secret-volume",
        "secret": {
          "secretName": "kubeconfig"
        }
      }
    ],
    "containers": [
      {
        "name": "kubectl",
        "image": "bprashanth/kubectl:0.0",
        "imagePullPolicy": "Always",
        "env": [
          {
            "name": "KUBECONFIG",
            "value": "/.kube/config"
          }
        ],
        "args": [
          "proxy", "-p", "8001"
        ],
        "volumeMounts": [
          {
            "name": "secret-volume",
            "mountPath": "/.kube"
          }
        ]
      }
    ]
  }
}
```

And check that you can access the remote cluster:

```shell
$ kubectl config use-context <clustername_us>
$ kubectl exec -it kubectl-tester -- bash

kubectl-tester $ kubectl get nodes
NAME             LABELS                                  STATUS
eu-minion-0n61   kubernetes.io/hostname=eu-minion-0n61   Ready
eu-minion-79ua   kubernetes.io/hostname=eu-minion-79ua   Ready
eu-minion-7wz7   kubernetes.io/hostname=eu-minion-7wz7   Ready
eu-minion-loh2   kubernetes.io/hostname=eu-minion-loh2   Ready
```

For a more advanced example of sharing clusters, see the [service-loadbalancer](../../contrib/service-loadbalancer/README.md).

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/sharing-clusters/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
