Skupper enables inter-cluster TCP communication.

This is a simple demonstration of TCP communication tunneled through a Skupper network from a private to a public namespace and back again. We will set up a Skupper network between the two namespaces, start a TCP echo-server on the public namespace, then communicate to it from the private namespace, and receive its replies. We will assume that Kubernetes is running on your local machine, and we will create and access both namespaces from within a single shell.

Prerequisites

You will need the skupper command-line tool installed and on your executable path.

Step 1: Set up the demo.

On your machine, make a directory for this tutorial and clone the tutorial repo into it:

$ mkdir ${HOME}/tcp-echo-demo
$ git clone https://github.com/skupperproject/skupper-example-tcp-echo ${HOME}/tcp-echo-demo

Step 2: Start your cluster and define two namespaces.

$ alias kc='kubectl'
$ oc cluster up
$ oc new-project public
Now using project "public" on server "https://127.0.0.1:8443".
...
$ oc new-project private
Now using project "private" on server "https://127.0.0.1:8443".
...

Step 3: Start Skupper in the public namespace.

$ kc config set-context --current --namespace=public
Context "private/127-0-0-1:8443/developer" modified.
$ skupper status
skupper not enabled for public
$ skupper init --cluster-local --id public
Skupper is now installed in 'public'.  Use 'skupper status' to get more information.
$ skupper status
Skupper enabled for "public". It is not connected to any other sites.

Step 4: Make a connection token, and start the service.

$ skupper connection-token ${HOME}/secret.yaml
token will only be valid for local cluster
$ oc apply -f ${HOME}/tcp-echo-demo/public-deployment-and-service.yaml
deployment.extensions/tcp-go-echo created
service/tcp-go-echo created

Step 5: Start Skupper in the private namespace.

$ kc config set-context --current --namespace=private
Context "private/127-0-0-1:8443/developer" modified.
$ skupper status
skupper not enabled for private
$ skupper init --cluster-local --id private
Skupper is now installed in 'private'.  Use 'skupper status' to get more information.
$ skupper status
Skupper enabled for "private". It is not connected to any other sites.

If you ever issue the "skupper status" command and see this response...

Skupper enabled for "private". Status pending...

... just wait a few seconds and re-issue the command.

Step 6: Make the connection.

After you issue the connect command, a new service called tcp-go-echo will appear in this namespace. It may take as long as two minutes to appear; you can watch for it with 'kc get svc -w'.

$ skupper connect ${HOME}/secret.yaml
$ kc get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
skupper-internal    ClusterIP   172.30.46.68     <none>        55671/TCP,45671/TCP   2m
skupper-messaging   ClusterIP   172.30.180.253   <none>        5671/TCP              2m
tcp-go-echo         ClusterIP   172.30.17.63     <none>        9090/TCP              38s

Step 7: Communicate across namespaces.

Using the cluster IP address and port number from the 'kc get svc' output, send a message to the local service. Skupper will route the message to the server running in the other namespace, and will route the reply back here.

$ ncat 172.30.17.63 9090
Mr. Watson, come here. I want to see you.
tcp-go-echo-67c875768f-kt6dc : MR. WATSON, COME HERE. I WANT TO SEE YOU.

The tcp-go-echo program returns a capitalized version of the message, prefixed with its pod name.

What Just Happened?

Your ncat TCP message was received by the Skupper-created tcp-go-echo proxy in namespace 'private', wrapped in an AMQP message, and sent over the Skupper network to the Skupper-created proxy in the 'public' namespace. That proxy sent the TCP packets to the tcp-go-echo server (which knows nothing about AMQP), received its response, and reversed the process. After another trip over the Skupper network, the TCP response packets arrived back at our ncat process.

We used two namespaces in a single local cluster to keep the demo simple, but establishing and using Skupper connectivity works just as easily between any two (or more) clusters, public or private, anywhere.
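The proxy hand-off can be illustrated with a toy wrap/unwrap pair. This is not Skupper's actual wire format (the real proxies exchange AMQP messages over the inter-router network); the dict envelope and its field names below are invented purely to show the idea of carrying raw TCP bytes inside a routable message:

```python
def wrap(tcp_bytes: bytes, reply_to: str) -> dict:
    """Private-side proxy: wrap raw TCP bytes in a routable envelope."""
    return {"to": "tcp-go-echo", "reply_to": reply_to, "body": tcp_bytes}

def unwrap(envelope: dict) -> bytes:
    """Public-side proxy: recover the raw TCP bytes for the local server."""
    return envelope["body"]

# The echo server on the far side never sees the envelope, only the bytes.
msg = wrap(b"Mr. Watson, come here.", reply_to="ncat-client-1")
payload = unwrap(msg)
```

The reply travels the same path in reverse: the public-side proxy wraps the server's response, and the private-side proxy unwraps it and writes the bytes back to the ncat socket.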

Step 8: Cleanup.

Let's tidy up so no one trips over any of this stuff later. In the private namespace, delete the Skupper artifacts; in the public namespace, delete both the Kubernetes and Skupper artifacts.

$ kc config set-context --current --namespace=private
Context "private/127-0-0-1:8443/developer" modified.
$ skupper delete
Skupper is now removed from 'private'.
$ kc config set-context --current --namespace=public
Context "private/127-0-0-1:8443/developer" modified.
$ kc delete -f ${HOME}/tcp-echo-demo/public-deployment-and-service.yaml
deployment.extensions "tcp-go-echo" deleted
service "tcp-go-echo" deleted
$ skupper delete
Skupper is now removed from 'public'.
$ oc cluster down