This is a simple demonstration of TCP communication tunneled through a Skupper network from a private to a public namespace and back again. We will use two namespaces for simplicity of setup, but this would work the same way on two separate clusters.
We will set up a Skupper network between the two namespaces, start a TCP echo-server on the public namespace, then communicate to it from the private namespace and receive its replies.
We will assume that a cluster is running on your local machine (the commands below use the OpenShift `oc` tool to start it and create the projects), and we will create and access both namespaces from within a single shell.
- Prerequisites
- Step 1: Set up the demo.
- Step 2: Start your cluster and define two namespaces.
- Step 3: Start Skupper in the public namespace.
- Step 4: Make a connection token, and start the service.
- Step 5: Start Skupper in the private namespace.
- Step 6: Make the connection.
- Step 7: Communicate across namespaces.
- Step 8: Cleanup.
- The kubectl command-line tool, version 1.15 or later (installation guide)
- The skupper command-line tool, the latest version (installation guide)
- Two Kubernetes namespaces, from any providers you choose, on any clusters you choose
On your machine, make a directory for this tutorial, clone the tutorial repo into it, and set a short alias for kubectl:
mkdir ${HOME}/tcp-echo-demo
git clone https://github.com/skupperproject/skupper-example-tcp-echo ${HOME}/tcp-echo-demo
alias kc='kubectl'
oc cluster up
oc new-project public
oc new-project private
kc config set-context --current --namespace=public
skupper init --cluster-local --id public
skupper status
skupper connection-token ${HOME}/secret.yaml
oc apply -f ${HOME}/tcp-echo-demo/public-deployment-and-service.yaml
kc config set-context --current --namespace=private
skupper init --cluster-local --id private
skupper status
After you issue the connect command, a new service called tcp-go-echo will show up in this namespace. (It may take as long as two minutes for the service to appear.)
skupper connect ${HOME}/secret.yaml
kc get svc
Using the IP address and port number from the `kc get svc` output, send a message to the local service. Skupper routes the message to the service running in the other namespace and routes the reply back here.
ADDR=$(kubectl get svc/tcp-go-echo -o=jsonpath='{.spec.clusterIP}')
PORT=$(kubectl get svc/tcp-go-echo -o=jsonpath='{.spec.ports[0].port}')
telnet ${ADDR} ${PORT}
Mr. Watson, come here. I want to see you.
tcp-go-echo-67c875768f-kt6dc : MR. WATSON, COME HERE. I WANT TO SEE YOU.
The tcp-go-echo program returns a capitalized version of the message, prepended by its name and pod ID.
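The actual server is the Go program deployed above; purely as an illustration of the behavior it implements, here is a minimal uppercase TCP echo server sketched in Python. The `PREFIX` string is a stand-in for the real server's name and pod ID, and `tcp-py-echo` is a made-up name for this sketch:

```python
import socket
import threading

PREFIX = "tcp-py-echo"  # stand-in for the real server's name and pod ID

def handle(conn):
    """Reply to one message with '<PREFIX> : <MESSAGE IN UPPER CASE>'."""
    with conn:
        data = conn.recv(4096)
        if data:
            conn.sendall(f"{PREFIX} : {data.decode().upper()}".encode())

def start_server():
    """Bind an uppercase-echo server to a free loopback port; return the port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    def loop():
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]

def echo_once(port, message):
    """Send one message and return the server's reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message.encode())
        return c.recv(4096).decode()

port = start_server()
print(echo_once(port, "Mr. Watson, come here. I want to see you."))
# tcp-py-echo : MR. WATSON, COME HERE. I WANT TO SEE YOU.
```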
The TCP message you typed was received by the Skupper-created tcp-go-echo proxy in the 'private' namespace, wrapped in an AMQP message, and sent over the Skupper network to the Skupper-created proxy in the 'public' namespace. That proxy sent the TCP packets to the tcp-go-echo server (which knows nothing about AMQP), received its response, and reversed the process. After another trip over the Skupper network, the TCP response packets arrived back at your telnet session.
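Skupper's real proxies speak AMQP to the router network, but from the application's point of view each one behaves like a TCP relay. As a conceptual sketch only (this is not Skupper code), here is a minimal Python relay sitting in front of a stand-in uppercase server, forwarding the client's bytes upstream and relaying the reply back:

```python
import socket
import threading

def serve_upper():
    """A stand-in for the echo server: replies with the uppercased bytes."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(4096).upper())
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]

def start_relay(upstream_host, upstream_port):
    """A stand-in for the proxy pair: forwards one request/reply per connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    def loop():
        while True:
            conn, _ = srv.accept()
            with conn, socket.create_connection((upstream_host, upstream_port)) as up:
                up.sendall(conn.recv(4096))   # client bytes go out over the "network"...
                conn.sendall(up.recv(4096))   # ...and the server's reply comes back
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]

upstream_port = serve_upper()
relay_port = start_relay("127.0.0.1", upstream_port)
with socket.create_connection(("127.0.0.1", relay_port)) as c:
    c.sendall(b"hello, skupper")
    print(c.recv(4096).decode())  # the uppercased reply, via the relay
```

In the demo, the two relay halves are in different namespaces and the bytes between them travel as AMQP messages over the Skupper router network, but the end-to-end effect is the same.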
We demonstrated this using two namespaces in a single local cluster for ease of demonstration, but the establishment and use of Skupper connectivity works just as easily between any two (or more) clusters, public or private, anywhere.
Let's tidy up so no one trips over any of this stuff later. In the private namespace, delete the Skupper artifacts. In public, delete both the Kubernetes and Skupper artifacts.
kc config set-context --current --namespace=private
skupper delete
kc config set-context --current --namespace=public
kc delete -f ${HOME}/tcp-echo-demo/public-deployment-and-service.yaml
skupper delete
oc cluster down