
Skupper enables inter-cluster TCP communication

TCP tunneling with Skupper

Overview

This is a simple demonstration of TCP communication tunneled through a Skupper network, from a private cluster to a public cluster and back again. During development of this demonstration, the private cluster ran locally, while the public cluster ran on AWS.
We will set up a Skupper network between the two clusters, start a TCP echo server on the public cluster, then communicate with it from the private cluster and receive its replies. At no time is any port opened on the machine running the private cluster.

Prerequisites

  • The kubectl command-line tool, version 1.15 or later (installation guide)
  • The skupper command-line tool, the latest version (installation guide)
  • Two Kubernetes namespaces, from any providers you choose, on any clusters you choose. (In this example, the namespaces are called 'public' and 'private'.)
  • A private cluster running on your local machine.
  • A public cluster running on a public cloud provider.

Step 1: Set up the demo

  1. On your local machine, make a directory for this tutorial and clone the example repo:

    mkdir ${HOME}/tcp-echo
    cd ${HOME}/tcp-echo
    git clone https://github.com/skupperproject/skupper-example-tcp-echo
    
  2. Prepare the target clusters.

    1. On your local machine, log in to each cluster in a separate terminal session.
    2. In each cluster, set the kubectl config context to use the demo namespace (see the kubectl cheat sheet).

Step 2: Set Up the Virtual Application Network

  1. In the terminal for the public cluster, create the public namespace and deploy the TCP echo server in it:

    kubectl create namespace public
    kubectl config set-context --current --namespace=public
    kubectl apply -f ${HOME}/tcp-echo/public-deployment.yaml
  2. Still in the public cluster, start Skupper, expose the tcp-echo deployment, and generate a connection token:

    skupper init
    skupper expose --port 9090 deployment tcp-go-echo
    skupper connection-token ${HOME}/tcp-echo/public_secret.yaml
  3. In the private cluster, create the private namespace:

    kubectl create namespace private
    kubectl config set-context --current --namespace=private
  4. Start Skupper in the private cluster, and connect to public:

    skupper init
    skupper connect ${HOME}/tcp-echo/public_secret.yaml
  5. Confirm that Skupper is now exposing the public cluster's tcp-echo service on this private cluster (this may take a few seconds; if it's not there immediately, wait and try again):

    kubectl get svc
    
    # Example output:
    # NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
    # skupper-internal    ClusterIP   172.30.202.39    <none>        55671/TCP,45671/TCP   22s
    # skupper-messaging   ClusterIP   172.30.207.178   <none>        5671/TCP              22s
    # tcp-go-echo         ClusterIP   172.30.106.241   <none>        9090/TCP              8s
    
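If you want to script the next step rather than copy the cluster IP by hand, the service address can be pulled out of the `kubectl get svc` output. Below is a small Python helper sketch (the function name `find_service_endpoint` is our own, not part of any Skupper or Kubernetes API); it is shown here against the example output above, but the same function works on real output, e.g. piped in via `subprocess.check_output(["kubectl", "get", "svc"], text=True)`.

```python
def find_service_endpoint(kubectl_output, name):
    """Scan plain `kubectl get svc` output for a service by NAME and
    return its (cluster_ip, first_port) pair, or None if absent."""
    for line in kubectl_output.splitlines():
        fields = line.split()
        if fields and fields[0] == name:
            cluster_ip = fields[2]                       # CLUSTER-IP column
            port = fields[4].split(",")[0].split("/")[0]  # first entry of PORT(S)
            return cluster_ip, int(port)
    return None

sample = """\
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
tcp-go-echo         ClusterIP   172.30.106.241   <none>        9090/TCP              8s
"""
print(find_service_endpoint(sample, "tcp-go-echo"))
# → ('172.30.106.241', 9090)
```

In a real script, prefer `kubectl get svc tcp-go-echo -o jsonpath='{.spec.clusterIP}'` where available; the parser above is just a fallback for the human-readable table format.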

Step 3: Access the public service remotely

On the private cluster, run telnet against the cluster IP and port that Skupper has exposed for the tcp-echo service.

telnet 172.30.106.241 9090
Trying 172.30.106.241...
Connected to 172.30.106.241.
Escape character is '^]'.
Do what thou wilt shall be the whole of the law.
tcp-go-echo-f55984966-v5px2 : DO WHAT THOU WILT SHALL BE THE WHOLE OF THE LAW.
^]
telnet> quit
Connection closed.
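The same exchange can be scripted instead of typed into telnet. Below is a minimal Python client sketch. To keep it self-contained and runnable without a cluster, it stands up a local stand-in server that only uppercases its input (the real tcp-go-echo service also prefixes its pod name to the reply); against the demo, you would call `echo_once` with the ClusterIP and port that Skupper exposed.

```python
import socket
import threading

def run_stub_echo_server(host="127.0.0.1", port=0):
    """Local stand-in for tcp-go-echo: replies with the uppercased input.
    Binds port 0 so the OS picks a free port; returns (host, port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096)
            conn.sendall(data.upper())
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

def echo_once(host, port, message):
    """Send one message to the echo service and return its reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message.encode())
        return sock.recv(4096).decode()

if __name__ == "__main__":
    host, port = run_stub_echo_server()
    print(echo_once(host, port, "hello, skupper"))
    # With the stub server, this prints: HELLO, SKUPPER
```

Against the real service, `echo_once("172.30.106.241", 9090, "...")` (using whatever ClusterIP `kubectl get svc` reported on your cluster) should return the uppercased text prefixed with the serving pod's name, matching the telnet session above.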

Cleaning Up

Delete the pod and the virtual application network that were created in the demonstration.

  1. In the terminal for the public cluster:

    # Get the pod ID with 'kubectl get pods'
    kubectl delete pod tcp-go-echo-<POD-ID>
    skupper delete
  2. In the terminal for the private cluster:

    skupper delete

Next steps