
Skupper enables inter-cluster TCP communication

TCP tunneling with Skupper

Overview

This is a simple demonstration of TCP communication tunneled through a Skupper network from a private to a public cluster and back again. During development of this demonstration, the private cluster was running locally, while the public cluster was on AWS.
We will set up a Skupper network between the two clusters, start a TCP echo server on the public cluster, then communicate with it from the private cluster and receive its replies. At no time is any port opened on the machine running the private cluster.

Prerequisites

  • The kubectl command-line tool, version 1.15 or later (installation guide)
  • The skupper command-line tool, the latest version (installation guide)
  • Two Kubernetes namespaces, from any providers you choose, on any clusters you choose. (In this example, the namespaces are called 'public' and 'private'.)
  • A private cluster running on your local machine.
  • A public cluster running on a public cloud provider.

Step 1: Set up the demo

  1. On your local machine, make a directory for this tutorial and clone the example repo:

    mkdir ${HOME}/tcp-echo
    cd ${HOME}/tcp-echo
    git clone https://github.com/skupperproject/skupper-example-tcp-echo
    
  2. Prepare the target clusters.

    1. On your local machine, log in to each cluster in a separate terminal session.
    2. In each terminal, set the kubectl config context to use the demo namespace (see the Skupper Getting Started Guide).

Step 2: Set Up the Virtual Application Network

  1. In the terminal for the public cluster, create the public namespace and deploy the TCP echo server in it:

    kubectl create namespace public
    kubectl config set-context --current --namespace=public
    kubectl apply -f ${HOME}/tcp-echo/skupper-example-tcp-echo/public-deployment.yaml
  2. Still in the public cluster, start Skupper, expose the tcp-go-echo deployment, and generate a connection token:

    skupper init
    skupper expose --port 9090 deployment tcp-go-echo
    skupper connection-token ${HOME}/tcp-echo/public_secret.yaml

    Please Note: The connection token contains a secret and should only be shared with trusted sites.

  3. In the private cluster, create the private namespace:

    kubectl create namespace private
    kubectl config set-context --current --namespace=private
  4. Start Skupper in the private cluster and connect it to the public cluster:

    skupper init
    skupper connect ${HOME}/tcp-echo/public_secret.yaml
  5. Confirm that Skupper is now exposing the public cluster's tcp-go-echo service on this private cluster (this may take a few seconds; if the service is not listed immediately, wait and try again):

    kubectl get svc
    
    # Example output:
    # NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
    # skupper-internal    ClusterIP   172.30.202.39    <none>        55671/TCP,45671/TCP   22s
    # skupper-messaging   ClusterIP   172.30.207.178   <none>        5671/TCP              22s
    # tcp-go-echo         ClusterIP   172.30.106.241   <none>        9090/TCP              8s
    
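The wait-and-retry step above can also be automated. Below is a small, hypothetical helper (not part of Skupper or the example repo) that polls the forwarded TCP port until it accepts connections; the address in the comment is the ClusterIP from the sample output and will differ on your cluster.

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll a TCP port until it accepts a connection or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means something is listening on the port.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # not ready yet; retry
    return False

# Example (values from the sample output above; yours will differ):
# wait_for_port("172.30.106.241", 9090)
```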

Step 3: Access the public service remotely

On the private cluster, run telnet against the cluster IP and port that Skupper has exposed for the tcp-go-echo service (use the values from your own kubectl get svc output).

telnet 172.30.106.241 9090
Trying 172.30.106.241...
Connected to 172.30.106.241.
Escape character is '^]'.
Do what thou wilt shall be the whole of the law.
tcp-go-echo-f55984966-v5px2 : DO WHAT THOU WILT SHALL BE THE WHOLE OF THE LAW.
^]
telnet> quit
Connection closed.
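If telnet is not available, the same round trip can be made with a few lines of Python's standard library. This is an illustrative sketch, not part of the example repo; the commented-out address is the ClusterIP from the sample output above and will differ on your cluster.

```python
import socket

def tcp_echo_once(host: str, port: int, message: str) -> str:
    """Open a connection, send one line, and return the service's reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message.encode() + b"\n")
        return sock.recv(4096).decode()

# Against the real service, use the ClusterIP and port from 'kubectl get svc':
# print(tcp_echo_once("172.30.106.241", 9090, "hello"))
```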

What just happened?

The TCP echo server was deployed and running on a publicly accessible cluster. The use of Skupper on that cluster allowed us to generate a connection token, which we then used to securely connect to the public cluster from our private one. Since the connection was initiated by the Skupper instance on the private cluster, no ports on the private cluster were opened.
Because we told Skupper to expose the TCP echo service, when the two Skupper instances connected with each other, the private instance learned about that service. The Skupper instance on the private cluster then made a forwarder to that service available on its cluster.
We were then able to use telnet to send TCP messages to an IP address and port on the private cluster, which Skupper forwarded to the actual service running in the public cluster. Skupper then brought the response back to us, making it feel as if the service were running locally.
All Skupper traffic between the two clusters was TLS encrypted.
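The real tcp-go-echo service is written in Go, but its observable behavior (uppercase the input and prefix the responder's name, as in the telnet session above) can be sketched in a few lines of Python. This stand-in is purely illustrative and makes no claims about the actual implementation:

```python
import socket

def handle(conn: socket.socket, name: str) -> None:
    """Echo each received chunk back, uppercased and prefixed with the
    server's name -- the same shape as the sample telnet output above."""
    while data := conn.recv(4096):
        conn.sendall(f"{name} : ".encode() + data.upper())

def serve(port: int, name: str = "tcp-echo-sketch") -> None:
    """Accept connections forever and hand each one to handle()."""
    with socket.create_server(("127.0.0.1", port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                handle(conn, name)
```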

Cleaning Up

Delete the pod and the virtual application network that were created in the demonstration.

  1. In the terminal for the public cluster:

    # Get the pod ID with 'kubectl get pods'
    kubectl delete pod tcp-go-echo-<POD-ID>
    skupper delete
  2. In the terminal for the private cluster:

    skupper delete

Next steps