Articles from the kubernetes category

Google Cloud Kubernetes Deployment

Mon 02 August 2021 | general kubernetes

Kubernetes banner

SFTP and HTTPS file server cluster overview

This article describes the deployment of a trial SFTPPlus engine to an already created Google Kubernetes Engine (GKE) cluster on Google Cloud Platform.

The example Kubernetes YAML file can be found in our GitHub SFTPPlus Kubernetes repository, while the container source is available in our GitHub SFTPPlus Docker repository.

The actual user data is persisted using a single Google Cloud Storage bucket.

You can adapt the example from this article to any other Kubernetes system, like OpenShift, Azure Kubernetes Service (AKS) or Amazon AWS.

We would love to hear your feedback. Get in touch with us for comments or questions.

Final result

Once you complete the steps in this guide, you will have an SFTPPlus application with the following services:

  • Port 10020 - HTTPS web management interface
  • Port 443 - HTTPS server for end-user file management
  • Port 22 - SFTP server for end-user file access

All these services will be available via the public IP address associated with your load balancer.

The Google Cloud storage bucket is made available inside each container as the /srv/storage local path.

SFTPPlus also supports legacy FTP, explicit or implicit FTPS, and plain HTTP file access. These are not included in this guide to keep the provided information and configuration simple.

SFTPPlus GKE deployment diagram

Deployment components

To deploy SFTPPlus into the cluster, we will use the following components:

  • The SFTPPlus Trial container image hosted at Docker Hub.
  • A Google Kubernetes Engine cluster that was already created.
  • A Google Cloud Storage bucket to persist user data. These are the files and directories available to end-users. You can create a new bucket or use an existing one.
  • A Google Cloud service account with write access to the storage bucket.
  • Kubernetes cluster secret to store the Google Cloud Storage credentials. This will be used by Kubernetes pods to access the persistence storage. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the SFTPPlus application. Instructions for creating this are provided below.
  • A Kubernetes Load Balancer server for connecting the application to the Internet. Instructions for creating this are provided below.

Cloud storage secure access from Kubernetes cluster

Since the persistent data is stored outside the cluster, we need to set up authentication to your cloud storage bucket from within the Kubernetes cluster.

We assume that you will use the Google Cloud console to create the storage bucket and the Service account.

Once the Service account is created, create a new credential key for it or associate an existing one.
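If you prefer the command line over the console, the Service account, its bucket permissions, and the credential key can also be created with the gcloud and gsutil tools. This is only a sketch; the sftpplus-gcs account name, YOUR-PROJECT, and the bucket name are placeholders that you should replace with your own values:

# Create the Service account.
gcloud iam service-accounts create sftpplus-gcs \
    --display-name "SFTPPlus storage access"

# Grant it write access to the storage bucket.
gsutil iam ch \
    serviceAccount:sftpplus-gcs@YOUR-PROJECT.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://sftpplus-trial-srv-storage

# Create a credential key and save it as gcs-credentials.json.
gcloud iam service-accounts keys create gcs-credentials.json \
    --iam-account sftpplus-gcs@YOUR-PROJECT.iam.gserviceaccount.com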

You will need to upload/copy the Service account's credential key to your cloud shell. For this example we assume that the file is named gcs-credentials.json and the secret is created using the sftpplus.gcs.credentials name. Then it can be imported into Kubernetes using the following command line:

kubectl create secret generic sftpplus.gcs.credentials \
    --from-file  gcs-credentials.json

Load Balancer and Internet access

To access SFTPPlus over the Internet, we will use a standard Kubernetes Load Balancer service.

Below, you can find an example YAML file named sftpplus-service.yaml that can be copied to your cloud console:

apiVersion: v1
kind: Service
metadata:
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sftpplus-app
  name: sftpplus-app-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 10020-to-10020-tcp
    nodePort: 30500
    port: 10020
    protocol: TCP
    targetPort: 10020
  - name: 443-to-10443-tcp
    nodePort: 32013
    port: 443
    protocol: TCP
    targetPort: 10443
  - name: 22-to-10022-tcp
    nodePort: 32045
    port: 22
    protocol: TCP
    targetPort: 10022
  selector:
    app: sftpplus-app
  sessionAffinity: None
  type: LoadBalancer

If you want to have the SFTPPlus services available on other port numbers, you can do so by updating the port configuration values. nodePort and targetPort don't need to be updated.

Create or update the load balancer service with the following command:

kubectl apply -f sftpplus-service.yaml
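After the service is created, the cloud provider allocates the public IP address. You can watch for the EXTERNAL-IP column to be populated before moving on:

kubectl get service sftpplus-app-load-balancer --watch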

Kubernetes SFTPPlus application deployment

The SFTPPlus application will be deployed as a container inside a pod.

The configuration and data are persisted outside of the cluster, using a cloud storage bucket.

The deployment to the cluster can be done using the following YAML file named sftpplus-workload.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sftpplus-app
  name: sftpplus-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sftpplus-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: sftpplus-app
    spec:
      containers:
      - env:
        - name: GCS_BUCKET
          value: sftpplus-trial-srv-storage
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /srv/gcs-credentials/gcs-credentials.json
        image: proatria/sftpplus-trial:4.12.0-cloud
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - fusermount
              - -u
              - /srv/storage
        name: sftpplus-trial
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /srv/gcs-credentials
          name: gcs-credentials-key
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: gcs-credentials-key
        secret:
          defaultMode: 420
          secretName: sftpplus.gcs.credentials

You should replace sftpplus-trial-srv-storage with the name of your storage bucket.

This does the following:

  • Creates a new container using the SFTPPlus trial image hosted on Docker Hub.
  • Gives access to the cloud storage bucket from inside the container, at the /srv/storage path.
  • Makes the Service account credentials, provided via the cluster secret, available in the /srv/gcs-credentials/gcs-credentials.json file.

With the YAML file available in the cloud console, you can create or upload the workload by using the following command:

kubectl apply -f sftpplus-workload.yaml
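Once the deployment is created, you can check that the pod reaches the Running state and inspect the SFTPPlus logs while troubleshooting:

# Check the pod status.
kubectl get pods -l app=sftpplus-app

# Follow the SFTPPlus log output.
kubectl logs -f deployment/sftpplus-app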
• • •

Kubernetes with NFS persistence Deployment

Mon 07 June 2021 | general kubernetes

Kubernetes banner

Deployment Foundation

This article describes the deployment of a trial SFTPPlus engine using the Google Cloud Platform Kubernetes Engine service, with the persisted data shared between the cluster nodes using an NFS server that is also hosted inside the cluster.

The deployment is done in a single zone.

The container image used in this example is the DockerHub SFTPPlus Trial.

The source of the container image is available from our public GitHub SFTPPlus Docker repository.

The example Kubernetes YAML file can be found in our GitHub SFTPPlus Kubernetes repository.

The actual user data is persisted using a single Google Cloud Compute Engine storage disk.

The information from this article can be adapted to use any other container image or deployed into any other Kubernetes Engine service, like Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service.

It assumes that you already have a working Kubernetes cluster.

It assumes that the SFTPPlus application version and configuration are managed and versioned using the container image.

For any comments or questions, don't hesitate to get in touch with us.

Final result

Once you complete the steps in this guide, you will have an SFTPPlus application with the following services:

  • Port 10020 - HTTPS web based management console
  • Port 443 - HTTPS end-user file management service
  • Port 22 - SFTP end-user service
  • Port 80 - Let's Encrypt Validation service

All these services will be available via your cluster IP address.

The Compute Engine disk is made available inside each container as the /srv/storage local path.

SFTPPlus GKE deployment diagram

Moving parts

For implementing the SFTPPlus service we will be using the following parts:

  • The SFTPPlus Trial container image hosted at Docker Hub.
  • A Google Kubernetes Engine cluster with at least 2 nodes, each node with a minimum of 2 GB of memory and 100 GB of storage. This is a prerequisite for this article; an example command for creating such a cluster is shown after this list.
  • Kubernetes persistence volume (and persistence volume claim) to store the user data. Instructions for creating this are provided below.
  • A Kubernetes Load Balancer service for connecting the application to the Internet. Instructions for creating this are provided below.
  • A Kubernetes ClusterIP service for allowing cluster pods concurrent access to the persistence disk. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the NFS server that will make the data from the persistence disk available to multiple pods inside the cluster. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the SFTPPlus application. Instructions for creating this are provided below.
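As noted in the list above, a 2-node cluster is a prerequisite for this article. If you do not have one yet, a single-zone cluster matching these requirements can be created with gcloud; the cluster name, zone and machine type below are example values only:

gcloud container clusters create sftpplus-cluster \
    --zone us-central1-a \
    --num-nodes 2 \
    --machine-type e2-small \
    --disk-size 100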

Kubernetes load balancer and Internet access

This section describes the process of creating a Kubernetes load balancer service to allow external Internet access to the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-service.yaml to your cloud console:

apiVersion: v1
kind: Service
metadata:
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sftpplus-app
  name: sftpplus-app-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 443-to-10443-tcp
    nodePort: 32013
    port: 443
    protocol: TCP
    targetPort: 10443
  - name: 22-to-10022-tcp
    nodePort: 32045
    port: 22
    protocol: TCP
    targetPort: 10022
  selector:
    app: sftpplus-app
  sessionAffinity: None
  type: LoadBalancer

If you want to make the SFTPPlus services available on other port numbers, you can do so by updating the port configuration values. nodePort and targetPort don't need to be updated.

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f sftpplus-service.yaml

Cluster NFS service

To allow multiple pods to access the same persistence disk at the same time, we are going to create an internal ClusterIP service.

It assumes that you will upload the following YAML file named nfs-service.yaml to your cloud console:

apiVersion: v1
kind: Service
metadata:
  labels:
    role: nfs-server
  name: nfs-server
  namespace: default
spec:
  ports:
  - name: 2049-to-2049-tcp
    port: 2049
    protocol: TCP
    targetPort: 2049
  - name: 20048-to-20048-tcp
    port: 20048
    protocol: TCP
    targetPort: 20048
  - name: 111-to-111-tcp
    port: 111
    protocol: TCP
    targetPort: 111
  selector:
    role: nfs-server
  sessionAffinity: None
  type: ClusterIP

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f nfs-service.yaml

Persistence provisioning

Here we create two persistent volume claims, together with an NFS-backed persistent volume:

  • One claim for the actual persisted disk made available to the NFS server.
  • Another claim for accessing the NFS server as a persistent disk from multiple pods.

It assumes that you will upload the following YAML file named nfs-pv.yaml to your cloud console:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-disk-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: NFS-CLUSTER-IP
    path: "/"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi

You should replace NFS-CLUSTER-IP with the internal cluster IP assigned after applying the nfs-service.yaml file.
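The cluster IP assigned to the nfs-server service can be read back with kubectl:

kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}'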

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f nfs-pv.yaml
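To confirm that the objects were created, list the persistent volumes and claims. Depending on the storage class of your cluster, a claim may remain in the Pending state until a pod actually uses it:

kubectl get pv
kubectl get pvc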

Cluster NFS server workload

Next we will create the actual NFS server workload that will connect to the Compute Engine disk and make it available over the internal cluster network.

It assumes that you will upload the following YAML file named nfs-app.yaml to your cloud console:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-server
  name: nfs-server
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      role: nfs-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - image: gcr.io/google_containers/volume-nfs:0.8
        imagePullPolicy: IfNotPresent
        name: nfs-server
        ports:
        - containerPort: 2049
          name: nfs
          protocol: TCP
        - containerPort: 20048
          name: mountd
          protocol: TCP
        - containerPort: 111
          name: rpcbind
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /exports
          name: nfs-server-disk
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: nfs-server-disk
        persistentVolumeClaim:
          claimName: nfs-disk-claim

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f nfs-app.yaml
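Before moving on, check that the NFS server pod is up, since the SFTPPlus pods will mount their storage through it:

kubectl get pods -l role=nfs-server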

Cluster SFTPPlus application workload

This section describes the creation and configuration of a workload that will run a pod hosting the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-workload.yaml to your cloud console:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sftpplus-app
  name: sftpplus-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sftpplus-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: sftpplus-app
    spec:
      containers:
      - image: proatria/sftpplus-trial
        imagePullPolicy: Always
        name: sftpplus-trial
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /srv/storage
          name: nfs-server
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: nfs-server
        persistentVolumeClaim:
          claimName: nfs-pvc

With the YAML file available in the cloud console, you can create the workload by using the following command:

kubectl create -f sftpplus-workload.yaml
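Once the pod is running, you can check that the NFS share is mounted inside the container. The exec command below assumes a reasonably recent kubectl that accepts a deployment name:

# Check the pod status.
kubectl get pods -l app=sftpplus-app

# Verify that /srv/storage is mounted from the NFS server.
kubectl exec deployment/sftpplus-app -- df -h /srv/storage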
• • •

Managing SFTPPlus configuration in Kubernetes

Mon 01 March 2021 | general kubernetes

Kubernetes banner

Introduction

This article describes managing the SFTPPlus configuration for a deployment through any Kubernetes Engine service.

It only looks at stateless configuration management. Data persistence is described in other articles.

The container image used in this example is the DockerHub SFTPPlus Trial.

The source of the container image is available from our public GitHub SFTPPlus Docker repository.

The example Kubernetes YAML file can be found in our GitHub SFTPPlus Kubernetes repository.

It assumes that you already have a working Kubernetes cluster.

Don't hesitate to get in touch with us for comments or questions.

Final result

Once you complete the steps in this guide, you will have an SFTPPlus application with the following services:

  • Port 10020 - HTTPS web based management console
  • Port 443 - HTTPS end-user file management service
  • Port 22 - SFTP end-user service

All these services will be available via your cluster IP address.

The management console will be used in read-only mode to verify the state of the SFTPPlus application.

Any configuration changes for SFTPPlus will be done by editing the cluster ConfigMaps or Secrets values.

ConfigMap changes are not observed in real time inside the pods. The mounted configuration is read-only data created together with the pod, based on the ConfigMap value at the time of the pod's creation. To have the SFTPPlus application use an updated configuration, you will need to redeploy each pod. This can be done using the cluster rolling update features.
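One way to trigger such a rolling redeployment after editing the ConfigMap is to restart the deployment. The deployment name sftpplus-app matches the workload defined later in this article:

kubectl rollout restart deployment sftpplus-app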

SFTPPlus GKE deployment diagram

Moving parts

For implementing the SFTPPlus service we will be using the following parts:

  • The SFTPPlus Trial container image hosted at Docker Hub.
  • A Google Kubernetes Engine cluster. This is a prerequisite for this article; the article doesn't cover the creation of a new Kubernetes cluster.
  • A Kubernetes Load Balancer service for connecting the application to the Internet. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the SFTPPlus application. Instructions for creating this are provided below.
  • A Kubernetes secret for storing the private key and other sensitive information. Instructions for creating this are provided below.
  • A Kubernetes ConfigMap for storing the configuration file content. Instructions for creating this are provided below.

SFTPPlus cluster ConfigMap configuration

This section describes the process of creating the SFTPPlus configuration files that are managed inside the cluster as ConfigMap objects.

It assumes that you will upload the following YAML file named sftpplus-configuration.yaml to your cloud console:

apiVersion: v1
data:
  server.ini: |
    [server]
    uuid = single-server-uuid
    name = sftpplus-pod
    authentications = username-blocker-uuid, ban-ip-uuid, DEFAULT-AUTHENTICATION
    manager_authentications = ban-ip-uuid, DEFAULT-AUTHENTICATION
    password_minimum_strength = 4
    password_minimum_length = 8
    password_hashing_scheme = crypt-sha512

    ssl_certificate = /opt/sftpplus/secrets/server_certificate.pem
    ssh_host_private_keys = /opt/sftpplus/secrets/ssh_host_rsa_key

    [authentications/DEFAULT-AUTHENTICATION]
    enabled = Yes
    type = application
    name = SFTPPlus Accounts and Administrators
    description = This authentication method allows authentication of accounts
        and administrators defined in this configuration file.

    [authentications/username-blocker-uuid]
    enabled = Yes
    type = deny-username
    name = Deny Admin Accounts
    description = Deny all administrator accounts.
    ; You can add more accounts to the list.
    usernames = root, adm, admin, administrator


    [authentications/ban-ip-uuid]
    enabled = Yes
    type = ip-time-ban
    name = Ban IP with multiple failures
    description = Will ban the source IP for 10 minutes after 10 consecutive failures.
    ban_interval = 600
    ban_after_count = 10

    [event-handlers/e137661a-150d-48f4-9239-4d9661492c11]
    enabled = True
    type = standard-stream
    name = Standard Output Logger
    entry_content = {timestamp.iso_8601_local} {id} {component.uuid} {account.name} {account.peer.address}:{account.peer.port} {message}

    [services/DEFAULT-MANAGER]
    enabled = Yes
    name = local-manager
    type = manager
    address = 0.0.0.0
    port = 10020
    ssl_cipher_list = secure
    ssl_allowed_methods = tlsv1.2 tlsv1.3


    [services/sftp-1]
    enabled = Yes
    name = sftp-service
    type = ssh
    sftp = Yes
    scp = No
    address = 0.0.0.0
    port = 10022
    ssh_cipher_list = secure
    ignore_create_permissions = No
    idle_connection_timeout = 300
    maximum_concurrent_connections = Disabled

    [services/https-1]
    enabled = Yes
    name = https
    protocol = https
    address = 0.0.0.0
    port = 10443

    [resources/DEFAULT-LETS-ENCRYPT]
    enabled = no
    name = Lets-Encrypt-Client
    type = lets-encrypt

    [resources/DEFAULT-SQLITE]
    name = Embedded DB
    type = sqlite
    path = log/cache.db3

    [resources/DEFAULT-EMAIL-CLIENT]
    name = Email-Client
    type = email-client
    email_from_address = sftpplus@example.com
    email_to_recipients = admin-team@example.com
    address = smtp.example.com
    port = 25

    [resources/DEFAULT-ANALYTICS]
    enabled = Yes
    type = analytics
    name = Analytics engine
    monitor_interval = 600

    [administrators/DEFAULT-ADMINISTRATOR-UUID]
    enabled = Yes
    name = admin
    password = $6$rounds=80000$oPp2OCqqSflb2YN5$KdXiAO6fhkObjBx6tJnS/EZ3bzcxeO1RPvJchBVXR00Gnj5O35fAC07psTBz4KE2AGbq/lZ.ifS7SrkDZmow00
    role = DEFAULT-ROLE

    [roles/DEFAULT-ROLE]
    enabled = Yes
    name = Default Super-Administrators

    [groups/DEFAULT_GROUP]
    name = DEFAULT_GROUP
    enabled = Yes
    home_folder_path = /srv/home
    create_home_folder = Yes

    [accounts/bdb99c31-1119-4b8b-b609-63672a9a0b6f]
    name = test_user
    type = application
    enabled = yes
    group = DEFAULT_GROUP
    home_folder_path = /srv/storage/test_user
    password = $5$DfjfEI8R1.fpGQg9$A95Q7ENuO2Bfk95k8gCwOP6YzWmVe8vTz2fcPkGpmp6
    ssh_authorized_keys_content = ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC4fV6tSakDSB6ZovygLsf1iC9P3tJHePTKAPkPAWzlu5BRHcmAu0uTjn7GhrpxbjjWMwDVN0Oxzw7teI0OEIVkpnlcyM6L5mGk+X6Lc4+lAfp1YxCR9o9+FXMWSJP32jRwI+4LhWYxnYUldvAO5LDz9QeR0yKimwcwRToF6/jpLw== Comment for this key

kind: ConfigMap
metadata:
  name: sftpplus.configuration
  namespace: default

You can modify the content of the server.ini ConfigMap key to match your desired configuration.

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl apply -f sftpplus-configuration.yaml
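Later configuration changes can be made by updating the YAML file and re-applying it, or by editing the ConfigMap directly in the cluster:

# Edit the configuration directly in the cluster.
kubectl edit configmap sftpplus.configuration

# Review the currently deployed configuration.
kubectl get configmap sftpplus.configuration -o yaml

Remember that, as described above, the pods need to be redeployed before an updated configuration takes effect.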

Certificates and private key management

The certificates and their associated private keys, together with the SSH private keys, are managed inside the cluster using the Secret configuration object.

For simplicity, we will use a single opaque secret that will store both SSL certificates and SSH keys.
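If you only need test material, the SSH host key and a self-signed certificate can be generated locally before filling in the secret. This is just an example; the key sizes, validity period and common name are placeholder values:

# Generate an SSH host key in PEM format, without a passphrase.
ssh-keygen -t rsa -b 2048 -m PEM -f ssh_host_rsa_key -N ""

# Generate a self-signed certificate and key, then combine them into a
# single PEM file, certificate first, as in the secret below.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=sftpplus.example.com" \
    -keyout server_key.pem -out server_cert.pem
cat server_cert.pem server_key.pem > server_certificate.pem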

It assumes that you will upload the following YAML file named sftpplus-secrets.yaml to your cloud console:

apiVersion: v1
kind: Secret
metadata:
  name: sftpplus.secrets
  namespace: default
type: Opaque
stringData:
  server_certificate.pem: |

    -----BEGIN CERTIFICATE-----
    MIIEqzCCApOgAwIBAgIRAIvhKg5ZRO08VGQx8JdhT+UwDQYJKoZIhvcNAQELBQAw
    CONTENT OF YOUR SSL CERTIFICATE
    EACH LINE STARTING WITH 4 empty spaces.
    n5Z5MqkYhlMI3J1tPRTp1nEt9fyGspBOO05gi148Qasp+3N+svqKomoQglNoAxU=
    -----END CERTIFICATE-----
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEAzLUJYbSpjSAOSpxfns/w111mRls/FrHIC358fCxZsWzVXX/6
    CONTENT OF YOUR SSL PRIVATE KEY
    3042tKnu6zmZTLfcZFxQ8rCrrzzezs2odb9FxVA3bTc18tmudeAUyQ==
    -----END RSA PRIVATE KEY-----

  ssh_host_rsa_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEAzLUJYbSpjSAOSpxfns/w111mRls/FrHIC358fCxZsWzVXX/6
    CONTENT OF YOUR SSH PRIVATE KEY
    3042tKnu6zmZTLfcZFxQ8rCrrzzezs2odb9FxVA3bTc18tmudeAUyQ==
    -----END RSA PRIVATE KEY-----

For security reasons, the above example does not include real keys and certificates. You will need to replace them with your own data. It is important to keep the same indentation for the content of each file. Then create or update the secret with the following command:

kubectl apply -f sftpplus-secrets.yaml
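As an alternative to embedding the PEM content in YAML, the same secret can be created directly from local files, which avoids any indentation issues. This assumes the two files exist in the current directory under exactly these names:

kubectl create secret generic sftpplus.secrets \
    --from-file=server_certificate.pem \
    --from-file=ssh_host_rsa_key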

Kubernetes load balancer and Internet access

This section describes the process of creating a Kubernetes load balancer service to allow external Internet access to the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-service.yaml to your cloud console:

apiVersion: v1
kind: Service
metadata:
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sftpplus-app
  name: sftpplus-app-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 10020-to-10020-tcp
    nodePort: 30500
    port: 10020
    protocol: TCP
    targetPort: 10020
  - name: 443-to-10443-tcp
    nodePort: 32013
    port: 443
    protocol: TCP
    targetPort: 10443
  - name: 22-to-10022-tcp
    nodePort: 32045
    port: 22
    protocol: TCP
    targetPort: 10022
  selector:
    app: sftpplus-app
  sessionAffinity: None
  type: LoadBalancer

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl apply -f sftpplus-service.yaml

Application pods

This section describes the creation and configuration of a workload that will run one or more pods hosting the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-workload.yaml to your cloud console:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sftpplus-app
  name: sftpplus-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sftpplus-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: sftpplus-app
    spec:
      containers:
      - image: proatria/sftpplus-trial
        imagePullPolicy: Always
        name: sftpplus-trial
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/sftpplus/configuration
          name: sftpplus-configuration
        - mountPath: /opt/sftpplus/secrets
          name: sftpplus-secrets

      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: sftpplus-configuration
        configMap:
          name: sftpplus.configuration
      - name: sftpplus-secrets
        secret:
          secretName: sftpplus.secrets

The content of the cluster secret is available inside /opt/sftpplus/secrets. The cluster ConfigMap is available inside /opt/sftpplus/configuration.

Each key of the Secret or ConfigMap object will be converted into a file with the same name as the key name and the same content as the key content.

With the YAML file available in the cloud console, you can create the workload by using the following command:

kubectl apply -f sftpplus-workload.yaml
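Once the pod is running, you can verify that each ConfigMap and Secret key was projected as a file with the expected name. The exec command below assumes a reasonably recent kubectl that accepts a deployment name:

kubectl exec deployment/sftpplus-app -- ls /opt/sftpplus/configuration /opt/sftpplus/secrets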
• • •

Google Kubernetes SFTPPlus single pod deployment

Mon 04 January 2021 | general kubernetes

Kubernetes banner

Introduction

This article describes deploying the SFTPPlus application to Google Kubernetes Engine (GKE) using a single pod, with the configuration and data persisted outside the cluster on a Compute Engine storage disk.

The container image used in this example is the DockerHub SFTPPlus Trial.

The source of the container image is available from our public GitHub SFTPPlus Docker repository.

The example Kubernetes YAML file can be found in our GitHub SFTPPlus Kubernetes repository.

You can use the information to deploy an SFTP file transfer server to any other Kubernetes Engine service.

For any comments or questions, don't hesitate to get in touch with us.

The diagram from this article is available on request.

Final result

Once you complete the steps in this guide, you will have an SFTPPlus application with the following services:

  • Port 10020 - HTTPS web based management console
  • Port 443 - HTTPS end-user file management service
  • Port 22 - SFTP end-user service

All these services will be available via your cluster IP address.

The local files for each pod should be considered disposable. They are lost once the pod is terminated.

To persist the SFTPPlus configuration and end-user data, an external volume is used.

The HTTPS web based management console is accessed in read-only mode, as the configuration is managed via the cluster infrastructure and not using the SFTPPlus configuration management functionality.

SFTPPlus GKE deployment diagram

Moving parts

For implementing the SFTPPlus service we will be using the following parts:

  • The SFTPPlus Trial container image hosted at Docker Hub.
  • A Google Kubernetes Engine cluster with at least 2 nodes, each node with a minimum of 2 GB of memory and 100 GB of storage. The storage is used for the whole cluster, and it's not the dedicated storage required for SFTPPlus. This is a prerequisite for this article.
  • A Google Compute Engine persistent disk created outside of the cluster; an example command for creating it is shown after this list. To simplify the configuration, the disk is attached directly to the pod, without creating a separate persistence volume and persistence volume claim.
  • A Kubernetes Load Balancer service for connecting the application to the Internet. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the SFTPPlus application. Instructions for creating this are provided below.
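The Compute Engine disk mentioned in the list above has to exist before the workload is deployed. If you have not created it yet, a minimal gcloud example is shown below; the disk name sftpplus-disk, the size and the zone are placeholders, and the zone must match the zone of your cluster nodes:

gcloud compute disks create sftpplus-disk \
    --size 10GB \
    --zone us-central1-a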

Kubernetes load balancer and Internet access

This section describes the process of creating a Kubernetes Load Balancer service to allow external Internet access to the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-service.yaml to your cloud console:

apiVersion: v1
kind: Service
metadata:
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sftpplus-app
  name: sftpplus-app-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 10020-to-10020-tcp
    nodePort: 30500
    port: 10020
    protocol: TCP
    targetPort: 10020
  - name: 443-to-10443-tcp
    nodePort: 32013
    port: 443
    protocol: TCP
    targetPort: 10443
  - name: 22-to-10022-tcp
    nodePort: 32045
    port: 22
    protocol: TCP
    targetPort: 10022
  selector:
    app: sftpplus-app
  sessionAffinity: None
  type: LoadBalancer

The file can be updated to add more ports if required, or to expose the services on different external ports.

With the YAML file available you can create the service by using the following command:

kubectl create -f sftpplus-service.yaml

Application pods

This section describes the creation and configuration of a workload that will run a single pod hosting the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-workload.yaml to your cloud console:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: sftpplus-app
  name: sftpplus-app
  namespace: default
spec:
  replicas: 1
  serviceName: "sftpplus-app"
  selector:
    matchLabels:
      app: sftpplus-app
  template:
    metadata:
      labels:
        app: sftpplus-app
    spec:
      containers:
      - image: proatria/sftpplus-trial:4.12.0-cloud
        imagePullPolicy: Always
        name: sftpplus-trial
        env:
        - name: SFTPPLUS_CONFIGURATION
          value: /srv/storage/configuration
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /srv/storage
          name: sftpplus-disk

      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - gcePersistentDisk:
          fsType: ext4
          pdName: sftpplus-disk
        name: sftpplus-disk

You should replace sftpplus-disk with the name of the manually created Compute Engine disk.


With the YAML file available in the cloud console, you can create the workload by using the following command:

kubectl create -f sftpplus-workload.yaml
• • •