Articles from the general category

SFTPPlus Release 4.13.0

Mon 30 August 2021 | general release

We are happy to announce the latest release of SFTPPlus, version 4.13.0.

A major update in this release is the addition of client-side support for the SMB protocol. This allows SFTPPlus to connect to any standard SMB/CIFS server, such as a Windows Share, Samba or Azure Files.

The Azure File REST API is now fully supported for both push and pull transfers.

This release includes an important defect fix for SharePoint Online authentication. The Microsoft login service was updated at the end of August 2021, breaking all previously released SFTPPlus versions.

Security Fixes

  • Python libraries were updated to fix CVE-2021-23336, addressing a web cache poisoning issue reported in urllib.parse.parse_qsl(). SFTPPlus is not using urllib.parse.parse_qsl() and was never vulnerable to this security issue. If you are explicitly calling urllib.parse.parse_qsl() as part of a custom SFTPPlus Python extension, update to this version to fix CVE-2021-23336. [#5682]

New Features

  • You can now use Azure Files as a source location for a transfer. [client-side][http] [#5016]
  • You can now configure an SMB (Windows Share, Azure Files, Samba) location as the source and destination for a transfer. [client-side][smb] [#4701][#5685]
  • Azure Storage API was updated to use API version 2020-04-08. [#3010-1]
  • Azure Files locations can now list directories and get the attributes of items. [client-side][http] [#3010]
  • You can now configure a timeout for the HTTP authentication method. In the previous version, the HTTP authentication connection was closed after a fixed 120 seconds if the server didn't return a response. [server-side] [#5696]
  • The RADIUS authentication method now supports CHAP, MS-CHAP-V1 and MS-CHAP-V2. [server-side] [#5701]
  • The RADIUS authentication method can be configured with a custom NAS-Port number and now has a debug option. [server-side] [#5702]
  • The group_mapping configuration now does case insensitive matching for the attribute names. [server-side][ldap][radius] [#5706-1]
  • You can now configure the RADIUS authentication to continue validating the credentials even when the RADIUS server returns a successful response. This can be used to implement multi-factor authentication for legacy operating system accounts, by first sending the requests to an MFA-aware RADIUS server. [server-side] [#5706]
  • You can now configure a transfer using a temporary file name to an Azure Files location destination. [#5022]
  • AIX 7.1 and newer for IBM Power Systems is now a supported platform. AIX packages embed OpenSSL 1.0.2 libraries patched with latest security fixes, up to and including CVE-2020-1971, CVE-2021-23840, CVE-2021-23841. [#5581]
  • Alpine Linux 3.14 on x86_64 is now supported. [#5682]
  • When failing to initialize the data connection, the error message now indicates whether a passive or active connection was attempted. In previous versions both passive and active connections had the same error message. [server-side][ftp] [#5681]
  • The data associated with an event will now contain the file extension and the file base name without the extension. [#5686]
  • You can now configure the duration for which SFTPPlus will wait for the RADIUS server to provide a response. In previous versions, a fixed timeout of 10 seconds was used. [server-side][radius] [#5694]

Defect Fixes

  • The SharePoint Online authentication was updated to work with latest Microsoft server changes. [client-side][webdav] [#5710]
  • HTTP and HTTPS file downloads now work with cURL. This was a regression introduced in version 4.12.0. [server-side][http][https] [#5693-1]
  • HTTP and HTTPS file transfer services now support resuming downloads. [server-side][http][https] [#5693]
  • The Local Manager and documentation pages now start much faster when launched via their links and commands. [local-manager] [#5677]
  • An extra event with ID 20024 is no longer emitted when failing to initialize the FTP client passive connection. [client-side][ftp][ftps] [#5681-1]
  • An FTP transfer or location no longer fails when the remote directory can't be listed. The error is emitted and the directory listing is retried. [client-side][ftp][ftps] [#5681-2]

Deprecations and Removals

  • Alpine Linux 3.12 is no longer supported. We recommend using Alpine Linux 3.14 on x86_64 for your containerized SFTPPlus deployments. [#5682]
  • The default authentication method for RADIUS is now MS-CHAP-V2. In previous versions the default method was PAP. [server-side] [#5701]

You can check the full release notes here.

• • •

Google Cloud Kubernetes Deployment

Mon 02 August 2021 | general kubernetes

Kubernetes banner

SFTP and HTTPS file server cluster overview

This article describes the deployment of a trial SFTPPlus engine using an already created Google Kubernetes Engine (GKE) service on Google Cloud Platform.

The example Kubernetes YAML file can be found in our GitHub SFTPPlus Kubernetes repository, while the container source is available at our GitHub SFTPPlus Docker repository.

The actual user data is persisted using a single Google Cloud Storage bucket.

You can adapt the example from this article to any other Kubernetes system, like OpenShift, Azure Kubernetes Service (AKS) or Amazon AWS.

We would love to hear your feedback. Get in touch with us for comments or questions.

Final result

Once you complete the steps in this guide, you will have an SFTPPlus application with the following services:

  • Port 10020 - HTTPS web management interface
  • Port 443 - HTTPS server for end-user file management
  • Port 22 - SFTP server for end-user file access

All these services will be available via the public IP address associated with your load balancer.

The Google Cloud storage bucket is made available inside each container as the /srv/storage local path.

SFTPPlus also supports legacy FTP, explicit or implicit FTPS, and plain HTTP file access. They are not included in this guide to reduce the complexity of the provided information and configuration.

SFTPPlus GKE deployment diagram

Deployment components

To deploy SFTPPlus into the cluster we will use the following components:

  • The SFTPPlus Trial container image hosted at Docker Hub.
  • A Google Kubernetes Engine cluster that was already created.
  • A Google Cloud Storage bucket to persist user data. These are the files and directories available to end-users. You can create a new bucket or use an existing one.
  • A Google Cloud service account with write access to the storage bucket.
  • A Kubernetes cluster secret to store the Google Cloud Storage credentials. This will be used by the Kubernetes pods to access the persistent storage. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the SFTPPlus application. Instructions for creating this are provided below.
  • A Kubernetes Load Balancer service for connecting the application to the Internet. Instructions for creating this are provided below.

Cloud storage secure access from Kubernetes cluster

Since the persistent data is stored outside the cluster, we need to set up authentication to your cloud storage bucket from within the Kubernetes cluster.

We assume that you will use the Google Cloud console to create the storage bucket and the Service account.

Once the Service account is created, create a new credentials key or associate it with an existing one.

You will need to upload/copy the Service account's credential key to your cloud shell. For this example we assume that the file is named gcs-credentials.json and the secret is created using the sftpplus.gcs.credentials name. Then it can be imported into Kubernetes using the following command line:

kubectl create secret generic sftpplus.gcs.credentials \
    --from-file  gcs-credentials.json
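
You can verify that the secret was created and inspect its keys with a standard kubectl command, for example:

kubectl get secret sftpplus.gcs.credentials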

Load Balancer and Internet access

To access SFTPPlus over the Internet we will use a standard Kubernetes Load Balancer service.

Below, you can find an example YAML file named sftpplus-service.yaml that can be copied to your cloud console:

apiVersion: v1
kind: Service
metadata:
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sftpplus-app
  name: sftpplus-app-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 10020-to-10020-tcp
    nodePort: 30500
    port: 10020
    protocol: TCP
    targetPort: 10020
  - name: 443-to-10443-tcp
    nodePort: 32013
    port: 443
    protocol: TCP
    targetPort: 10443
  - name: 22-to-10022-tcp
    nodePort: 32045
    port: 22
    protocol: TCP
    targetPort: 10022
  selector:
    app: sftpplus-app
  sessionAffinity: None
  type: LoadBalancer

If you want to have the SFTPPlus services available on other port numbers, you can do so by updating the port configuration values. nodePort and targetPort don't need to be updated.
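
As a sketch, assuming you want the SFTP service exposed on port 2222 instead of 22 (a hypothetical port choice), only the port value of the matching entry changes, while nodePort and targetPort stay as they are; the name is just a label and can be adjusted to match:

  - name: 2222-to-10022-tcp
    nodePort: 32045
    port: 2222
    protocol: TCP
    targetPort: 10022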

Create or update the load balancer service with the following command:

kubectl apply -f sftpplus-service.yaml
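
Once the service is created, you can look up the public IP address assigned by the load balancer (shown in the EXTERNAL-IP column, which may stay pending for a minute or two) with:

kubectl get service sftpplus-app-load-balancer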

Kubernetes SFTPPlus application deployment

The SFTPPlus application will be deployed as a container inside a pod.

The configuration and data are persisted outside of the cluster, using a cloud storage bucket.

The deployment to the cluster can be done using the following YAML file, named sftpplus-workload.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sftpplus-app
  name: sftpplus-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sftpplus-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: sftpplus-app
    spec:
      containers:
      - env:
        - name: GCS_BUCKET
          value: sftpplus-trial-srv-storage
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /srv/gcs-credentials/gcs-credentials.json
        image: proatria/sftpplus-trial:4.12.0-cloud
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - fusermount
              - -u
              - /srv/storage
        name: sftpplus-trial
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /srv/gcs-credentials
          name: gcs-credentials-key
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: gcs-credentials-key
        secret:
          defaultMode: 420
          secretName: sftpplus.gcs.credentials

You should replace sftpplus-trial-srv-storage with the name of your storage bucket.

This does the following:

  • Creates a new container using the SFTPPlus trial image hosted on Docker Hub.
  • Gives access to the cloud storage bucket from inside the container at the /srv/storage path.
  • Makes the Service account credentials, provided via the cluster secret, available in the /srv/gcs-credentials/gcs-credentials.json file.

With the YAML file available in the cloud console, you can create or update the workload by using the following command:

kubectl apply -f sftpplus-workload.yaml
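
As a quick check, assuming the deployment above was applied unchanged, you can confirm that the pod is running and review its startup logs with standard kubectl commands:

kubectl get pods -l app=sftpplus-app
kubectl logs deployment/sftpplus-app
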
• • •

SFTPPlus Release 4.12.0

Tue 06 July 2021 | general release

We are announcing the latest release of SFTPPlus version 4.12.0.

This is an incremental release which includes both minor defect fixes and new functionality. Below are the complete changes for this release.

New Features

  • The source_ip_filter configuration option now allows defining a range of allowed IP addresses using the Classless Inter-Domain Routing (CIDR) notation. [#1044]
  • When a new component is created using the Local Manager interface, the component is automatically started if "Launch at startup" is enabled. [local-manager] [#1917]
  • WebDAVS locations now support HTTP Basic Authentication. [client-side][webdavs][https] [#3913]
  • SFTPPlus can now be launched with a read-only configuration file and cache. [server-side] [#5591]
  • Azure Files Locations now support automatic directory creation. [client-side][http] [#5593]
  • The account configuration now contains the account creation time in ISO format. [server-side] [#5635]
  • TOTP multi-factor authentication for LDAP users is now possible even with standard LDAP servers not providing native TOTP support. [#5663]
  • The SFTPPlus download page now has specific entries for Amazon Linux and older Red Hat Enterprise Linux versions. These entries link to the generic Linux SFTPPlus package, which works with any glibc-based Linux distribution. [#5664]

Defect Fixes

  • The "Enabled at startup" configuration option was renamed as "Launch at startup". [local-manager] [#1917]
  • The last login report now shows only the IP address; the port number is no longer shown. This makes it easier to search based on IP only. [#5637]
  • The event with ID 60070, emitted when the destination location is connecting and not yet ready for a transfer, was moved from the failure group to the informational one. [#5643]

Deprecations and Removals

  • SUSE Linux Enterprise Server (SLES) 11 and 12 on X86_64 are no longer supported. Use the generic Linux package on SLES and contact us if you need specific support for SFTPPlus on any version of SUSE Linux Enterprise Server, including using OS-provided OpenSSL libraries instead of our generic ones. [#5664]

You can check the full release notes here.

• • •

Kubernetes with NFS persistence Deployment

Mon 07 June 2021 | general kubernetes

Kubernetes banner

Deployment Foundation

This article describes the deployment of a trial SFTPPlus engine using the Google Cloud Platform Kubernetes Engine service, with the persisted data shared between the cluster nodes using an NFS server that is also hosted inside the cluster.

The deployment is done in a single zone.

The container image used in this example is the DockerHub SFTPPlus Trial.

The source of the container image is available from our public GitHub SFTPPlus Docker repository.

The example Kubernetes YAML file can be found in our GitHub SFTPPlus Kubernetes repository.

The actual user data is persisted using a single Google Cloud Compute Engine storage disk.

The information from this article can be adapted to use any other container image or deployed into any other Kubernetes Engine service, like Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service.

This article assumes that you already have a working Kubernetes cluster.

It also assumes that the SFTPPlus application version and configuration are managed and versioned using the container image.

For any comments or questions, don't hesitate to get in touch with us.

Final result

Once you complete the steps in this guide, you will have an SFTPPlus application with the following services:

  • Port 10020 - HTTPS web based management console
  • Port 443 - HTTPS end-user file management service
  • Port 22 - SFTP end-user service
  • Port 80 - Let's Encrypt Validation service

All these services will be available via your cluster IP address.

The Compute Engine disk is made available inside each container as the /srv/storage local path.

SFTPPlus GKE deployment diagram

Moving parts

To implement the SFTPPlus service we will use the following parts:

  • The SFTPPlus Trial container image hosted at Docker Hub.
  • A Google Kubernetes Engine cluster with at least 2 nodes, each node with a minimum of 2 GB of memory and 100 GB of storage. This is a prerequisite for this article.
  • A Kubernetes persistent volume (and persistent volume claim) to store the user data. Instructions for creating this are provided below.
  • A Kubernetes Load Balancer service for connecting the application to the Internet. Instructions for creating this are provided below.
  • A Kubernetes ClusterIP service that allows multiple cluster pods concurrent access to the persistence disk. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the NFS server that will make the data from the persistence disk available to multiple pods inside the cluster. Instructions for creating this are provided below.
  • A Kubernetes workload for hosting the SFTPPlus application. Instructions for creating this are provided below.

Kubernetes load balancer and Internet access

This section describes the process of creating a Kubernetes load balancer service to allow external Internet access to the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-service.yaml to your cloud console:

apiVersion: v1
kind: Service
metadata:
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: sftpplus-app
  name: sftpplus-app-load-balancer
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 443-to-10443-tcp
    nodePort: 32013
    port: 443
    protocol: TCP
    targetPort: 10443
  - name: 22-to-10022-tcp
    nodePort: 32045
    port: 22
    protocol: TCP
    targetPort: 10022
  selector:
    app: sftpplus-app
  sessionAffinity: None
  type: LoadBalancer

If you want to make the SFTPPlus services available on other port numbers, you can do so by updating the port configuration values. nodePort and targetPort don't need to be updated.

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f sftpplus-service.yaml

Cluster NFS service

To allow multiple pods to access the same persistence disk at the same time, we are going to create an internal ClusterIP service.

It assumes that you will upload the following YAML file named nfs-service.yaml to your cloud console:

apiVersion: v1
kind: Service
metadata:
  labels:
    role: nfs-server
  name: nfs-server
  namespace: default
spec:
  ports:
  - name: 2049-to-2049-tcp
    port: 2049
    protocol: TCP
    targetPort: 2049
  - name: 20048-to-20048-tcp
    port: 20048
    protocol: TCP
    targetPort: 20048
  - name: 111-to-111-tcp
    port: 111
    protocol: TCP
    targetPort: 111
  selector:
    role: nfs-server
  sessionAffinity: None
  type: ClusterIP

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f nfs-service.yaml

Persistence provisioning

Here we create 2 persistent volume claims:

  • One for the actual persisted disk available to the NFS server.
  • Another one, backed by a persistent volume definition, to access the NFS server as a persistent disk from multiple pods.

It assumes that you will upload the following YAML file named nfs-pv.yaml to your cloud console:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-disk-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: NFS-CLUSTER-IP
    path: "/"

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi

You should replace NFS-CLUSTER-IP with the internal cluster IP generated after applying the nfs-service.yaml file.
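
A minimal way to look up this IP, assuming the nfs-server service from the previous step was created in the default namespace, is:

kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}'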

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f nfs-pv.yaml
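
You can then check the status of the two claims before continuing; depending on your storage class, nfs-disk-claim may only become Bound once the NFS server pod is scheduled:

kubectl get pvc nfs-disk-claim nfs-pvc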

Cluster NFS server workload

Next we will create the actual NFS server workload that will connect to the Compute Engine disk and make it available over the internal cluster network.

It assumes that you will upload the following YAML file named nfs-app.yaml to your cloud console:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-server
  name: nfs-server
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      role: nfs-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - image: gcr.io/google_containers/volume-nfs:0.8
        imagePullPolicy: IfNotPresent
        name: nfs-server
        ports:
        - containerPort: 2049
          name: nfs
          protocol: TCP
        - containerPort: 20048
          name: mountd
          protocol: TCP
        - containerPort: 111
          name: rpcbind
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /exports
          name: nfs-server-disk
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: nfs-server-disk
        persistentVolumeClaim:
          claimName: nfs-disk-claim

With the YAML file available in the cloud console, you can create the service by using the following command:

kubectl create -f nfs-app.yaml
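
Before moving on, it is worth confirming that the NFS server pod is up and running, for example:

kubectl get pods -l role=nfs-server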

Cluster SFTPPlus application workload

This section describes the creation and configuration of a workload that will run a pod hosting the SFTPPlus application.

It assumes that you will upload the following YAML file named sftpplus-workload.yaml to your cloud console:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sftpplus-app
  name: sftpplus-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sftpplus-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: sftpplus-app
    spec:
      containers:
      - image: proatria/sftpplus-trial
        imagePullPolicy: Always
        name: sftpplus-trial
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /srv/storage
          name: nfs-server
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: nfs-server
        persistentVolumeClaim:
          claimName: nfs-pvc

With the YAML file available in the cloud console, you can create the workload by using the following command:

kubectl create -f sftpplus-workload.yaml
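
As a final check, assuming the manifests above were applied unchanged, you can confirm that the SFTPPlus pod is running and look up the public IP address of the load balancer (the EXTERNAL-IP column):

kubectl get pods -l app=sftpplus-app
kubectl get service sftpplus-app-load-balancer
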
• • •

SFTPPlus Release 4.11.0

Fri 07 May 2021 | general release

We are announcing the latest release of SFTPPlus version 4.11.0.

This is an incremental release which updates the security libraries, fixes various defects, and adds backward compatible new features.

It includes an important change that fixes the display of the Authentications page in Internet Explorer.

Below are the complete changes for this release.

Security Fixes

  • Python has been patched with the latest security patches from ActiveState. Fixes CVE-2020-27619, CVE-2020-26116, CVE-2019-20907, CVE-2020-8492. On Linux and macOS, CVE-2021-3177 has also been fixed. [#5600-2]
  • The OpenSSL libraries used for Python's cryptography on Windows, generic Linux, and macOS were updated to version 1.1.1k. Fixes CVE-2020-1971, CVE-2021-23840, CVE-2021-23841, CVE-2021-3449, and CVE-2021-3450. On generic Linux and macOS, the same CVEs were fixed for Python's stdlib ssl module. [#5600]

New Features

  • The LDAP authentication method now supports IPv4 LDAP over TLS/SSL, also referred to as LDAPS. [server-side] [#2227]
  • It is now possible to configure the timeout delay for the external commands called during a transfer. In previous versions this was fixed to 15 seconds. [client-side] [#5549]
  • You can now configure the OS authentication method to associate the authenticated accounts to a specific SFTPPlus group or to a SFTPPlus group having the same name as the OS group name. In previous versions, the accounts were associated with the default SFTPPlus group. [server-side] [#5559]
  • The client-side WebDAV location is now configured using a URL. This allows for configuring the connection to WebDAV pages that are not located in the HTTP server's root path. [client-side][webdav] [#5602]
  • The file-dispatcher event handler now supports explicit globbing matching expressions to define a full destination path. In the previous version, when a globbing expression was used, the destination path defined only the base directory and the file name was always appended to it. [#5604-1]
  • You can now explicitly define a globbing matching expression using the g/EXPRESSION/ format. [#5604]
  • Events with ID 60012 and 60017 emitted on a successful client-side transfer now contain the destination file path as part of the attached data. [client-side] [#5597]

Defect Fixes

  • In the Local Manager, in the list of accounts for a local file authentication method, you will now see the name of the associated group. In previous versions, the group was listed as UNKNOWN. [#2368]
  • The authentications page of the Local Manager web console was fixed to work with Internet Explorer. This was a defect introduced in version 4.10.0. [#5547]
  • Defining configuration options inside the Local Manager using text values containing newline characters other than the default Unix or Windows ones no longer generates an invalid configuration file. [manager] [#5553]
  • The OS authentication manager will now show an error at startup when no group is configured for allowed users or administrators. In the previous versions, the OS authentication would start just fine and then deny any authentication request. [#5559]
  • On Linux and macOS the OpenPGP event handler now works when the main SFTPPlus process is started as root. [#5592]
  • For a file transfer configured not to transfer duplicated files via the transfer_memory_duration and ignore_duplicate_paths options, the full file transfer is now retried as a transfer restart when the rename operation fails. In previous versions the file was not re-transferred after the failed rename operation. [client-side] [#5597]
  • The documentation for the file-dispatcher event handler was updated to include information about variables available when defining the destination path. [#5604]
  • The FTP idle_data_connection_timeout will now use the default value when set to zero or a negative number, as documented. In previous versions, the timeout was disabled when the value was zero. [server-side][ftp] [#5610]

Deprecations and Removals

  • For transfers executed using a temporary file name, the destination_path attribute of the events with ID 60012 now contains the temporary path. This is because, at the time the event is emitted, the file is not yet renamed to the final destination path. In previous versions, it contained the final destination path. [client-side] [#5597]
  • Specific support for Amazon Linux 2 and Red Hat Enterprise Linux 7.x (including derivatives such as CentOS and Oracle Linux) has been removed due to OpenSSL 1.0.2 no longer being supported by the upstream cryptography project. Use the generic x64 Linux package instead. [#5600]
  • The address and port configuration options for the WebDAV client were removed and replaced with the url configuration. The configuration options are automatically migrated to the url option. [client-side][webdav] [#5602]
  • The default value for connection_retry_interval was increased from 60 seconds to 300 seconds (5 minutes). The default value for connection_retry_count was increased from 2 to 12. This makes a connection to a remote SFTP or FTP location be retried for 1 hour before stopping the transfers. [client-side] [#5610]

You can check the full release notes here.

• • •