Aspera Transfer Cluster Manager

The award-winning Aspera Transfer Cluster Manager (ATCM) is a management system for provisioning, monitoring, managing, and auto-scaling Aspera transfer clusters. Its Autoscale capabilities allow Aspera transfer nodes to scale themselves in today's dynamic cloud environments. The technology powers the Aspera Transfer Service and is available to customers using Aspera On Demand. The core capabilities are: an Autoscale cluster manager service for elastic scaling of transfer hosts, client load balancing, and automatic cluster configuration; the new node/v4 multi-tenant secure access key system; and a new cluster management UI for managing policies and reporting on transfers across the cluster. ATCM supports Aspera On Demand clusters on SoftLayer, AWS, and Azure.

Administrators have a full web UI and a corresponding set of APIs for all cluster management tasks. Clusters can be provisioned from customizable templates, and access keys can be created for Amazon, Azure, Azure SAS, IBM COS, and Google storage. Advanced storage features are supported, including AWS Infrequent Access, AWS Reduced Redundancy, AWS server-side encryption at rest (AES-256 or KMS), and Azure Page Blob, Block Blob, and Cool storage. ATCM works in tandem with ScaleKV, Aspera's distributed, scalable in-memory data store, to aggregate all transfer authorization, management, and reporting data across the cluster.
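As a rough illustration of the access key concept, a client application might assemble a creation request like the following. This is a sketch only: the field names, storage type strings, and placeholder credentials are assumptions for illustration, not the authoritative API schema; consult the Node API documentation for your ATCM release.

```python
# Illustrative sketch only: the payload shape below is an assumption,
# not the real node/v4 access key schema. Placeholder credentials are
# hypothetical and must be replaced with real ones.
import json

def build_access_key_request(storage_type, bucket, credentials):
    """Assemble a JSON body for creating a tenant access key scoped
    to a single cloud storage location."""
    payload = {
        "storage": {
            "type": storage_type,        # e.g. a cloud-specific type string
            "path": "/",                 # root of the tenant's storage area
            "bucket": bucket,
            "credentials": credentials,  # cloud storage credentials
        }
    }
    return json.dumps(payload)

body = build_access_key_request(
    "aws_s3", "my-bucket",
    {"access_key_id": "EXAMPLE_ID", "secret_access_key": "EXAMPLE_SECRET"})
print(json.loads(body)["storage"]["bucket"])  # → my-bucket
```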

The Autoscale functionality is complemented by ScaleKV, the scale-out data store for distributed, high-throughput collection, aggregation, and reporting of all transfer performance and file details across the Autoscale cluster.

The service provides automatic start/stop of transfer server instances, automatic balancing of client requests across available instances, monitoring and restart of all critical services on the server instances, and configurable service levels to manage maximum transfer load per instance, available idle instances for "burst", and automatic decommissioning of unused instances.



Moving huge files and data sets or massive file repositories globally to and from your cloud infrastructure can be very challenging. Traditional transfer software technologies are slow and unreliable, and shipping physical disk storage is time consuming, expensive and exposes your data to unnecessary security risks. Using the web interface, the Aspera Transfer Cluster Manager enables you to set user-defined policies that automatically manage the provisioning and deprovisioning of servers in a cluster to quickly accommodate large spikes and drops in transfer loads.


The ATCM manages transfer throughput SLAs and compute/bandwidth costs with elastic scaling. The service is part of the Aspera transfer server software stack and, based on user-defined policies, automatically manages the number of server instances needed to support client transfer demand, covering both nodes in active use and nodes booted up in reserve but idle.


As transfer loads increase and decrease, nodes are moved from idle to available for client requests, and from available to highly utilized and back again based on user-defined load metrics such as tolerances for low and high transfer throughput and online burst capacity. If the minimum number of available nodes drops below the user-defined threshold, the cluster manager boots up new nodes automatically, and then brings them back down when they are no longer needed. Similarly, unavailable/down nodes are automatically detected and restarted, and client requests are re-pointed to healthy nodes.
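The state transitions described above can be sketched as a simple decision rule. This is a minimal illustration under stated assumptions, not ATCM's actual algorithm: the function name, thresholds, and return values are all hypothetical.

```python
# Minimal sketch of an autoscale policy like the one described above.
# The names, thresholds, and decision rules are illustrative assumptions,
# not ATCM's implementation.
def autoscale_decision(node_utilizations, min_available, high_load, low_load):
    """Given per-node utilization (0.0-1.0), decide whether to launch
    new nodes, retire idle ones, or hold steady, keeping a configured
    number of nodes available for burst capacity."""
    # Nodes below the high-load watermark can still accept client requests.
    available = [u for u in node_utilizations if u < high_load]
    if len(available) < min_available:
        # Too few available nodes: boot enough to restore the reserve.
        return ("launch", min_available - len(available))
    # Nodes below the low-load watermark are idle candidates for retirement.
    idle = [u for u in node_utilizations if u < low_load]
    surplus = len(idle) - min_available
    if surplus > 0:
        return ("retire", surplus)
    return ("hold", 0)

# Two heavily loaded nodes and one idle node, with a policy that wants
# two nodes available for burst:
print(autoscale_decision([0.95, 0.9, 0.1], 2, 0.8, 0.2))  # ('launch', 1)
```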


ATCM works in conjunction with Aspera Direct-to-Cloud storage and is infrastructure independent: all of the Autoscale capabilities are implemented in the Aspera software and are therefore portable across cloud providers. AWS, IBM Cloud, and Azure autoscale clusters are supported.


All transfer initiation and management scales across the cluster with ScaleKV technology, an Aspera-created scale-out data store for distributed, high-throughput collection, aggregation, and reporting of all transfer performance and file details across the autoscale cluster.


Key API capabilities provided by the Cluster:

  • Query transfer status across the cluster - The new transfer API (node/v4) returns transfer progress and details across the entire cluster. Transfer statistics are automatically sharded across the memory of all nodes in the cluster, keeping up with very high transfer rates and number of sessions.
  • Transfer using all nodes in the Cluster - The new transfer API (node/v4) allows third-party applications to initiate transfers that use multiple or all nodes in the cluster, for throughput beyond the capability of a single node, with automatic failover and fault tolerance.
  • Securely Support Multiple Tenants - The new multi-tenant secure access key system (node/v4) allows Aspera administrators of applications such as Files, Faspex and Shares to securely support multiple tenants on the same cluster with private access to separate cloud storage and transfer reporting.
  • Report transfer status and usage by Tenant - The transfer API (node/v4) allows queries of transfer history by access key to securely report history and usage by access key tenant. (Aspera On Demand usage reporting is being enhanced with usage by access key.)
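As a hedged sketch of what a cluster-wide, tenant-scoped status query might look like from the client side (the endpoint path and query parameter names below are illustrative assumptions, not the authoritative node/v4 schema):

```python
# Sketch of querying cluster-wide transfer status filtered by access key.
# The path "/v4/transfers" and the parameter names are assumptions for
# illustration; consult the node/v4 reference for your release.
from urllib.parse import urlencode

def transfers_query_url(cluster_host, access_key, active_only=True):
    """Build a node/v4-style URL listing transfers for one tenant
    (access key), aggregated across every node in the cluster."""
    params = {"access_key": access_key}
    if active_only:
        params["status"] = "running"  # hypothetical filter value
    return f"https://{cluster_host}/v4/transfers?{urlencode(params)}"

url = transfers_query_url("cluster.example.com", "tenant_key_1")
print(url)
```

Because transfer statistics are sharded across the memory of all nodes, a single request like this can report on sessions running anywhere in the cluster.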


A new web-based cluster management UI manages access keys, cluster configuration (including the Autoscale policy), and the in-memory data store for transfer statistics.

Key features include:

  • Configuring auto-scale policies and transfer node (aspera.conf) templates and automatically implementing these on new and running clusters
  • Configuring and installing SSL certificates for Aspera nodes
  • Enhanced service monitoring and management to monitor and restart key Aspera services and automatically decommission failed nodes
  • Live cluster upgrade, allowing a cluster manager to migrate a live running cluster to a new machine image version without disrupting running transfers
  • Enhanced resilience to failure, including the ability to back up and restore cluster state, options to back the main state store with a clustered database (AWS RDS, Azure ClearDB), and, in failure recovery, the ability to launch new node images with configuration from the backup
  • Easy duplication of cluster template configurations through the UI
  • Creation of access keys for all supported Aspera transfer cluster platforms (AWS, IBM Cloud, Azure in 3.7)
  • Health monitoring and restart of all critical Aspera services on individual instances
  • Updating of the aspera.conf file across all nodes in the cluster
  • Viewing of all active and completed transfers, with filters by access key

The Aspera Transfer Cluster Manager is core technology in the Aspera Transfer Service and is available for customer-managed deployment as an Aspera On Demand subscription, with pricing based on a committed amount of data transferred. The ATCM is packaged as a virtual machine image and can provision clusters of Aspera servers from a deployment template. The clusters are deployed in a virtual private cloud (VPC).


The standard ATCM offering includes the following components:

  • A desktop and command line client for use with the service
  • Pre-packaged virtual machine images built for the cloud platform of your choosing
  • ATCM web-based UI 
  • Optional support for your separate Aspera Files, Faspex and Shares applications


Supported Cloud Platforms

  • SoftLayer
  • AWS
  • Azure & Google (coming soon)

Cloud Requirements

  • Amazon
    • S3 Object Storage
    • EC2 Server with an Amazon Machine Image (AMI) from Aspera for both the Aspera Transfer Node and Cluster Manager
    • Security Group, VPC
  • SoftLayer
    • Swift Object Storage
    • Server instance with CCI Image IDs from Aspera for both the Aspera Transfer Node Image ID and Cluster Manager Image ID
    • DNS service with a domain assigned to the Route 53 service

Further ATCM requirements and support details, along with download information, are available on the Aspera support site.



A full-featured Software-as-a-Service offering for sharing large files and data sets directly from cloud and on-premises storage – located anywhere, to anywhere, with anyone.


An easy-to-use web application for sharing files and directories of any size with employees across your organization or with external customers and partners.


A global person-to-person platform for file-based collection, distribution, and collaboration among geographically dispersed teams.


Web application for transfer management, monitoring and control across your entire Aspera network.


A versatile server application which enables high-speed movement of files across global enterprises, high-volume content ingest and distribution, and replaces FTP/SFTP servers for transfers of large, business-critical data.


A hosted multi-tenant software service running in the Cloud that enables high-speed upload and download of large files and large data sets directly into Cloud object storage.

What's New with Autoscale Cluster Manager

The recent releases of the Autoscale Cluster Manager and cluster nodes include the following transfer performance enhancements:

  • The transfer server has been upgraded to the most recent IBM Aspera Enterprise Server 3.7.3, with all of its latest benefits, and is now available for Google in addition to AWS, IBM Cloud, and Azure (beta)
  • Cluster node network drivers have been upgraded to enable AWS "enhanced networking", allowing 3 Gbps of transfer throughput for a single session on a single Amazon virtual machine
  • Cluster nodes on Google have also achieved a 3 Gbps transfer speed, per individual session

The Cluster manager and transfer nodes have also been improved to enhance support for running clusters in a Private VPC (AWS), with features such as:

  • Admins can configure a specific private IP address to be used as state-store host and node configuration download when launching clusters
  • Admins can use the Cluster Manager's private IP address for cluster nodes to download the node configuration and connect to the state store
  • Admins can now specify private DNS names when launching a cluster.
  • Cluster nodes now support IBM COS - S3 as a storage type
  • Support has been added for IBM Cloud VLAN 'id' instead of VLAN 'number'
  • Cluster nodes have been tuned with overcommit_memory, Transparent Huge Pages, and TCP backlog settings
  • Cluster manager now supports cluster nodes in AWS eu-central-1 (Frankfurt)

The Cluster Provisioning system has been improved with the following enhancements:

  • Admins can now run a custom first boot script on the Cluster Manager specified through user data. For example, you can assign an elastic or secondary private IP address needed for automated failure recovery
  • Admins can now run a custom first boot script on cluster nodes specified in the cluster configuration
  • Admins can now configure cluster nodes to create and mount a separate "swap volume" when using instance types that do not provide local instance store volumes

The Cluster Management service and transfer nodes now support these configuration options:

  • Extended timeout settings for ScaleKV and extended open file limits for increased robustness
  • New options to configure bandwidth settings for the Transfer Node Usage calculation and thus control thresholds for auto-scaling and SLAs
  • Additional Configuration options for more control in external service management
  • New default configurations disabling asperacentral vacuum and http fallback
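The bandwidth-based usage calculation mentioned above can be illustrated with a small sketch. The formula, the names, and the 85% watermark below are assumptions for illustration, not ATCM's actual configuration:

```python
# Hedged sketch of the "Transfer Node Usage" idea: usage relative to a
# configured bandwidth ceiling drives the auto-scale thresholds. The
# formula and watermark value are illustrative assumptions.
def node_usage(current_throughput_mbps, configured_bandwidth_mbps):
    """Fraction of the node's configured bandwidth currently in use,
    capped at 1.0."""
    return min(current_throughput_mbps / configured_bandwidth_mbps, 1.0)

def breaches_sla(usage, high_watermark=0.85):
    """True when the node should be treated as fully loaded, so the
    cluster manager stops routing new client requests to it."""
    return usage >= high_watermark

u = node_usage(850, 1000)
print(u, breaches_sla(u))  # 0.85 True
```

Raising the configured bandwidth lowers the computed usage for the same throughput, which is how these settings control the auto-scaling thresholds and SLAs.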

The Cluster Manager itself has been enhanced with the following features:

  • Cluster Manager and nodes now include jq and cloud-specific command line utilities
  • Error messages in the status tab are greyed out when an activity returns to healthy
  • Admins can now specify multiple DNS hosted zones and configure transfer nodes with separate hosted zones for public and private IP addresses, and configure hosted zone IDs for when there are multiple hosted zones with the same name
  • The password for the admin user of the ATC-API is automatically set to the Instance ID on first boot of the instance and the default Cluster Manager console timeout is now 2 weeks
  • The Cluster Manager console now shows Public IP and Private IP columns instead of the Hostname column for cluster nodes, and the activity tab now shows the node ID
  • Logs on cluster nodes and the Cluster Manager now rotate


“Aspera offers a mature, fast, reliable and secure solution capable of handling the high volume of video content we need to deliver directly to cloud storage.” Simon Christie, Broadcast Systems Engineer, Distribution & Broadcast Technology, Channel 4