Transfer Cluster Manager with AutoScale

The new Autoscale capability in Enterprise Server 3.6 allows Aspera transfer nodes to self-scale in today's dynamic cloud environments. The core capabilities are an Autoscale cluster manager service for elastic scaling of transfer hosts, client load balancing, and automatic cluster configuration; the new node/v4 multi-tenant secure access key system; and a new cluster management UI for managing policies and reporting on transfers across the cluster. Version 3.6 supports Aspera On Demand clusters on SoftLayer, AWS, and Azure.

The service allows for dynamic, real-time scale-out of transfer capacity: it automatically starts and stops transfer server instances, balances client requests across the available instances, and enforces configurable service levels that govern the maximum transfer load per instance, the number of idle instances held available for "burst" demand, and the automatic decommissioning of unused instances.

  • Manages transfer throughput SLAs and compute/bandwidth costs with elastic scaling - The service is part of the Aspera transfer server software stack. Based on user-defined policies, it automatically manages both the number of server instances needed to support client transfer demand and the number of nodes kept booted in reserve but idle.
  • Provides high availability and load balancing - As transfer loads increase and decrease, nodes move from idle to available for client requests, and from available to highly utilized and back again, based on user-defined load metrics such as tolerances for low and high transfer throughput and online burst capacity (a policy sketch follows this list). If the number of available nodes drops below the user-defined minimum, the cluster manager boots new nodes automatically and brings them back down when they are no longer needed. Similarly, unavailable or down nodes are automatically detected and restarted, and client requests are re-pointed to healthy nodes.
  • Works on all major clouds and in conjunction with Aspera Direct-to-Cloud storage - Because all of the Autoscale capabilities are implemented in the Aspera software, they are infrastructure independent and portable across cloud providers. The 3.6 release includes support for AWS, SoftLayer, and Azure auto-scale clusters.
  • Scales all transfer initiation and management across the cluster with ScaleKV technology - ScaleKV is a new Aspera-created scale-out data store for distributed, high-throughput collection, aggregation, and reporting of transfer performance and file details, and for transfer initiation, across the auto-scale cluster.
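
As a rough illustration of how such user-defined policies might drive scaling decisions, the sketch below models a few plausible policy parameters and the scale-out calculation they could imply. The parameter names and logic are assumptions for illustration only, not the actual Autoscale policy schema.

    # Illustrative only: the parameter names and logic below are assumptions and
    # do not reflect the actual Autoscale policy schema in Enterprise Server 3.6.
    from dataclasses import dataclass

    @dataclass
    class AutoscalePolicy:
        min_available_nodes: int = 2        # idle-but-ready nodes kept for "burst" demand
        max_nodes: int = 10                 # ceiling on instances the manager may boot
        high_utilization_pct: float = 80.0  # above this, a node counts as highly utilized
        low_utilization_pct: float = 20.0   # below this, a node may return to the idle pool

    def nodes_to_boot(available: int, total: int, policy: AutoscalePolicy) -> int:
        """Boot enough instances to restore the minimum available pool, capped at max_nodes."""
        shortfall = max(policy.min_available_nodes - available, 0)
        return max(min(shortfall, policy.max_nodes - total), 0)

    # Example: 1 node currently available out of 5 total -> boot 1 more to restore the pool of 2.
    print(nodes_to_boot(available=1, total=5, policy=AutoscalePolicy()))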

Cluster API Capabilities

Key API capabilities provided by the Cluster:

  • Query transfer status across the cluster - The new transfer API (node/v4) returns transfer progress and details across the entire cluster. Transfer statistics are automatically sharded across the memory of all nodes in the cluster, keeping up with very high transfer rates and session counts (see the query sketch after this list).
  • Transfer using all nodes in the cluster - The new transfer API (node/v4) allows third-party applications to initiate transfers that use multiple or all nodes in the cluster, providing throughput beyond the capability of a single node along with automatic failover and fault tolerance.
  • Securely support multiple tenants - The new multi-tenant secure access key system (node/v4) allows Aspera administrators of applications such as Files, faspex, and Shares to securely support multiple tenants on the same cluster, with private access to separate cloud storage and separate transfer reporting.
  • Report transfer status and usage by tenant - The transfer API (node/v4) allows transfer history to be queried by access key, so history and usage can be reported securely per tenant. (Aspera On Demand usage reporting is being enhanced with per-access-key usage.)
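
As a minimal sketch of how a third-party application might query cluster-wide transfer status for a single tenant, the example below uses Python's requests library. The host, port, credentials, endpoint path, and response fields are assumptions for illustration; consult the node/v4 API documentation for the actual contract.

    # Hypothetical example: host, credentials, endpoint path, and response fields
    # are placeholders, not the documented node/v4 contract.
    import requests

    CLUSTER_URL = "https://cluster.example.com:9092"      # hypothetical cluster address
    AUTH = ("my_access_key_id", "my_access_key_secret")   # hypothetical tenant credentials

    # Ask the cluster for transfers visible to this access key, across all nodes.
    resp = requests.get(
        f"{CLUSTER_URL}/api/v4/transfers",                # hypothetical path
        auth=AUTH,
        params={"limit": 100},
        timeout=30,
    )
    resp.raise_for_status()

    # Assumes the response body is a JSON list of transfer records (illustrative).
    for transfer in resp.json():
        print(transfer.get("id"), transfer.get("status"), transfer.get("bytes_transferred"))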

Cluster Management UI

A new web-based cluster management UI manages access keys, cluster configuration (including the Autoscale policy), and the in-memory data store for transfer statistics.

Key features include:

  • Configuring auto-scale policies and transfer node (aspera.conf) templates, and automatically applying them to new and running clusters
  • Configuring and installing SSL certificates for Aspera nodes
  • Enhanced service monitoring and management that monitors and restarts key Aspera services and automatically decommissions failed nodes
  • Live cluster upgrade, allowing a cluster manager to migrate a live, running cluster to a new machine image version without disrupting running transfers.
  • Enhanced resilience to failure, including the ability to back up and restore the cluster state, options to back the main state store with a clustered database (AWS RDS, Azure ClearDB), and, in failure recovery, the ability to launch new node images with configuration from the backup.
  • Easy duplication of cluster template configurations through the UI
  • Create access keys for all supported Aspera transfer cluster platforms (AWS, SoftLayer, Azure in 3.6), as sketched after this list
  • View all active and completed transfers with filters by access key
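
To make the access key workflow concrete, the sketch below shows how a tenant access key might be created programmatically against the cluster API; the UI performs the equivalent operation. The endpoint path, credentials, and payload fields are assumptions for illustration, not the documented node/v4 access key schema.

    # Hypothetical example: endpoint path, credentials, and payload fields are
    # placeholders for illustration, not the documented node/v4 access key schema.
    import requests

    CLUSTER_URL = "https://cluster.example.com:9092"    # hypothetical cluster address
    ADMIN_AUTH = ("node_admin", "node_admin_password")  # hypothetical admin credentials

    payload = {
        "storage": {                  # tenant-private cloud storage (illustrative fields)
            "type": "aws_s3",
            "bucket": "tenant-a-bucket",
            "path": "/",
        },
    }

    resp = requests.post(
        f"{CLUSTER_URL}/api/v4/access_keys",            # hypothetical path
        auth=ADMIN_AUTH,
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    print("created access key:", resp.json().get("id"))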