ASPERA TRANSFER PLATFORM

Aspera creates transport software that overcomes the inherent distance limitations of traditional transfer technologies. Built into every Aspera Transfer Server is the Aspera Transfer Platform, which solves the fundamental problems of data movement over long-haul wide area networks. It is a truly universal high-speed transfer platform: it supports transfers of any data type or size, leverages all infrastructure and storage types regardless of location, delivers maximum speed, and offers comprehensive support for all transport paradigms and deployment models.

SOFTWARE LAYER

Media applications such as editing, transformation, management, distribution or broadcast (e.g. Avid, Elemental, Harmonic, Media Beacon, Sony and Zencoder, to name a few) need fast and secure access to high-resolution media regardless of location.

Similarly, Life Sciences and Energy & Petroleum applications that leverage high-performance computing to analyze DNA sequences, proteomic data, or seismic data from remote drilling sites need to move that data quickly and securely from where it is created to where it is needed.

The Aspera Transfer Platform provides this service level in software via a robust set of REST and SOAP APIs, enabling these applications to access large data sets with speed, security and distance neutrality.
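
To make this concrete, the sketch below shows how an application might request a high-speed transfer through such a REST API. It is a minimal illustration only: the host name, credentials, endpoint and payload fields are assumptions, not a definitive reference for the Aspera APIs.

    # Illustrative sketch: an application requesting a high-speed transfer
    # through a REST API exposed by an Aspera transfer server. The host,
    # credentials, endpoint and payload fields are assumptions.
    import requests

    NODE_API = "https://aspera-node.example.com:9092"    # hypothetical server address
    AUTH = ("api_user", "api_password")                  # hypothetical API credentials

    transfer_request = {
        "direction": "send",                             # push data from this node
        "remote_host": "receiver.example.com",           # destination transfer server
        "paths": [{"source": "/data/survey_42.segy"}],   # large data set to move
        "target_rate_kbps": 500_000,                     # ask for up to 500 Mbps
    }

    resp = requests.post(f"{NODE_API}/ops/transfers",
                         json=transfer_request, auth=AUTH, timeout=30)
    resp.raise_for_status()
    print("Transfer accepted:", resp.json())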

INFRASTRUCTURE LAYER

The Aspera Transfer Platform is infrastructure agnostic, offering direct, high-speed access to cloud object storage such as Amazon Web Services S3, Microsoft Windows Azure BLOB and OpenStack Swift; on-premises NAS such as EMC Isilon and NetApp; and Content Delivery Networks such as Akamai, Level 3 and Limelight, to name a few.

Whatever the infrastructure type, it is typically located far from where the data originates or where the data eventually needs to end up. With the Aspera Transfer Platform, extremely large data sets can be stored anywhere and moved at high speed between storage endpoints as needed. Aspera delivers secure, line-speed transfers of big data regardless of the storage type or location (block, object, on-premises, in the cloud, hybrid or embedded) and independent of network types or conditions. More importantly, access to the different storage types is seamless, uniform and completely transparent to client applications, allowing maximum flexibility in deploying the infrastructure of choice.

CLIENT APPLICATIONS

Aspera clients enable fasp-powered transfers to and from the Aspera Transfer Platform on virtually any device. Native applications are available for Windows, Mac and Linux operating systems, as well as plug-ins for standard web browsers, an add-in for Microsoft Outlook and mobile apps for iOS and Android. Regardless of the selected client option, users can easily initiate secure, high-speed uploads and downloads, as well as monitor and control transfer rates.

Centralized monitoring, management and reporting of the entire Aspera transfer environment are available via real-time dashboards. New transfers can be initiated ad hoc or automated on a schedule. Administrators can remotely configure any connected Aspera server and leverage comprehensive transfer logs to create customized activity, usage and billing reports and notifications.
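
As a hedged illustration of the reporting side, the sketch below polls a transfer server's REST interface and totals the bytes moved, the kind of data a usage or billing report would draw on. The endpoint and response fields are assumptions, not the product's actual schema.

    # Illustrative sketch: building a simple usage summary from a transfer
    # server's REST interface. Endpoint and field names are assumptions.
    import requests

    NODE_API = "https://aspera-node.example.com:9092"    # hypothetical server address
    AUTH = ("api_user", "api_password")                  # hypothetical API credentials

    resp = requests.get(f"{NODE_API}/ops/transfers", auth=AUTH, timeout=30)
    resp.raise_for_status()

    total_bytes = 0
    for transfer in resp.json():
        moved = transfer.get("bytes_transferred", 0)
        total_bytes += moved
        print(transfer.get("id", "?"), transfer.get("status", "unknown"), moved)

    print("Total bytes moved:", total_bytes)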


Aspera is the first in the world to offer seamless, line-speed ingest and distribution of very large files and data sets to and from cloud-based object storage such as AWS S3 and Azure BLOB, independent of distance and with complete security. Digital supply chains now span the globe, and the complexity of transferring ever-larger files over ever-longer distances is growing exponentially. With the Direct-to-Cloud capabilities built into the Aspera Transfer Platform and available via Aspera On-Demand, companies can now realize the full benefits of the cloud.


HIGH-SPEED, DIRECT-TO-CLOUD

Aspera Direct-to-Cloud eliminates the two bottlenecks, one across the WAN and one inside the data center, to enable direct I/O in and out of Amazon S3 and Azure BLOB object storage from all Aspera clients and servers. Direct-to-Cloud performs high-speed fasp transfers over the WAN and transparently handles cloud-specific I/O, including multi-part uploads, delivering unrivaled performance for the transfer of large files, or large collections of files, in and out of the cloud.

Using parallel HTTP streams between object storage and the Aspera On-Demand server running on cloud infrastructure, the intra-cloud data transfer no longer constrains the overall transfer rate. Files are written directly to S3 and Azure, without a stop-off on the compute server, achieving transfer rate improvements of up to 10x per virtual machine instance over typical cloud transfer solutions.
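
The sketch below illustrates the underlying technique of writing an object over parallel HTTP streams with a multi-part upload. It is a simplified, generic example against S3 using boto3 and a thread pool, not Aspera's implementation; the bucket, key and path names are placeholders.

    # Simplified illustration of a parallel multi-part upload: the object is
    # split into parts, each part is PUT over its own HTTP stream, and the
    # parts are assembled server-side. Not Aspera's implementation.
    import os
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    BUCKET = "example-ingest-bucket"      # placeholder bucket
    KEY = "media/master_4k.mov"           # placeholder object key
    SOURCE = "/data/master_4k.mov"        # placeholder local file
    PART_SIZE = 16 * 1024 * 1024          # 16 MiB parts (S3 minimum is 5 MiB)

    s3 = boto3.client("s3")
    upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]

    def upload_part(part_number, offset):
        """Read one slice of the source file and upload it as a numbered part."""
        with open(SOURCE, "rb") as f:
            f.seek(offset)
            data = f.read(PART_SIZE)
        resp = s3.upload_part(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                              PartNumber=part_number, Body=data)
        return {"PartNumber": part_number, "ETag": resp["ETag"]}

    offsets = range(0, os.path.getsize(SOURCE), PART_SIZE)
    with ThreadPoolExecutor(max_workers=8) as pool:       # parallel HTTP streams
        parts = list(pool.map(upload_part, range(1, len(offsets) + 1), offsets))

    s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})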

AUTOSCALE SCREEN TOUR

Screen 1: Launching a new server cluster and configuring the Autoscale policy

Using the Autoscale platform technology, administrators can launch a new cluster and set the Autoscale policy to accommodate highly variable or unpredictable transfer capacity demands, or to better monitor and manage clusters with a large number of nodes. These policies define how the cluster scales up and down based on transfer capacity demands as well as the service and server health status of the cluster and each node.
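
The settings below sketch the kind of parameters such a policy captures, consistent with the screens that follow; the field names are hypothetical, not the platform's actual configuration schema.

    # Hypothetical Autoscale policy sketch; field names are illustrative only.
    autoscale_policy = {
        "min_available_nodes": 2,         # keep at least two nodes ready for new transfers
        "max_nodes": 10,                  # hard cap on cluster size
        "high_usage_threshold_pct": 80,   # utilization that marks a node highly utilized
        "scale_down_idle_minutes": 15,    # de-provision surplus nodes idle this long
        "health_check_interval_sec": 30,  # how often service and server health are polled
    }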

Screen 2: Autoscale platform interface with the Description, Infrastructure, and Autoscale policy tabs

These screens provide key information about the cluster with tabs for cluster description, infrastructure, Autoscale policy, keys, and node API. 

Screen 3: Node summary screen (above) and the detailed Monitor Node screen (below)

These summary and detail node screens provide key information about the status of the individual nodes in the cluster with expandable views of the Autoscale policy, ScaleDB, and Logs.

Screen 4: Monitor Transfer screen showing a 5 GB upload from AWS S3 

The Autoscale platform provides a unified view of current active transfers and the recent transfer history. 

Screen 5: Auto-provisioning of a new node when a threshold is crossed 

Based on the Autoscale policy that was established for this cluster (a minimum of 2 available nodes), the Autoscale platform automatically provisions a third node after transfer capacity demands cause the first node’s utilization to cross the policy’s high usage threshold.

Screen 6: Third node terminating after the highly utilized node completes a transfer

Following the Autoscale policy (a minimum of 2 available nodes), the platform automatically de-provisions the third node after transfer capacity demands on the first node fall below the policy’s high usage threshold.

The Aspera Transfer Platform includes the Aspera Transfer Cluster Manager with Autoscale capabilities for dynamic, real-time scale-out of transfer capacity:

  • automatic start/stop of transfer server instances
  • automatic balancing of client requests across available instances
  • configurable service levels to manage the maximum transfer load per instance and the idle instances kept available for "burst"
  • automatic decommissioning of unused instances

The Autoscale functionality is complemented by a new scale-out data store for distributed, high-throughput collection, aggregation and reporting of all transfer performance and file details across the Autoscale cluster.

AUTOSCALE SERVICE

The service runs on the Aspera Transfer Server stack and automatically manages the number of instances needed to support client transfer demands based on user-defined criteria. The service determines how many nodes are in use and maintains a defined number of nodes in reserve, booted up but idle.

As transfer loads increase and decrease, nodes move from idle to available, and from available to highly utilized and back again, based on user-defined load metrics. If the number of available nodes drops below the user-defined minimum, the cluster manager boots up new nodes automatically and brings them back down when they are no longer needed.
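
A minimal sketch of that decision loop follows, assuming hypothetical node records, policy fields and provisioning callbacks; it illustrates the described behavior rather than the product's internal code.

    # Minimal sketch of the scale-up / scale-down decision described above.
    # Node records, policy fields and provisioning callbacks are assumptions.
    def reconcile(nodes, policy, provision_node, terminate_node):
        """Classify nodes by load, then enforce the minimum-available policy."""
        high = policy["high_usage_threshold_pct"]
        available = [n for n in nodes if n["utilization_pct"] < high]
        busy = [n for n in nodes if n["utilization_pct"] >= high]

        # Scale up: too few lightly loaded nodes remain to absorb new transfers.
        while (len(available) < policy["min_available_nodes"]
               and len(available) + len(busy) < policy["max_nodes"]):
            available.append(provision_node())

        # Scale down: more idle capacity than the policy requires.
        idle = [n for n in available if n["active_transfers"] == 0]
        surplus = len(available) - policy["min_available_nodes"]
        for node in idle[:max(surplus, 0)]:
            terminate_node(node)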


SCALEKV

ScaleKV is an in-memory data store accessed by all the nodes in the cluster. It supports all the transfer sessions in the infrastructure service, capturing the statistical data that drives the Autoscale function.

Statistics are retained for reporting applications such as Console and are available to third-party applications, which can query the interface for per-session statistics across all nodes in the cluster.

The data structure is distributed across the memory of all the nodes, balancing the transfer statistics among them and allowing statistics to be gathered at very high rates so that collection does not impede transfer performance.
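
One generic way to picture this is hashing each session ID to the node whose memory owns that session's statistics, as in the conceptual sketch below; it is an illustration of the idea only, not ScaleKV's actual design or interface.

    # Conceptual sketch: spreading per-session transfer statistics across the
    # memory of many nodes by hashing session IDs. Not ScaleKV's actual design.
    import hashlib
    from collections import defaultdict

    NODES = ["node-1", "node-2", "node-3"]   # placeholder cluster members
    stores = defaultdict(dict)               # per-node in-memory statistics

    def owner(session_id):
        """Pick the node whose memory holds this session's statistics."""
        digest = hashlib.sha1(session_id.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    def record_stats(session_id, stats):
        stores[owner(session_id)][session_id] = stats

    def query_session(session_id):
        """A reporting application looks up stats on the owning node."""
        return stores[owner(session_id)].get(session_id)

    record_stats("xfer-1234", {"bytes_transferred": 7_500_000_000, "rate_kbps": 480_000})
    print(query_session("xfer-1234"))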
