FASP Core 3.6, first released in December 2015 and now enhanced with version 3.7, is a major release from Aspera that brings significant new capabilities across the following areas:


Advances in Transport Speed and Performance

  • FASPstream transport for high-speed, predictable transport of byte streams ("live" video and data) over commodity Internet WANs
    • Powers the new FASPstream Server software, providing exceptional-quality transport of live video and data streams over commodity Internet networks with negligible start-up delay and glitch-free playout
    • Powers growing file transfers in the new Aspera Watchfolder
    • New filestream API allows remote Aspera clients to do the following:
      • Via API, read and write file data as an input or output stream (stream-file interoperability) using the FaspManager SDK
      • Rewrite sections of the file as required, i.e. update segments of the destination stream or file during transport. Use cases include updating the header bytes of a remote video file in a live stream-to-file transfer
      • Automate 'streaming' of files and directories over the WAN using new interfaces to the ascp binary for stdin/stdout, stdin-tar/stdout-tar, and named pipes
      • New object storage support enables stream to object store ingest over WAN at high speed (v 3.7)
        • Received stream output can write directly to object/persistent storage (Hadoop, Swift, S3, etc.) and simultaneously to real-time message queue platforms (e.g. Kafka)
        • Ingest, aggregate, store and process data streams from large numbers of clients concurrently
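The stream-ingest pattern described above, writing received bytes simultaneously to persistent storage and to a real-time message queue, can be sketched generically. This is only an illustration of the "tee" pattern, not the FASPstream implementation; the function name and chunk size are assumptions:

```python
import io
import queue

def ingest_stream(source: io.BufferedIOBase, store_path: str,
                  mq: queue.Queue, chunk_size: int = 64 * 1024) -> int:
    """Tee an incoming byte stream to persistent storage and a message queue.

    The local file stands in for object/persistent storage (S3, Swift, HDFS)
    and the queue stands in for a platform like Kafka."""
    total = 0
    with open(store_path, "wb") as store:
        while True:
            chunk = source.read(chunk_size)
            if not chunk:
                break
            store.write(chunk)   # persistent copy
            mq.put(chunk)        # real-time consumers
            total += len(chunk)
    mq.put(None)                 # end-of-stream sentinel for consumers
    return total
```

Consumers drain the queue concurrently while the persistent copy accumulates, which is the essence of ingesting, storing, and processing a stream at the same time.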
  • Ascp4, a breakthrough in file transfer performance using parallel I/O (e.g. very large numbers of small files at 10 Gbps, parallel/clustered file systems), now ships with all Server and Point-to-Point products
    • Capable of transferring 1 million files per minute or 10 Gbps for DPX files around the world
    • A breakthrough architecture built on FASP, with file metadata sent over the fast FASP channel, multi-threaded I/O, and optional compression for ultra-high-speed transfers of small files to block and now object (cloud) storage
    • Ships with 3.6 Enterprise Server and Point-to-Point products as a configurable option for execution and management by the desktop GUI, Console 3.0, and the ascp command line
    • Latest features (v 3.7) bring even more performance and feature parity with ascp:
      • Options to parallelize all file operations (directory traversal and file attribute checking, in addition to read/write) further increase performance
      • Resumable transfers
      • Configurable overwrite policies and skip of existing files
      • Direct to cloud storage transfer support
      • Extremely fast small file transfers to cloud storage 10-100X faster than S3 HTTP multipart over WAN (250-500 files transferred per second for throughputs greater than 500 Mbps for files of average size <=10 KB)
      • Full compatibility with FASP manager APIs (transfer control and reporting)
      • Sparse file and symlink support
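The parallel file-operation idea can be sketched by fanning attribute checks out across worker threads instead of stat-ing files one at a time. This is illustrative only; ascp4's real pipeline also overlaps reads, writes, and compression with the transfer itself:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_scan(root: str, workers: int = 8):
    """Collect file paths, then stat them in parallel.

    Returns a list of (path, size) pairs. A stand-in for the kind of
    pipelined, multi-threaded I/O ascp4 uses; traversal is kept
    sequential here for simplicity."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        paths.extend(os.path.join(dirpath, f) for f in filenames)
    # attribute checking fans out across the thread pool
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sizes = pool.map(os.path.getsize, paths)
    return list(zip(paths, sizes))
```

On file systems where per-file metadata operations are high-latency (networked or clustered storage), overlapping them this way is where most of the small-file speedup comes from.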
  • Transfer Cluster Manager with Autoscale ships for SoftLayer, AWS, and soon Azure; it automatically scales transfer server hosts up and down with multi-tenant secure access, high availability, and automatic cloud resource management
    • Utilizes Aspera Direct-to-Cloud storage transfer capability and all of the new native clustering capabilities (multi-tenant access keys, flexible storage roots, and scalekv for distributed and cluster wide statistical reporting) to create a self-scaling and highly available cluster of Aspera transfer nodes
    • The latest version includes Enterprise Server 3.6.2 and is enhanced with the following new capabilities:
      • Performance Enhancement
        • Cluster node network drivers have been upgraded to enable AWS "enhanced networking," delivering ~2 Gbps transfer throughput per node
      • Improved Support for Running Clusters in a Private VPC
        • Configure a specific private IP address to serve as the state-store host and node-configuration download source when launching clusters
        • Cluster nodes can use the Cluster Manager's private IP address to download the node configuration and connect to the state store, and private DNS names can be specified when launching a cluster
      • Cluster Provisioning Enhancements
        • Custom first-boot scripts, specified through user data, can be run on the Cluster Manager, e.g. to assign an elastic or secondary private IP address for automated failure recovery
        • Cluster nodes can be configured to create and mount a separate "swap volume" when using instance types that do not provide local instance store volumes
      • Cluster Management Enhancements
        • Cluster manager and nodes now include jq and cloud-specific command line utilities
        • Option to configure hosted zone IDs when there are multiple hosted zones with the same name
        • The Cluster Manager console now shows the Public IP and Private IP columns instead of the Hostname column for cluster nodes

New multi-node transfer capability

  • New parallel/multi-host transfer feature ships in Aspera Point-to-Point and Server products, allowing mass data transfers to cloud storage clusters and on-premises compute clusters at 10 Gbps per job and higher
    • Transfer/replicate mass data sets including large directories and individual large files using multiple computers in a cluster, including direct to cloud storage
    • New configurable options for controlling the intra-file 'splitting' over the parallel sessions for maximum control, even resource usage, and fastest transfer times
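The intra-file splitting can be pictured as dividing a file into contiguous byte ranges, one per participating host. The minimum-split threshold below is an assumed tunable for illustration, not a documented default:

```python
def split_ranges(file_size: int, num_hosts: int,
                 min_split: int = 64 * 1024 * 1024):
    """Divide a file into contiguous (start, end) byte ranges across hosts.

    Files too small to justify min_split-sized pieces get fewer ranges,
    so resource usage stays even and no host receives a trivial slice."""
    n = max(1, min(num_hosts, file_size // min_split or 1))
    base, rem = divmod(file_size, n)
    ranges, offset = [], 0
    for i in range(n):
        length = base + (1 if i < rem else 0)  # spread the remainder evenly
        ranges.append((offset, offset + length))
        offset += length
    return ranges
```

Each host then transfers its range in parallel, and the destination reassembles the ranges into one file (or one object, for cloud storage).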

Advances in Security and Audit Reporting

  • Expanded encryption at rest, with a new server-side capability for seamless on-the-fly encryption and decryption of content written to Aspera server hosts using a secret supplied by the server
    • Automatically encrypts content as it is written by Aspera server software to storage, and decrypts on download for secure storage of content at rest
    • Adds to Aspera client side encryption at rest whereby content is encrypted with a secret provided by the sender
    • Encryption is performed on-the-fly (with strong cryptography: AES-128, AES-256, etc.) with no pre- or post-transfer steps required and without slowing the end-to-end transfer time
    • Encryption at rest (server and client side) is supported with automatic resume of interrupted transfers and HTTP fallback transfers
  • New configurable cipher options for AES-192 and AES-256 in addition to AES-128 for encryption in transit
  • Full cross-platform (Linux, Windows, OS X, Solaris on Intel, BSD) support for encryption acceleration using AES-NI CPU support (2X transfer speed for the same CPU)
  • On-the-fly data encryption is implemented with AES Galois/Counter Mode (GCM), replacing the previous Cipher Block Chaining (3X transfer speed for the same CPU) (v 3.7)
  • Client side encryption cipher and the client-side default private SSH key can be configured in aspera.conf
  • Enhanced policies to maintain full FIPS 140 Certification
    • New configurable encryption ciphers and SSL protocol versions in asperanoded service
    • New enforcement of only strong SSH ciphers used in FASP session establishment 
  • New upgrade of the checksum used in file resume verification from MD5 to SHA-2
  • New configurable policies for token authorization available throughout the Aspera products and APIs to specify token lifetime and restrict the lifetime by transfer user (i.e. less than the server default)
  • Expanded symbolic link restrictions and configuration allow a user to follow symbolic links or not, or if necessary to follow symbolic links outside of the docroot. By default, symbolic links can be followed, but only inside the docroot
  • Qualys A+ security review on all Aspera transfer server products when configured using best practices
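The resume-verification upgrade above can be illustrated with a sketch that hashes the already-received prefix of a file with SHA-256 (a member of the SHA-2 family) before allowing a resume. The function shape is hypothetical, not Aspera's internal interface:

```python
import hashlib

def resume_offset_ok(path: str, expected_sha256: str,
                     bytes_transferred: int, chunk: int = 1 << 20) -> bool:
    """Verify the already-received prefix of a file with SHA-256 before
    resuming a transfer, rather than relying on the weaker MD5."""
    h = hashlib.sha256()
    remaining = bytes_transferred
    with open(path, "rb") as f:
        while remaining > 0:
            data = f.read(min(chunk, remaining))
            if not data:
                return False  # file is shorter than the claimed offset
            h.update(data)
            remaining -= len(data)
    return h.hexdigest() == expected_sha256
```

If the prefix digest does not match, the safe behavior is to restart the file from zero rather than append to corrupt data.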

Advances in Aspera Proxy 1.4

Aspera Proxy, Aspera's high-performance solution for secure forward and reverse proxying of Aspera FASP transfer sessions, has a major new release with the following new capabilities:

  • Security enhancements:
    • All reverse proxy subsystems require aspshell by default (rather than only when configured)
    • Global rules are no longer permitted
  • New support for chained proxies, two or more in series, for two-tier DMZ configurations
  • Enhanced support for multiple internal servers running Windows and concurrent transfer sessions to Windows servers
  • New load-balancing configuration for HA proxy deployments having multiple internal Aspera servers
  • New reverse proxy support for Aspera Drive sync and for Aspera Sync (v 1.4)
  • New reverse proxy support for concurrent client connections coming from the same IP destined for different Aspera server nodes

NEW NODE V4 TRANSFER MANAGEMENT - NATIVELY CLUSTERED, MULTI-TENANT, AND HYBRID STORAGE COMPATIBLE

With the release of version 3.6, Aspera has introduced a new generation of transfer management that is designed for clustered and multi-tenant environments and supports any hybrid storage solution - block or cloud for all major cloud providers. RESTful APIs ("node/v4") provide transfer and file management covering authorization, access control, transfer initiation, and event reporting, and ship in all Aspera Server products and On Demand and Aspera Files SaaS offerings. Key capabilities include:

Natively Clustered

  • File transfer tasks and management reporting are automatically distributed to all nodes in the cluster. If a node running a task fails, another will automatically resume the task. Cluster-wide status reporting is aggregated across all nodes and available on all nodes, so applications need query only one node in the cluster to learn the status of any node and all transfer history

Multi-tenant Access Keys for Aspera Access to Hybrid Storage (Cloud and On Premises)

  • New multi-tenant transfer system for all hybrid storage (cloud and on-premises), where each tenant can privately manage its storage available for Aspera transfer and file management via a new access key system
  • Each tenant is designated a master access key and secret. Tenant administrators use the access key and secret pair to fully access and control the storage available for Aspera transfers and management, configuring the buckets/directories available and enabling permissions for their users
  • Access keys have a storage type attribute; supported types include all major cloud storage (S3, Swift, Blob, etc.) and local block storage. The storage type can be changed, transparently to end users, as needed for maximum storage flexibility, using the access key and secret
  • All activity (file and directory permissions, transfer authorization, and transfer and reporting events) is private to the tenant
  • Full backward compatibility for all current generation Aspera clients, Faspex and Shares through secure basic authentication

Bearer Token Authentication and Permission System

  • A new bearer token authentication and permission system allows tenants to set access permissions to files and directories for their end users using their existing user and group IDs and authentication systems
  • End users authenticate via OAuth2 and are issued a bearer token encoding their access permissions, signed with the tenant's private key. The bearer token is presented to the Aspera transfer nodes and the signature is verified in any request to manage files and directories, or to upload, download, or transfer node-to-node
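The issue/verify flow can be sketched as follows. This is a simplified stand-in: real node/v4 tokens are signed asymmetrically with the tenant's private key, while this sketch substitutes a symmetric HMAC-SHA256 signature to stay dependency-free, and the claim names are invented for illustration:

```python
import base64, hashlib, hmac, json, time

def issue_token(secret: bytes, user_id: str, perms: list, ttl: int = 3600) -> str:
    """Encode access permissions and an expiry into a signed bearer token.
    HMAC-SHA256 stands in here for the tenant's private-key signature."""
    payload = json.dumps(
        {"user": user_id, "perms": perms, "exp": int(time.time()) + ttl}
    ).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_token(secret: bytes, token: str):
    """Return the decoded claims if the signature is valid and the token is
    unexpired; otherwise return None."""
    try:
        p64, s64 = token.split(".")
        payload = base64.urlsafe_b64decode(p64)
        sig = base64.urlsafe_b64decode(s64)
    except ValueError:
        return None
    expected = hmac.new(secret, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```

Because the permissions travel inside the signed token, a transfer node can authorize a request without a round-trip to the tenant's identity system.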

Manage Files and Transfer from Multi-User Applications

  • Set file and directory permissions (read, write, and share, i.e. grant permission) for any path within the authorized access key storage root, by user ID and group ID
  • Upload and download files and directories from the end client. Transfer files and directories from node to node
  • Set rates, bandwidth priorities and pause/resume transfer sessions (v 3.7)
  • Multi Host Transfer - New API allows one data set to be transferred using multiple computers in a cluster on the source and/or on the destination

Event Reporting

  • New APIs to query all transfer and file sharing events including detailed user, session and file info
  • File modification (delete, rename) events are also logged (v 3.7)
  • Transfer and bandwidth statistics are self cleaning for large numbers of sessions (v 3.7)

Backward Compatibility with node v3 Aspera Applications

  • Fully backward compatible with all Aspera v3 products via HTTP basic authentication (v 3.6), as well as SSH authentication (v 3.7)
  • Improves the current "node/v3" HTTP APIs for transfer, authorization, node browsing, file management, and node-to-node transfer in several ways:
    • Better methods for transfer initiation, node browsing, file management, and node-to-node transfer
    • Authorization tokens include a default and configurable lifetime
    • /transfers API includes options to delete source files after transfers
    • /info API returns Aspera Sync (async) capabilities in addition to transfer product capabilities

Advances in file system monitoring and notification

Asperawatchd, a new high-performance file system notification service, ships in version 3.6. It monitors the file system for changes and updates an internal snapshot, with a novel architecture that scales up for fast change notification in file systems with up to 100 million items and can be deployed across distributed sources. Aspera Sync and Aspera Watchfolder 3.6 both use it for very fast change notification and snapshot updates. Key capabilities include:

  • Cross-platform service for Linux, Windows, OS X.
  • Distributed architecture captures changes on any local OR shared storage client host (CIFS, NFS, etc.) and aggregates all in real time in a single snapshot used to determine what content to send.
  • The design is fault-tolerant and allows for any number of asperawatchd instances, with one as master that handles interpretation of changes and a distributed election algorithm to replace the master if it dies.
  • Powers the new Watchfolder service, the new release of ascp, and the newest release of Aspera Sync.
  • Ultra-fast and scales to *unlimited* watch directories and huge file systems: directory watch and snapshot capability allows watching file systems with up to 100 million items at high speed (over 500 file system changes found and captured per second)
  • Performance benchmarks: computes a differential "snapshot" of the watched file system directories, including all changes in the space, averaging >= 500 changes found and applied per second. Snapshot differentials are computed quickly at scale, as the following benchmarks show:

Number of files in watched directory | Number of changes  | Time to find changes and compute snapshot differential
1 million                            | 5 thousand (0.5%)  | 9 seconds
1 million                            | 200 thousand (20%) | 353 seconds
100 million                          | 5 million (5%)     | 3.3 hours
100 million                          | 20 million (20%)   | 13 hours
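The snapshot/differential approach can be sketched with a simple size-and-mtime snapshot. Asperawatchd's real snapshot and change capture are far more scalable, so this is only a conceptual illustration of what "find changes and compute a snapshot differential" means:

```python
import os

def take_snapshot(root: str) -> dict:
    """Map each file path to (size, mtime_ns) — a minimal stand-in for
    asperawatchd's internal snapshot of a watched directory."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            snap[p] = (st.st_size, st.st_mtime_ns)
    return snap

def diff_snapshots(old: dict, new: dict):
    """Return (created, modified, deleted) path lists between two snapshots."""
    created = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return created, modified, deleted
```

The differential is what downstream consumers (sync, watch folders) act on: created and modified paths become transfer candidates, deleted paths become delete candidates.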

 

Advances in File Handling, Automation and Reporting

  • New fast and easy "one-way" sync: Asperawatchd snapshots generate a file list for ascp
  • New delete-before transfer option in ascp identifies deleted files on the source and deletes them on the destination
  • New asdelete binary compares a source and destination directory and deletes removed items at the destination
  • New option to move files at the source after transfer. Successfully transferred files are moved to a configured alternative directory on the source. Available for both uploads and downloads in the ascp command line, GUI hot folders, interactive GUI transfers, and Console
  • New option to archive files at the destination before overwrite. Destination files are saved with a date and timestamp prefix before being overwritten with a new version. Available in the ascp command line, GUI hot folders, and Console
  • Expanded file name include and exclude filters by glob and regular expression match (same as rsync) (v 3.7)
  • Preservation of access control attributes (Windows ACLs, Mac OS xattrs, posix uid/gid) and access/modification/creation times even when file content is not changed (skipped) in ascp
  • File checksums are reported in file start and end statements in the log file, transfer manifests, Console and the Reliable Query API
  • Transfer session metatags (<TAG>) are now reported in the Reliable Query API
  • New options in aspera.conf to configure which file types are logged, allowing skipped or errored files to be ignored, for example
  • New option in ascmd to cancel connection attempts and commands in progress with the remote endpoint
  • Enhanced reporting of node-to-node and node-cluster transfers including reporting client and server side docroot, client side user and node user
  • FASP datagram size is configurable in aspera.conf in addition to ascp command line (prevents IP fragmentation on networks that don't support path MTU discovery)
  • New Lua interpreter engine (v 3.7) for "in transfer" post-processing, validation, and authorization functions through embedded Lua scripts. The Lua interpreter can be used for anything from authorizing files based on names and attributes to creating version history and checking transferred files into source code repositories, without slowing down the transfer pipeline
  • New faspmanager API feature to send an event when individual arguments such as directories in persistent session are completed (v 3.7)
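The archive-before-overwrite behavior amounts to renaming the existing destination file with a timestamp prefix before the new version lands. A minimal sketch; the prefix format below is illustrative, not the product's documented convention:

```python
import os
import time

def archive_before_overwrite(dest_path: str):
    """If dest_path exists, rename it with a date/timestamp prefix so the
    incoming transfer does not destroy the previous version.

    Returns the archive path, or None if there was nothing to archive."""
    if not os.path.exists(dest_path):
        return None
    directory, name = os.path.split(dest_path)
    stamp = time.strftime("%Y-%m-%d_%H%M%S")
    archive = os.path.join(directory, f"{stamp}.{name}")
    os.rename(dest_path, archive)  # atomic on the same file system
    return archive
```

Because the rename happens before any bytes of the new version are written, an interrupted transfer never leaves the old version half-overwritten.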

Advances in Cloud Transfer Platform Capability

Aspera's transfer platform is unique in its ability to provide high-speed, distance-neutral, secure transfer and synchronization capabilities directly to any storage platform, including all on-premises SAN, NAS, or new cloud object storage, for any size data set: large single files, small individual files, and now streamed data and video. It includes high-speed large and small file upload/download; secure, authenticated file/directory browsing and management; on-the-fly content encryption and encryption of content at rest (both client- and server-side) on cloud storage; automatic fallback to HTTP; and automatic resume of transfers. It applies for data sizes up to the cloud storage limits of all platforms, and outperforms all other cloud storage "acceleration" solutions in sustained transfer speed, aggregate transfer throughput, number of concurrent transfers, and CPU/memory efficiency.

The 3.6 release expands our Direct-to-Cloud storage transfer capabilities with the addition of new cloud platform providers and additional security options. The Aspera Transfer Cluster Manager with Auto-scale, a self-managed, self-scaling cluster of Aspera transfer servers, is also available with a fully multi-tenant security model, interfacing with all major cloud storage platforms (over 10 different providers supported). The platform is capable of supporting the smallest to largest workflows for ingest of data to cloud storage, cloud-to-cloud and cloud-to-on premises data delivery, and distribution.

ADVANCES IN DIRECT-TO-CLOUD STORAGE SUPPORT

  • Full functionality in Aspera FASP Direct-to-Cloud Storage transfer capability (FASP transport over WAN with native I/O to cloud storage) for all major cloud storage providers
  • Version 3.6 supports IBM SoftLayer Swift, AWS S3 and GovCloud, Microsoft Azure Blob, Akamai NetStorage, Limelight Orchestrate Cloud Storage, Google Cloud Storage, HDFS, Ceph, CleverSafe and NetApp Cloud Storage
  • Major new features include:
    • New multi-host transfer capability to Autoscale Transfer Clusters supports mass data migrations to cloud storage at 10 Gigabits per second and above
    • New ascp4 support for cloud storage enables superfast small file transfers to cloud storage
    • New superfast HVM image types for AWS increase performance to 1.5 Gigabits per second per EC2 host (single or aggregate transfer throughput) and ~50 concurrent transfer sessions to cloud storage
    • New cloud-storage-to-cloud-storage transfer support is now possible in the ascp command line, Console, and web services APIs. Transfer between AWS S3, Swift, Google Cloud Storage, and Azure Blob, and between regions
    • Server-side (and client-side) encryption at rest for FASP and HTTP fallback transfers
    • Automatic adaptive and dynamic determination of proper cloud storage part size to allow arbitrarily large files without special configuration
    • Preservation of file creation and modification times on uploads to cloud storage.
    • File links are now supported for cloud storage (the equivalent of symbolic links) with the following naming convention: .name.asp-trapd-lnk
    • License support is now available for multiple entitlements per computer.
    • Added support for Azure page blob storage
    • Docroots can now be configured using $(home) for a user's home directory. This is also supported by asconfigurator
    • Enhanced logging for HTTP fallback sessions (both client side and server side).
    • Support for Amazon KMS encryption-at-rest
    • AWS cache-control metadata support 
    • AWS infrequent-access storage class support
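The adaptive part-size logic above can be sketched as: pick the smallest part size that keeps the part count under the store's limit (10,000 parts for S3 multipart uploads), never going below a floor. The floor and MiB rounding here are assumed defaults for illustration:

```python
def choose_part_size(file_size: int, max_parts: int = 10_000,
                     min_part: int = 8 * 1024 * 1024) -> int:
    """Pick a multipart-upload part size so that arbitrarily large files
    stay under the store's part-count limit without user tuning."""
    part = max(min_part, -(-file_size // max_parts))  # ceiling division
    # round up to a 1 MiB boundary for alignment
    mib = 1024 * 1024
    return ((part + mib - 1) // mib) * mib
```

For example, a 5 TiB object needs parts of roughly 525 MiB to fit within 10,000 parts, while a small file simply uses the floor; computing this per file is what removes the need for special configuration.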