

USS-SN 2-Node Shared Nothing Cluster

A true, non-disruptive hyper-convergence solution where there is no single point of failure!

Start with only two nodes. USS-SN shared nothing clusters provide the ultimate in high availability with as few as two on-premises nodes, eliminating the need for shared external RAID arrays, servers, SAN, and NAS. USS-SN can easily and cost-effectively scale to meet capacity, high availability, CAPEX/OPEX, data transportability, and performance requirements.

Each node is protected with dual parity and redundant flash cache technology. The cluster provides integrated, embedded SAN (block) and NAS (file) shared nothing storage provisioning for virtual and physical machines, with associated data services.

Advanced integrated storage, server, and network virtualization: the StoneFly USS-SN node, operating with a shared nothing architecture, provides a fully redundant failover and failback configuration.

NO costly and complex storage arrays for your most demanding applications.

NO single point of failure.



StoneFly has deployed its solutions across the globe, in the most demanding environments where uptime and high availability are critical and failover must complete in a few seconds, all with low CAPEX/OPEX.

StoneFly offers the most innovative and advanced hyperconverged appliance, the Dual Node Stretch Cluster. Deployments and use cases include military environments such as Navy ships, tanks, radar, and communications, as well as emergency response, video surveillance, financial, retail, healthcare, government, education, and telecom.

The Dual Node Stretch Cluster is for customers with anywhere from 2 to 10,000 sites where application uptime is critical and downtime is not acceptable. Pre-configured, certified, and supported, the dual nodes are connected through either a LAN (10G preferred) or a metro connection.

Each node is based on StoneFly’s “shared nothing architecture.” The most affordable solution for critical enterprise applications: each node hosts virtual machines or containers along with software-defined network switches and iSCSI and NAS storage. Each node is independent and self-sufficient, sharing no memory, disk storage, network, CPU, or power supply; everything is fully virtualized. So if one node fails, the other picks up the pace, offering always-on, continuous support to your enterprise.

Technology Converged
  • Storage Virtualization
  • SAN – iSCSI Block Storage Protocol Services
  • NAS – File Storage Protocol Services
  • Object Storage Protocol Services
  • Hardware RAID Storage per Node
  • Enterprise Data Services – Snapshots, Encryption, Deduplication, Thin Provisioning, Async Replication, Sync Mirroring, Campus Mirroring
  • Server Virtualization
  • Network Virtualization
  • Two-Node Clustering and Failover
  • Transportable Storage Auto Connect & Disconnect
Six Major Challenges that the StoneFly USS-SN 2-Node Shared Nothing Cluster Solution Easily Overcomes

Challenge #1
Reducing cost and complexity by keeping the IT infrastructure footprint to a minimum (2 nodes), making external SAN, NAS, server, and network hardware unnecessary, with no single point of failure.

Challenge #2
Delivering application performance while enabling and supporting virtualization for the entire infrastructure.

 

Challenge #3
Using little physical space, power, and cooling, while remaining easy to deploy and manage.

Challenge #4
Simplifying and centralizing management. Maintaining the IT infrastructure through a single, uniform management system improves the ability to manage uptime and outages.

Challenge #5
Ensuring system and application uptime: avoiding outages when a critical system fails, with failover, recovery, and self-healing in case of any failure.

Challenge #6
Automated, easy data transportability for offsite data mobility, backup, and restore.

 

Benefits Of Dual Node Stretch Cluster
  • Eliminate the barriers to high availability.
  • Ensure application uptime.
  • Maximize server and storage utilization.
  • Deliver application performance.
  • Lower capital cost.
  • Lower maintenance.
  • Lower management cost.
  • Storage transportability for data transfer, backup, and security.
A Look Inside a Single StoneFly USS-SN (Unified Storage & Server) “Shared Nothing” Node
StoneFly’s Storage Technology Stack

 

How 2-Node Shared Nothing Clusters Work: 
Features of USS-SN “Shared Nothing” Cluster

High Availability:

The main focus is to deliver a hyperconverged appliance that is highly available and resilient. This is accomplished through synchronous mirroring, clustering, failover, and failback across two nodes. If a component such as a network connection or power supply fails, or an entire node fails, the other node takes over for the failed node and continues operation until the failed node is restored and rejoins the cluster. The failover is seamless to the applications, to virtual machines running on the node(s), and to virtual or physical external hosts using the node as a storage target.
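
As a rough illustration of the failover logic described above, here is a minimal Python sketch of a two-node heartbeat loop. The peer address, probe interval, threshold, and takeover hook are hypothetical placeholders, not StoneFly's actual implementation.

```python
# Minimal two-node heartbeat/failover sketch (illustrative only; the peer
# address, timing, and takeover actions are assumptions).
import socket
import time

PEER_ADDR = ("10.0.0.2", 7789)   # hypothetical peer node address
HEARTBEAT_INTERVAL = 1.0          # seconds between probes
FAILOVER_THRESHOLD = 3            # missed heartbeats before takeover

def peer_alive(addr, timeout=1.0):
    """Probe the peer's heartbeat port; True if it accepts a connection."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def take_over():
    """Placeholder for failover actions: claim virtual IPs, start the
    peer's VMs from the mirrored copy, expose its iSCSI/NAS targets."""
    print("Peer down: taking over services")

missed = 0
while True:
    if peer_alive(PEER_ADDR):
        missed = 0                # peer healthy, reset the counter
    else:
        missed += 1
        if missed >= FAILOVER_THRESHOLD:
            take_over()
            break
    time.sleep(HEARTBEAT_INTERVAL)
```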

Quotas:

Quotas provide the ability to set restrictions on how much storage is consumed on a file system. In on-premises systems, physical capacity limits the amount of storage that can be consumed, unless volumes and shares have been provisioned logically, in which case quotas are required to keep consumption in check.
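
A minimal sketch of how a per-share quota gate might work; the share name and limit are illustrative assumptions, not a StoneFly API.

```python
# Per-share quota enforcement sketch: writes that would exceed the
# configured hard limit are rejected.
class QuotaError(Exception):
    pass

class Share:
    def __init__(self, name, quota_bytes):
        self.name = name
        self.quota_bytes = quota_bytes  # hard limit for this share
        self.used_bytes = 0

    def write(self, nbytes):
        """Reject a write that would push usage past the quota."""
        if self.used_bytes + nbytes > self.quota_bytes:
            raise QuotaError(f"{self.name}: quota exceeded")
        self.used_bytes += nbytes

home = Share("home", quota_bytes=500 * 2**30)  # 500 GiB logical limit
home.write(10 * 2**30)                          # accepted
```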

Ease of deployment:

Administrators in the field only have to turn the system on and bring it online for it to work properly. An easy-to-use, system-wide graphical performance and utilization reporting interface is included.

Multi-tenancy:

Provides the ability to support multiple user groups, multiple network domains and multiple protocols at the same time. A single file share may sit on more than one network via virtual interfaces and be visible to Linux/Unix and Windows-based operating systems.

Centralized Security:

Setting file permissions manually is a tedious task, so the appliance integrates with standard authentication methods such as Active Directory, enabling security settings to be delivered via policy and managed centrally.

I/O Performance:

High performance with up to 44 cores per node and 1.5 TB of memory provides ample resources to hosted virtual machines and integrated storage services. SSDs serve both as cache and as one of the storage tiers.

Comprehensive Reporting:

Rich reporting tools show storage growth over time, as well as usage by file share and by data type.

Single Pane Management:

All nodes can be accessed and managed from a single management screen.

Fully POSIX Compatible:

The appliance works as standard storage for virtual machines or servers and complies with POSIX, the IEEE standard for maintaining compatibility between operating systems.

Protocol Support:

Full NFS protocol support with over 30 years of application compatibility, plus Microsoft CIFS/SMB, iSCSI, and object storage.

Encryption:

Full AES-256 encryption at rest for customers with regulatory or other privacy requirements.

Automated Snapshot Schedules:

Create automatic snapshot schedules for one or more storage volumes and/or network shares. This allows instant recovery to a previous point in time to retrieve lost files or other data.
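
A minimal sketch of an hourly snapshot schedule with a retention window; the volume names, interval, and retention count are illustrative assumptions.

```python
# Automated snapshot schedule sketch: take a named point-in-time snapshot
# of each volume every hour and prune to the retention window.
import time
from datetime import datetime

VOLUMES = ["vol01", "shares/projects"]  # hypothetical volumes/shares
INTERVAL = 3600                          # snapshot every hour
RETAIN = 24                              # keep the last 24 snapshots

snapshots = {v: [] for v in VOLUMES}

def take_snapshot(volume):
    """Record a point-in-time snapshot name; a real appliance would
    create a copy-on-write snapshot here."""
    name = f"{volume}@{datetime.now():%Y%m%d-%H%M%S}"
    snapshots[volume].append(name)
    del snapshots[volume][:-RETAIN]  # recycle snapshots past retention
    return name

while True:  # runs as a background scheduler
    for vol in VOLUMES:
        print("created", take_snapshot(vol))
    time.sleep(INTERVAL)
```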

Active Directory:

Integration with Active Directory or user-defined permission control allows for user and security management.

Full Indexing and Search:

Available at the file, directory, volume, and node level.

Elasticity:

High performance at lower cost using an elastic hashing algorithm to locate data in the storage pool (by calculating a hash on the file name), removing a common source of I/O bottlenecks and vulnerability to failure: there is no metadata server.
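
In the spirit of elastic hashing, here is a minimal sketch showing how a file's location can be computed from its name alone, so no metadata-server lookup is needed. The node names are assumptions, and the real algorithm is more sophisticated than this modulo placement.

```python
# Hash-based data placement sketch: every client computes the same owner
# node for a file purely from its name, so there is nothing to look up.
import hashlib

NODES = ["node-a", "node-b"]  # the two cluster nodes

def locate(filename, nodes=NODES):
    """Pick the node responsible for a file from a hash of its name."""
    digest = hashlib.sha1(filename.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(nodes)
    return nodes[index]

print(locate("reports/q3.xlsx"))  # deterministic: same answer everywhere
```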

Self Healing:

Healing happens automatically when a volume or node that went down comes back online.
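
A minimal sketch of the self-healing idea: when a stale replica rejoins, entries that changed while it was offline are copied back. A real system tracks dirty regions rather than diffing everything; the dicts here stand in for replica volumes.

```python
# Self-heal sketch: bring a rejoined replica back in sync with the
# healthy copy by repairing every entry that differs.
def heal(healthy, stale):
    """Copy entries that changed while the stale replica was offline."""
    repaired = 0
    for key, value in healthy.items():
        if stale.get(key) != value:
            stale[key] = value
            repaired += 1
    return repaired

primary = {"blk0": b"new", "blk1": b"data"}
rejoined = {"blk0": b"old"}               # missed writes while down
print(heal(primary, rejoined), "blocks repaired")
```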


Replication - Remote Data Center or Cloud

For complete business continuity and disaster recovery, StoneFly’s optional asynchronous replication to a remote site or to the cloud allows your virtual machines and data to be stored and protected off-site in case of disaster at the local site. As with the mirror appliance, you can spin up your virtual machines, replicate data back to the primary USS appliance (failback), or perform bare metal recoveries onto replacements for the destroyed appliances.
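
A minimal sketch of asynchronous replication: the write is acknowledged locally first and shipped to the remote copy in the background. The queue and in-memory dicts stand in for volumes and a WAN link; they are illustrative assumptions.

```python
# Async replication sketch: local writes complete immediately and are
# drained to the remote site by a background thread.
import queue
import threading

replication_log = queue.Queue()

def write(block_id, data, local_store):
    """Acknowledge the write locally, then replicate asynchronously."""
    local_store[block_id] = data           # local write completes first
    replication_log.put((block_id, data))  # queued for the remote site

def replicator(remote_store):
    """Background thread draining the log to the remote/cloud copy."""
    while True:
        block_id, data = replication_log.get()
        remote_store[block_id] = data      # stand-in for a WAN transfer
        replication_log.task_done()

local, remote = {}, {}
threading.Thread(target=replicator, args=(remote,), daemon=True).start()
write("blk0", b"payload", local)
replication_log.join()                     # wait until the copy lands
print(remote)
```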

iSCSI Target

Each node presents block storage volumes as iSCSI targets to virtual and physical hosts, so the cluster can serve as shared block storage without an external SAN.

Campus Mirroring

StoneFly’s real-time synchronous mirroring of your virtual machines and hyper-converged storage to a StoneFly third-node appliance (campus mirroring) on your premises over the Local Area Network (LAN) allows you to have a full and identical copy of everything on your primary storage.
This third-node appliance not only allows you to do everything that you could already do on the primary appliance, but also allows you to spin up one or all of your VMs on the mirror appliance in the event that a VM on the primary appliance has failed or the entire primary appliance is down. When the primary appliance is repaired, a bare metal recovery of the primary appliance can be performed from the secondary appliance to restore the mirror.
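
By contrast with asynchronous replication, synchronous mirroring commits each write to both copies before acknowledging it. A minimal sketch, with dicts standing in for the primary and mirror volumes (a real system would wait for the peer's acknowledgement over the LAN):

```python
# Synchronous mirroring sketch: a write is durable on both copies before
# the caller sees an acknowledgement, so the mirror is always identical.
class SyncMirror:
    def __init__(self):
        self.primary = {}
        self.mirror = {}

    def write(self, block_id, data):
        """Commit to both copies before acknowledging the caller."""
        self.primary[block_id] = data
        self.mirror[block_id] = data   # real systems wait for the peer ACK
        return "ack"

vols = SyncMirror()
vols.write("blk0", b"payload")
assert vols.primary == vols.mirror     # identical copies at all times
```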

Tiered Storage Architecture

Support for SAS 7,200, 10k, and 15k RPM drives, SSD, and NVMe provides multi-tier storage and the ability to provision a specific tier directly to a specific workload.
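
A minimal sketch of tier-aware provisioning, assuming hypothetical tier names and capacities:

```python
# Tiered provisioning sketch: carve a volume out of the requested tier
# if that tier has the capacity, so workloads land on the right media.
TIERS = {
    "nvme":    {"free_gb": 2_000},   # lowest latency
    "ssd":     {"free_gb": 8_000},
    "sas15k":  {"free_gb": 20_000},
    "sas7200": {"free_gb": 80_000},  # highest capacity
}

def provision(volume, size_gb, tier):
    """Allocate a volume on a specific tier, failing if it is full."""
    if TIERS[tier]["free_gb"] < size_gb:
        raise ValueError(f"tier {tier} lacks {size_gb} GB free")
    TIERS[tier]["free_gb"] -= size_gb
    return {"volume": volume, "size_gb": size_gb, "tier": tier}

print(provision("sql-data", 500, "nvme"))  # latency-sensitive workload
```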


NAS support for CIFS/SMB and NFS

Full support for the standard protocols for connecting Windows, Linux, and Mac clients.

Cloud Connect - Public Azure

Cloud Connect provides an easy way to connect to the most popular public clouds, such as Microsoft Azure. With one click you can start replicating, or expand and extend cloud storage as local storage to the appliance.


Deduplication

Deduplication eliminates redundant copies of identical data blocks, storing each unique block only once. This reduces the physical capacity consumed and the bandwidth needed for replication.
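
A minimal sketch of block-level deduplication by content hash; the block size and data are illustrative.

```python
# Deduplication sketch: split data into blocks, index each block by its
# content hash, and store each unique block exactly once.
import hashlib

store = {}      # content hash -> block data (stored once)
file_map = {}   # filename -> list of content hashes

def dedup_write(filename, data, block_size=4096):
    """Store only the blocks whose content has not been seen before."""
    hashes = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)  # skip if the block already exists
        hashes.append(h)
    file_map[filename] = hashes

dedup_write("a.bin", b"x" * 8192)
dedup_write("b.bin", b"x" * 8192)   # same content: no new blocks stored
print(len(store), "unique blocks")  # 1
```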

SNMP, UPS, Nagios, RAID Monitoring, Call Home

SNMP traps and monitoring cover all activity, from network and bandwidth usage to processes, providing a clear graphical picture of the data you need to optimize for your workload.

VSS Support

Allows databases in products such as SQL Server and Exchange on Windows operating systems to be quiesced while a backup or snapshot is being performed.

Stretched Clusters

Each node can be installed in a different location, at any distance, over a LAN or high-speed WAN. This is ideal for campus-type installations where surviving a site failure and maintaining uptime are key requirements: resiliency with geographically distributed nodes.

Backup and Disaster Recovery

Full backup and disaster recovery is available as an option for this appliance. It backs up all your settings, virtual machines, and applications to a remote data center or cloud of your choice. The appliance can also function as a backup and disaster recovery target for other physical and virtual hosts and act as a failover for them.

Scale out

Scale out to as many nodes as you need for capacity and performance. There is no limit to the number of nodes in a scale-out configuration.
