Decrease costs while guaranteeing the same SLAs

Use Case Descriptions

Integration Environment

SDS solutions scale across many nodes and provide high redundancy and self-healing data, e.g., Red Hat Ceph Storage and Scality.

Customer / Partner Types
Private, hybrid, and public cloud. Customers benefit from decoupling storage-related tasks from physical storage hardware, allowing them to use commodity hardware in a scale-out storage architecture.

Company and Solution Background
Enterprises and cloud providers face ever-growing demand for storage capacity and need flexible, scalable solutions at a controllable cost. Many deploy cost-efficient SDS solutions on commodity hardware for private/hybrid/public cloud infrastructure. More and more business-critical or customer-facing applications with SLA requirements now run on an SDS solution and rely on it to deliver reliable service.

Solution Architecture

Requirements and Challenges


To implement an SDS solution reliably, many customers design it with 3+ copies of data, which may not be budget-friendly for some applications. The higher latency of cross-node traffic, and performance degradation during data rebalancing, can put SLA compliance at risk. Managing and troubleshooting issues is also difficult because the correlations between hardware components, software modules, and the OS/applications are unknown.
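The cost/reliability trade-off of the n-copy design can be sketched with simple arithmetic. The sketch below is illustrative only: it assumes independent disk failures at a hypothetical 2% annual failure rate, and that data is lost only when every replica of an object fails before any is rebuilt; none of these figures come from the document.

```python
# Toy estimate of data-loss probability for an n-replica SDS design.
# Assumption (for illustration only): disks fail independently with a
# 2% annual failure rate, and an object is lost only if all n of its
# replicas fail before any replica can be rebuilt.

def loss_probability(n_replicas, annual_failure_rate=0.02):
    """Crude upper bound: probability that n independent disks all fail."""
    return annual_failure_rate ** n_replicas

for n in (2, 3, 4):
    print(f"{n} replicas: all-copies-fail probability ~ {loss_probability(n):.2e}")
```

Each extra replica multiplies the loss probability by the failure rate, but also multiplies raw capacity cost, which is why predicting failures early (rather than adding a fourth copy) can be the cheaper path to the same effective reliability.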

Solution


By applying patented machine-learning technologies, Federator.ai® generates predictive analytics on hardware failure, degradation, and software errors from vital data collected from hardware components, OS metrics, and SDS data. It also correlates the components in each layer to provide overall visibility into the SDS solution. With this foresight and a holistic view of the solution, Federator.ai® helps cloud operators run a reliable and cost-efficient SDS solution.
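To make the disk-failure-prediction idea concrete, here is a deliberately minimal, rule-based health score computed from SMART counters. This is not ProphetStor's patented ML method; the attribute names, weights, and saturation threshold are all assumptions chosen for the example.

```python
# Illustrative sketch only: a weighted health score from SMART counters.
# Real predictive models (like Federator.ai's) use ML over many more
# signals; these attributes and weights are hypothetical.

SMART_WEIGHTS = {
    "reallocated_sectors": 0.5,
    "pending_sectors": 0.3,
    "uncorrectable_errors": 0.2,
}

def health_score(smart):
    """Return a 0..1 risk score; higher means more likely to fail soon."""
    score = 0.0
    for attr, weight in SMART_WEIGHTS.items():
        # Saturate each counter at 100 raw events so its contribution is bounded.
        score += weight * min(smart.get(attr, 0), 100) / 100
    return score

disk = {"reallocated_sectors": 40, "pending_sectors": 5, "uncorrectable_errors": 0}
print(f"risk score: {health_score(disk):.3f}")
```

An operator could rank disks by such a score and schedule replacements for the riskiest ones during planned maintenance windows, which is the operational benefit the prediction capability targets.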

Solution Benefits

Cost reduction: an n-copy architecture delivers the reliability of n+1 copies
Protect SLA performance against hardware failures
Disk failure and degradation prediction
Suggest optimal replacement time for failing disks
Load and event anomaly detection
Impact prediction and correlation analysis
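As a rough illustration of how the load and event anomaly detection above can work in principle, the sketch below flags a metric sample that deviates sharply from its recent history using a z-score. The window contents and the 3-sigma threshold are assumptions for the example, not details from the document.

```python
# Minimal sketch of load anomaly detection via a z-score against recent
# history. The 3-sigma threshold and the sample IOPS values are
# hypothetical, chosen only to illustrate the idea.
from statistics import mean, stdev

def is_anomaly(history, value, threshold=3.0):
    """Flag `value` if it lies more than `threshold` std-devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) > threshold * sigma

iops_history = [500, 520, 495, 510, 505, 498, 515, 502]
print(is_anomaly(iops_history, 512))   # normal fluctuation
print(is_anomaly(iops_history, 900))   # sudden spike
```

Flagged anomalies can then feed the impact prediction and correlation analysis, since a load spike on one node often explains latency seen at another layer.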