

Avoid Supply Chain Disruption by Automating Capacity Provisioning

The Institute for Supply Management found that nearly 75 percent of the companies it contacted in late February and early March reported some kind of supply chain disruption due to the coronavirus. Supply chain and logistics challenges stemming from the global pandemic are still making it difficult for companies to address their expanding big data capacity requirements and to purchase and provision more servers as needed.

The typical response to reaching system capacity has been to expand the use of cloud services or provision more on-premises servers. But in the current circumstances, this solution is less than ideal. Why? Because for some companies, the budgets that would have been allocated to this extra provisioning have been squeezed hard by the COVID-19 pandemic: 94% of Fortune 1000 companies are experiencing supply chain disruptions that affect revenues and budgets. And if you do buy new hardware, it could take many months to arrive because of those same disruptions. Once it does arrive, do you have someone in your data center to install it?

For companies looking at ever-increasing cloud and on-premises infrastructure spend to meet growing application and workload demand, there is a better option. Instead of simply paying for more infrastructure, they can get more out of their existing systems, whether in the cloud or on-premises.

How? With automated and continuous tuning for their big data analytics stack.

Even the most experienced IT operations teams can’t manually tune every application and workflow. The scale – thousands of applications per day and growth of dozens of nodes per year – is too large for manual effort. Machine learning, however, can take on this tuning, and the upside is huge: automatic capacity optimization typically allows companies to run 30-50% more jobs on their existing Hadoop or Spark clusters, which translates into a better overall customer experience.
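To see why manual tuning doesn’t scale, consider the knobs involved. The snippet below is a minimal, illustrative sketch of the kind of per-job Spark settings a tuning pass adjusts; the values are placeholders, not recommendations, and the job name is hypothetical. Picking good values for thousands of jobs per day by hand is impractical, which is where automated tuning comes in.

```python
# Illustrative only: a few of the per-job Spark settings that tuning adjusts.
# Values are placeholders, not recommendations.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("example-etl-job")                          # hypothetical job name
    .config("spark.executor.memory", "4g")               # memory per executor
    .config("spark.executor.cores", "2")                 # cores per executor
    .config("spark.sql.shuffle.partitions", "200")       # shuffle parallelism
    .config("spark.dynamicAllocation.enabled", "true")   # let Spark scale executor count
    .getOrCreate()
)
```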

Big data performance management can ensure that companies get the most out of their existing big data infrastructure. By monitoring the entire infrastructure in real time and applying machine learning and active resource management, such a system can automatically identify where more work can be done and direct additional workload to servers with available resources. It can also continuously watch the infrastructure for bottlenecks and other issues that impede performance.
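To make the idea concrete, here is a minimal sketch of a capacity-aware placement loop: it watches per-node utilization and admits more work only onto nodes with headroom. This is not Pepperdata’s implementation; the node metrics, thresholds, and per-job load estimate are illustrative assumptions.

```python
# Minimal sketch of capacity-aware job placement (illustrative assumptions,
# not Pepperdata's implementation).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_used: float   # fraction of CPU in use, 0.0-1.0
    mem_used: float   # fraction of memory in use, 0.0-1.0

def has_headroom(node: Node, cpu_limit: float = 0.8, mem_limit: float = 0.8) -> bool:
    """A node can take more work only if it is below both utilization limits."""
    return node.cpu_used < cpu_limit and node.mem_used < mem_limit

def place_pending_jobs(nodes: list[Node], pending_jobs: list[str]) -> dict[str, str]:
    """Assign each pending job to the least-loaded node that still has headroom."""
    assignments = {}
    for job in pending_jobs:
        candidates = [n for n in nodes if has_headroom(n)]
        if not candidates:
            break  # cluster is saturated; leave remaining jobs queued
        target = min(candidates, key=lambda n: n.cpu_used + n.mem_used)
        assignments[job] = target.name
        # Rough bookkeeping: assume each job adds ~10% CPU and memory load.
        target.cpu_used += 0.10
        target.mem_used += 0.10
    return assignments

if __name__ == "__main__":
    cluster = [Node("node-1", 0.85, 0.70), Node("node-2", 0.40, 0.35), Node("node-3", 0.60, 0.55)]
    print(place_pending_jobs(cluster, ["job-a", "job-b", "job-c"]))
```

In practice the utilization signals and load estimates come from real-time telemetry rather than fixed assumptions, but the principle is the same: put work where spare capacity already exists instead of buying more servers.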

This approach can automatically improve existing infrastructure performance by as much as 50%. Companies gain the same performance as if they’d added 50% more hardware, but without the spend or the wait.

In the world of big data, capacity requirements continue to grow, but provisioning more servers is a costly option. The better route is to do more with what you already have, by leveraging tools that improve and optimize the performance of your existing infrastructure. Pepperdata can automatically tune and optimize cluster resources, recapture wasted capacity, and improve your big data analytics stack ROI – on-premises or in the cloud. This approach will help enterprises weather the COVID-19 storm, and it will give them the digital maturity and business resilience to thrive for years to come.


Ash Munshi is CEO of Pepperdata. Before joining Pepperdata, Ash was executive chairman of Marianas Labs, a deep learning startup sold in December 2015. Prior to that, he was CEO of Graphite Systems, a big data storage startup sold to EMC DSSD in August 2015. Ash also served as CTO of Yahoo and as CEO of both public and private companies, and he sits on the boards of several technology startups. He attended Harvard University, Brown University, and Stanford University.
