
Moving to a Cloud-First Network

By nmittal posted Jun 06, 2016 08:59 PM

  

Big data and an explosion in network traffic are putting pressure on today’s data center networks. Our increasing dependence on digital technologies in our work and business processes, along with the expected pace of enabling new business apps, is placing heavy demands on legacy data center infrastructure. As the private, secure information that businesses rely on becomes digital, it’s critical that data center networks stay strong and offer the highest level of reliability.

 

At Hewlett Packard Enterprise, the common theme we are seeing across the board is that customers need the flexibility to innovate and get into new businesses faster than ever before. That means moving a business and its resources to digital technologies as quickly as possible. Customers’ expectations of the network are simple: minimize downtime, avoid data loss, and eliminate application disruption.

 

As businesses invest in next generation, high-performance connectivity to support increased data center application traffic, they also need to reduce operational and capital expenses.

A cloud-first infrastructure is not a single product or solution, but a holistic ecosystem of many components. At HPE, we believe data center decision makers need to consider these five requirements when moving to a cloud-first network.

 

[Image: requirements.png]

 

  1. Automate network operations: Manual procedures require resources that scale linearly with the network, while automation tools multiply what a network operations team can manage. This means provisioning new policies and services in the network faster. As virtual machines and containers become the norm for hosting workloads, networks have to be virtualized and automated to deliver the same level of flexibility.
  2. Future-proof network capacity: Traffic moving through data centers, across servers, containers, and virtual machines, is growing exponentially. Organizations’ top priority will be to future-proof the network and ensure “disruption-free” high availability.
  3. Reduce operational complexity: At HPE, we’re seeing many of our customers realize up to a 50% reduction in operational expense by consolidating their edge infrastructure onto converged switching platforms. This creates operational flexibility with a cost-effective approach that lets organizations invest in more profitable digital applications.
  4. Manage multiple global data centers: As companies grow, they need to expand to multiple data centers that are often geographically separate. IT therefore needs a disruption-free way to migrate data while preparing for disaster recovery at the same time. By connecting multiple sites across different regions over Internet / broadband WAN, companies can significantly reduce capex.
  5. Program the underlying infrastructure on demand: As companies deploy an increasing number of in-house apps within their private data centers, there will come a point where their app developers want to program the underlying infrastructure on demand, instead of waiting for infrastructure vendors to ship the next software release (a minimal sketch of what this can look like follows this list).

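To make the automation and programmability points (items 1 and 5) concrete, here is a minimal sketch of requesting network connectivity through software rather than a manual switch change. It assumes a hypothetical SDN controller exposing a REST API; the endpoint URL, payload fields, and token are illustrative placeholders, not a specific HPE product interface.

```python
# Minimal sketch: provisioning a network segment through a hypothetical
# controller REST API instead of configuring switches by hand.
# The endpoint, payload fields, and token are illustrative placeholders.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"                          # issued by the controller

def provision_segment(name: str, vlan_id: int, subnet: str) -> dict:
    """Ask the controller to create a new network segment for a workload."""
    payload = {
        "name": name,        # the application or tenant this segment serves
        "vlan": vlan_id,     # L2 segment to stretch across the fabric
        "subnet": subnet,    # subnet the controller should advertise
    }
    resp = requests.post(
        f"{CONTROLLER}/segments",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly rather than leave a half-provisioned network
    return resp.json()

if __name__ == "__main__":
    # A developer or a CI/CD pipeline can request connectivity on demand,
    # rather than filing a ticket and waiting for a manual change window.
    segment = provision_segment("billing-app", vlan_id=120, subnet="10.20.30.0/24")
    print("Provisioned segment:", segment)
```

Because the same call can be made by an orchestration tool or deployment pipeline, connectivity for a new workload becomes part of the automated rollout rather than a separate manual step.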
 

[Image: endtoend.png]

 

The reference architecture is based on these components; using our complete portfolio of servers, storage, and switches, companies can tailor a solution that meets their specific business needs. Multi-vendor software capabilities also give IT the ability to innovate and migrate to new best-of-breed solutions on their own terms, and they allow our portfolio to integrate easily with existing data center infrastructure.

 

To learn more about our cloud-first approach, go to https://www.hpe.com/us/en/networking/data-center.html

 
