Do you know how big a zettabyte is? Take my word for it, it's big. It's a one followed by seven sets of three zeros, 21 in all. The worldwide amount of data is expected to reach around 175 zettabytes in the next 500 days, and 90 zettabytes of it will be unstructured. Think of unstructured data as anything that does not look like an Excel spreadsheet: emails, videos, MP3s, and streaming analytics. The lion's share of this data will be created at the edge. It's no mystery that data holds value. Another widely known fact is that data has a shelf life; it can be fleeting. If I am an autonomous vehicle, the color of the traffic light at the last intersection I drove through has no value to me. The data needs to be processed at the edge. You will need GPUs, applications, servers, storage, security, and telemetry, just like the hyperscalers, at the edge where the data gets created. There is one more thing you will need: a Micro Data Center network. Let's talk about that!
The nucleus of the Micro Data Center (MDC) network is the collapsed core. For this discussion, I will start with the HPE Aruba Networking CX10000 Distributed Services Switch (CX10K). To deploy a collapsed core, two switches are virtualized into a single L2 domain. This provides a high-availability network for dual-attached devices. Any switch vendor can do this. By adding the HPE Aruba Networking Fabric Composer (AFC) and the AMD Pensando Policy Services Manager (PSM), you will have a complete MDC network, capable of delivering 800G of stateful services right next to the workloads. One of those services can be an 800G stateful firewall. One of the things that makes this firewall so unique is that it can also be deployed BETWEEN workloads on the same layer 2 segment! This is called microsegmentation.
The Micro Data Center network
This technology is made possible by a pair of DPU chips tied directly to the Trident3 chipsets on the CX10Ks. Policies are created in the AFC and automatically sent to the PSM, which in turn programs the DPUs. Traffic directed to the DPUs is subject to any policies programmed into them. The DPUs can also be leveraged for sending IPFIX flows.
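To make the idea concrete, here is a minimal sketch of what a microsegmentation rule set between two workloads on the same L2 segment might look like. This is a conceptual illustration only; the endpoint-group names, ports, and the Rule structure are assumptions for this example and are not the actual policy schema used by AFC or the PSM.

```python
# Conceptual sketch only -- NOT the actual AFC/PSM policy schema.
# It models a stateful rule set enforced by the DPU between two
# workloads that share the same VLAN (microsegmentation).

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    src_group: str      # logical endpoint group, e.g. "web-tier" (hypothetical)
    dst_group: str      # logical endpoint group, e.g. "db-tier" (hypothetical)
    protocol: str       # "tcp", "udp", or "any"
    dst_port: int
    action: str         # "permit" or "deny"
    stateful: bool = True

# Two VMs in the same VLAN: only MySQL from web to db is allowed,
# everything else between them is dropped by the DPU.
policy = [
    Rule("web-to-db-mysql", "web-tier", "db-tier", "tcp", 3306, "permit"),
    Rule("web-to-db-default", "web-tier", "db-tier", "any", 0, "deny"),
]

for rule in policy:
    print(f"{rule.action.upper():6} {rule.src_group} -> {rule.dst_group} "
          f"{rule.protocol}/{rule.dst_port} stateful={rule.stateful}")
```

The point of the example is the placement: both endpoint groups live on the same L2 segment, yet the rule is still enforced in hardware, between them, rather than only at a routed boundary.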
Network telemetry is more important than ever. For the longest time, network operators were limited in the kinds of telemetry they could collect. Probes and sensors were deployed in the most troublesome parts of the network, or on uplink ports, to see as much information as possible. Technologies like NetFlow and sFlow still required strategic positioning. Collecting telemetry with legacy solutions means you are not seeing the whole picture. It's hard to make informed decisions when you're missing pieces of the puzzle.
With the CX10K, the packet flows that are redirected to the DPUs can be forwarded to the IPFIX collector of your choice. This tiny, yet mighty, MDC network supports the virtualized environment necessary for your applications, implements stateful macro- and microsegmentation, deploys NAT or IPsec, and collects unsampled IPFIX flows.
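If you want to see those exports arriving before you stand up a full collector, a few lines of Python will do. This is a minimal sketch, assuming the DPUs are configured to export to this host on UDP/4739 (the standard IPFIX port); it only decodes the 16-byte IPFIX message header from RFC 7011, while a real collector would also parse templates and data records.

```python
# Minimal IPFIX "is it arriving?" listener -- header decode only (RFC 7011).
import socket
import struct

LISTEN_ADDR = ("0.0.0.0", 4739)   # 4739/udp is the IANA-assigned IPFIX port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)
print(f"Listening for IPFIX on {LISTEN_ADDR[0]}:{LISTEN_ADDR[1]} ...")

while True:
    data, (exporter_ip, _) = sock.recvfrom(65535)
    if len(data) < 16:
        continue  # too short to contain an IPFIX message header
    # Header: version(2) | length(2) | export time(4) | sequence(4) | domain ID(4)
    version, length, export_time, sequence, domain_id = struct.unpack("!HHIII", data[:16])
    if version != 10:  # 10 = IPFIX; 9 would be NetFlow v9
        continue
    print(f"IPFIX from {exporter_ip}: {length} bytes, "
          f"seq={sequence}, observation-domain={domain_id}")
```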
Network Visibility with ELK
In the picture above, unsampled IPFIX flows are sent to an ELK stack for observability. ELK is a collection of open-source applications running in containers on a Linux VM. The solution combines Elasticsearch, Logstash, and Kibana.
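Once Logstash has indexed the flows into Elasticsearch, anything that can speak its REST API can ask questions of the data. Here is a minimal sketch; the endpoint, the "ipfix-*" index pattern, and the field names are assumptions that depend entirely on how your Logstash pipeline is configured.

```python
# Sketch: ask Elasticsearch for the last 15 minutes of flows to TCP/3306.
# Index pattern and field names are assumptions tied to the Logstash pipeline.
import json
import requests

ES_URL = "http://elk-host:9200"   # hypothetical Elasticsearch endpoint

query = {
    "size": 10,
    "sort": [{"@timestamp": "desc"}],
    "query": {
        "bool": {
            "filter": [
                {"term": {"netflow.destination_transport_port": 3306}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
}

resp = requests.post(f"{ES_URL}/ipfix-*/_search",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(query),
                     timeout=10)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("netflow", {}))
```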
IPFIX is widely supported by many network observability vendors. MDC networks, once installed, will usually stay operational for years, while the tools we use for network observability change like the weather. The DPUs can send IPFIX flows to Kafka, writing data into a Kafka topic. Many legacy network observability tools can become subscribers to that information, allowing continued use of expensive legacy tools. This is one of the advantages of the full visibility that the HPE Aruba Networking CX10000 Distributed Services Switch provides.
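As a rough illustration of that subscriber model, here is a minimal consumer sketch using the kafka-python client. The broker address, the "ipfix-flows" topic name, and the JSON encoding of the records are all assumptions for this example, not something the CX10K or PSM dictates.

```python
# Sketch of a downstream tool subscribing to flow records in Kafka.
# Topic name, broker, and JSON payload format are assumptions.
import json
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "ipfix-flows",                              # hypothetical topic
    bootstrap_servers=["kafka-broker:9092"],    # hypothetical broker
    group_id="legacy-observability-tool",
    auto_offset_reset="latest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Each tool joins with its own group_id and reads the same stream
# independently -- which is what lets existing collectors keep working.
for message in consumer:
    flow = message.value
    print(f"partition={message.partition} offset={message.offset} flow={flow}")
```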