
ZTNA Packet Handling in HPE Aruba Networking SSE

By Gobo posted Oct 25, 2024 12:28 PM

  


Matt Hum (matt.hum@hpe.com)

Packet handling for Zero Trust Network Access (ZTNA) varies significantly based on the nature of the traffic and the type of application being accessed. This chart is designed to guide you through the different packet handling scenarios available through HPE Aruba Networking SSE.

Overview of ZTNA packet handling

Agentless Access Requests

Let’s start with an agentless ZTNA web (self-hosted) application [1]. In agentless web, one of the PoPs is the URL destination — either an XXX.axisapps.io domain where we own the cert to sign for *.axisapps.io, or a CNAME of a vanity URL (customer provided server name) redirected to a known axisapps.io address, which requires the PoP to have the corresponding signed certificate’s private key.
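
To make the certificate handling concrete, here is a minimal Python sketch of how a PoP could choose which certificate to present based on the SNI hostname the client sends. The hostnames, file paths, and the select_certificate function are illustrative assumptions, not SSE internals.

```python
WILDCARD_SUFFIX = ".axisapps.io"

# vanity hostname -> customer-provided cert/key bundle uploaded for that URL (assumed layout)
VANITY_CERTS = {
    "apps.example-customer.com": "/certs/apps.example-customer.com.pem",
}

def select_certificate(sni_hostname: str) -> str:
    """Return the certificate bundle the PoP should present for this handshake."""
    if sni_hostname.endswith(WILDCARD_SUFFIX):
        return "/certs/wildcard.axisapps.io.pem"   # the service owns *.axisapps.io
    if sni_hostname in VANITY_CERTS:
        return VANITY_CERTS[sni_hostname]          # CNAME'd vanity URL
    raise ValueError(f"no signed certificate available for {sni_hostname}")
```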

The DNS lookup then resolves to an anycast IP, which directs the initial request to the closest on-ramp onto a cloud provider’s backbone. The request travels along the backbone until it reaches the destination PoP, at which point the request terminates and a TLS exchange happens using a valid certificate. A cookie/session key check ensures that the user session is valid; if it is not, the user is redirected to the IdP associated with the destination for authentication. Upon successful authentication (with a valid SAML assertion), or if a valid session cookie was presented, the request is processed and then run through the policy engine to find a matching rule that permits access to the destination. Once a matching rule is found, the session is authorized and the associated policy is attached to the session, which determines the level of traffic inspection.
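
As a rough illustration of that admission flow, the sketch below checks the session, redirects to an IdP when needed, and looks for a matching policy rule. All names here (Rule, POLICY_RULES, the example IdP URL and destination) are hypothetical; the real policy engine is far richer.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    destination: str       # e.g. "crm.internal.example.com"
    action: str            # "permit" or "deny"
    inspection_level: str  # how much L7 inspection the policy applies

# hypothetical rule set
POLICY_RULES = [Rule("crm.internal.example.com", "permit", "full-l7")]

def admit_request(cookie_valid: bool, saml_valid: bool, destination: str) -> dict:
    if not (cookie_valid or saml_valid):
        # no valid session: redirect to the IdP tied to this destination
        return {"status": 302, "location": "https://idp.example.com/sso"}
    for rule in POLICY_RULES:
        if rule.destination == destination and rule.action == "permit":
            # matching rule found: authorize and attach the policy to the session
            return {"status": "authorized", "inspection": rule.inspection_level}
    return {"status": 403}  # no matching rule
```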
Once the session is authorized, a preexisting TCP flow to a connector within the destination connector zone is identified, and a management message is sent to that connector based upon the destination information. The connector then does a DNS lookup if a hostname is specified and opens a TCP transaction with the destination server. If that DNS lookup from the connector fails, a management message is sent back to the PoP and an error message is presented to the user.
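
A simplified, connector-side view of that exchange might look like the following sketch; the function name and the shape of the "report back to the PoP" message are assumptions for illustration.

```python
import socket

def open_destination_flow(hostname: str, port: int) -> dict:
    try:
        # resolve the destination from inside the connector zone
        info = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)[0]
    except socket.gaierror as exc:
        # DNS failed: report back to the PoP so the user sees an error page
        return {"ok": False, "report_to_pop": f"DNS lookup failed: {exc}"}
    ip = info[4][0]
    # open the TCP transaction with the destination server
    return {"ok": True, "socket": socket.create_connection((ip, port), timeout=10)}
```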

If a TLS session is defined in the destination, the connector uses its user certificate to establish a TLS tunnel and report that a connection was successfully established. (For HTTP, a connection is just opened and reported.) In addition, now that a TCP flow is designated for use by this request, the connector opens a new flow to the PoP and keeps it in a waiting pool to be used by a later request.
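
For the TLS case, the connector-side wrap can be sketched with Python's standard ssl module as below; the certificate paths and function name are placeholders, not the connector's actual implementation.

```python
import socket
import ssl

def wrap_server_tls(server_sock: socket.socket, server_name: str) -> ssl.SSLSocket:
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # present the connector's own certificate to the destination server
    ctx.load_cert_chain(certfile="/etc/connector/client.pem",
                        keyfile="/etc/connector/client.key")
    return ctx.wrap_socket(server_sock, server_hostname=server_name)
```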

While this is happening, the request is processed through the appropriate policy returned by the rule and passes through any associated L7 checks. After it completes, the request is forwarded to the connector via the reserved TCP flow, and the connector gets the L7 data and forwards it through the established server TLS tunnel.

After the server sends its response, the L7 data is sent back, received, and processed by the PoP through the TCP tunnel. The associated policy checks happen now. If the destination was an XXX.axisapps.io address, then the L7 data is checked for references to the destination server and replaced with the associated axisapps.io address. Also, any additional domains specified in the destination are replaced with new dynamic axisapps.io addresses to identify other replacements needed in the return data to the client. After policy and URL replacements are complete, the modified response is sent to the client.
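
The rewriting step can be pictured as a simple substitution over the response body. The mapping below is invented; the real dynamic axisapps.io names are allocated by the service.

```python
# internal hostname -> public axisapps.io alias (illustrative only)
REWRITE_MAP = {
    "crm.internal.example.com": "crm-a1b2c3.axisapps.io",
    "static.internal.example.com": "static-d4e5f6.axisapps.io",  # extra domain from the destination config
}

def rewrite_l7_body(body: str) -> str:
    # replace each reference to an internal server with its axisapps.io alias
    for internal, external in REWRITE_MAP.items():
        body = body.replace(internal, external)
    return body

print(rewrite_l7_body('<a href="https://crm.internal.example.com/login">'))
# -> <a href="https://crm-a1b2c3.axisapps.io/login">
```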

Upon completion of the TCP session, the reserved TCP flow between connector and PoP is torn down.

Agentless SSH and RDP are handled in multiple ways depending on the method of access. If the web client is used [2], then the client-to-PoP transaction and authentication happen as illustrated above. In this case, the PoP takes that request and fully terminates the L7 session. A backend session to the destination SSH or RDP server is created in the same manner as an agentless web application. The return response, however, terminates and renders within the PoP. Screen data is then passed to the client via HTML5.
If a local client is used [3], then the rendering does not happen in the PoP. The connection is still terminated in the PoP, but the L7 data is copied directly to another existing session established between the connector and the destination server.

Agent-based Access Requests

If, however, an agent is used, then there are two different paths for handling application-generated traffic: DNS redirection [4] or route forwarding [5]. Here, when a tunnel is established, the agent sets the PoP as the primary DNS of the system OS so that all DNS lookups from the system are resolved in the PoP. All DNS requests sent by the client are processed by the given user policy within the PoP, and IPs are returned to the client to direct traffic as specified.
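
Conceptually, the PoP-side DNS decision looks something like the sketch below; the destination list and function names are hypothetical.

```python
import socket

# hostnames covered by DNS-redirection destinations (hypothetical)
ZTNA_DESTINATIONS = {"crm.internal.example.com", "wiki.internal.example.com"}

def resolve_for_user(name: str, allocate_synthetic_ip) -> str:
    if name in ZTNA_DESTINATIONS:
        # ZTNA destination: return a synthetic CGNAT address so the
        # flow terminates in the PoP
        return allocate_synthetic_ip(name)
    # everything else resolves normally, per the user's policy
    return socket.gethostbyname(name)
```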

In DNS redirection [4], a specific host or domain is targeted, and if an appropriate lookup happens to a given destination, a synthetic IP in the CGNAT range is returned to the client OS. This CGNAT range exists within a unique namespace in the PoP and, therefore, the PoP is the termination point for all connections to that destination. Each synthetic IP is unique and fully dynamic for that specific user; it can be reused after the flow closes and the appropriate TIME_WAIT timer expires.
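
A toy allocator illustrates the idea of handing out and recycling synthetic CGNAT addresses per user. The TIME_WAIT value and data structures are assumptions; the PoP's actual pool management is more involved.

```python
import ipaddress
import time

CGNAT_POOL = ipaddress.ip_network("100.64.0.0/10").hosts()  # generator over the CGNAT range
TIME_WAIT = 60          # seconds; assumed value
in_use = {}             # synthetic IP -> destination it represents
cooling_down = {}       # synthetic IP -> time its flow closed

def allocate(destination: str) -> str:
    # recycle an address whose TIME_WAIT has expired, if one exists
    for ip, closed_at in list(cooling_down.items()):
        if time.time() - closed_at > TIME_WAIT:
            del cooling_down[ip]
            in_use[ip] = destination
            return ip
    ip = str(next(CGNAT_POOL))  # otherwise take a fresh address from the pool
    in_use[ip] = destination
    return ip

def release(ip: str) -> None:
    # flow closed: start the TIME_WAIT cooldown before the IP can be reused
    cooling_down[ip] = time.time()
    del in_use[ip]
```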

Network route forwarding [5] is a little different and can be more intrusive to client operations. When the tunnel starts up, the agent compiles all IP route destinations and injects them into the route table. This only happens when the tunnel interface is created, so when a policy update changes an IP range for a specific client, the tunnel IF is terminated, which removes all injected routes, and then reinitiated so the route table can be modified. This can interrupt existing open flows. DNS destination modifications do not touch the route table, and therefore get pushed to the PoP without any impact on the client’s route table or on the tunnel IF.
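
On Linux, the route injection and tunnel-interface bounce could be approximated with standard ip commands, as in the sketch below. The interface name, prefixes, and use of subprocess are purely illustrative (and would require root); the agent's actual mechanism is platform specific.

```python
import subprocess

TUNNEL_IF = "ztna0"                                 # assumed tunnel interface name
POLICY_ROUTES = ["10.10.0.0/16", "172.16.5.0/24"]   # IP destinations from policy

def inject_routes() -> None:
    for prefix in POLICY_ROUTES:
        # steer each policy-defined prefix into the tunnel interface
        subprocess.run(["ip", "route", "add", prefix, "dev", TUNNEL_IF], check=True)

def reinitialize_tunnel() -> None:
    # an IP-range policy change forces a teardown (which drops the injected
    # routes) followed by bring-up and re-injection; open flows may be interrupted
    subprocess.run(["ip", "link", "set", TUNNEL_IF, "down"], check=True)
    subprocess.run(["ip", "link", "set", TUNNEL_IF, "up"], check=True)
    inject_routes()
```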

By redirecting specific IP destinations to the PoP, flows to those destinations can be inspected for policy and then forwarded to the appropriate destination connector zone, where they are SNAT-ed so the return traffic is processed through the same connector. This does not take into account Server Initiated Flows (SIFs) or smart routing, which have similar but separate handling.
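
The SNAT behavior at the connector can be pictured as an ordinary POSTROUTING rule of the following kind; the address and interface name are made up for illustration.

```python
import subprocess

CONNECTOR_IP = "10.20.0.5"   # connector's address inside the destination network

def add_snat_rule() -> None:
    subprocess.run([
        "iptables", "-t", "nat", "-A", "POSTROUTING",
        "-o", "eth0",                          # egress interface toward the destination
        "-j", "SNAT", "--to-source", CONNECTOR_IP,
    ], check=True)
```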

SIFs have a specific IP pool associated with a given connector in a connector zone to be distributed to clients. Each connector in the connector zone must have a different, non-overlapping IP pool that does not already exist in the infrastructure’s route table. Additionally, the upstream routers of the connectors must have a route for the SIF client IP pool indicating that the appropriate connector is the next hop for each respective SIF client IP range. This route must be propagated through the connector zone’s local infrastructure; with additional manual configuration on the connector, it can also be injected via a routing protocol such as BGP or OSPF.
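
A quick way to reason about the non-overlap requirement is to validate the pools against each other and against existing routes, as in this small sketch with invented prefixes.

```python
import ipaddress
from itertools import combinations

# invented prefixes for illustration
sif_pools = {
    "connector-1": ipaddress.ip_network("10.200.1.0/24"),
    "connector-2": ipaddress.ip_network("10.200.2.0/24"),
}
infrastructure_routes = [ipaddress.ip_network("10.0.0.0/16")]

# SIF pools must not overlap each other...
for (a, pool_a), (b, pool_b) in combinations(sif_pools.items(), 2):
    assert not pool_a.overlaps(pool_b), f"SIF pools on {a} and {b} overlap"

# ...or any prefix already present in the infrastructure's route table
for name, pool in sif_pools.items():
    for route in infrastructure_routes:
        assert not pool.overlaps(route), f"{name} pool collides with {route}"
```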

In SIF, an end client is directly associated with a single connector in the connector zone. In version 3.64, a second connector within the connector zone can be specified for failover. There can only be one SIF application destination set for that client; if there are two different SIF applications in two different connector zones with different IP spaces, the policy engine will not know which IP to assign to the agent tunnel. Additionally, there is no longer any HA within that connector zone, as all network traffic from that client is associated with the connector that has the respective IP pool only. 

Keep in mind that everything in and out of the PoP is a separate network namespace, so even though there may be networks with the same subnet or IP range, they are still separate, and an agent generally cannot communicate directly with the connector. Even with the same subnet, sessions are run against the policy and then brokered in the PoP. The only exception to this is when Local Edge is run, at which point the only policy applied is the application definition at the agent. In this case, the agent identifies traffic to be sent to a specified connector and then forwards that traffic to the connector directly. There is no filtering or inspection of that traffic; it is egressed directly at that connector.

Understanding the differences in packet handling for various ZTNA traffic types can prove valuable for getting the most out of your HPE Aruba Networking SSE service. If you find this topic, and others like it, fascinating, check out my ebook, Advantages of Multiple Proxied Connections Over Long Routes, to see the continued impact of architecture on network performance and security.
#SSE #ZTNA
