I've spent most of my career extolling the virtues of defense in depth. Firewalls at the edge of your network. Antivirus and personal firewall software on your servers and desktops. ACLs and IDS/IPS where it makes the most sense. Layers upon layers of protection for all the juicy data flowing around in the network. All these fancy devices serve to keep the bad things out and the good things in. That is, until the whole thing gets flipped on its head.
Now, network engineers have to worry about bad things in the network. Data loss prevention (DLP) keeps the good stuff from walking out the network door. We also have to start looking at the traffic patterns of our users. Are they doing things they aren't supposed to be doing? Can we contain those behaviors with proxies and filters? If not, why? How can we keep up with the advancements in technology like IPv6 when we have to put globally routable addresses on the majority of our internal devices? How will our security postures change?
Companies like Aruba have already been looking ahead to these questions. The key lies in the location of the perimeter of the network. It's always been assumed that the edge of the network is the point at which the trusted LAN touches the untrusted WAN. In today's world, I think that distinction needs to shift toward the core. The "trusted" portion of your network now includes only the devices under your control. Your switches, routers, and access points are the safe areas. The "untrusted" network now includes not only the big bad Internet but your user-facing devices as well. Tablets, phones, laptops, and all manner of client connectivity should be treated as suspect until proven safe. That's where products like ClearPass and the firewall integrated into Aruba's APs come in very handy. Because we can interrogate the clients before they access our networks, we can classify their traffic before they can cause harm or do something they aren't supposed to. I really loved this short discussion from the Tech Field Day roundtable at Airheads earlier this month around deep packet inspection:
The ability to crack packets open and restrict them from entering our trusted network at the client edge not only serves to keep things more secure but also reduces the amount of bandwidth being used by clients. Even with advances like 802.11ac, we're going to want as much bandwidth as we can muster now that every device seems to have wireless-only connectivity.
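To make the "cracking packets open" idea concrete, here's a toy sketch (my own illustration, not Aruba's implementation) of an edge check that looks past the transport headers into the application payload before admitting traffic. The blocklist and the HTTP parsing are simplified assumptions for the sake of the example:

```python
# Toy deep-packet-inspection check: peek into the HTTP payload and
# refuse traffic bound for blocklisted hosts at the client edge.
# BLOCKED_HOSTS is an illustrative assumption, not a real policy.

BLOCKED_HOSTS = {"badsite.example", "tracker.example"}

def http_host(payload: bytes):
    """Pull the Host header out of a raw HTTP request, if present."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip().decode("ascii", "replace")
    return None

def admit(payload: bytes) -> bool:
    """Admit the packet only if its destination host is not blocklisted."""
    return http_host(payload) not in BLOCKED_HOSTS

request = b"GET / HTTP/1.1\r\nHost: badsite.example\r\n\r\n"
print(admit(request))  # → False: refused before it ever reaches the core
```

The point of doing this at the client edge, as the roundtable discussion notes, is that the refused traffic never consumes airtime or backhaul bandwidth in the first place.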
I think that shifting the perimeter conversation from a typical "us versus them" mentality to something more akin to a sandwich cookie model makes the most sense. All the gooey, sticky good stuff is in the middle, safely protected by two crunchy edges. The perils of the Internet lie on one side and the unknown quantity of users on the other. Only when traffic from those edges is safely admitted through our firewalls, authentication servers, and deep packet inspection engines will we grant access to what they're after.
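The "suspect until proven safe" admission step at the user-facing edge might look something like the following sketch. The posture checks and role names here are illustrative assumptions of mine, not the actual logic of ClearPass or any other NAC product:

```python
# Toy client-admission check: interrogate what we know about a client
# and hand back a role the edge firewall can enforce. Unauthenticated
# clients never reach the gooey middle of the cookie.

def classify(client: dict) -> str:
    """Map client posture to a network role; suspect until proven safe."""
    if not client.get("authenticated"):
        return "quarantine"
    # Unmanaged personal devices get a restricted role rather than
    # full access (an assumed policy for this example).
    if client.get("device_type") in {"tablet", "phone"} and not client.get("managed"):
        return "byod-restricted"
    return "trusted"

print(classify({"authenticated": False}))                        # quarantine
print(classify({"authenticated": True, "device_type": "phone"})) # byod-restricted
```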
I also think this fits very well with things like IPv6 to allow data access up to the client device without compromising security. By forcing user traffic to pass through our perimeter when bound for the Internet (and vice versa), we eliminate the possibility that the traffic will be erroneously marked as acceptable based solely on the fact that it originated inside the network. We can ensure that no shortcuts exist to bypass our policies preventing users from going to Facebook on company time or watching Netflix and monopolizing critical network resources. Of course, the users may not like the network admins very much after changes like these. Thankfully, they'll be on the outside looking in.
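The "no shortcuts" principle boils down to one rule: the policy decision keys off the authenticated role of the client, never off whether the source address happens to be internal. A minimal sketch, with roles and destinations invented purely for illustration:

```python
# Toy perimeter policy: default-deny lookup by authenticated role.
# An internal source address earns no special treatment here; only
# the role assigned at admission time matters. The POLICY table is
# an assumption for this example, not any vendor's schema.

POLICY = {
    "employee": {"facebook.com": False, "netflix.com": False, "intranet.example": True},
    "guest":    {"facebook.com": True,  "netflix.com": True,  "intranet.example": False},
}

def allowed(role: str, destination: str) -> bool:
    """Default-deny: unknown roles or destinations are refused."""
    return POLICY.get(role, {}).get(destination, False)

print(allowed("employee", "facebook.com"))      # → False: blocked on company time
print(allowed("employee", "intranet.example"))  # → True
```

Because the table never consults a source address, traffic that "originated inside the network" gets no free pass, which is exactly the shortcut the paragraph above wants to eliminate.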