The Demise of Network Security

by XCM

Network security has had its fair share of relevance over the last 20 years or so.

Traffic patterns used to be few and predictable, and the software exposed on the Internet was relatively simple.  So for a long time a stateful firewall was a good enough safeguard.

Then things changed.

Software and attacks grew more complex, and it became clear that blocking traffic on TCP ports associated with malware was no longer enough.

So anti-virus and Intrusion Prevention Systems (IPS) were bolted on top of stateful firewalls.  Of course, these systems come with their limitations:

Traditional pattern-based antivirus solutions do not scale.  On the wire, antivirus scanning can also be evaded through techniques such as packet fragmentation or by abusing specific protocol options or methods.

Pattern-based Intrusion Prevention Systems do not scale either, and in addition they act on an assumption about what could happen on a given host based on what is observed on the wire, which is a pretty big assumption.

On top of that, encryption has become prevalent (finally and thankfully), so these security products must now perform a Man-in-the-Middle attack on TLS-encrypted sessions in order to gain visibility.

This is potentially bad as the end user does not have any way to verify which certificate is being presented by the server they are connecting to.  Sure, they could trust the security device to verify it for them.  Good luck!

Additionally, middleboxes might downgrade the encryption in order to increase compatibility.

This aside, decryption is clearly not the answer in the long run.  Newer TLS versions might make things more complex for security vendors/criminals/government agencies.

Another problem is that when hosts use pinned certificates or mutual authentication, there is no known way to successfully decrypt, inspect and re-encrypt traffic.

What can we deduce from all this?  Well, it seems to me that the days of network security could be numbered.

It might not happen tomorrow or even in five years' time, but network inspection devices will slowly lose the visibility they need to do their job.

Even setting encryption aside for a moment, a traditional stateful firewall would historically allow Host A to communicate with Host B on TCP port 80, for instance.

So-called Next-Generation Firewalls might do the same, but while identifying the traffic as actual HTTP.

Still, do we really know who initiated the traffic and where it is going to?

Sort of.  We know the IP address, the URL/FQDN, or the user associated with that machine.  All of these in the end translate to an IP address, not to a specific device.

Besides, can I verify that HTTP traffic was initiated by a browser under the control of the user?  Nope.  It could be from a shell spawned by a piece of malware running with some other privilege.

So what's the solution?

There is a host of vendors promising a panacea in the form of: "Traditional <insert technology here> do not work any longer.  That's why at <insert vendor here> we offer unparalleled protection based on <insert buzzwords here, such as machine learning / AI / magic>."

The reality is that security is hard and the challenge increases exponentially if the attack is targeted.

In my opinion, a promising approach lies somewhere in the direction of total network abstraction.

Rather than focusing on decryption, URL filtering, firewalls, sandboxes, and the like, treat the network between Host A and Host B as an untrusted, non-securable medium - regardless of network topology, distance between the endpoints, or "trust" level of the network equipment.

Reduce the security boundary to the only area where we still have a decent level of control: the endpoint.

A bit like in European medieval warfare, where the keep was protected but the village outside the castle was considered neither worth defending nor defensible.

So rather than trying to regulate protocols and applications on the wire, restrict them on the endpoint at a process level and for a specific user.  For example, process X on Host A can generate HTTP traffic (real HTTP, not just data over TCP 80) towards process Y on Host B, but only if these processes belong to specific users.
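
To make that concrete, here is a minimal sketch of what such a policy could look like.  Everything in it (the Rule class, the POLICY table, the is_allowed() check) is hypothetical and made up for illustration; real enforcement would have to live in the operating system or an endpoint agent, not in a Python data structure.

from dataclasses import dataclass

# Minimal sketch of a per-process, per-user egress policy (default deny).
# All names here are hypothetical; a real enforcement point would hook
# into the OS (an LSM, eBPF, an endpoint agent), not a Python set.

@dataclass(frozen=True)
class Rule:
    src_user: str       # user owning the initiating process
    src_process: str    # executable on the source host
    dst_host: str       # destination endpoint identity (not just an IP)
    dst_process: str    # expected peer process/service
    protocol: str       # validated application protocol, not just a port

POLICY = {
    Rule("alice", "firefox", "hostB", "nginx", "http"),
    Rule("backup", "rsync", "hostB", "sshd", "ssh"),
}

def is_allowed(user: str, process: str, host: str, peer: str, proto: str) -> bool:
    """Allow only flows explicitly described in the policy."""
    return Rule(user, process, host, peer, proto) in POLICY

print(is_allowed("alice", "firefox", "hostB", "nginx", "http"))  # True
print(is_allowed("root", "nc", "hostB", "nginx", "http"))        # False

The second check is the interesting one: a shell or netcat spawned by malware under a different account simply has no matching rule, no matter which port it tries to use.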

Some security solutions already offer a similar level of micro-segmentation.

Additionally, instead of using IP addresses and hoping they correspond to the hosts they are supposed to represent, we could use certificates.

A certificate exchange would be a strong way to ensure the endpoints in the conversation are those we expect, assuming we trust the current certificate validation system.
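
As a rough illustration, the sketch below establishes a mutually authenticated TLS connection in Python.  The file names (internal-ca.pem, hostA-client.pem) and the private CA they imply are assumptions made up for the example; the point is only that both endpoints prove their identity with certificates instead of IP addresses.

import socket
import ssl

def connect_mutual_tls(host: str, port: int) -> ssl.SSLSocket:
    # Trust only our own CA, not the system-wide certificate store.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="internal-ca.pem")
    # Present a client certificate so the server can authenticate us too.
    ctx.load_cert_chain(certfile="hostA-client.pem",
                        keyfile="hostA-client.key")
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3

    raw = socket.create_connection((host, port))
    tls = ctx.wrap_socket(raw, server_hostname=host)
    # Both certificates have now been validated against our CA; the
    # peer's certificate, not its IP address, is its identity.
    print(tls.getpeercert()["subject"])
    return tls

The server side would do the mirror image: load its own certificate, set verify_mode to ssl.CERT_REQUIRED, and trust the same internal CA, so an unauthenticated client never gets past the handshake.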

As an additional measure, the initial TLS exchange could be used to negotiate an encrypted channel leveraging other technologies such as IPsec, OpenVPN, WireGuard, or others.
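
Continuing the sketch above, the two endpoints could swap tunnel credentials inside that authenticated session and then hand the actual traffic to something like WireGuard.  The message format and the wg0 interface name are assumptions for illustration; only the wg set invocation reflects the real WireGuard tooling.

import json
import ssl
import subprocess

def bootstrap_wireguard(tls: ssl.SSLSocket, my_pubkey: str, my_endpoint: str) -> None:
    # Exchange WireGuard public keys and endpoints inside the TLS session,
    # so the tunnel keys are bound to certificate-verified identities.
    tls.sendall(json.dumps({"pubkey": my_pubkey,
                            "endpoint": my_endpoint}).encode())
    peer = json.loads(tls.recv(4096).decode())

    # Add the peer to a pre-created wg0 interface; from here on the
    # traffic flows over WireGuard rather than the TLS connection.
    subprocess.run(
        ["wg", "set", "wg0",
         "peer", peer["pubkey"],
         "endpoint", peer["endpoint"],
         "allowed-ips", "10.0.0.0/24"],
        check=True,
    )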

Even without anti-virus, IPS, and sandboxes, having this kind of control could potentially limit the scope of successful attacks at various stages in the life cycle.

The level of visibility we get on a host under our "control," however, is not unparalleled.  We still depend on closed-source OSes, hypervisors, proprietary hardware, and a host of obscure firmware with high privileges.

So what's the takeaway here?

There might not be an easy solution to the problem, and probably there never will be, but I doubt that trying to pump new features into zombie technologies will bring us any closer to our goal.

The goalposts shift continually and, as my favorite author Edgar Allan Poe once said: "It may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve."  (Graham's Magazine, July 1841)

While this has not yet been proven correct for most modern ciphers, it offers a glimpse of why the struggle between attackers and defenders is unlikely to end anytime soon.
