Enterprise networks are in a state of Defense in Doubt. The bad guys are winning due in large part to four core weaknesses of on-premises security solutions:

Insufficient network coverage: Enterprises cannot scale on-premises hardware to match networks that are growing more heterogeneous and complex. The problem is only exacerbated by the advent of the Internet of Things and the proliferation of connected devices.

Targeted attacks go undiscovered for months: Most attacks go undetected for months, if not years. According to the most recent Mandiant M-Trends report, the median time a threat dwells on a network before detection is 205 days.

Solutions are mostly myopic: Most endpoint security solutions are concerned with very narrow windows of time; they can only render a judgment based on the small set of information on hand at the moment. The problem is compounded when enterprises must manage a complex array of daisy-chained point solutions that don't integrate well: purchased separately, operated separately, managed differently, and refreshed on regular cycles. It's costly and time consuming, and it often strains already overburdened security teams.

The Enterprise is an island: Security lessons learned within one organization tend to stay within that organization; there is no safety in numbers and no way to quickly leverage information from one organization's network to stop the proliferation of attacks.

To get the kind of visibility and depth required to solve these issues, enterprises have to collect, store, and analyze more data than ever across an increasingly complex ecosystem. Traditional approaches built on hardware deployed on-premises can't scale to meet the expanding challenges of modern enterprise networks. As these networks grow wider, more complex, and more dynamic, rerouting or backhauling traffic to funnel it through an appliance stack is increasingly at odds with an organization's growth and accessibility goals.
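
To put the data problem in perspective, here's a minimal back-of-envelope sketch. The link speeds, utilization figures, and retention window below are illustrative assumptions, not measurements from any particular network:

```python
# Back-of-envelope: raw capture volume for full-fidelity recording.
# All figures below are illustrative assumptions.

GBPS = 1_000_000_000  # bits per second
SECONDS_PER_DAY = 86_400

def daily_capture_tb(link_gbps: float, avg_utilization: float) -> float:
    """Terabytes of raw packet data produced per day on one monitored link."""
    bits_per_day = link_gbps * GBPS * avg_utilization * SECONDS_PER_DAY
    return bits_per_day / 8 / 1e12  # bits -> bytes -> terabytes

# A single 10 Gbps segment averaging 30% utilization:
print(f"{daily_capture_tb(10, 0.30):.1f} TB/day")   # ~32.4 TB/day
# Ten such segments retained for 90 days:
print(f"{daily_capture_tb(10, 0.30) * 10 * 90 / 1000:.0f} PB")  # ~29 PB
```

Even under modest assumptions, full-fidelity retention lands in petabyte territory quickly, which is exactly the scale on-premises appliance stacks struggle with.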

On top of that, most security teams simply do not have the manpower or analytical capacity to carry the load these massive data collection efforts present. And for the few who do, it is a tremendous expense that could be better applied to tactical and strategic efforts.

Shifting Network Security to the Cloud

That's why we're shifting network security to the cloud with the ProtectWise Cloud Network DVR, a new full-fidelity platform for enterprise network security. It merges advances in modern computation with advances in cloud technologies to address the weaknesses of point solutions. Building on these technologies has allowed us to optimize, replay, and securely store massive amounts of full-fidelity network traffic while processing that data set efficiently and at great scale.

This is really the foundation of our entire service. It has allowed us to build a Time Machine for threat detection, where we automatically go back in time and retrospectively analyze all that historic network data whenever new threats emerge. And it powers our Visualizer, which makes sifting through this data intuitive and efficient. We will go into detail on our Time Machine and our Visualizer in future posts. In this post, I want to focus on the Cloud Network DVR.

The ProtectWise Cloud Network DVR

Our Cloud Network DVR consists of two major components: a software "camera" that can be deployed on any enterprise network segment, and a petabit-scale cloud platform. Petabit scale is an enormous capacity, and it's what lets us create a memory for the network.

This "camera" is a lightweight software sensor that can be deployed in any location on any network segment. It executes highly optimized, full Layer 2 packet capture and streams this data to our cloud platform in a near real-time manner. One of the first problems we had to solve was lossless packet capture in a software form factor, but commodity advances in virtual and physical network cards made this fully achievable. And then we had to figure out how to intelligently optimize the replay of this traffic to our cloud so that we could preserve a high-fidelity recording while not burdening the network with massive bandwidth requirements.

[Figure: ProtectWise platform architecture diagram]

In terms of bandwidth and network efficiency, we've achieved a broad optimization target of 80 percent; that is, our patent-pending Optimized Network Replay technology lets us stream a high-fidelity recording using only about 20 percent of the bandwidth that naive full replay would require. Not only can bandwidth consumption be easily controlled, but the sensors are also policy-enabled, selectively providing visibility into only the hosts, network segments, applications, and protocols desired.
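
To make the policy idea concrete, here's a sketch of how a sensor-side capture policy might be expressed and applied. The policy fields, the bandwidth cap, and the `should_capture` helper are hypothetical illustrations, not the actual ProtectWise policy format:

```python
from dataclasses import dataclass, field
from ipaddress import ip_address, ip_network

@dataclass
class CapturePolicy:
    """Hypothetical sensor policy: which traffic is recorded and replayed."""
    segments: list = field(default_factory=lambda: [ip_network("10.1.0.0/16")])
    protocols: set = field(default_factory=lambda: {"tcp", "udp"})
    max_replay_mbps: int = 200  # assumed cap on upstream bandwidth to the cloud

def should_capture(policy: CapturePolicy, src_ip: str, protocol: str) -> bool:
    """Apply the policy to one observed flow."""
    in_scope = any(ip_address(src_ip) in seg for seg in policy.segments)
    return in_scope and protocol.lower() in policy.protocols

policy = CapturePolicy()
print(should_capture(policy, "10.1.4.20", "tcp"))    # True: in-scope host
print(should_capture(policy, "192.168.9.9", "tcp"))  # False: outside segments
```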

Of course, we have to ingest this continuous stream of data coming into our platform at gigabits per second without skipping a beat. Once these streams hit our Secure Ingest service, they are immediately deconstructed through a proprietary process we call Network Shattering and handed off to a suite of threat detection capabilities that operate in near real time.
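
The details of Network Shattering are proprietary, but the general fan-out shape of such an ingest stage can be sketched like this. The queue sizes, detector names, and the `shatter` decomposition step are all illustrative assumptions:

```python
import queue
import threading

ingest_q: "queue.Queue[bytes]" = queue.Queue(maxsize=10_000)
detector_qs = {name: queue.Queue(maxsize=10_000)
               for name in ("signatures", "heuristics", "reputation")}

def shatter(frame: bytes) -> dict:
    """Stand-in for the proprietary decomposition step: split one frame
    into the pieces each detector cares about. The real logic is unknown."""
    return {"raw": frame, "length": len(frame)}

def ingest_worker():
    """Pull frames off the ingest stream and fan them out to detectors."""
    while True:
        frame = ingest_q.get()
        pieces = shatter(frame)
        for q in detector_qs.values():
            q.put(pieces)  # each detector sees every deconstructed frame
        ingest_q.task_done()

threading.Thread(target=ingest_worker, daemon=True).start()
ingest_q.put(b"\x00" * 64)  # demo frame
ingest_q.join()             # wait until the fan-out worker has handled it
```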

Additionally, we index and store all of this data in our Secure Vault in a way that allows us to go back in time automatically and efficiently retrieve and process even petabytes of data. There is no upper bound on the number of sensors deployed or on the amount of data stored.
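
One common way to make this kind of time-travel retrieval efficient is to key stored capture by sensor and time bucket. Here's a minimal sketch of that idea; the key layout and five-minute bucket size are assumptions for illustration, not the Secure Vault's actual scheme:

```python
from datetime import datetime, timedelta, timezone

def vault_key(sensor_id: str, ts: datetime) -> str:
    """Hypothetical object key: one bucket per sensor per five minutes.

    Keys sorted this way let a retrospection job list exactly the buckets
    covering a time window instead of scanning the entire store.
    """
    bucket = ts.replace(minute=ts.minute - ts.minute % 5,
                        second=0, microsecond=0)
    return f"{sensor_id}/{bucket:%Y/%m/%d/%H%M}.pcap"

def keys_for_window(sensor_id: str, start: datetime, end: datetime):
    """Enumerate every bucket key overlapping [start, end)."""
    t = start
    while t < end:
        yield vault_key(sensor_id, t)
        t += timedelta(minutes=5)

start = datetime(2015, 3, 1, 9, 0, tzinfo=timezone.utc)
for key in keys_for_window("sensor-042", start, start + timedelta(minutes=15)):
    print(key)  # sensor-042/2015/03/01/0900.pcap, .../0905.pcap, .../0910.pcap
```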

In terms of fundamental characteristics of the underlying technology, this platform is a highly distributed, asynchronous, concurrent, and parallel system with strong fault tolerance properties, organized around low latency, massive I/O, and linear scale. That is quite a mouthful, and it was no small undertaking; I'll explore these properties in more depth on this blog at a later date.

Everything we've built is also exposed as a set of rich APIs, available both as a streaming, real-time event feed and in traditional RESTful, historic, batch-oriented varieties. These APIs can ship anything from raw PCAP to NetFlow to any and all observations and events within our system, making them well positioned for integration with any existing security and analytics infrastructure.
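
As a sketch of what consuming such an API might look like from an existing analytics stack: the endpoint URL, auth header, and response fields below are hypothetical placeholders, not documented ProtectWise API details:

```python
import json
import urllib.request

API_BASE = "https://api.example.com/v1"   # placeholder, not a real endpoint
TOKEN = "REPLACE_ME"                      # placeholder credential

def fetch_events(start: str, end: str) -> list:
    """Pull historic observations for a time window over a REST-style API."""
    url = f"{API_BASE}/events?start={start}&end={end}"
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Forward each event into an existing SIEM or analytics pipeline:
for event in fetch_events("2015-03-01T00:00:00Z", "2015-03-02T00:00:00Z"):
    print(event.get("type"), event.get("observed_at"))
```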

Once we'd developed the Cloud Network DVR, we were able to leverage it for automated, smart retrospection of traffic: in essence, a Time Machine for threat detection. I'll explore this in my next blog post.
