Zero trust requires network visibility
In a zero-trust environment, trust is not static. Behavior has to be visible for trust to persist.
One of the most important differences between old thinking on networking and the zero-trust (ZT) mindset is the inversion of assumptions about trust. Pre-ZT, the assumption was this: Once you get on the network, you are allowed to use it any way you want until something extraordinary happens that forces IT to shut you down and remove your access. You are presumed broadly trustworthy; that status is rarely confirmed positively, and just as rarely revoked.
Post-ZT, the assumption is flipped: Use of the network is entirely contingent on good behavior, and you are strictly limited as to what you can communicate with, and how. You can only do what the organization allows in advance, and any significant misbehavior will automatically result in you being pushed off the network.
The “automatically” part is important. A ZT architecture includes, as an integral component, a closed loop between ongoing behavior on the network and ongoing permission to use it (as manifested in the trust map that drives the environment’s policy engine). That is, ZT by definition requires feedback, automated and preferably real-time, from observable network behavior to enforced network permissions.
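To make that loop concrete, here is a minimal sketch of the feedback path in Python. The trust scores, event fields, revocation threshold, and `enforce_revoke` hook are all illustrative assumptions, not any vendor’s actual mechanism:

```python
# Minimal sketch of the behavior-to-permission feedback loop. Real ZT
# deployments wire this through a policy engine and enforcement points,
# not hand-rolled code; this only shows the shape of the closed loop.
from dataclasses import dataclass, field


@dataclass
class TrustMap:
    """Tracks a trust score per entity; scores decay on misbehavior."""
    scores: dict = field(default_factory=dict)
    revoke_threshold: float = 0.3  # illustrative cutoff, not a standard value

    def penalize(self, entity: str, severity: float) -> None:
        current = self.scores.get(entity, 1.0)  # entities start fully trusted
        self.scores[entity] = max(0.0, current - severity)

    def is_trusted(self, entity: str) -> bool:
        return self.scores.get(entity, 1.0) >= self.revoke_threshold


def policy_loop(events, trust_map: TrustMap, enforce_revoke) -> None:
    """Closed loop: each observed event updates the trust map, and any
    entity whose trust drops below threshold is automatically pushed off
    the network via the supplied enforcement callback."""
    for event in events:  # e.g. a stream of normalized flow/log records
        if event["anomalous"]:
            trust_map.penalize(event["entity"], event["severity"])
        if not trust_map.is_trusted(event["entity"]):
            enforce_revoke(event["entity"])  # quarantine VLAN, kill session, etc.
```

The point of the sketch is the wiring, not the scoring: observation feeds the trust map, and the trust map drives enforcement, with no human in the loop.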
Spotting ‘significant misbehavior’ requires deep visibility
So, a robust zero-trust implementation requires visibility into how every entity on the network is using (or trying to use) it.
This translates to logging information from network infrastructure at every level, from the core switches all the way “out” to the edge switches in the branch networks and all the way “in” to the virtual switches in the data center. It’s not just switches, of course, but also routers, application delivery controllers and load balancers, firewalls and VPNs, and SD-WAN nodes. All should be reporting on entity behaviors to some central system.
Beyond that, in any host-based aspects of the architecture (such as a software-defined perimeter deployment), the agents running on network entities (PCs, virtual servers, containers, SDP gateways, whatever) will also be supplying event streams to some central database for analysis.
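To give a sense of what feeding all of these streams into one system involves, here is a simplified sketch of normalizing two very different sources, a NetFlow-style record from a switch and a host-agent event, into a common schema. The field names and input formats are assumptions for illustration only:

```python
# Sketch of normalizing heterogeneous telemetry (flow records from network
# gear, event streams from host agents) into one common record type that a
# downstream analysis engine can consume uniformly.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class BehaviorEvent:
    timestamp: datetime
    entity: str   # device, user, VM, container, etc.
    source: str   # which telemetry stream produced this record
    action: str   # what the entity did or tried to do
    detail: dict  # the raw record, kept for later forensics


def from_flow_record(flow: dict) -> BehaviorEvent:
    """Normalize a NetFlow-like record exported by a switch or router."""
    return BehaviorEvent(
        timestamp=datetime.fromtimestamp(flow["start"], tz=timezone.utc),
        entity=flow["src_ip"],
        source="netflow",
        action=f"connect {flow['dst_ip']}:{flow['dst_port']}",
        detail=flow,
    )


def from_agent_event(evt: dict) -> BehaviorEvent:
    """Normalize an event reported by a host-based agent (e.g. an SDP client)."""
    return BehaviorEvent(
        timestamp=datetime.fromisoformat(evt["time"]),
        entity=evt["host_id"],
        source="agent",
        action=evt["event_type"],
        detail=evt,
    )
```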
Ultimately, myriad streams of behavioral data must be brought together, filtered, massaged and correlated as needed to feed the core decision: Has that node (or the user or system on it) gone rogue?
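A toy version of that decision might look like the following, building on the normalized records sketched above. Real BTA systems use learned per-entity baselines rather than a fixed violation count, so treat this purely as a sketch of the correlation step:

```python
# Toy correlation pass over the normalized stream: group events by entity,
# count policy violations in a sliding window, and flag entities that
# exceed a threshold. The window, threshold, and the "violation" action
# prefix are all illustrative assumptions.
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(minutes=10)  # illustrative correlation window
MAX_VIOLATIONS = 5              # illustrative threshold


def find_rogues(events):
    """Yield (entity, violation_count) for entities whose recent behavior
    crosses the threshold within the sliding window."""
    recent = defaultdict(deque)  # entity -> timestamps of recent violations
    for evt in sorted(events, key=lambda e: e.timestamp):
        if not evt.action.startswith("violation"):
            continue
        q = recent[evt.entity]
        q.append(evt.timestamp)
        # Drop violations that have aged out of the window.
        while q and evt.timestamp - q[0] > WINDOW:
            q.popleft()
        if len(q) >= MAX_VIOLATIONS:
            yield evt.entity, len(q)
```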
Data volumes and event diversity will drive use of AI for analysis
Just looking at that list of data streams is exhausting (notwithstanding that it is not exhaustive). In a network of any size, it’s been more than a decade since any of those data streams was something an unaided human could keep track of on even a daily basis, never mind in near real time. And the first several generations of aid brought to bear, exemplified by legacy security information and event management (SIEM) applications, are proving inadequate to the scale and scope of this kind of analysis in a modern environment. The continuing evolution of the threat universe to include more multi-channel, slow-then-fast attack models, coupled with the explosion in the numbers of applications, devices, VMs, and containers, makes old-style SIEMs steadily less able to make the normal-versus-anomalous evaluation at the heart of what ZT needs.
Zero-trust environments’ need for ongoing behavioral threat analytics (BTA) can only be met through the application of AI techniques, usually machine learning. BTA systems have to be able to track network entities without relying solely on IP addresses and TCP or UDP port numbers, and to have some sense of the different classes of entities on the network – e.g. human, software, hardware – to guide their assessment of normal and their thresholds for anomaly. For example, such a system should be able to flag as anomalous anything that would require a human or a physical device like a laptop or an MRI machine to be in two places at once, or in two physically distant places in very short succession.
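As a concrete illustration of that last check, here is a minimal impossible-travel sketch. The speed threshold and the sighting format are illustrative assumptions, not part of any particular BTA product:

```python
# Sketch of the "two places at once" check: given two sightings of the
# same physical entity, flag them if covering the distance between them
# would require implausible speed.
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900.0  # roughly airliner speed; tune per entity class


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def is_impossible_travel(sighting_a, sighting_b) -> bool:
    """True if moving between the two sightings would exceed plausible
    speed. Each sighting is a (timestamp, latitude, longitude) tuple."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([sighting_a, sighting_b])
    hours = (t2 - t1).total_seconds() / 3600
    distance = haversine_km(lat1, lon1, lat2, lon2)
    if hours == 0:
        return distance > 0  # literally two places at once
    return distance / hours > MAX_PLAUSIBLE_KMH
```

A laptop seen in a New York branch office and then, twenty minutes later, on a VPN terminating in Singapore would trip this check; a software entity like a container image would not, which is why the entity-class distinction in the paragraph above matters.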
So, at the core of every ZT environment lies the need for deep visibility into the behavior of every device, person, or system using the network. Without that visibility, ZT environments cannot achieve the dynamic, conditional trust maps that underlie their promise to radically reduce risk.