An Observability Pipeline Could Save Your SecOps Team
Traditional monitoring approaches are proving brittle just as security operations teams need better visibility into increasingly dynamic environments.
Security analysts are struggling with two opposing challenges: too much data and too little of the right data. According to a recent survey from the Ponemon Institute, 71% of respondents cite information overload as a key stress factor in their work; 63% also call out a lack of visibility into the network and infrastructure as a stressor.
Conventional concerns, like the growing complexity of distributed denial-of-service attacks and negligent insiders, complicate today’s security operations center (SOC) environment. Cloud-native applications deployed on containers and other transient infrastructure add another layer of difficulty. Applications, and the infrastructure they run on, are more dynamic and ephemeral than ever, and that brings a level of complexity that traditional monitoring hasn’t had to grapple with.
Shifting to Observable Systems
Over the last 18 months, operations teams, including security operations personnel, have been talking about the shift from static monitoring to dynamic observability. While monitoring focuses on the health of individual components, observability provides fine-grained visibility into why systems behave the way they do. Observability is the characteristic of software, infrastructure, and systems that allows questions about their behavior to be asked and answered. Contrast this with monitoring, which forces predefined questions about systems into a set of blinking dashboards that may or may not tell you what’s going on in your environment.
However, observability isn’t something you can buy. No single tool provides all the benefits of observable systems. Teams must build observable systems, starting by embedding the concept into applications and infrastructure in the form of logs, metrics, and traces. Combining that data with change logs, IT service management data, and network traffic gives teams the big picture while still letting them drill down into the details. Some early implementations of observability also include social media feeds to uncover customer problems with applications before those signals propagate into metrics-based dashboards.
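To make "embedding the concept" concrete, here is a minimal sketch of what instrumented application code can look like: structured, machine-parseable log events that share a trace ID so they can later be correlated with metrics and traces. The service name, event names, and fields are hypothetical, and a real deployment would typically lean on a library such as OpenTelemetry rather than hand-rolled JSON.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("auth-service")  # hypothetical service name

def emit_event(event, trace_id, **fields):
    """Emit one structured log event as JSON so any downstream tool can parse it."""
    record = {
        "timestamp": time.time(),
        "event": event,
        "trace_id": trace_id,  # ties this log line to related events and traces
        **fields,
    }
    log.info(json.dumps(record))

# Hypothetical example: instrumenting a login handler.
trace_id = uuid.uuid4().hex
emit_event("login.attempt", trace_id, user="alice", source_ip="203.0.113.7")
emit_event("login.failure", trace_id, user="alice", reason="bad_password")
```

Because every event carries the same trace ID and a consistent structure, a downstream analyst can ask questions that were never designed into a dashboard, such as tracing every event tied to one failed login.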
Changing Cultures
Complexity is only one factor driving observability. Beyond the shortcomings of traditional monitoring, observability is becoming important as security operations teams work cross-functionally. Today’s SOC teams interact with infrastructure and operations as well as DevOps groups, each with its own tooling and analytics platforms. That kind of interaction is something many security operations teams haven’t done in the past, and it introduces friction around what various data sets mean or what a correct outcome even looks like. Observability helps resolve these tribal views of data by delivering the right data to each team’s respective platform.
Implementing Observability Infrastructure
With instrumented systems, delivering data to the right platforms becomes a challenge. The concept of an observability pipeline, popularized by Tyler Treat, is becoming a critical element in implementing observability because it decouples sources, like applications and infrastructure, from destinations, like log analytics and SIEM platforms. Most organizations have 10 or more tools for security and analytics, and nearly half believe they need more. Abstracting how data is analyzed and used from how it’s collected gives teams flexibility in delivering that data. It also allows for fine-grained optimization of the data streams through redaction, filtering, and volume reduction.
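The sketch below shows the decoupling idea in miniature: events enter from any source, are redacted, filtered, and sampled in flight, and then fan out to whichever destinations match a routing rule. The predicates, sink functions, and sampling rate are illustrative assumptions, not how any particular product (Cribl’s included) implements this.

```python
import json
import random
import re

def redact(event):
    """Redaction: mask anything that looks like a card number before delivery."""
    event["message"] = re.sub(r"\b\d{13,16}\b", "[REDACTED]", event.get("message", ""))
    return event

def keep(event):
    """Filtering: drop low-value debug chatter entirely."""
    return event.get("level") != "debug"

def sample(event, rate=0.1):
    """Volume reduction: pass every warning/error, but only a sample of info events."""
    return event.get("level") != "info" or random.random() < rate

def siem_sink(event):
    print("SIEM:", json.dumps(event))       # stand-in for a real SIEM destination

def analytics_sink(event):
    print("ANALYTICS:", json.dumps(event))  # stand-in for a log analytics platform

# Routing rules decouple sources from destinations: any source can feed the
# pipeline, and each event fans out to every destination whose predicate matches.
ROUTES = [
    (lambda e: e.get("level") in ("warn", "error"), siem_sink),
    (lambda e: True, analytics_sink),
]

def process(event):
    if not keep(event) or not sample(event):
        return
    event = redact(event)
    for matches, sink in ROUTES:
        if matches(event):
            sink(event)

process({"level": "error", "message": "card 4111111111111111 declined for user bob"})
```

The point of the design is that adding a new destination, or trimming what an expensive one receives, is a routing change rather than a change to every instrumented application.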
The last component, after observable instrumentation and the pipeline, is exploring data. Coming from the data and analytics space, I equate traditional monitoring to data warehousing. In both data warehousing and monitoring, you know what data you’re ingesting and the reports or dashboards you’re creating. You have a collection of known questions over known data. It may be expensive and inflexible, but it’s also reliable and well understood.
Observability is more like a data lake. With a data lake, you don’t know what questions you’ll ask, but you fill the lake with data and organize it to prepare for future questions. If a data warehouse is for known questions over known data, a data lake is for unknown questions over unknown data. It’s often helpful to think of a data lake as a question development environment, since you’re creating the questions you want to ask as you explore the data. Unlike a conventional data lake, which serves data scientists and is optimized for SQL and Python, an observability data lake is optimized for search.
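A simple way to see the distinction: a search-oriented store lets you scan raw, schema-less events for a term you never anticipated at ingest time, with no table design up front. The toy sketch below (with made-up events) illustrates the idea; a production system would use an inverted index rather than a linear scan.

```python
import json

# A few heterogeneous raw events, as they might land in the lake (made up).
EVENTS = [
    '{"src": "10.0.0.5", "action": "login", "user": "alice"}',
    '{"host": "web-01", "msg": "TLS handshake failed from 198.51.100.2"}',
    '{"user": "bob", "action": "sudo", "cmd": "cat /etc/shadow"}',
]

def search(term):
    """Schema-less search: scan every raw event for the term, no schema required."""
    return [json.loads(e) for e in EVENTS if term.lower() in e.lower()]

# Questions that weren't known at ingest time are still answerable now.
print(search("alice"))   # what did alice touch?
print(search("shadow"))  # anyone reading /etc/shadow?
```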
Security analysts have too much data to manage and analyze, and they still don’t have all the data they need to gain visibility into their environment. Traditional approaches such as monitoring may have solved some problems in the past, but they are quickly being outpaced by an evolving IT ecosystem led by shifts like cloud-native and container-based infrastructure. Teams need a fresh approach to tackling this complexity, and that’s where observable systems come into play. By building modern systems with observability in mind, you can better future-proof them as questions arise and evolve over time. An observability pipeline is a critical piece of the puzzle: it gives you the flexibility to capture the universe of data you need and deliver it, cleaned and formatted, to the myriad tools your teams rely on.
This column was written with Bryan Turriff, Cribl’s Director of Product Marketing.
Nick Heudecker is the Senior Director of Market Strategy at Cribl. Prior to joining Cribl, he spent over seven years as an industry analyst at Gartner, covering the data and analytics market.