7 Sins of Security Metrics
August 03, 2017
Last week, I was delighted to be back learning and presenting on the Ground Truth track at BSidesLV. Part of the talk I gave on ‘How to make metrics and influence people’ covered the 7 sins of security metrics. The video of the full presentation should be released soon, but in the meantime, here’s a blog sharing lessons I’ve learned as a data scientist working with CISOs to create meaningful security metrics on risk exposure and next best actions to reduce it. Do share any thoughts in the comments section below!
You’re at the water cooler muttering “But that’s EXACTLY the graph they asked for.” Enter Sin#1…
“Get me a plot of x versus y, color-coded by z!” They sounded so sure when they asked you, so you created what they wanted, showed it to them, and they hated it.
Ok, a bit melodramatic. But in my experience, building the metrics people ask for rarely delivers the insight they want. Why? Often, when someone asks for a metric, they are in the process of working out if there’s value in a question they’d like to ask of their data – and until they see the result, they don’t know if the output will give them what they’re after; AKA the “I’ll know it when I see it” problem.
As data scientists / analysts, we need to build metrics that address the questions our stakeholders need answering. If they aren’t entirely clear on what those questions are, which questions are most valuable to answer, or whether the metric they’ve asked for is the best way to answer one, the process of iterating through analysis in the hope of striking gold will be excruciating for everyone involved.
If stakeholders don’t have enough definition around the problem they are trying to solve (this is more common than you’d think!), we need to help them. Because if we just build the plot they ask for, we’re essentially crossing our fingers that the work we do will be valuable.
“Personally, I find this fascinating.” Oh, the woe. It’s Sin#2…
Ah yes. The discovery of really interesting stuff that no one can do anything about.
If we don’t produce metrics that are engaging for our audience and useful from their perspective … If a team can’t take our analysis, act on it, and see an improvement … Well, then our charts will be disheartening. And no one likes a metric that makes them miserable.
As people who love analyzing data, it can be very easy to run down metrics rabbit holes, digging around in data indefinitely, exploring things that look like they could uncover some new level of understanding in the information we have. (This is also true when you have done the hard work to create a great set of metrics, but mountains of possible analysis options remain.)
We always need to keep the goal of a metric in mind when we spend time picking data apart. This means both avoiding things that, in retrospect, were pet projects, and knowing when we’ve reached ‘good enough for now’ on the level of resolution we have on a problem.
The people funding our efforts will have patience if they can see progress, but not if they end up with 30 plots that may be intellectually fascinating, but fail to provide high value insights they can act on.
“It’s ‘actionable insight’, so the team will find it really useful.” Because it’s not like security teams have enough stuff on their to-do list already, it’s Sin#3…
A problem with the over-abused word ‘actionable’ in security marketing is that there’s a big difference between something that’s actionable and something that’s worth acting on.
Good security metrics don’t enumerate all the possible things that could be changed to make an estate more secure. They get stakeholders engaged with problems they have – and that they have the power and budget to solve. Ideally, they also show a clear set of actions that can deliver the greatest improvement to security performance or risk exposure.
If metrics deliver a prioritized list of 1000 actions, it’s likely there will be no buy-in from departments already swamped with lists of things to do. (Sure, your 1000 things may be added to their list… just right at the bottom). A single action that deals with 1000 problems will get far more traction. And yes, developing metrics that do this is far from trivial. More to come in a future blog post on this topic…
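To make this concrete, here’s a minimal sketch of the idea – hypothetical column names, pandas assumed – that collapses a long list of vulnerability findings into a short list of remediation actions, ranked by how many findings each single action would clear:

```python
import pandas as pd

# Hypothetical findings export: one row per (host, vulnerability) pair.
findings = pd.DataFrame({
    "host":        ["web01", "web02", "db01", "db01", "web01", "hr-laptop-7"],
    "cve":         ["CVE-2017-0144", "CVE-2017-0144", "CVE-2017-5638",
                    "CVE-2017-0144", "CVE-2017-5638", "CVE-2016-7255"],
    # The patch or config change that remediates each finding.
    "remediation": ["MS17-010", "MS17-010", "Struts 2.3.32 upgrade",
                    "MS17-010", "Struts 2.3.32 upgrade", "MS16-135"],
})

# Rank remediations by how many findings each single action would clear,
# rather than handing the team one row per finding.
actions = (
    findings.groupby("remediation")
    .agg(findings_cleared=("cve", "size"), hosts_affected=("host", "nunique"))
    .sort_values("findings_cleared", ascending=False)
)
print(actions)
```

A table like this turns ‘here are 1000 things to fix’ into ‘apply these few changes, in this order’ – which is a conversation a swamped team can actually have.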
“I think a decrease in this percentage means that thing we did was good … right?” Welcome to the ambiguity of Sin#4…
Ok, so we’ve got a high-value, actionable metric that addresses something it’s important to change! Hooray! But will our metric track the full impact of our actions? Can external factors affect the data and make things look better (or worse) than they are?
For example, a good performance metric should clearly reflect action we’ve taken to improve it. If the scope of such a metric is too broad, a change in its value may be ambiguous and, therefore, hard to attribute. Say we’re using the total number of vulnerabilities on our estate as a proxy for our patching rate: Patch Tuesday will inflate this number and make our performance look like it’s gotten worse, even if the number of vulnerabilities patched per week has remained constant. (Note: this is not a good metric for this scenario!)
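Here’s a rough sketch of the difference, using made-up vulnerability records with hypothetical first_seen / patched_on fields. Counting total open vulnerabilities mixes our remediation effort with whatever happened to be disclosed that week; counting vulnerabilities patched per week only moves when we act:

```python
import pandas as pd

# Hypothetical per-vulnerability records: when each instance appeared on the
# estate and when (if ever) it was patched.
vulns = pd.DataFrame({
    "first_seen": pd.to_datetime(["2017-07-03", "2017-07-04", "2017-07-11",
                                  "2017-07-11", "2017-07-11", "2017-07-18"]),
    "patched_on": pd.to_datetime(["2017-07-06", "2017-07-12", "2017-07-13",
                                  None, None, None]),
})

# Ambiguous metric: total vulnerabilities still open at the end of each week.
# Patch Tuesday adds new rows, so this can climb even while patching continues.
weeks = pd.date_range("2017-07-09", "2017-07-23", freq="W-SUN")
open_per_week = pd.Series(
    {week: ((vulns["first_seen"] <= week) &
            (vulns["patched_on"].isna() | (vulns["patched_on"] > week))).sum()
     for week in weeks}
)

# Clearer metric: vulnerabilities patched per week - it only changes when we act.
patched_per_week = (
    vulns.dropna(subset=["patched_on"])
         .groupby(pd.Grouper(key="patched_on", freq="W-SUN"))
         .size()
)

print(open_per_week)
print(patched_per_week)
```

With this toy data, the ‘open’ count rises every week even though patching never stopped – exactly the ambiguity we’re trying to avoid.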
If we’re not measuring something that changes predictably when we make progress, we’ll find ourselves having to endlessly explain metrics to people, and the whole point of a metric is to give stakeholders clarity on the situation.
“Our operations teams use these metrics, the CISO’s metrics focus on something else.” Beware the divergence of Sin#5…
Sure, a metric can be broken down differently for different stakeholders – tailored in granularity and scope – but the metrics themselves cannot be ‘different’: there must be a common thread running through them.
There are two aspects to this. The first is a shared view – all the way from the Technology Risk Committee to IT Operations teams – of what a set of metrics relating to a risk or performance measure tells them about their options for action and the priorities they need to act on.
The second is what we call “data lineage” within this shared view. Data lineage is, essentially, the ability to drill down from a high-level metric (e.g. the one Execs have on their dashboard) all the way to the raw records metrics are built from (i.e. where actions are taken at operational level).
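As a toy illustration (hypothetical field names throughout): the Exec-level figure and the operational work list are built from the same raw records, so a change in the headline number can always be traced back to the specific rows behind it:

```python
import pandas as pd

# Raw records - the level at which operations teams actually take action.
hosts = pd.DataFrame({
    "host":          ["web01", "web02", "db01", "hr-laptop-7", "hr-laptop-9"],
    "business_unit": ["Retail", "Retail", "Retail", "HR", "HR"],
    "patched":       [True, False, True, False, True],
})

# Exec-level metric: patch coverage (%) per business unit, built from those rows.
coverage = hosts.groupby("business_unit")["patched"].mean().mul(100).round(1)
print(coverage)

# Drill-down: the operational view behind the HR number - the exact hosts to fix.
print(hosts[(hosts["business_unit"] == "HR") & (~hosts["patched"])])
```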
Unless you nail this, you end up with a disconnect between the metrics Executives are given to make budget and resource decisions, the actions that are taken at operational level, and the ability to link the two from one reporting period to the next.
“We’re confident that the data is complete.” But of course you are! It’s Sin#6…
A tendency to ‘trust, not verify’ data sources that are curated by someone (a database that has stripped out ‘irrelevant’ fields from an API, the CMDB that is considered a golden source of truth) can lead to dangerous assumptions. And we know what assumptions make out of you and me…
The thing is that people often have very strong feelings about data they either own or curate. It’s personal to them, and they’ll often balk at suggestions that it may not be accurate. However, if we don’t test our assumptions about a data source’s accuracy and trustworthiness, we can end up fundamentally undermining our analysis. At best, this leads to arguments about accuracy from people affected by a metric, and subsequent re-analysis that takes up valuable time. At worst, it leads to a collapse in confidence in all future analysis.
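A minimal sketch of what ‘verify before you trust’ can look like in practice, assuming a hypothetical CMDB export with hostname, owner, environment and last_seen fields – a handful of cheap checks like these will surface gaps before they quietly undermine whatever metric gets built on top:

```python
import pandas as pd

# Hypothetical CMDB export we've been told is the golden source of truth.
cmdb = pd.DataFrame({
    "hostname":    ["web01", "web02", "web02", "db01", "hr-laptop-7"],
    "owner":       ["app-team", None, None, "dba-team", "hr-it"],
    "environment": ["prod", "prod", "prod", None, "corp"],
    "last_seen":   pd.to_datetime(["2017-08-01", "2017-08-01", "2017-06-02",
                                   "2017-08-02", "2017-05-14"]),
})

# Cheap sanity checks to run before building metrics on top of this data.
report_date = pd.Timestamp("2017-08-03")
checks = {
    "duplicate hostnames":  cmdb["hostname"].duplicated().sum(),
    "missing owner":        cmdb["owner"].isna().sum(),
    "missing environment":  cmdb["environment"].isna().sum(),
    "not seen in 30+ days": (cmdb["last_seen"] < report_date
                             - pd.Timedelta(days=30)).sum(),
}

for check, count in checks.items():
    print(f"{check}: {count} of {len(cmdb)} records")
```

None of these checks prove the data is right, but each one is a concrete question to put to the data owner rather than an argument after the metric has shipped.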
“I think this data would look lovely in a Pie Chart” AAAAARGHHHH! Avert your eyes! It’s Sin#7…
You did all this great analysis and then presented it in a pie chart?! Pies are for eating, not for charting. If you ever want to demonstrate this, here’s a great graphic.
With that outburst over, there is a serious point here.
Everyone has a preference for how they like to receive information. Stacked bar charts, doughnut charts… the list of visualizations people ask for that makes data scientists grit their teeth is lengthy.
To communicate risk or security performance with clarity, we have to be willing to fight our corner about why a particular visualization is poorly suited to communicating the information decision makers need, whether at operational or strategic level. We also need to select visualizations and construct data journeys that give people the insight they need.
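For what it’s worth, here’s a small sketch of the usual fix – matplotlib assumed, made-up numbers – showing the same category breakdown as a sorted horizontal bar chart, where near-identical values can actually be compared, rather than as pie slices:

```python
import matplotlib.pyplot as plt

# Made-up breakdown of open findings by category, sorted smallest to largest.
data = sorted([("Missing patches", 412), ("Weak configurations", 358),
               ("Expired certificates", 341), ("Unsupported software", 296),
               ("Open ports", 287)], key=lambda item: item[1])
labels, counts = zip(*data)

# As pie slices these five categories would be almost indistinguishable;
# as sorted bars the differences are immediately readable.
positions = range(len(labels))
plt.barh(positions, counts)
plt.yticks(positions, labels)
plt.xlabel("Open findings")
plt.tight_layout()
plt.show()
```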
But if we don’t help our stakeholders understand the visualizations they are looking at, if we don’t show them how they link to decisions, if we don’t give them the context for our analysis and how we’re presenting it, we’re expecting our audience to make leaps in understanding that we often take for granted after staring at the data for weeks.