What makes a good security metric?
July 07, 2021
‘I think everyone who looks into the whole world of risk management will come to the point that it’s all about controls, and how to measure their effectiveness’, said cyber and risk expert Andreas Wuchner in our first live Metric of the Month panel discussion.
To put this article into context, that very panel – made up of seasoned CISOs and Metric of the Month collaborators Andreas Wuchner, Andrew Jaquith, David Fairman, Jim Doggett, and Raffael Marty – discussed their favourite security metrics and some of the best metrics to use when protecting a large enterprise. Straight off the bat, we asked: ‘what makes a good security metric?’
Andreas spoke about a simple, three-step approach to security measurement:
- What to measure.
- Data quality and automation.
- Thresholds: what defines ‘good’.
What to measure.
‘First you need to agree on a set of measurements’, Andreas noted. Many organisations define their security metrics and controls by relying on key performance indicators. Andreas disagrees with this approach, though: in discussions with other CISOs, he has found that many don’t realise KPIs make up only a small fraction of the controls space. There are also key risk indicators, assessments, and much more.
An alternative route would be to use frameworks like NIST, ISO or SCF for inspiration. But you should start small. ‘You don’t need to heal the world’, Andreas said, ‘you can start with a handful of NIST controls. You don’t need to go through 500 controls to get an end-to-end view, but rather get an overview to cover what you need.’
In that regard, frameworks are valuable, but… ‘You may have those controls defined, but how do you really measure them? How can you say, with a good feeling, whether a control is effective or not?’, asks Andreas.
Data quality and automation.
‘The second thing is to really look at your data sources.’ During the panel discussion, many of our speakers noted the importance of automation when creating security metrics and measuring the effectiveness of your controls. If it’s dependent on people to collate all that disparate data together, the trust and integrity of the data may come into question. Not because you don’t trust those people, but because it is a lengthy, error-prone process.
Jim Doggett said: ‘You’ve got to find a way to automate this. It’s one of the oldest and most basic areas, but each time, like I used to do, you take the output from different scanners throughout the world, try to consolidate them, eliminate duplication, make sure I know where the gaps are and where we didn’t get coverage, then enrich that, and the month’s already over and I’m reporting again before I’ve even done all that work. So automation, I think, is key in this area.’
Andreas: ‘Use automation. Get out of that game where you have people sending you Excels or PowerPoints which you manually integrate. Nonsense. It’s a road to failure.’
That way, the underlying data for your metrics is more reliable, and you can therefore make meaningful decisions on these metrics, with confidence.
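The consolidation Jim describes can be sketched in a few lines. This is a minimal, illustrative example, not any speaker's actual tooling: the scanner names, asset names, and record fields are all hypothetical, and it treats an asset with no findings from any scanner as a coverage gap, which is a simplification.

```python
# Sketch: merge findings from multiple (hypothetical) scanner exports,
# de-duplicate them, and flag in-scope assets no scanner reported on.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    asset: str      # hostname or asset ID
    cve: str        # vulnerability identifier
    severity: str   # e.g. "high", "medium", "low"

def consolidate(scanner_outputs: list[list[Finding]], inventory: set[str]):
    """Merge all findings, drop duplicates, and report scan gaps."""
    merged = set()                      # frozen dataclass -> set de-dupes
    for output in scanner_outputs:
        merged.update(output)
    scanned = {f.asset for f in merged}
    gaps = inventory - scanned          # simplification: no findings = not scanned
    return sorted(merged, key=lambda f: f.asset), gaps

scanner_a = [Finding("web-01", "CVE-2021-0001", "high")]
scanner_b = [Finding("web-01", "CVE-2021-0001", "high"),   # duplicate
             Finding("db-01", "CVE-2021-0002", "medium")]
findings, uncovered = consolidate([scanner_a, scanner_b],
                                  inventory={"web-01", "db-01", "mail-01"})
print(len(findings))   # 2 unique findings after de-duplication
print(uncovered)       # {'mail-01'} — the coverage gap
```

In a real pipeline each scanner export would arrive via its own API or file format, but the core steps – merge, de-duplicate, diff against inventory – are exactly the ones that become error-prone when done by hand each month.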
‘If it’s automated and we know what to measure, what is the threshold? What defines “good”?’ ‘Good’ is hard to define, though. It depends upon many factors. Who are the stakeholders? What is the risk appetite? How mature is your programme? What are your priorities?
Andreas put it into perspective: ‘That’s where often things go ballistic.’ An IT operations team will probably prefer a less harsh threshold, so the workload stays manageable. A risk team, meanwhile, might push for tighter controls because they want to run a tight ship.
You need to find a balance that works for your organisational priorities.
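One way to make that balance explicit is to encode each stakeholder's threshold alongside the metric. The sketch below is purely illustrative – the metric (days to patch critical vulnerabilities) and the numbers are assumptions, not figures from the panel:

```python
# Sketch: the same metric value rated against different thresholds,
# reflecting different risk appetites. All numbers are made up.
THRESHOLDS = {                 # max acceptable days to patch criticals
    "it_operations": 30,       # looser: keeps the workload workable
    "risk_team": 7,            # tighter: runs a tight ship
}

def rating(metric_days: float, appetite: str) -> str:
    """Rate a patch-latency reading against a stakeholder's threshold."""
    limit = THRESHOLDS[appetite]
    if metric_days <= limit:
        return "green"
    if metric_days <= 2 * limit:
        return "amber"
    return "red"

print(rating(14, "it_operations"))  # green under the looser threshold
print(rating(14, "risk_team"))      # amber under the tighter one
```

The point is not the specific tiers but that ‘good’ is a negotiated, recorded value per stakeholder, not a single universal number.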
The last word.
Security metrics are all about control effectiveness. A good security metric measures how effective a control is. To do that, according to Andreas, you need to get clarity on what you measure, data quality, automation, priorities, and thresholds. ‘With that, you have a really good start. If you follow these basic standards, you get to a very good start. Less is more. Grow over time.’
In the aforementioned panel discussion, once we had established what makes a good security metric, our speakers went on to talk about some of their favourites. Andreas himself spoke about measuring cyber awareness culture because it’s so hard to do: ‘Technology is often relatively easy to measure, but the human factor is very, very difficult’.
Andrew Jaquith spoke about toxic combinations – an important area of risk lying between security and finance. Andy likens privileges to medications, in that a bad combination can kill the host.
David Fairman discussed the importance of security controls coverage: ‘You can have a control that’s designed adequately and operating effectively but if it’s only implemented in a subset of the environment you need to control, clearly you have a gap. So for me it’s a direct indicator of a, of a risk exposure area, or of residual risk.’
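David's coverage metric reduces to a simple ratio: the share of in-scope assets where the control is actually implemented, with the remainder being direct residual risk. A minimal sketch, with hypothetical asset and control names:

```python
# Sketch: control coverage as a fraction of the in-scope estate.
# Asset names and the "EDR" control are illustrative assumptions.
def control_coverage(in_scope: set[str], controlled: set[str]) -> float:
    """Fraction of in-scope assets where the control is implemented."""
    if not in_scope:
        return 1.0
    return len(in_scope & controlled) / len(in_scope)

in_scope = {"web-01", "web-02", "db-01", "mail-01"}
edr_deployed = {"web-01", "web-02", "db-01"}

cov = control_coverage(in_scope, edr_deployed)
print(f"coverage: {cov:.0%}")            # coverage: 75%
print(sorted(in_scope - edr_deployed))   # ['mail-01'] — the residual gap
```

Note that the denominator here is the asset inventory – which is why Raffael's point below about inventory being context for almost every other metric matters: if the in-scope set is wrong, the coverage figure is wrong too.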
Jim Doggett spoke about vulnerability outlier analysis as a form of risk prioritisation. ‘We’ve been dealing with vulnerabilities and patch management forever… but it comes down to four things: risk, efficiency, automation, and coverage.’
And finally, Raffael Marty spoke about perhaps one of the most crucial security metrics: asset inventory. ‘How many assets do you have right now on your network? It’s a very, very hard question… but it’s also one of the most fundamental metrics because you use it as context to almost all your other metrics.’
If you’re enjoying our Metric of the Month content, subscribe below.