
Principles for implementing and maturing a security metrics programme

March 02, 2020

Leila Powell

When you’re building a security metrics programme, it’s important to establish and commit to some guiding principles early on in the process.

As I work with customers, I try to avoid implying there are hard-and-fast rules that must be followed. Every organisation is unique and every metrics programme is different. What works for some will not work for others. Instead, it is better to agree on core principles that can help you make better decisions. In this blog, we’ll look at some of these principles: how to decide what to measure, how to get started, what to do as you go along, and how to improve over time.

How do you decide what to measure?

To ensure you really understand what success looks like, you need to ask yourself a few important questions.

Why are you doing this?

Think about your objectives and the programme’s purpose. Metrics are the means, not the end. To develop the right metrics, you need to be clear about what your objectives are and how your metrics will help achieve them.

Metrics aren’t going to help you fix everything at once, but they can help you measure the security performance of key areas, or identify areas you might want to look into. The trick is deciding on those key areas and focusing your measurement efforts there, rather than spreading yourself too thinly.

If an auditor has provided you with a list of areas for improvement, that gives you a good starting point. The board may also have highlighted high-priority issues they want you to investigate – maybe the thought of a rogue superuser is keeping them awake at night and they want you to tighten up your privileged access management programme.

If you’re starting from scratch and are creating a baseline for your performance, you might look to a framework, like NIST, as a starting point. But bear in mind that frameworks cover large swathes of the security domain – there’s no way you should start by trying to measure all that. You need to prioritise and choose the areas that are the most important.

You don’t want to rush in, ingesting everything and seeing what happens. It’s essential to understand the value that your data is going to give you. If you can’t say what you need to measure and why, you’re just measuring stuff for the sake of it.

Maybe you want to gain more visibility of your IT estate? Maybe you need a way to track and report on some recent audit points? Maybe you have multiple objectives?

If so, be clear on the relative importance of each and try to figure out whether you’re taking on too many competing priorities at once.

There is no ‘one metric to rule them all’, so proceed with caution if you think a single metric will serve several different objectives. More objectives usually mean more metrics!

The important thing is to avoid trying to measure everything right away. 

What data do you have?

Once you have your proposed list of priority areas to measure, and an idea about what metrics you’d like to use, you need to review your data. In reality, the choice of metrics should be about the interplay between the data and the objectives – you can’t figure out an effective list of metrics without considering both of these ingredients.

Look at the data sources you think you want to use. Who owns them? Are they cloud or on-premises? How easy will they be to access?

If there’s a set of data that might be useful, but it will take six months to get access because you don’t know who owns it and it’s on another part of the network, take that into account. 

The more preparation and due diligence you undertake, the more likely you are to set off on the right foot. In our experience, it makes sense to start by looking at the intersection of what’s important and what you can get data for quite easily. 

Basically, if you’re just starting out, pick the low-hanging fruit. If you start small and build, it will help you learn as you go. Plus, it makes it easier for stakeholders to sign off!

Who is your audience?

The final ingredient in figuring out what to measure is considering your audience. Different stakeholders have different reporting needs. You will need to alter your approach depending on whether these metrics will be shown to the security team, the board, GRC or external auditors.

This usually ties in quite closely with your objectives. For example, if you’re tracking progress on audit points, you’ll show your metrics to the auditor, but your internal teams will likely use the same metrics to track their own improvement. You may want to repackage the same data differently for these two audiences.

There are two main facets to choosing metrics for your audience – the granularity of information required and the lens through which the information is viewed.

In terms of granularity, my rule of thumb is that as someone’s remit becomes broader, they need less depth in each area. Going back to our audit point example, if one of those points related to your vulnerability management process, the vulnerability manager will want very granular information so they know how to make the right improvements, whereas the auditor reviewing across all security verticals may just want to see that your status is satisfactory when they return.
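To make that distinction concrete, here’s a minimal sketch of serving the same findings data at two levels of granularity. It assumes a pandas DataFrame with invented columns (host, severity, days_open) and an assumed 30-day SLA – not a real scanner schema.

```python
# Illustrative only: one findings dataset, two levels of granularity.
import pandas as pd

findings = pd.DataFrame({
    "host": ["web01", "web01", "db01", "app02"],
    "severity": ["critical", "high", "critical", "medium"],
    "days_open": [45, 12, 90, 5],
})

# Vulnerability manager: full detail, oldest findings first, to drive remediation.
detail = findings.sort_values("days_open", ascending=False)
print(detail)

# Auditor: a single status against an agreed SLA (threshold is an assumption).
sla_days = 30
breaches = (findings["days_open"] > sla_days).sum()
print(f"SLA breaches: {breaches} -> {'unsatisfactory' if breaches else 'satisfactory'}")
```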

When you think about the lens through which the data is viewed, think about what matters most to this audience. How can you make them care about the information your metrics are conveying? Or rather, how can you change your metrics and presentation so they can’t not care about it? (Hint: if they don’t care, you’re doing it wrong.)

The strongest example of this is the need to translate technical, operational-level metrics into metrics that are meaningful to the board. They need to see something that tells them about the impact on their most business-critical infrastructure or processes. That said, this is a challenge that probably deserves a post all of its own.


How do you get started?

What is your current maturity?

It’s unlikely that you’ll be starting completely from scratch. If you have some metrics in place, think about how effectively they answer the questions you are asking. If they aren’t meeting requirements, why not and what can you learn from that? Hold a retrospective – try to identify something to stop doing, start doing and continue doing. You are the experts on what has and hasn’t worked in your organisation when it comes to metrics!

What tools do you need?

If you don’t have an automated data platform of some sort, you’ll need three key tools.

  1. Somewhere for the data to live, whether that’s in spreadsheets, a data lake or a good old SQL database. 
  2. Some way of processing the data (often closely linked to the data store you’ve chosen) – ideally one that is repeatable and can be scripted.
  3. A tool for visually representing that data, which could be any number of data visualisation tools or just the charts built into your spreadsheet program.

In my opinion, everyone should be moving towards automation – this is just good practice for any data analysis you want to repeat on a regular basis. But, if automation is not an option for you at this stage, it is still much better to produce a small, select set of metrics manually than to have none at all.
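For illustration, here’s a hedged sketch of all three tools working together in one short, repeatable script: a CSV standing in for the data store, pandas for the processing, and matplotlib for the visualisation. The file name and columns (month, patched, total) are assumptions.

```python
# A minimal sketch of the three tools in one scripted, repeatable step.
import pandas as pd
import matplotlib.pyplot as plt

# 1. Data store: a simple CSV (could equally be a SQL database or data lake).
df = pd.read_csv("patching.csv")  # assumed file with month, patched, total columns

# 2. Processing: derive the metric the same way every time.
df["patch_rate"] = df["patched"] / df["total"]

# 3. Visualisation: a basic chart, standing in for any dashboard tool.
df.plot(x="month", y="patch_rate", kind="line", title="Monthly patch rate")
plt.savefig("patch_rate.png")
```

Even a script this small beats an ad-hoc spreadsheet edit, because the calculation is identical on every run.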


What do you need to do as you go along?

Once your metrics programme is up and running, here are some things to keep in mind. 

Maintain open communication between teams

Security metrics can sometimes create tension. The security team may be reporting on something that another team is responsible for, like patching vulnerabilities, which is often managed by IT. That team might not want their work presented in a certain way. Ultimately, nobody wants to feel like their work is being misrepresented.

You need to communicate effectively with anyone whose work is featured in the data, and the data should be displayed in a way that accurately portrays what’s really going on. This is another area where having good QA and a robust process helps – everyone should be looking at the ‘same version’ of the data.

Equally, the first time someone sees a metric that relates to their team should never be in an important meeting with senior stakeholders! Everyone should have access to the metrics on a regular basis. If the metrics assessing their work are not also informing their work, how will they ever move the needle?

You may also need the help of other people. Different tools or systems may produce data that needs aggregating and de-duplicating (that process may actually be the toughest part of getting reliable, accurate metrics, but we’ll get onto that).

That’s where communication comes in. You need to find the best way to interpret, understand and work with each data set, and the best way to do that is to speak to the people who are already familiar with it. For example, why not go and speak to your vulnerability management team about their recommendations for extracting and processing data from their scanner? If you involve other teams in the process as early as possible, you can not only utilise their expertise but also improve buy-in, avoiding potential disagreements down the road about how data is presented.
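As a toy illustration of why those conversations matter, here’s a sketch of de-duplicating asset records from two hypothetical sources that disagree on hostname formatting. The field names are invented, and real matching logic is usually messier.

```python
# Illustrative only: reconciling asset records from two imaginary tools.
import pandas as pd

scanner = pd.DataFrame({
    "hostname": ["WEB01.corp.local", "db01"],
    "last_scanned": ["2020-02-20", "2020-02-21"],
})
cmdb = pd.DataFrame({
    "hostname": ["web01", "app02"],
    "owner": ["web team", "app team"],
})

def normalise(name: str) -> str:
    # Lower-case and strip the domain suffix so records line up.
    return name.lower().split(".")[0]

for df in (scanner, cmdb):
    df["key"] = df["hostname"].map(normalise)

# An outer join keeps assets only one tool knows about – often the most
# interesting rows, since they reveal coverage gaps.
assets = scanner.merge(cmdb, on="key", how="outer", suffixes=("_scanner", "_cmdb"))
print(assets)
```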

Keep your processes and reporting consistent

Once you have sign-off on your metrics and the support of other teams, you need a consistent way of producing analysis.

In an ideal world, that would be some kind of automated solution (whether Panaseer or otherwise). But if not, there needs to be a clear set of procedures and processes to follow. To maintain trust in the data sets, you need auditability and traceability.

You don’t want data sets emailed around among multiple teams, with random people applying filters, adding stuff here or losing things there. Someone removes a line, tidies this, modifies that… 

This kind of curation may be done with the best intentions (or with no particular intention at all) but it breaks the data lineage and can lead to an erosion of trust when numbers don’t add up across different reports. It’s essential to ensure quality control so you’re making the same calculation every time.
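One lightweight way to get that auditability – sketched below with invented file names and an assumed days_open column – is to log a checksum of the input alongside every number you produce, so any figure in a report can be traced back to the exact data that generated it.

```python
# A small sketch of basic data lineage for a manually produced metric.
import datetime
import hashlib
import json

import pandas as pd

source = "vuln_export.csv"  # assumed export file
df = pd.read_csv(source)
metric = float((df["days_open"] <= 30).mean())  # assumed column and SLA

record = {
    "metric": "pct_findings_within_sla",
    "value": metric,
    "source_file": source,
    "source_sha256": hashlib.sha256(open(source, "rb").read()).hexdigest(),
    "generated_at": datetime.datetime.utcnow().isoformat(),
}

# Append-only log: anyone can later verify which data produced which number.
with open("metrics_log.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```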

Speaking of ‘every time’, as your security programme matures, you will be able to figure out the most effective cadence. This is particularly important if the metrics are being produced manually.

If there’s no budget to invest in automation, the burden falls on your team. You need to find the sweet spot between the time taken to produce metrics and how timely those metrics are. A study found that security teams spend around a third of their time reporting on security rather than actively improving it – you really don’t want this getting out of hand.

How up-to-the-minute can you make your reporting without overburdening your team?


How can you improve over time?

As I mentioned in my last post, it can be tough to change a metrics programme once it is in place. Getting stakeholder approval on objectives and specific metrics takes time, so rolling back on those decisions can undo a lot of hard work.

But, we need to keep in mind that the point of running a metrics programme is to better understand your security posture and then improve it. If the remediation actions that you take based on your metrics are successful, your priorities will change. Eventually, the metrics that used to be your focus will become less important and you will need to move onto other things.

For example, one of your highest-priority metrics might be the roll-out of new endpoint protection technology – it’s really important to get this to 100% (or whatever your accepted tolerance is), not least because you’re likely paying for a licence per device!

As you achieve this goal, the metric becomes less important. That’s not to say you stop measuring – it should still be monitored for drift – but it is no longer front and centre on your dashboard.
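The calculation behind that metric is deliberately simple – a coverage ratio checked against your tolerance. A tiny illustrative sketch, with made-up counts and threshold:

```python
# Hypothetical coverage check for the endpoint roll-out example.
devices_total = 5200        # assumed count from your asset inventory
devices_with_agent = 5044   # assumed count from the endpoint console

coverage = devices_with_agent / devices_total
tolerance = 0.98            # your accepted tolerance, e.g. 98%

status = "on target" if coverage >= tolerance else "needs attention"
print(f"Endpoint coverage: {coverage:.1%} ({status})")
```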

The key metrics should always be the key problems or questions you are addressing now. With this in mind, it is important to establish an ethos of iteration and evolution, baking in this concept from the beginning so all stakeholders are aware that changes will happen. Scheduling a regular metrics review, perhaps yearly, is an excellent way to help improve your metrics programme over time and signal to people that change is the norm.


As I said at the start of this piece, no two metrics programmes are the same. Every organisation will have different priorities. But no matter what your goals are, asking fundamental questions is important. What are we setting out to achieve? Are we doing everything we can to make that happen? How can we improve?

The principles I’ve covered should help with this. If you have any others that I haven’t thought of, I’d love to hear them. And if you’ve got questions about anything in this post, feel free to get in touch with me on Twitter or LinkedIn.