
The 5 Stages of Metrics Maturity

March 25, 2020

The Panaseer Team

What are the five stages of metrics maturity, and how does your programme measure up?

You see a man in the street. He looks like he’s lost something. Wanting to help, you go over and ask him what he’s looking for.

“I’ve lost my watch,” he tells you.

“Where did you lose it?”

“Over there, on the other side of the road.”

“So why are you looking on this side of the road?”

“Because the light is better.”

Working with security metrics can be a bit like this.

Metrics programmes are often limited by what they can realistically achieve, and that’s fair enough. As we’ve mentioned before, measuring something is still better than measuring nothing.

But like the person looking for their watch, if you only look where you can see clearly, you’re never going to know if you are addressing your biggest risks. Perhaps this is why, according to our research, 89% of large enterprises have concerns about the validity of their own security data?

A few weeks ago, we explored why iterating and improving your metrics programme over time is so important. The more mature and developed your programme becomes, the more it can tell you and the fewer areas of poor-to-no visibility you’re left with. Improving your metrics programme highlights problem areas, and focussing on these areas improves your overall posture.

In this piece, we’re going to look at five different stages of metrics maturity. These range from 100% manual and subjective questionnaires to fully automated and predictive Continuous Controls Monitoring.

But before we do that, let’s quickly discuss why metrics maturity is so important.

 

If you don’t trust your data, who can?

Enterprise organisations are fielding more requests for metrics than they used to, from regulators, risk and compliance, internal audit, external audit and the Board. These requests are more complex and require more detailed answers than ever.

As mentioned, these are not simple questions with simple answers, so there is often a period of back-and-forth as the people involved try to understand the request, look for a solution and ensure that all of the necessary regions, business lines, users, processes, locations, assets, applications and datasets have been taken into account.

This back-and-forth can involve quite a few different people. The data may be passed between different groups and stakeholders. Manual alterations may be made.

Eventually, the data has been handled by so many people that it’s hard to vouch for its accuracy. And if the regulators have questions or require further substantiation, the data will be very hard to reproduce.

Manual processes aren’t just time-consuming; they’re unreliable, error-prone and irreproducible. Metrics maturity is focussed on automating metric production, so that it takes less time, involves fewer people and leaves a clear audit trail.

With that in mind, let’s look at the five stages of metrics maturity.

 

Basic

These are entirely subjective and manual risk assessments that provide a point-in-time view of security posture. The findings are based on Q&As conducted with relevant staff.

All of the answers from the surveys are recorded in a massive spreadsheet. There’s no way to reproduce the data or dig further into it, because it’s really just a matter of opinion.

Because of the significant amount of time and effort that this approach requires, assessments are conducted as infrequently as possible.

Thankfully, this is pretty rare these days…

 

Elementary

Data is pulled from security tools manually on a periodic basis. This data is then aggregated by security staff to provide a view of security posture at that point in time – but only that point in time.

If something catastrophic happened between your most recent assessment and the one before it, it could be missed, unless it was still having an effect on the metric.

For instance, a quarterly report might find that a metric has improved slightly compared with last quarter’s report. What it can’t show you is that it plummeted for a week about a month ago and nobody noticed.

Plus, the process is manual, which means that there may be countless human errors hidden in the data analysis.

 

Intermediate

At this point, the reporting moves from siloed metrics that are limited to single security areas to unified metrics that are the result of correlation across tools and security areas.

These unified metrics are much better able to communicate the posture of the organisation as a whole, as opposed to just measuring the effectiveness of specific tools or the organisation’s ability to manage specific security areas.
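To make that concrete, here is a minimal sketch of the kind of correlation involved, written in Python with pandas. The file names, columns and tools referenced (a CMDB asset export, a vulnerability scanner export and an EDR agent inventory) are illustrative assumptions rather than any particular product’s format.

import pandas as pd

# Hypothetical exports; file names and columns are illustrative assumptions,
# not any specific tool's real output format.
assets = pd.read_csv("cmdb_assets.csv")    # hostname, business_unit
scanned = pd.read_csv("vuln_scan.csv")     # hostname, last_scan_date
agents = pd.read_csv("edr_agents.csv")     # hostname, agent_version

# Correlate across tools: the asset inventory is the denominator, and each
# tool contributes a flag for the hosts it actually covers.
unified = assets.merge(
    scanned[["hostname"]].drop_duplicates().assign(scanned=True),
    on="hostname", how="left"
).merge(
    agents[["hostname"]].drop_duplicates().assign(edr_installed=True),
    on="hostname", how="left"
)
unified["scanned"] = unified["scanned"].fillna(False).astype(bool)
unified["edr_installed"] = unified["edr_installed"].fillna(False).astype(bool)

# A unified metric: the proportion of known assets covered by *both* controls,
# rather than each tool reporting its own siloed count.
coverage = (unified["scanned"] & unified["edr_installed"]).mean()
print(f"Assets covered by both scanning and EDR: {coverage:.1%}")

The important design choice is that the denominator comes from the asset inventory rather than from any single security tool, which is what lets the metric speak to the posture of the organisation as a whole.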

 

Upper Intermediate

This is the automatic and continuous aggregation and correlation of security data to produce unified, up-to-the-minute and reproducible metrics. This is what we call Continuous Controls Monitoring.

This data can be mapped to any framework and segmented however you like. It can also be mapped to the structure of the business, showing you the security posture and risk of specific locations, departments and processes.

Because it’s a tool-based solution rather than a manual one, GRC teams can be given access to the tool so they can view and explore the data themselves, speeding up the process and reducing the workload for everyone involved.
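Continuing the same illustrative sketch (still assuming the hypothetical columns above), segmenting that unified view by business structure is then a simple grouping step; in a Continuous Controls Monitoring tool this recalculation would run automatically over fresh data rather than being produced by hand.

# Segment the unified view by business structure, so each department or
# location gets its own up-to-date coverage figure instead of one global number.
by_unit = (
    unified.assign(covered=unified["scanned"] & unified["edr_installed"])
    .groupby("business_unit")["covered"]
    .mean()
    .sort_values()
)
print(by_unit)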

 

Advanced

Sadly, cybersecurity hasn’t reached this stage of metrics maturity yet.

But when it does, this will mean the integration of predictive capabilities into CCM, so that it can predict where control gaps or policy failures will occur.

Eventually, it may even be able to automate some of the remediation as well.

 

What level of maturity is acceptable?

Regulators want to see that sufficient security measures are in place at any point in time.

The metrics you provide need to stand up to scrutiny and be reproducible. Plus, multiple stakeholders are likely to be asking for a wide range of metrics at once.

For these reasons, the first three stages on this scale (Basic, Elementary and Intermediate) are no longer viable. They don’t give stakeholders the insight they need, they create too much work for the people involved and the data they produce isn’t reliable.

Even if they do work for you right now, they won’t in a year.

If you’re looking for more information on maturing a metrics programme, check out our recent post, Principles for implementing and maturing a security metrics programme.

If you’d like to learn more about how Continuous Controls Monitoring can help you automate and improve metrics production, request a demo with a member of our team. And if you have questions about anything in this post, feel free to get in touch on LinkedIn, Twitter or by email.