
Cybersecurity Controls Scorecard: Behind the scenes

August 29, 2024

Rob Campbell

In this blog, I’ll take you through why we made the Cybersecurity Controls Scorecard, what it does, and how we built it, including the philosophy behind the choices we made along the way.

Why do you need a Cybersecurity Controls Scorecard?

The Scorecard was made to help security leaders deal with two main problems: communicating security and prioritising action.

Communicating complex cybersecurity topics is hard.

This is doubly true when the audience isn't technical and is under time pressure, such as a board. Security leaders need to summarise the risk from cyber threats and how they're being mitigated. It's a big and complicated topic, but they only have a short slot in the middle of a bunch of other topics.

Other departments have single or compound metrics that they share often and that are common knowledge in business: think of sales leaders talking about pipeline or marketers talking about leads. In security we're still asking a lot of our audiences. They may not know off the top of their head what "critical severity exploitable vulnerabilities on external facing devices outside of patch SLA" means, or its importance relative to phishing test failures, or to a change in sales pipeline conversion rates.

Comparing security across initiatives and the business is hard.

It can be almost impossible to build a picture of what exists from individual tools, and once that picture is built it’s out of date. Even with the picture it can be hard for leaders to prioritise where their overall plan needs focus. Should it be on risk from phishing? Or on a particular business unit? Or on external facing infrastructure? And how will we convince others this is the right place to focus once the decision has been made?

Our answer is the scorecard.

So, what does the Scorecard do?

The Scorecard summarises the performance of key security initiatives into one overall score. We can show you how that score changes over time (from a week up to a year) and then how different breakdowns of your business are performing overall and per initiative. This allows you to quickly summarise change, trends, and hot spots to focus effort on.

The overall score is the average of your security initiative scores.

Initiative scores are the pass percentage of sets of metrics in a dashboard. You control both what metrics are included in each initiative dashboard and where the pass/fail threshold is set.
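To make that calculation concrete, here's a minimal sketch in Python of one reading of the description above. The metric values, thresholds, and initiative names are invented for illustration; they aren't Panaseer's actual metrics or configuration.

```python
# A minimal sketch of the scoring idea described above.
# Metric values, thresholds, and initiative names are hypothetical.

def metric_passes(value: float, threshold: float) -> bool:
    """A metric passes when its measured value meets the configured threshold."""
    return value >= threshold

def initiative_score(metrics: list[tuple[float, float]]) -> float:
    """Initiative score: the percentage of metrics in the dashboard that pass."""
    passed = sum(metric_passes(value, threshold) for value, threshold in metrics)
    return 100 * passed / len(metrics)

def overall_score(initiative_scores: list[float]) -> float:
    """Overall score: the plain average of the initiative scores."""
    return sum(initiative_scores) / len(initiative_scores)

# Hypothetical initiatives, each a list of (measured value, pass threshold) pairs.
vuln_mgmt = [(92.0, 95.0), (99.0, 98.0)]   # one of two metrics passes -> 50%
phishing  = [(97.0, 90.0), (88.0, 85.0)]   # both pass -> 100%

scores = [initiative_score(vuln_mgmt), initiative_score(phishing)]
print(overall_score(scores))  # (50 + 100) / 2 = 75.0
```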

How did we build it?

Design philosophy

We have three key design statements for the scorecard:

Simple. CISOs are busy. They need something fast and simple, so it should be immediately obvious what is on the page and why.

Illustrative. We should provide new value by aggregating the metrics that make up the scorecard, and we should present that aggregation to make the view as compelling as possible.

Trusted. The scorecard leans on best practice we’ve learned from discussions with our customers and advisors, conference talks, examples shared, and other documented ideas. We should also show our working at every stage.

Our goal was to make a tool for security leaders to summarise security programme performance, understand and prioritise areas to improve, and communicate up to their managers and across to their peers.

Scoring

The Scorecard's goal is to quickly summarise sets of metrics and overall programme performance. Any time you move away from the core metric, you immediately get questions about how scores are calculated, so we wanted to find the simplest algorithm that still let us summarise. This was always likely to be the most critiqued part of the scorecard. We had our own set of paradigms for the score:

Transparent. We must be clear about how the score is calculated.

Configurable. Different companies have different priorities, and priorities change.

Unified. We should provide a single score.

Panaseer's data team outlined six of the simplest algorithms we could find, and workshopped them to see if they worked. In the end we decided it would be easier to add complexity later based on feedback than to start complex and strip it back, so we chose the simplest algorithm we could come up with.

You can see our simple pros and cons list for each of the ideas. Once we had our proposal, our data team led the mapping of the algorithm results, eventually drawing up the calculation flowchart shown below.


Metric calculation flowchart for the Scorecard

The algorithm diagram and a couple of worked examples became our props to test the algorithms with users. Running our customers through the options made it clear we would need to allow users to add weighting to the scoring, and allow flexibility in the algorithm for the scorecard to be used as an executive reporting dashboard.

Conversely the research validated our assumption that simpler was better. More complex scoring algorithms are harder to understand, and we found that the complexity immediately reduced interest in the scorecard in general.

To respond to the need for complexity we decided to start simple with the scoring and then allow users to bring in complexity in the form of weighting in future releases.
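As a purely speculative sketch of how weighting might layer onto the same average (this is not the shipped feature, and the initiative names and weights below are made up):

```python
# A speculative sketch: a weighted average of initiative scores.
# Initiative names and weights are hypothetical, not product configuration.

def weighted_overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of initiative scores; weights need not sum to 1."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

scores  = {"Vulnerability management": 50.0, "Phishing resilience": 100.0}
weights = {"Vulnerability management": 2.0,  "Phishing resilience": 1.0}
print(weighted_overall_score(scores, weights))  # (50*2 + 100*1) / 3 ≈ 66.7
```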

Trending and the timeline

The scorecard is the first part of Panaseer to show long term trends and trend lines. To allow us to do this we’re sampling data points rather than representing every day in the trend chart, which allows the page to load much faster.

Representing change quarter by quarter was a big demand from our users, and we knew that our existing day-by-day loading wouldn't handle the data volumes in a performant way. Sampling the first day of each week or month lets us show a much bigger date range while retaining a meaningful trend line.
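As a rough illustration of the sampling idea (not Panaseer's actual implementation), keeping only the first day of each week or month shrinks a year of daily points to roughly 52 or 12. The dates and scores below are invented.

```python
# A minimal sketch of downsampling a daily series for long-range trend charts.
# Dates and scores are hypothetical.

from datetime import date, timedelta

def sample_weekly(daily_scores: dict[date, float]) -> dict[date, float]:
    """Keep only the points that fall on the first day of a week (Monday)."""
    return {d: s for d, s in daily_scores.items() if d.weekday() == 0}

def sample_monthly(daily_scores: dict[date, float]) -> dict[date, float]:
    """Keep only the points that fall on the first day of a month."""
    return {d: s for d, s in daily_scores.items() if d.day == 1}

# A hypothetical year of daily scores: 365 points shrink to ~52 or 12.
start = date(2024, 1, 1)
daily = {start + timedelta(days=i): 70.0 + (i % 10) for i in range(365)}

print(len(sample_weekly(daily)), len(sample_monthly(daily)))  # 53 12
```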

Our research and build philosophy

Panaseer is an agile company; we really try to embody the atomic, releasable philosophy. For this project we went both agile and lean, trying to understand, hypothesise about, and test the problems we found as quickly as possible.

Starting with open, generative research, we can learn, hypothesise, and test designs that give us a head start around the "Build, Measure, Learn" loop.

Our process had dozens of ideas and iterations that were replaced and honed during our research process. This gave us a clear picture of what our initial version should be.

The first scorecard release was a pilot that could demonstrate the value in a sales demo and was used extensively by the internal teams to help explain the value of Panaseer in general.

Luckily our teams and customers are keen to get involved and provide constructive feedback, helping us refine the capability and continue extending it to GA functionality level.

The final word

There’s still more to come. The biggest questions since release have been “what next?” and “how do I give more weight to one metric over another?”. So, there will be more features to solve these problems. Imminently!