Phishing simulation tests have a ton of interesting data points behind them. You can track things like who they get sent to, how often they're clicked, how difficult the test was, and even what type of content was in the email.
All of this data is suddenly at your fingertips when you move to a recurring phishing test program supported by an automated platform, but it can also seem daunting. Where do you even start?
In this post, we explain the foundational metrics related to phishing tests and break down how you can look at them in different ways to understand the risks your organization faces and effectively measure and report on the success of your training program. We'll also talk about how you can turn those data stories into actions to improve your cybersecurity awareness program.
Three main phishing test metrics
When it comes to measuring a specific phishing campaign, there are three metrics that matter the most: the open rate, click rate, and report rate. These tell the high-level story of how "effective" your phishing template was in your test group—was it engaging and successful at convincing your staff to click, or did they see the bait and report it to your IT team?
Let's break down what these metrics are and what they tell you.
The open rate is the percentage of recipients who actually opened your phishing test email.
One technical detail to note about email open rates: an open is recorded only when a user opens the email with images downloaded. That's because opens are tracked with tracking pixels: invisible 1-by-1 pixel images whose URL contains a unique tracking code, fetched when the email's images load. This means open rates aren't 100% accurate (not all employees download email images), but they're still worth monitoring for trends over time.
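As a rough sketch of how pixel tracking works (the URL and function names here are hypothetical, not from any specific platform), the platform embeds a uniquely tokened 1-by-1 image in each recipient's email and records an open when that URL is fetched:

```python
import uuid

# Hypothetical sketch of open tracking. Each recipient gets a unique
# token embedded in an invisible 1-by-1 pixel image.

def tracking_pixel_html(base_url: str, token: str) -> str:
    """Return the HTML for a 1-by-1 tracking pixel with a unique token."""
    return f'<img src="{base_url}/open?t={token}" width="1" height="1" alt="">'

# One token per recipient lets the server attribute each image fetch.
token = uuid.uuid4().hex
pixel = tracking_pixel_html("https://phish-test.example.com", token)

# When the mail client downloads images, it fetches this URL and the open
# is recorded server-side. If images are blocked, no open is logged,
# which is why open rates undercount real opens.
print(pixel)
```

This is also why the caveat above matters: the measurement depends entirely on the image being downloaded, not on the email being read.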
The story this metric tells you is whether the email, as it appears in the inbox, was engaging or enticing enough to get someone to open it and learn more.
What's interesting about the open rate is that it isn't influenced by the contents of the email at all; rather, it's driven by the information seen exclusively in the inbox. This includes:
- the email subject line,
- sender name and email address,
- some preview text, and
- any other pre-email factors, like a notice that flags whether an email came from outside your organization.
For example, you could use the same phishing template and have it sent from two different sources: one being a generic external name and email address, and one sent from your CEO's name and email. Regardless of what's inside the email, the latter is more likely to be opened because the sender name and email address are more trustworthy.
The click rate is the percentage of recipients who clicked on the phishing link inside the email.
Click rate is the most commonly used metric and the one most training teams look at when measuring performance. That's understandable: this metric is the most direct way of seeing how many of your employees are doing The Bad Thing (falling for your test).
Unlike the open rate, the click rate is influenced by the contents of the email itself, framed by the context of the pre-email factors we covered above.
There are many experiments you can run to see what influences email clicks in your organization: personalization (like using someone's name in an introduction), click objects being buttons instead of links, the number of typos, and the emotional strings (like fear or anxiety) that you pull with the writing—just to name a few.
Not all phishing emails are designed to get their recipient to click a link inside an email. Depending on the type of phishing campaign you're running, you might have additional or different metrics, but they will likely perform the same function as the "click rate" for email phishing tests.
Some examples include:
- Email attachment downloads
- Email replies (with important information like login credentials)
- Phone call responses
- SMS replies
- USB installations or downloads
The report rate is the percentage of recipients who reported the phishing email to their IT team or help desk, commonly done through a Report-a-Phish button implemented in their email client.
When it comes to phishing tests, this is the most important standalone metric, even more so than click rate.
Why do we think that? Picture this example: a phishing attack hits five of your employees, and each employee sees the email an hour apart (the first at 9 am, the next at 10 am, and so on). Let's assume the first employee identifies the email as a phishing attempt at 9 am. There are two ways this can play out.
In an organization with a low report rate, the first employee identifies the email as malicious but immediately deletes it. While this is good for them as an individual, this action puts the other four employees at risk later in the day.
In an organization with a high report rate, the first employee identifies the email as malicious but tells their IT team about it. The IT team now has an hour to intervene before the next employee engages with the email (through a communication, or blocking the malicious link's domain through something like a DNS Firewall). This action benefits both the individual and the organization at large.
Having a high report rate is what makes IT and the organization proactive instead of reactive. That's what a healthy cybersecurity culture looks like, and building one is the primary goal of a cybersecurity awareness training program.
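To make the three rates concrete, here's a minimal sketch of how they're computed for a single campaign. The recipient count and event numbers are invented for illustration; a real platform would export something similar.

```python
# Invented results from one hypothetical phishing test campaign.
recipients = 200
events = {"opened": 120, "clicked": 30, "reported": 50}

# Each rate is simply the share of recipients who took that action.
open_rate = events["opened"] / recipients * 100      # % who opened the email
click_rate = events["clicked"] / recipients * 100    # % who clicked the link
report_rate = events["reported"] / recipients * 100  # % who reported it

print(f"open: {open_rate:.0f}%, click: {click_rate:.0f}%, report: {report_rate:.0f}%")
# open: 60%, click: 15%, report: 25%
```

Note that the three groups can overlap (an employee might open, click, and then report), so the rates don't need to sum to 100%.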
Moving from phishing test metrics to phishing program measurement
While understanding the above metrics is fundamental, they alone do not help you understand the success or impact of your phishing program at large. They're great for explaining how your users interact with a single test or template, but to measure your holistic program you need to look at how those metrics change in a variety of different circumstances.
In this next section, we break down how you can look at your phishing test rates in different ways to understand the risks in your organization and how you're educating about those risks through your program.
Individual vs group metrics
If your phishing platform allows you to sync users by department, you will get phishing metrics at the individual, department, and organization levels.
The organization-level metrics are the baseline for all of your users. If you send monthly phishing tests to all of your employees, these are the metrics that will tell you the overall risk to your organization.
The department-level metrics are the same, except for a specific department. These metrics are interesting because you can compare various departments or groups against each other to see which are more or less at risk. For example, your Finance team might have a higher click rate than your Sales team, or your Toronto office might have a better report rate than your Vancouver office. You can compare these group metrics against the organizational average to see which areas deserve more focused training and testing from you.
The individual-level metrics showcase which specific employees are at higher or lower risk relative to their department and the organization at large. One powerful training exercise is to keep a list of your repeat offenders who click on phishing tests and put them into their own training bucket, with your goal being to reduce the number of employees in that bucket over time.
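As a sketch of how those roll-ups work, assuming your platform can export per-user results tagged with a department (the data and field names below are invented), you can compute department-level rates and compare them against the organization baseline:

```python
from collections import defaultdict

# Hypothetical per-user results from one monthly test (invented data).
results = [
    {"user": "ana",  "dept": "Finance", "clicked": True},
    {"user": "ben",  "dept": "Finance", "clicked": True},
    {"user": "cara", "dept": "Sales",   "clicked": False},
    {"user": "dev",  "dept": "Sales",   "clicked": True},
    {"user": "eli",  "dept": "Sales",   "clicked": False},
]

def click_rate(rows):
    """Percentage of the given users who clicked."""
    return sum(r["clicked"] for r in rows) / len(rows) * 100

# Organization-level baseline: everyone in the export.
org_rate = click_rate(results)  # 3 of 5 clicked -> 60%

# Department-level roll-up, to compare groups against that baseline.
by_dept = defaultdict(list)
for r in results:
    by_dept[r["dept"]].append(r)
dept_rates = {d: click_rate(rows) for d, rows in by_dept.items()}

for dept, rate in sorted(dept_rates.items()):
    flag = "above baseline" if rate > org_rate else "at/below baseline"
    print(f"{dept}: {rate:.0f}% ({flag})")
```

The same grouping idea extends down to individuals: a per-user rolling rate over several months identifies the repeat clickers for a focused training bucket.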
We should highlight here: just because you have per-person metrics doesn't mean you should use that data to name and shame employees publicly. This is unfortunately a common practice, but we believe it creates a negative cybersecurity culture. If you want to build a healthy cybersecurity culture, we wrote a whole post on that.
Measuring rate changes over time
Any phishing test metric is a representation of a single test at a single point in time. Where these metrics get interesting is how they change over time.
The most common example is understanding how your click and report rates for the organization change month to month. Your goal is to reduce click rates and improve report rates for the entire organization over time until they plateau at a comfortable level relative to your training investment.
If you're running a focused training program for a specific high-risk group or individual, you will also watch how their metrics improve over time, until they reach a threshold where they are no longer considered high risk (such as their phishing click rate falling within a reasonable deviation from the organization's average).
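One way to make that graduation threshold concrete is to flag anyone whose rolling click rate sits more than a chosen margin above the organization's average. The numbers below are arbitrary example values, not an industry standard:

```python
# Sketch of a "still high risk?" check for the focused-training bucket.
# Both values are arbitrary examples chosen for illustration.
ORG_AVG_CLICK_RATE = 12.0  # org-wide rolling average click rate, in %
MARGIN = 10.0              # acceptable deviation, in percentage points

def still_high_risk(user_click_rate: float) -> bool:
    """A user leaves the focused-training bucket once their rolling
    click rate falls within MARGIN points of the org average."""
    return user_click_rate > ORG_AVG_CLICK_RATE + MARGIN

print(still_high_risk(35.0))  # True: 35% is well above 12% + 10%
print(still_high_risk(18.0))  # False: within the margin, graduate them
```

Where you set the margin depends on your training investment: a tighter margin means more focused training for longer, a looser one graduates people sooner.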
Watching how metrics change over time can also help you understand which seasons or time-based events impact your phishing metrics. One unique example we saw this year was how phishing click rates changed as employees started working from home during COVID-19. We saw phishing simulation click rates jump for most of our customers across March, April and May, then start to return to earlier levels through the summer as fear around the pandemic and uncertainty about remote work subsided. You could see similar changes during tax season and the winter holidays, two seasonal events often associated with an increase in phishing activity.
Comparing different templates
CIRA's Cybersecurity Awareness Training platform sends out completely randomized phishing tests to all employees every month. This means random templates to random people at random times, on a monthly cadence.
After having a monthly phishing test program in place for a few months, you start to see which types of phishing emails are more or less effective in your organization.
First, there's the category of phishing template. For example, you might find that shopping-related templates are more likely to be clicked on by your employees than delivery-related templates.
But you can also go right down to the template itself. Maybe Canada Post templates are more likely to be clicked on than FedEx templates, even though they are both delivery related.
Going beyond phishing test rates
None of these metrics needs to be looked at in a vacuum. You can combine all three of these views to uncover some really interesting stories.
For example, you might find different types of templates become more or less risky over time. You may find that while phishing clicks generally increase during the winter holidays, shopping-related templates are the ones that disproportionately impact your users during that time.
Or you may find that regional differences in email content affect which departments or offices are more susceptible to phishing; for example, your office in British Columbia is unlikely to fall for a phishing email about renewing an Ontario health card.
The point of talking about what stories these metrics can tell is to go beyond one-time phishing test metrics and to move towards understanding the unique risks your organization faces and applying focused training and phishing testing to mitigate those risks.
The benefit of adopting an automated phishing tool is that it takes the heavy lifting of sending campaigns and gathering data off your shoulders, so you can spend more time reflecting on these data stories and building out your plans to improve and target your awareness training program.
Ready to start phishing tests in your organization?
CIRA Cybersecurity Awareness Training makes it easy to automate a monthly phishing program and measure the phishing risks in your organization.
Phishing flow chart
View all of the different ways a phishing simulation program helps reduce cyber risk for your organization.