What metrics do we use to measure the performance of the GriffonAV solution?

· 4 min read
Mabille Raphaël
GriffonAV co-founder

report n°004 | 2026-01-04

Overview

In this post we will cover which metrics we use, and why we chose them, to measure the performance and health of our software across each update.

  1. Shared metrics

  2. Module specific metrics

  3. User specific metrics

Our definition of metric

A metric is a quantifiable measure used to track, evaluate, and compare the performance, behavior, or status of a system, process, or activity over time. It turns something we care about into a number we can monitor and act on.

Shared metrics

Griffon is a project composed of multiple modules, each doing different things and each calling for its own metrics and monitoring style. Still, some metrics are used universally across every module; one such category is performance.

Performance

In our use case we chose to monitor performance through three metrics for each of our modules:

  • CPU usage
  • Memory consumption
  • Disk I/O impact

We chose these three because they are the main resources that can cause bottlenecks when running software; they are also what users usually care about most.

We measure these in two distinct cases: when the module is loaded but inactive, and when the module is active.
The first case is to make sure the module stays healthy, by comparing its resource usage before and after use. This can also help catch undefined behavior or memory leaks.
The second case is the most important: it helps us identify bottlenecks and set improvement goals for future versions of our software.
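
The idle-before vs. idle-after comparison can be sketched as a diff between two resource snapshots. This is a minimal illustration, not Griffon's actual telemetry code; the field names, sample values, and leak threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ResourceSnapshot:
    cpu_percent: float  # CPU usage of the module's process
    memory_mb: float    # resident memory
    disk_io_mb: float   # cumulative disk reads + writes

def usage_delta(before: ResourceSnapshot, after: ResourceSnapshot) -> ResourceSnapshot:
    """Difference between two snapshots; a memory delta that keeps growing
    across idle periods can hint at a leak."""
    return ResourceSnapshot(
        cpu_percent=after.cpu_percent - before.cpu_percent,
        memory_mb=after.memory_mb - before.memory_mb,
        disk_io_mb=after.disk_io_mb - before.disk_io_mb,
    )

# Hypothetical samples: module idle before a scan vs. idle again after it.
idle_before = ResourceSnapshot(cpu_percent=0.5, memory_mb=42.0, disk_io_mb=1.2)
idle_after = ResourceSnapshot(cpu_percent=0.4, memory_mb=58.0, disk_io_mb=9.7)

delta = usage_delta(idle_before, idle_after)
if delta.memory_mb > 10.0:  # threshold is an arbitrary example value
    print(f"possible leak: +{delta.memory_mb:.1f} MB while idle")
```

In practice the snapshots themselves would come from the OS (e.g. a process-inspection library), but the before/after comparison logic stays the same.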

Module specific metrics

This section lists all the metrics used in our different modules. For an in-depth explanation of the why and how, refer to the document dedicated to each specific module (docs/modules/xxx_module/metrics).

AV_module

The antivirus module is by far the one that needs the most metrics.

Here are the metrics we use to measure the effectiveness of the av_module:

  • Detection & Prevention
    • Malware detection rate
    • Blocked threats count
    • Heuristic detections
  • False Positives
    • False positive rate
    • Number of files/processes incorrectly quarantined
  • Missed Threats
    • Post-infection detections
    • Number of scan failures
  • Update & Configuration
    • Time since last update of the signature database
    • Average number of new signatures each update
    • % of unscanned files due to exclusions
  • Threat Response & Operations
    • Mean Time to Detect (MTTD)
    • Mean Time to Contain (MTTC)
    • Mean Time to Remediate (MTTR)
    • Number of files quarantined
    • Number of files deleted
    • Number of processes terminated
    • Rollback / remediation success rate

The combination of all these metrics helps us better understand where our AV is weak, because a good AV is one that does everything well.
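
Several of the metrics above reduce to simple ratios and means over collected events. As a rough sketch (the corpus sizes and incident durations below are made-up example numbers, not real measurements):

```python
from datetime import timedelta

def rate(hits: int, total: int) -> float:
    """Generic ratio with a guard against empty denominators (0.0-1.0)."""
    return hits / total if total else 0.0

# Detection rate: share of known-malicious samples flagged.
# False positive rate: share of benign files incorrectly flagged.
detection = rate(970, 1000)       # hypothetical test corpus
false_positives = rate(3, 5000)   # hypothetical benign set

def mean_seconds(deltas: list[timedelta]) -> float:
    """Mean duration in seconds; the same helper serves MTTD, MTTC, and MTTR."""
    return sum(d.total_seconds() for d in deltas) / len(deltas)

# MTTD over three hypothetical incidents (detected_at - occurred_at).
mttd = mean_seconds([timedelta(seconds=30),
                     timedelta(seconds=90),
                     timedelta(seconds=60)])
print(detection, false_positives, mttd)
```

The same `mean_seconds` helper applies to containment (MTTC) and remediation (MTTR) once the corresponding timestamps are logged.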

Cleaner_module

The main metrics of the cleaner module are pretty straightforward:

  • Execution speed
  • Space freed
  • % of the computer scanned
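
All three cleaner metrics can be derived from a single per-run record. A minimal sketch, with hypothetical field names and sample numbers:

```python
from dataclasses import dataclass

@dataclass
class CleanerRun:
    duration_s: float    # wall-clock time of the run
    bytes_freed: int     # disk space reclaimed
    files_scanned: int   # files actually visited
    files_total: int     # files eligible for scanning

    def report(self) -> dict:
        """Derive the three cleaner metrics from one run's raw counters."""
        return {
            "execution_speed_files_per_s": self.files_scanned / self.duration_s,
            "space_freed_mb": self.bytes_freed / (1024 ** 2),
            "percent_scanned": 100.0 * self.files_scanned / self.files_total,
        }

run = CleanerRun(duration_s=10.0, bytes_freed=5 * 1024 ** 2,
                 files_scanned=500, files_total=1000)
print(run.report())
```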

Other_modules

Here is a list of other modules in development that should appear in a future report:

  • TBD

User specific metrics

Monitoring how users interact with our software is as important as monitoring its performance. Because these metrics are directly linked to the usage of our software and its modules, we can apply the same separation between shared and specific metrics.

Shared metrics

  • How often the user closes the application
  • % of users who disable launch on startup

Module specific metrics

  • AV_module

    • % of scans stopped
    • Average time of a scan
    • Frequency of on-demand scans
    • Number of whitelist events
  • Cleaner_module

    • % of scans stopped
    • Average time of a scan
    • Number of whitelist events
    • % of clean propositions executed
    • Reaction to clean alerts
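
Most of these usage metrics fall out of counting client-side events. A small sketch of that idea (the event names and log below are invented for illustration, not Griffon's real event schema):

```python
from collections import Counter

# Hypothetical event log emitted by the client.
events = ["scan_started", "scan_completed", "scan_started", "scan_stopped",
          "clean_proposed", "clean_executed", "clean_proposed", "clean_dismissed"]

counts = Counter(events)
pct_scans_stopped = 100.0 * counts["scan_stopped"] / counts["scan_started"]
pct_cleans_executed = 100.0 * counts["clean_executed"] / counts["clean_proposed"]

print(pct_scans_stopped, pct_cleans_executed)
```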

Conclusion

For a more in-depth explanation of how we collect and use the metrics cited above, refer to each module's own documentation: docs/modules/xxx_module/metrics