
    Anomaly Detection Confidence

    Tracks model confidence, drift indicators, and the quality of anomaly detections over time.

    What it shows

    AI only helps if it stays trustworthy. This panel shows whether anomaly models are stable, calibrated, and aligned with current operational behavior.

    How it’s calculated

    • Confidence scores reflect agreement across models and rule-based checks.
    • Drift monitoring detects when operational patterns change (new firmware, new shifts, new suppliers).
    • Feedback loops incorporate analyst outcomes (true positive/false positive).
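The first two bullets could be sketched roughly as follows. This is a minimal illustration, not the product's actual implementation: `ensemble_confidence` treats agreement across detectors as confidence, and `drift_score` flags a shift in a feature's recent behavior relative to its baseline. Both function names and the scaling choices are assumptions.

```python
from statistics import mean, pstdev

def ensemble_confidence(scores):
    """Confidence as cross-detector agreement: 1.0 when all detectors
    report the same anomaly score, lower as their scores diverge.
    `scores` is a list of per-detector anomaly scores in [0, 1]."""
    if len(scores) < 2:
        return 1.0
    spread = pstdev(scores)            # disagreement between detectors
    return max(0.0, 1.0 - 2 * spread)  # map spread into a [0, 1] confidence

def drift_score(baseline, recent):
    """Simple drift indicator: how far the recent mean of a feature has
    moved from its baseline mean, in baseline standard deviations."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(mean(recent) - mu) / sigma
```

In practice a drift score above some policy threshold (e.g. 2–3 standard deviations) would raise a drift alert; the analyst true-positive/false-positive feedback then feeds the FP-rate KPI described below.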

    What to do next

    1. Review confidence drops and check for environmental changes (patches, new devices, network re-segmentation).
    2. Recalibrate thresholds to keep false positives manageable without losing critical sensitivity.
    3. Run targeted model retraining for sites or zones with consistent drift.
    4. Use confidence in automation: auto-contain only when confidence is above the policy threshold.
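Step 4 might be sketched as follows; the threshold value and the contain/escalate hooks are hypothetical placeholders, not part of the product:

```python
# Assumed policy threshold; in practice set from your risk appetite.
AUTO_CONTAIN_THRESHOLD = 0.85

def respond(alert_confidence, contain, escalate):
    """Auto-contain only when model confidence clears the policy
    threshold; otherwise route the alert to an analyst instead of
    acting autonomously."""
    if alert_confidence >= AUTO_CONTAIN_THRESHOLD:
        return contain()
    return escalate()
```

The key design point is that the containment action is gated on a single, auditable comparison, so the automation boundary moves only when the policy threshold does.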

    KPIs to watch

    • Confidence (%)
    • Drift alerts (count)
    • FP rate (%)
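As one illustration, the FP-rate KPI can be derived directly from the analyst feedback loop described above. The `fp_rate` helper and the "tp"/"fp" labels are assumptions for the sketch, not the product's schema:

```python
def fp_rate(outcomes):
    """False-positive rate (%) over analyst-labelled detections:
    the fraction of triaged alerts marked false positive.
    `outcomes` is a list of labels such as "tp" or "fp"."""
    labelled = [o for o in outcomes if o in ("tp", "fp")]
    if not labelled:
        return 0.0
    return 100.0 * labelled.count("fp") / len(labelled)
```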

    Why this matters to a CISO

    • AI only works if it’s trustworthy: if models drift or confidence drops, you’re flying blind. This panel keeps the AI layer honest.
    • Drift is normal in OT: new firmware, new shifts, and new processes all cause drift. You need to detect it early, before it erodes detection quality.
    • Confidence drives automation: you can’t let AI auto-contain on shaky confidence. This metric keeps automation aligned with risk appetite.
    • Feedback loops improve accuracy: every analyst decision sharpens the models, closing the loop between human intelligence and machine learning.
    [Screenshot: Anomaly Detection Confidence panel]