Introduction

Most DSP failures in production are not caused by “bad math”.

They are caused by unverified assumptions.

Engineers approve a design because:

  • “the spectrum looks cleaner”
  • “the notch looks deep”
  • “the plot seems fine”

But visual plots are not verification evidence.

This pillar defines a verification-first approach:

  1. define what must be proven
  2. measure metrics robustly under noise and drift
  3. define pass/fail criteria that survive regression
  4. reject designs that look good but fail numerically or statistically

Why Visual Spectra Are Not Verification

Spectra lie in noisy environments because:

  • estimator variance creates phantom peaks
  • averaging can hide intermittent interference
  • leakage reshapes peak magnitude and width
  • different parameter choices produce different conclusions

A focused explanation is here:
Why Visual Spectra Lie in Noisy Environments

The engineering takeaway:

Verification must be defined in metrics, not in plots.


The Four Classes of Verification Metrics

A production-grade verification set must cover:

1) Suppression Metrics

Did we remove the interference?

Examples:

  • tonal suppression at a target frequency band (dB)
  • stopband attenuation for a designed region
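
As a sketch of the first example, tonal suppression can be measured as the band-power ratio before and after filtering. This assumes SciPy; the function name, the Welch parameters, and the 50 Hz notch scenario are illustrative, not prescriptive.

```python
import numpy as np
from scipy import signal

def tonal_suppression_db(x_before, x_after, fs, band, nperseg=1024):
    """Suppression (dB) of power in a target band, before vs. after filtering.

    band: (f_lo, f_hi) in Hz bracketing the interference tone.
    """
    f, p_before = signal.welch(x_before, fs=fs, nperseg=nperseg)
    _, p_after = signal.welch(x_after, fs=fs, nperseg=nperseg)
    m = (f >= band[0]) & (f <= band[1])
    # Ratio of total band powers; 10*log10 because these are power quantities.
    return float(10.0 * np.log10(np.sum(p_before[m]) / np.sum(p_after[m])))

# Example: a 50 Hz tone in noise, removed by a notch filter.
fs = 1000.0
t = np.arange(int(fs * 4)) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)
b, a = signal.iirnotch(50.0, 30.0, fs=fs)
y = signal.lfilter(b, a, x)
print(f"suppression: {tonal_suppression_db(x, y, fs, band=(48.0, 52.0)):.1f} dB")
```

Measuring over a band rather than a single bin already makes the metric tolerant of small frequency error.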

2) Integrity Metrics

Did we preserve what must be preserved?

Examples:

  • protected band ripple
  • passband distortion
  • main-tone protection (if relevant)
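
A minimal sketch of the first two bullets: protected-band ripple can be read off the filter's frequency response. SciPy is assumed, and the protected band and notch design here are arbitrary examples.

```python
import numpy as np
from scipy import signal

def protected_band_ripple_db(b, a, fs, band, n_freqs=4096):
    """Peak-to-peak magnitude deviation (dB) of H(f) inside a protected band."""
    w, h = signal.freqz(b, a, worN=n_freqs, fs=fs)
    m = (w >= band[0]) & (w <= band[1])
    mag_db = 20.0 * np.log10(np.abs(h[m]))
    return float(mag_db.max() - mag_db.min())

# Example: verify that a 50 Hz notch barely disturbs a protected 100-200 Hz band.
fs = 1000.0
b, a = signal.iirnotch(50.0, 30.0, fs=fs)
ripple = protected_band_ripple_db(b, a, fs, band=(100.0, 200.0))
print(f"protected-band ripple: {ripple:.3f} dB")
```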

3) Stability Metrics

Will this remain stable after deployment?

Examples:

  • impulse response decay behavior
  • coefficient sanity margins
  • sensitivity to quantization (embedded)

Stability foundations are here:
Fixed-Point DSP Filter Stability
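
The first two bullets can be checked directly from pole radii, and quantization sensitivity by rounding coefficients before the check. This is a sketch assuming SciPy and a direct-form denominator; the Q-format bit widths are illustrative.

```python
import numpy as np
from scipy import signal

def pole_margin(a):
    """Stability margin: how far the largest-radius pole sits inside the unit circle."""
    return float(1.0 - np.max(np.abs(np.roots(a))))

def quantized_pole_margin(a, n_bits=15):
    """Margin after rounding denominator coefficients to fixed point (sketch)."""
    scale = 2.0 ** n_bits
    a_q = np.round(np.asarray(a) * scale) / scale
    return pole_margin(a_q)

# Example: a narrow notch sits close to the unit circle, so quantization eats margin.
fs = 1000.0
b, a = signal.iirnotch(50.0, 30.0, fs=fs)
print(f"float margin: {pole_margin(a):.5f}, "
      f"12-bit margin: {quantized_pole_margin(a, 12):.5f}")
```

A verification rule would require the margin to stay positive, with headroom, after quantization.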

4) Repeatability / Regression Metrics

Does the pipeline remain deterministic?

Examples:

  • rerun variance bounds
  • stable classification decisions
  • consistent coefficient generation

Deterministic pipeline structure is here:
Designing DSP Pipelines for Deterministic Outputs
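
The third bullet can be sketched as a rerun check: generate coefficients repeatedly from identical inputs and bound the disagreement. The `design_notch` wrapper and the zero tolerance are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def design_notch(f0, q, fs):
    """Deterministic coefficient generation: same inputs must yield same outputs."""
    return signal.iirnotch(f0, q, fs=fs)

def rerun_is_stable(n_runs=5, tol=0.0):
    """Re-run the design and require coefficient agreement within `tol`."""
    runs = [design_notch(50.0, 30.0, 1000.0) for _ in range(n_runs)]
    b0, a0 = runs[0]
    return all(
        np.max(np.abs(b - b0)) <= tol and np.max(np.abs(a - a0)) <= tol
        for b, a in runs[1:]
    )

print(rerun_is_stable())
```

A nonzero `tol` is appropriate when the pipeline legitimately varies (e.g. estimated inputs); zero is appropriate for pure coefficient synthesis.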


Robust Noise Floor Measurement (Percentiles > Means)

Many verification failures come from a single mistake:

using mean-based noise estimates in non-Gaussian, transient-rich signals.

Robust practice:

  • estimate noise floor using median / percentiles
  • estimate signal level using upper percentiles
  • ignore spikes that should not define noise

A complete guide is here:
Measuring Noise Floors Robustly Using Percentile Statistics
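
The three bullets above can be sketched in a few lines: estimate the floor from a PSD percentile instead of its mean. SciPy is assumed; the tone amplitude and percentile choice are illustrative.

```python
import numpy as np
from scipy import signal

def robust_noise_floor(x, fs, nperseg=1024, pct=50.0):
    """Noise floor estimate (dB) from a percentile of the PSD, not its mean.

    The median (50th percentile) ignores narrowband tones and spikes that
    drag a mean-based estimate upward.
    """
    _, psd = signal.welch(x, fs=fs, nperseg=nperseg)
    return float(10.0 * np.log10(np.percentile(psd, pct)))

# White noise plus a strong tone: the mean is biased by the tone, the median is not.
fs = 1000.0
t = np.arange(int(fs * 8)) / fs
rng = np.random.default_rng(1)
x = rng.standard_normal(t.size) + 5.0 * np.sin(2 * np.pi * 120.0 * t)
f, psd = signal.welch(x, fs=fs, nperseg=1024)
mean_db = 10.0 * np.log10(np.mean(psd))
median_db = robust_noise_floor(x, fs)
print(f"mean-based: {mean_db:.1f} dB, percentile-based: {median_db:.1f} dB")
```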


Pass/Fail Criteria That Survive Real Signals

A useful pass/fail rule must be:

  • measurable
  • robust under estimator variance
  • stable across reruns
  • defensible in review

Bad rules:

  • “largest peak must drop”
  • “spectrum looks smooth”
  • “average SNR improved”

Better rules:

  • tonal suppression ≥ X dB at target band
  • protected band ripple ≤ Y dB
  • no new tonal artifacts above threshold
  • verification computed from robust statistics
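
The "better rules" above are mechanical to encode. This sketch combines the first two into an auditable report; the `verify` function name, the thresholds, and the example scenario are illustrative stand-ins for X and Y.

```python
import numpy as np
from scipy import signal

SUPPRESSION_MIN_DB = 20.0   # "tonal suppression >= X dB at target band"
RIPPLE_MAX_DB = 1.0         # "protected band ripple <= Y dB"

def band_power_db(psd, f, band):
    m = (f >= band[0]) & (f <= band[1])
    return 10.0 * np.log10(np.sum(psd[m]))

def verify(x, y, b, a, fs, target_band, protected_band):
    """Evaluate the pass/fail rules and return metric -> (value, passed)."""
    f, p_x = signal.welch(x, fs=fs, nperseg=1024)
    _, p_y = signal.welch(y, fs=fs, nperseg=1024)
    supp = band_power_db(p_x, f, target_band) - band_power_db(p_y, f, target_band)
    w, h = signal.freqz(b, a, worN=4096, fs=fs)
    m = (w >= protected_band[0]) & (w <= protected_band[1])
    mag_db = 20.0 * np.log10(np.abs(h[m]))
    ripple = float(mag_db.max() - mag_db.min())
    report = {
        "suppression_db": (float(supp), supp >= SUPPRESSION_MIN_DB),
        "ripple_db": (ripple, ripple <= RIPPLE_MAX_DB),
    }
    report["pass"] = all(ok for _, ok in report.values())
    return report

# Example: notch out a 50 Hz tone, then verify against both rules at once.
fs = 1000.0
t = np.arange(int(fs * 4)) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(t.size)
b, a = signal.iirnotch(50.0, 30.0, fs=fs)
y = signal.lfilter(b, a, x)
report = verify(x, y, b, a, fs, target_band=(48.0, 52.0),
                protected_band=(100.0, 200.0))
print(report)
```

Returning per-metric values alongside the booleans keeps the result defensible in review: the numbers, not just the verdict, are on record.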

Verification Under Drift (Don’t Validate Against Stationary Assumptions)

Drift breaks naive verification:

  • suppression at a single bin is meaningless
  • the “tone” moves across frequency
  • a narrow notch can miss the real interference

A drift-aware verification approach validates across a drift envelope.

Drift-aware suppression architecture is here:
Filter Drifting Tonal Noise in DSP Systems

Drift tracking specifics are here:
How Drift Tracking Improves Notch Filter Robustness
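
One way to validate across a drift envelope is to measure suppression in sub-bands spanning the envelope and take the minimum, so a notch that only works at the nominal frequency fails. This sketch assumes SciPy; the chirp rate, filter choice, and sub-band count are illustrative.

```python
import numpy as np
from scipy import signal

def worst_case_suppression_db(x, y, fs, envelope, n_bins=8, nperseg=2048):
    """Minimum suppression (dB) over sub-bands spanning the drift envelope.

    envelope: (f_lo, f_hi) in Hz covering everywhere the tone may wander.
    """
    f, p_x = signal.welch(x, fs=fs, nperseg=nperseg)
    _, p_y = signal.welch(y, fs=fs, nperseg=nperseg)
    edges = np.linspace(envelope[0], envelope[1], n_bins + 1)
    worst = np.inf
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (f >= lo) & (f < hi)
        worst = min(worst, 10.0 * np.log10(np.sum(p_x[m]) / np.sum(p_y[m])))
    return float(worst)

# Example: a tone drifting 45 -> 55 Hz, suppressed by a wide 40-60 Hz bandstop.
fs = 1000.0
t = np.arange(int(fs * 8)) / fs
rng = np.random.default_rng(2)
f_inst = 45.0 + 10.0 * t / t[-1]
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.sin(phase) + 0.1 * rng.standard_normal(t.size)
sos = signal.butter(4, [40.0, 60.0], btype="bandstop", fs=fs, output="sos")
y = signal.sosfilt(sos, x)
print(f"worst-case: {worst_case_suppression_db(x, y, fs, (45.0, 55.0)):.1f} dB")
```

Against the same drifting tone, a narrow fixed notch would score near 0 dB in the sub-bands it misses, which is exactly the failure this metric is built to expose.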


Verification Under Low SNR (Don’t Validate Phantom Detections)

Low SNR breaks naive detection, which breaks verification.

If you verify a filter designed from phantom peaks, you will “prove” nonsense.

Root causes include estimator variance and single-look detection. The fix is to require evidence that a detected tone persists across time and estimator settings before verifying against it.
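
One such persistence check can be sketched with an STFT: accept a candidate only if it stands above a robust per-segment floor in most segments. SciPy is assumed, and the margin, fraction, and example SNR are illustrative choices.

```python
import numpy as np
from scipy import signal

def peak_is_persistent(x, fs, f_candidate, min_fraction=0.6,
                       nperseg=1024, margin_db=6.0):
    """Reject phantom peaks: require the candidate bin to sit `margin_db`
    above the segment's median PSD in at least `min_fraction` of segments."""
    f, t, s = signal.stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(s) ** 2
    bin_idx = np.argmin(np.abs(f - f_candidate))
    floor = np.median(power, axis=0)          # per-segment robust floor
    above = 10.0 * np.log10(power[bin_idx] / floor) >= margin_db
    return bool(np.mean(above) >= min_fraction)

# Example: a weak but real 80 Hz tone persists; a noise-only candidate does not.
fs = 1000.0
t = np.arange(int(fs * 8)) / fs
rng = np.random.default_rng(3)
noise = rng.standard_normal(t.size)
x = 0.5 * np.sin(2 * np.pi * 80.0 * t) + noise
print(peak_is_persistent(x, fs, 80.0))
print(peak_is_persistent(noise, fs, 200.0))
```

A single PSD bin can exceed any fixed threshold by chance; demanding persistence across independent segments is what separates interference from estimator variance.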


Verification Acceptance Checklist (Copy/Paste for Engineering Use)

Use this as a minimal acceptance protocol:

  1. Inputs are valid

    • sampling rate correct
    • no clipping/DC issues beyond defined limits
    • analysis windowing parameters fixed (for repeatability)
  2. Interference detection is evidence-backed

    • candidates from PSD
    • validated by STFT/presence
    • drift envelope estimated if needed
  3. Synthesis is constraint-compliant

    • bounded Q / stability margins
    • complexity limits respected
    • protected bands preserved by design
  4. Metrics prove the intended outcome

    • suppression metric meets target (dB)
    • integrity metric within allowed distortion
    • stability checks pass (embedded/quantized if needed)
  5. Results are regression-stable

    • reruns produce consistent classification
    • coefficient output stable within tolerance
    • no “random pass/fail” behavior
  6. Artifacts are auditable

    • metrics are traceable to inputs and settings
    • outputs are reproducible
    • evidence plots support (but do not replace) metric decisions
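
Item 6 can be sketched as a minimal auditable artifact: tie the metrics to a hash of the input and to the exact settings used, so identical runs provably produce identical records. The record layout and field names here are illustrative assumptions.

```python
import hashlib
import json

import numpy as np

def audit_record(x, settings, metrics):
    """Serialize metrics together with an input hash and the analysis settings."""
    input_hash = hashlib.sha256(np.ascontiguousarray(x).tobytes()).hexdigest()
    return json.dumps(
        {"input_sha256": input_hash, "settings": settings, "metrics": metrics},
        sort_keys=True,
    )

# Example: two runs on the same input with the same settings yield one artifact.
x = np.arange(8, dtype=np.float64)
settings = {"fs": 1000.0, "nperseg": 1024}
rec1 = audit_record(x, settings, {"suppression_db": 31.2})
rec2 = audit_record(x, settings, {"suppression_db": 31.2})
print(rec1 == rec2)
```

With `sort_keys=True` the serialization is deterministic, so the artifact itself can be diffed in regression testing.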

Series Map — Verification Pillar and Supporting Articles


Conclusion

Engineering verification is the difference between:

  • “a filter that looks good”
  • and “a filter that can be shipped”

A verification-first workflow:

  • measures suppression and integrity robustly
  • models drift and low-SNR uncertainty
  • enforces constraints that prevent fragile designs
  • defines pass/fail criteria that survive regression
  • produces auditable evidence for review

That is how DSP becomes reliable engineering instead of repeated tuning.