Introduction
Most DSP failures in production are not caused by “bad math”.
They are caused by unverified assumptions.
Engineers approve a design because:
- “the spectrum looks cleaner”
- “the notch looks deep”
- “the plot seems fine”
But visual plots are not verification evidence.
This pillar defines a verification-first approach:
- define what must be proven
- measure metrics robustly under noise and drift
- define pass/fail criteria that survive regression
- reject designs that look good but fail numerically or statistically
Why Visual Spectra Are Not Verification
Spectra lie in noisy environments because:
- estimator variance creates phantom peaks
- averaging can hide intermittent interference
- leakage reshapes peak magnitude and width
- different parameter choices produce different conclusions (see the sketch below)
A focused explanation is here:
Why Visual Spectra Lie in Noisy Environments
The engineering takeaway:
Verification must be defined in metrics, not in plots.
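
To make the parameter-dependence point concrete, here is a minimal sketch (the 440 Hz tone, the amplitude, and the `nperseg` values are illustrative choices, not recommendations) showing how two reasonable-looking Welch settings report different peak-to-floor numbers for the same signal:

```python
import numpy as np
from scipy.signal import welch

# Illustrative signal: a weak 440 Hz tone buried in white noise.
fs = 8000.0
rng = np.random.default_rng(0)
t = np.arange(int(2 * fs)) / fs
x = 0.1 * np.sin(2 * np.pi * 440.0 * t) + rng.standard_normal(t.size)

# Two reasonable-looking analysis choices applied to the same signal.
for nperseg in (256, 4096):
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    peak_db = 10 * np.log10(pxx.max())
    floor_db = 10 * np.log10(np.median(pxx))
    print(f"nperseg={nperseg}: apparent peak-to-floor = {peak_db - floor_db:.1f} dB")
```

Neither run is "wrong"; both are estimates. That is why acceptance criteria must fix the analysis settings and be stated as metrics rather than read off a plot.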
The Four Classes of Verification Metrics
A production-grade verification set must cover:
1) Suppression Metrics
Did we remove the interference?
Examples:
- tonal suppression at a target frequency band (dB)
- stopband attenuation for a designed region
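
A minimal sketch of the first example, assuming Welch-based band power (the helper name `tonal_suppression_db` is mine, not from any library):

```python
import numpy as np
from scipy.signal import welch

def tonal_suppression_db(x_in, x_out, fs, band, nperseg=4096):
    """Suppression (dB) of integrated band power from input to output.

    `band` is (f_lo, f_hi) in Hz around the interference.
    """
    f, p_in = welch(x_in, fs=fs, nperseg=nperseg)
    _, p_out = welch(x_out, fs=fs, nperseg=nperseg)
    sel = (f >= band[0]) & (f <= band[1])
    # Positive result means the filter removed power in the band.
    return 10 * np.log10(np.sum(p_in[sel]) / np.sum(p_out[sel]))
```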
2) Integrity Metrics
Did we preserve what must be preserved?
Examples:
- protected band ripple
- passband distortion
- main-tone protection (if relevant)
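
A matching sketch for protected-band ripple, assuming an IIR design given as `b`/`a` coefficients (the helper name and defaults are illustrative):

```python
import numpy as np
from scipy.signal import freqz

def protected_band_ripple_db(b, a, fs, band, n=8192):
    """Peak-to-peak magnitude deviation (dB) inside a protected band."""
    w, h = freqz(b, a, worN=n, fs=fs)   # frequency response, w in Hz
    sel = (w >= band[0]) & (w <= band[1])
    mag_db = 20 * np.log10(np.abs(h[sel]))
    return mag_db.max() - mag_db.min()  # ripple across the band
```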
3) Stability Metrics
Will this remain stable after deployment?
Examples:
- impulse response decay behavior
- coefficient sanity margins
- sensitivity to coefficient quantization (embedded targets)
Stability foundations are here:
Fixed-Point DSP Filter Stability
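
A minimal stability sketch, assuming a direct-form IIR with denominator `a` (the 0.05 margin is an illustrative placeholder, not a universal threshold):

```python
import numpy as np

def pole_margin(a, min_margin=0.05):
    """Distance of the largest-magnitude pole from the unit circle.

    A margin near zero means the filter is only marginally stable and
    may become unstable once coefficients are quantized.
    """
    poles = np.roots(a)                 # denominator roots are the poles
    margin = 1.0 - np.abs(poles).max()
    return margin, margin >= min_margin
```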
4) Repeatability / Regression Metrics
Does the pipeline remain deterministic?
Examples:
- rerun variance bounds
- stable classification decisions
- consistent coefficient generation
Deterministic pipeline structure is here:
Designing DSP Pipelines for Deterministic Outputs
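
A minimal regression sketch, assuming a hypothetical `design_fn(x, fs)` pipeline stage that returns coefficients:

```python
import numpy as np

def assert_rerun_stable(design_fn, x, fs, runs=3, tol=1e-12):
    """Re-run a coefficient-design stage and bound the variation.

    A deterministic pipeline should reproduce its output bit-for-bit,
    or within `tol` if it is sensitive to floating-point ordering.
    """
    ref = np.asarray(design_fn(x, fs))
    for _ in range(runs - 1):
        out = np.asarray(design_fn(x, fs))
        assert np.allclose(out, ref, atol=tol), \
            "coefficient output drifted between reruns"
```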
Robust Noise Floor Measurement (Percentiles > Means)
Many verification failures come from a single mistake: using mean-based noise estimates on non-Gaussian, transient-rich signals.
Robust practice:
- estimate noise floor using median / percentiles
- estimate signal level using upper percentiles
- ignore spikes that should not define noise
A complete guide is here:
Measuring Noise Floors Robustly Using Percentile Statistics
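
A minimal percentile sketch (P50/P95 are illustrative choices, not fixed recommendations):

```python
import numpy as np
from scipy.signal import welch

def robust_levels(x, fs, nperseg=4096):
    """Percentile-based floor and signal levels from a Welch PSD.

    The median tracks the noise floor; an upper percentile tracks tonal
    content. Neither lets isolated spikes define the estimate.
    """
    _, pxx = welch(x, fs=fs, nperseg=nperseg)
    pxx_db = 10 * np.log10(pxx)
    floor_db = np.percentile(pxx_db, 50)    # robust noise floor
    signal_db = np.percentile(pxx_db, 95)   # robust "signal present" level
    return floor_db, signal_db
```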
Pass/Fail Criteria That Survive Real Signals
A useful pass/fail rule must be:
- measurable
- robust under estimator variance
- stable across reruns
- defensible in review
Bad rules:
- “largest peak must drop”
- “spectrum looks smooth”
- “average SNR improved”
Better rules:
- tonal suppression ≥ X dB at target band
- protected band ripple ≤ Y dB
- no new tonal artifacts above threshold
- verification computed from robust statistics
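
A sketch of the better rules as executable checks; the threshold values and `metrics` keys are illustrative placeholders:

```python
def verify_filter(metrics, min_suppression_db=20.0, max_ripple_db=0.5,
                  max_new_tone_db=6.0):
    """Evaluate the better rules above as explicit pass/fail checks.

    `metrics` holds robust measurements computed elsewhere, e.g. by the
    suppression and ripple helpers sketched earlier.
    """
    checks = {
        "suppression": metrics["suppression_db"] >= min_suppression_db,
        "integrity": metrics["protected_ripple_db"] <= max_ripple_db,
        # Worst tone introduced by the filter, in dB above the floor.
        "no_new_tones": metrics["worst_new_tone_db"] < max_new_tone_db,
    }
    return all(checks.values()), checks
```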
Verification Under Drift (Don’t Validate Against Stationary Assumptions)
Drift breaks naive verification:
- suppression at a single bin is meaningless
- the “tone” moves across frequency
- a narrow notch can miss the real interference
A drift-aware verification approach validates across a drift envelope.
Drift-aware suppression architecture is here:
Filter Drifting Tonal Noise in DSP Systems
Drift tracking specifics are here:
How Drift Tracking Improves Notch Filter Robustness
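
One way to sketch envelope validation: segment the recording in time and require suppression over the whole drift envelope in every segment, so a moving tone cannot escape the check. Here `suppression_fn` stands in for the band-suppression helper sketched earlier; names and thresholds are illustrative:

```python
import numpy as np

def verify_over_drift(suppression_fn, x_in, x_out, fs,
                      f_center, drift_hz, min_db=20.0, n_segments=8):
    """Require suppression over the full drift envelope in every
    time segment, so a moving tone cannot escape the check."""
    band = (f_center - drift_hz, f_center + drift_hz)
    seg_len = len(x_in) // n_segments
    for k in range(n_segments):
        sl = slice(k * seg_len, (k + 1) * seg_len)
        # Analysis settings inside suppression_fn must fit seg_len.
        if suppression_fn(x_in[sl], x_out[sl], fs, band) < min_db:
            return False  # the notch lost the tone somewhere in time
    return True
```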
Verification Under Low SNR (Don’t Validate Phantom Detections)
Low SNR breaks naive detection, which breaks verification.
If you verify a filter designed from phantom peaks, you will “prove” nonsense.
Root causes and fixes:
- PSD peak instability at low SNR: Why PSD Peak Detection Fails in Low SNR Signals
- STFT evidence validation: How STFT Cross-Validation Improves Low-SNR Tone Detection
- presence-based classification: How Presence Metrics Prevent False Tonal Detection
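
A minimal presence-style sketch, assuming STFT persistence as the evidence criterion (the tolerance, segment size, and margin are illustrative):

```python
import numpy as np
from scipy.signal import stft

def tone_presence_ratio(x, fs, f0, tol_hz=10.0, nperseg=1024, margin_db=6.0):
    """Fraction of STFT frames where a candidate tone stands above the
    frame's median level. Real tones persist; phantom peaks do not."""
    f, _, z = stft(x, fs=fs, nperseg=nperseg)
    mag_db = 20 * np.log10(np.abs(z) + 1e-12)
    sel = (f >= f0 - tol_hz) & (f <= f0 + tol_hz)
    peak = mag_db[sel].max(axis=0)             # candidate level per frame
    floor = np.median(mag_db, axis=0)          # robust per-frame floor
    return np.mean(peak > floor + margin_db)   # presence ratio in [0, 1]
```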
Verification Acceptance Checklist (Copy/Paste for Engineering Use)
Use this as a minimal acceptance protocol:
Inputs are valid
- sampling rate correct
- no clipping/DC issues beyond defined limits
- analysis windowing parameters fixed (for repeatability)
Interference detection is evidence-backed
- candidates from PSD
- validated by STFT/presence
- drift envelope estimated if needed
Synthesis is constraint-compliant
- bounded Q / stability margins
- complexity limits respected
- protected bands preserved by design
Metrics prove the intended outcome
- suppression metric meets target (dB)
- integrity metric within allowed distortion
- stability checks pass (embedded/quantized if needed)
Results are regression-stable
- reruns produce consistent classification
- coefficient output stable within tolerance
- no “random pass/fail” behavior
Artifacts are auditable
- metrics are traceable to inputs and settings
- outputs are reproducible
- evidence plots support (but do not replace) metric decisions
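
The checklist reduces naturally to an auditable record; a minimal sketch, assuming the boolean results come from checks like those sketched above:

```python
def acceptance_report(checks):
    """Render checklist results as an auditable pass/fail record.

    `checks` maps a checklist item name to a boolean produced by the
    metric helpers; this report is what goes into review, not a plot.
    """
    lines = [f"[{'PASS' if ok else 'FAIL'}] {name}" for name, ok in checks.items()]
    lines.append(f"OVERALL: {'PASS' if all(checks.values()) else 'FAIL'}")
    return "\n".join(lines)
```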
Series Map — Verification Pillar and Supporting Articles
- Verification Pillar (this page): metrics, acceptance, regression stability
- Measuring Noise Floors Robustly Using Percentile Statistics
- Quantitative Verification: Proving Filter Performance in Noisy Systems
- Why Visual Spectra Lie in Noisy Environments
- Why Over-Optimization Breaks DSP Filters in Production
- Designing DSP Pipelines for Deterministic Outputs
Conclusion
Engineering verification is the difference between:
- “a filter that looks good”
- “a filter that can be shipped”
A verification-first workflow:
- measures suppression and integrity robustly
- models drift and low-SNR uncertainty
- enforces constraints that prevent fragile designs
- defines pass/fail criteria that survive regression
- produces auditable evidence for review
That is how DSP becomes reliable engineering instead of repeated tuning.