Deliverability tests
Deliverability tests let you validate that a set of inboxes or phone numbers receives the expected inbound traffic during a load test or campaign run.
They are designed for operational verification after you trigger sending from your own systems.
Why use this feature
Use deliverability tests when you need to answer questions such as:
- Did each selected inbox receive enough emails?
- Did each selected phone number receive enough SMS?
- Are messages arriving from the expected sender?
- Are subjects matching the expected pattern for campaign verification?
This gives your team a deterministic pass/fail view across many entities, instead of manually checking individual inboxes and phones.
How it works
The feature follows a simple lifecycle:
- Create a test with a start time, scope, selector, and expectations.
- Start the test (or let it start from its schedule).
- Run your own traffic generator or production-like send process.
- Poll test status and results while messages arrive.
- Review which entities matched or did not match expectations.
- Stop, pause, or let the test complete automatically.
Example: create an inbox-scoped test
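A minimal sketch of assembling a create-test request body. The field names (`scope`, `selector`, `expectations`, `startAt`, `timeoutMinutes`) and enum values are illustrative assumptions, not a documented schema; map them to your API's actual contract.

```python
from datetime import datetime, timedelta, timezone

def build_inbox_test_config(inbox_ids, min_count, sender_pattern=None,
                            subject_pattern=None, start_in_minutes=5,
                            timeout_minutes=60):
    """Assemble a create-test payload: scope, selector, expectations, schedule.

    All field names here are hypothetical placeholders for illustration.
    """
    start_at = datetime.now(timezone.utc) + timedelta(minutes=start_in_minutes)
    return {
        "scope": "INBOX",                      # one scope per test: INBOX or PHONE
        "selector": {
            "type": "EXPLICIT",                # explicit IDs keep the blast radius small
            "inboxIds": list(inbox_ids),
        },
        "expectations": {
            "minMessageCount": min_count,
            "senderPattern": sender_pattern,   # optional from-address match
            "subjectPattern": subject_pattern, # optional subject match
        },
        "startAt": start_at.isoformat(),
        "timeoutMinutes": timeout_minutes,
    }
```

Building the payload separately from sending it makes the configuration easy to review and unit-test before a run.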
Example: start a test run
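A sketch of starting a run over HTTP. The base URL, path, and `x-api-key` header are assumptions for illustration; the `opener` parameter is injectable so the call can be exercised without a live backend.

```python
import json
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical endpoint

def start_test(test_id, api_key, opener=urllib.request.urlopen):
    """POST to a hypothetical start endpoint and return the parsed response.

    `opener` defaults to urllib but can be swapped for a stub in tests.
    """
    req = urllib.request.Request(
        f"{API_BASE}/deliverability-tests/{test_id}/start",
        data=b"",
        headers={"x-api-key": api_key},
        method="POST",
    )
    with opener(req) as resp:
        return json.loads(resp.read())
```

If the test has a schedule, you can skip this call entirely and let it start at its configured time.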
Scope options
You can run a test in one scope at a time:
- Inbox scope: evaluates inbound email delivery per inbox.
- Phone scope: evaluates inbound SMS delivery per phone number.
This keeps evaluation logic predictable and allows focused reporting.
Selector options
Selectors define which entities are included in the run:
- All entities in scope (all inboxes or all phones in the account/team context).
- Pattern-matched entities (for example, address or number patterns).
- Explicitly selected entities (specific inbox IDs or phone IDs).
Use pattern or explicit selection when you need tight control over test blast radius.
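The three selector types above can be sketched as one resolution function. The selector shape (`type`, `pattern`, `ids`) is a hypothetical structure for illustration; glob-style matching via `fnmatch` is one reasonable pattern semantics.

```python
import fnmatch

def select_entities(addresses, selector):
    """Resolve a selector to the concrete entities included in a run."""
    kind = selector["type"]
    if kind == "ALL":
        return list(addresses)
    if kind == "PATTERN":                      # e.g. "load-*@test.example.com"
        return [a for a in addresses if fnmatch.fnmatch(a, selector["pattern"])]
    if kind == "EXPLICIT":
        wanted = set(selector["ids"])
        return [a for a in addresses if a in wanted]
    raise ValueError(f"unknown selector type: {kind}")
```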
Expectations
Each test can define one or more expectations per entity. Typical expectations include:
- Minimum message count received.
- Optional sender match (for example a specific from address).
- Optional recipient match.
- Optional subject match.
Deliverability tests are optimized for metadata checks (counts, sender, recipient, subject) rather than deep body parsing.
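A sketch of how those metadata checks might be evaluated for one entity, assuming messages and expectations are plain dicts with the illustrative keys shown below. A message counts toward the minimum only if every configured filter passes.

```python
import re

def evaluate_entity(messages, expectations):
    """Check one entity's received messages against metadata expectations.

    Keys (`sender`, `recipient`, `subject_regex`, `min_count`) are
    hypothetical names for this sketch; unset filters are skipped.
    """
    matching = [
        m for m in messages
        if (expectations.get("sender") is None
            or m["sender"] == expectations["sender"])
        and (expectations.get("recipient") is None
             or m["recipient"] == expectations["recipient"])
        and (expectations.get("subject_regex") is None
             or re.search(expectations["subject_regex"], m["subject"]))
    ]
    return {
        "matched": len(matching) >= expectations.get("min_count", 1),
        "count": len(matching),
    }
```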
Progress and status
During execution, status and progress are tracked continuously:
- Completion percentage across selected entities.
- Matched vs unmatched entity counts.
- Current run state (scheduled, running, paused, completed, failed, stopped).
Result polling is designed for safe refresh behavior and may use short-lived caching windows to avoid excessive backend load.
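The progress fields above can be derived from entity-level results. A minimal sketch, assuming each entity result carries a boolean `matched` flag and completion is the share of entities that have matched so far:

```python
def summarize_progress(entity_results, state):
    """Roll per-entity results up into the run-level progress view."""
    matched = sum(1 for r in entity_results if r["matched"])
    total = len(entity_results)
    return {
        "state": state,                 # scheduled/running/paused/completed/failed/stopped
        "matched": matched,
        "unmatched": total - matched,
        "percentComplete": round(100 * matched / total, 1) if total else 0.0,
    }
```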
Time limits and completion behavior
Tests can define an optional maximum duration:
- If all expectations are met before timeout, the test completes successfully.
- If timeout is reached with unmet expectations, the run is marked failed.
- Polling after timeout reflects terminal status and final results.
Example: check for completion
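A polling-loop sketch that stops on a terminal state or when the client-side wait budget runs out. The state names mirror the run states listed above; `fetch_status` stands in for whatever status call your API provides, and the interval should stay coarse enough to respect any server-side caching window.

```python
import time

TERMINAL_STATES = {"COMPLETED", "FAILED", "STOPPED"}

def poll_until_done(fetch_status, interval_s=5.0, max_wait_s=3600, sleep=time.sleep):
    """Poll a status callable until a terminal state or the wait budget is spent.

    `sleep` is injectable so the loop can be tested without real delays.
    """
    waited = 0.0
    while True:
        status = fetch_status()
        if status["state"] in TERMINAL_STATES:
            return status
        if waited >= max_wait_s:
            return status  # caller sees the last non-terminal snapshot
        sleep(interval_s)
        waited += interval_s
```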
Reviewing results
Result views typically include:
- Entity-level pass/fail status.
- Expectation-level details (which check passed or failed).
- Filtering by matched/unmatched entities for triage.
This makes it easy to identify exactly which inboxes or phone numbers did not meet requirements.
Example: fetch test results
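A sketch of the unmatched-only triage view described above, assuming a hypothetical results shape where each entity carries a list of named checks with pass/fail flags:

```python
def unmatched_entities(results):
    """Return entity-level failures together with the checks that failed."""
    failures = []
    for entity in results["entities"]:
        failed = [c["name"] for c in entity["checks"] if not c["passed"]]
        if failed:
            failures.append({"id": entity["id"], "failedChecks": failed})
    return failures
```

Filtering down to failures with their failing checks points you straight at the inboxes or phone numbers (and expectations) that need attention.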
Recommended workflow
- Define narrow selectors first (small canary cohort).
- Validate expectations and sender filters.
- Scale selector scope to broader load cohorts.
- Use unmatched-only filtering to debug delivery gaps quickly.