Simulations
The Test Center in Synthflow.ai helps you safely test and refine your voice agents. Instead of waiting for real customer calls to uncover mistakes, you can simulate conversations, check performance against measurable goals, and improve agents with confidence.
💡 Pricing: Simulations consume the same minutes included in your workspace plan; each run counts against the allowance defined by your current pricing tier.
Why Run Simulations?
Simulations let you:
- Confirm that your agent follows the right flow (greetings, script compliance, resolution).
- Catch weak points such as repetitive phrases, incorrect answers, or interruptions.
- Measure performance on real-world goals like issue resolution or user satisfaction.
- Track improvements over time by re-running the same scenarios after making changes.
Think of simulations as your practice ground before real customers get involved.
Step 1: Create Test Cases
To run a simulation, you first need a test case — a scenario that defines what to test and what counts as success.
You can build test cases in two ways:
- Custom Test Case: Write your own scenario. Give it a name, describe the situation in natural language, and set success criteria (e.g., “Agent confirms booking”). You can choose whether all criteria must be met or whether meeting any one is enough.
- Generated Test Case: Automatically create 5–10 scenarios from a target agent’s prompts. You review each suggested case, approve the useful ones, and save them.
⚠️ Note: Generated cases are not yet supported for advanced flow designer agents.
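Conceptually, a test case is just a name, a scenario prompt, success criteria, and a rule for combining them (all vs. any). The sketch below models that structure in Python; the class and field names are illustrative assumptions, not Synthflow's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """Hypothetical model of a test case; field names are assumptions."""
    name: str
    scenario: str                # natural-language description of the situation
    success_criteria: List[str]  # e.g. ["Agent confirms booking"]
    require_all: bool = True     # True: all criteria must pass; False: any one suffices

    def is_successful(self, results: List[bool]) -> bool:
        """Combine per-criterion pass/fail results into an overall verdict."""
        return all(results) if self.require_all else any(results)

case = TestCase(
    name="Booking confirmation",
    scenario="Caller asks to book a dental cleaning for next Tuesday at 10am.",
    success_criteria=["Agent confirms booking", "Agent states date and time"],
    require_all=True,
)
print(case.is_successful([True, True]))   # → True
print(case.is_successful([True, False]))  # → False
```

With `require_all=False`, the second call would return `True`, mirroring the "meeting any one is enough" option in the UI.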
Step 2: Run Simulations
Once you have test cases, you’re ready to simulate:
- Open Run Tests.
- Select your target agent (the assistant you want to test).
- Choose up to 50 test cases. Agent-based cases are auto-selected, but you can add or remove custom test cases as well.
- Fill in custom variables (optional).
- Click Run Tests.
At this point, Synthflow:
- Creates a session — a batch of simulations run together.
- Pairs your agent with a persona agent (a simulated customer, created automatically in the same language).
- Runs each test case as a simulation call.
- Records audio, generates transcripts, and evaluates results against your criteria.
You’ll see live progress as runs complete.
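The create-session-then-watch-progress flow above could be automated along these lines. This is a minimal sketch, assuming a hypothetical `client` object with `create_session` and `get_session` methods; those names and the response shapes are illustrative, not Synthflow's actual API.

```python
import time

def run_session(client, target_agent_id, test_case_ids, variables=None, poll_interval=5):
    """Start a batch of simulations and poll until every run finishes.

    `client` is assumed to expose create_session/get_session; these are
    stand-in names for whatever interface you use, not a documented API.
    """
    if len(test_case_ids) > 50:
        raise ValueError("A session supports at most 50 test cases")
    session = client.create_session(
        agent_id=target_agent_id,
        test_case_ids=test_case_ids,
        variables=variables or {},  # custom variables are optional
    )
    while True:
        status = client.get_session(session["id"])["status"]
        if status in ("Completed", "Failed"):
            return status
        time.sleep(poll_interval)  # sessions auto-refresh until all runs are done
```

The loop mirrors what the Run History page does for you: it keeps checking the session until every simulation has completed or failed.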
Step 3: Review Results
After simulations finish, you can drill into results:
Session Details
Within a session, each simulation shows:
- Transcript of the conversation
- Audio recording of the call (play back to hear both sides)
- Success criteria analysis (pass/fail reasoning for each criterion)
- Status (completed, failed, in progress)
You can see the list of test sessions on the Run History page:
Run History
A list of sessions showing:
- Agent tested
- Date/time
- Number of cases
- Status (In Progress, Completed, Failed)
You can filter by agent and open any session.
Sessions auto-refresh until all runs are done.
Understanding Results
Simulations end with one of three statuses: In Progress, Completed, or Failed.
A common error when starting a simulation is “Insufficient credits | Workspace doesn’t have enough minutes”, which means you need to purchase credits before running tests.
In the rare case that you see a server error on the run details page, retry the simulation later.
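Since server errors like this are transient, a simple retry with exponential backoff is usually enough. The sketch below is a generic pattern, assuming `start_fn` is whatever callable kicks off your simulation and raises on a transient failure; it is not Synthflow-specific code.

```python
import time

def retry_simulation(start_fn, attempts=3, base_delay=2.0):
    """Retry a flaky simulation start with exponential backoff.

    `start_fn` is any callable that raises RuntimeError (used here as a
    stand-in for a transient server error) when the start fails.
    """
    for attempt in range(attempts):
        try:
            return start_fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```

If the error persists across several attempts, it is likely not transient and worth reporting rather than retrying further.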
Best Practices
- Keep test cases focused: Verify one scenario per case.
- Define clear success criteria: Prefer “Agent confirms appointment” over “Agent handled call well.”
- Use stable variable values: Keep runs reproducible.
- Maintain a regression suite: Core scenarios should be re-run after every agent update.
- Listen to recordings: Audio reveals tone, interruptions, and naturalness better than transcripts alone.
Glossary (Quick Reference)
- Test Case: Scenario to test your agent (name + prompt + success criteria).
- Generated Test Case: Case auto-created from an agent’s configuration.
- Simulation: Running one test case against your target agent. Produces transcript, recording, and evaluation.
- Session: A batch of simulations run together.
- Target Agent: The agent being tested (must be INBOUND or OUTBOUND).
- Persona Agent: Simulated customer created automatically to interact with your agent.
- Success Criteria: Rules that define success (e.g., “Agent confirms booking”).
- Recording: Audio playback of the simulated call.