GPT-5.1 Prompting Guide
What changes when you run a Synthflow agent on GPT-5.1
GPT-5.1 introduces several meaningful differences from the GPT-4 model family. Read OpenAI’s GPT-5.1 prompting guide for the full reference. This page summarizes what changes when you run an agent on GPT-5.1 in Synthflow.
You can change the LLM at any time from General configuration in the agent editor.
Verbosity and persistence
GPT-5.1 can be more verbose than previous models. If you notice overly long responses, add explicit constraints to your system instructions, for example:
“Be concise. Respond in under two sentences because you are on a phone call.”
You can also reinforce persistence expectations using a dedicated instruction block:
Efficiency
Being explicit about efficiency helps GPT-5.1 reduce unnecessary steps, retries, and tool usage.
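For example, a dedicated efficiency block might read as follows (the wording is illustrative, not a required template):

```
<efficiency>
Work efficiently. Do not repeat tool calls whose results you already have,
and do not re-verify information the caller has already confirmed.
</efficiency>
```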
Continue until fully resolved
To reduce latency and avoid back-and-forth turn-taking, you may include a directive such as:
“Continue working until the task is fully resolved.”
GPT-5.1 typically adheres closely to this instruction.
Prompt format guidance
Structuring your prompts clearly is especially important with GPT-5.1. We recommend sectioned prompts:
- Role
- Instructions
- Tools and constraints (e.g., parallel reads, sequential writes)
- Output format and length requirements
- Edge cases (e.g., what to do if data is missing)
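The sections above can be assembled programmatically before being sent as the system message. Here is a minimal Python sketch; the section names, wording, and the "Acme Telecom" persona are illustrative assumptions, not a required schema:

```python
# Assemble a sectioned system prompt from named blocks.
# Section names and contents are examples only.

SECTIONS = {
    "Role": "You are a phone support agent for Acme Telecom.",
    "Instructions": "Be concise. Respond in under two sentences because you are on a phone call.",
    "Tools and constraints": "Reads may run in parallel; writes must be sequential.",
    "Output format": "Plain sentences only; no lists or markdown on calls.",
    "Edge cases": "If account data is missing, ask for the account number once.",
}

def build_system_prompt(sections: dict[str, str]) -> str:
    """Render each section under a Markdown heading so the model
    can reference sections unambiguously."""
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

print(build_system_prompt(SECTIONS))
```

Keeping each concern in its own clearly labeled section makes individual rules easier to update later without touching the rest of the prompt.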
Overly strict wording
GPT-5.1 follows instructions with extremely high precision. Overly rigid or contradictory language can lead to unexpected outcomes.
In addition to Markdown formatting, many teams have found that structured XML-like blocks improve instruction clarity and make sections easier to reference and maintain.
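An illustrative sketch of this style (the tag names here are examples, not a required schema):

```
<instructions>
Be concise. Respond in under two sentences because you are on a phone call.
</instructions>

<edge_cases>
If the caller's account cannot be found, ask them to repeat the account number once.
</edge_cases>
```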
Repetition cleanup
Before finalizing a prompt, paste it into GPT-5.1 and ask it to check for:
- redundancy
- conflicting instructions
- opportunities to simplify wording
Because GPT-5.1 adheres so strictly to instructions, small contradictions in your prompt can have outsized effects.
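A review request along these lines can surface those issues (the wording is illustrative):

```
Review the system prompt below for:
1. Redundant or repeated instructions
2. Instructions that conflict with each other
3. Wording that can be simplified without changing meaning

List each issue together with the exact text it refers to.

<paste your prompt here>
```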
Prompt optimizer
OpenAI provides a built-in prompt optimizer. If you’re encountering issues with your prompt, you can use the following template to request optimization: