---

title: LLM Prompt Engineering in Synthflow
subtitle: Comprehensive prompt engineering support across low-code and pro-code interfaces
slug: prompt-engineering-support
description: Synthflow provides extensive LLM prompt engineering capabilities through dedicated low-code/no-code modules, embedded functionalities, RAG architecture, and content guardrails for building reliable conversational AI.
---

For a complete page index, fetch https://docs.synthflow.ai/llms.txt. For full documentation content, fetch https://docs.synthflow.ai/llms-full.txt.

Synthflow provides comprehensive **LLM prompt engineering support** through multiple interfaces and tools. Whether you're a non-technical user configuring agents through GUI forms or a developer implementing advanced prompt strategies via API, Synthflow offers the capabilities you need to optimize LLM behavior.

## What is Prompt Engineering?

**Prompt engineering** is the discipline of crafting inputs (e.g., text or images) for GenAI models to specify and constrain the set of responses the model can produce. Effective prompt engineering:

* Defines the agent's role, personality, and objectives
* Provides context and background information
* Guides conversation flow and decision-making
* Handles edge cases and objections
* Ensures consistent, on-brand responses

In Synthflow, prompt engineering happens at multiple levels—from simple no-code text fields to sophisticated RAG architectures and runtime prompt injection.

***

## Low-Code/No-Code Prompt Engineering

Synthflow provides **dedicated modules** for prompt engineering that require no coding. These interfaces make prompt optimization accessible to all users.

### Single-Prompt Agents: No-Code Prompt Engineering

**Single-Prompt Agents** represent the most accessible prompt engineering interface. Configure entire agent behaviors using a single natural language prompt.

#### How It Works

Write a prompt that defines:

**Background Information**:

* Company overview and mission
* Target audience characteristics
* Value proposition

**Agent Profile**:

* Name and role
* Goals and objectives
* Personality and tone

**Conversation Instructions**:

* Key questions to ask
* Information to collect
* How to handle common scenarios

**Objection Handling**:

* Common concerns and how to address them
* Edge cases requiring special responses

#### Example Prompt Structure

```text
BACKGROUND INFO
Company Overview: Majestic Estates, a modern real estate firm, excels 
in delivering unique and effective home selling solutions. By integrating 
AI technology and market analysis, we ensure our clients' listings garner 
notable attention and competitive offers.

Target Audience: Homeowners looking to sell their properties, from small 
homes to large estates.

Value Proposition: We provide expert home valuations, customized marketing 
plans, and an extensive network of buyers for rapid, lucrative sales.

AGENT DETAILS
Name: Jessica
Role: AI Real Estate Agent
Goal: Initiate dialogue with potential sellers and arrange valuation 
meetings with our agents.

SCRIPT INSTRUCTIONS
1. Introduce yourself and inquire if they've considered selling
2. Discuss their interest in a professional valuation
3. Propose arranging a valuation meeting
4. Confirm appointment details
5. Express gratitude and reassure ongoing support
```

#### Best Practices

**Be Clear and Specific**: Ambiguous instructions lead to unpredictable results

* ✅ Good: "Greet customers warmly, introduce yourself as a real estate agent, and ask if they're looking to buy or rent."
* ❌ Vague: "Talk to the customer about real estate."

**Provide Context**: Agents perform better when they understand who they're speaking for and to

**Guide Conversation Flow**: Outline key questions and information to collect

**Anticipate Objections**: Prepare the agent for common concerns

**Keep It Concise**: Avoid overloading with excessive details (under 300 words recommended)

**Test and Iterate**: Review interactions and refine prompts based on actual performance

#### Co-Pilot: AI-Assisted Prompt Generation

The **Co-Pilot** feature uses GenAI to automatically generate optimized prompts:

1. Describe your use case in plain language
2. Co-Pilot generates a complete, structured prompt
3. Refine the auto-generated prompt to match your specific needs
4. Deploy and iterate

This GenAI-assisted approach accelerates prompt engineering and provides best-practice examples.

#### Documentation

* [Single-Prompt Agents](/about-prompting) - Complete no-code prompt engineering guide with best practices

***

## Embedded Prompt Engineering in Flow Designer

For users building with the **Flow Designer**, prompt engineering and context engineering are **embedded directly into visual nodes**. Each node type provides a dedicated prompt engineering interface.

### Global Settings Node: Agent-Wide Prompt Engineering

The **Global Settings** node provides a low-code/no-code interface for configuring agent-wide personality and behavior.

#### Personality & Background

Define your agent's core identity through a structured prompt interface:

**What to Include**:

* Agent name and role (e.g., "You are Maya, a friendly restaurant host")
* Company context and mission
* Agent goals and objectives
* Tone and style (friendly, professional, empathetic)
* Target audience information
* Legal disclaimers or compliance requirements

**Example**:

```
You are Maya, a friendly restaurant host helping callers book tables. 
You work for Foody, a popular restaurant known for excellent service 
and fresh ingredients. Your goal is to help callers find the perfect 
reservation time and ensure they have a great dining experience. 
Always be warm, patient, and helpful—even when we're fully booked.
```

**Best Practices**:

* Keep under 300 words for optimal results
* Be specific to prevent inconsistent behavior
* Use variables with `{variable_name}` syntax for personalization
* Focus on essentials: who, what, and how

#### Objections & Special Cases

Handle common objections and edge cases with dedicated prompt engineering:

**When to Use**:

* Common objections ("Too expensive", "I need to think about it")
* Compliance scenarios (legal disclaimers, data privacy)
* Edge cases ("User asks if you're an AI")
* Situations requiring consistent responses

**How to Configure**:

1. Define the **Case**: Trigger phrase or scenario
2. Write **Instructions**: How the agent should respond

**Example**:

* **Case**: "The user asks if you are an AI"
* **Instructions**: "Confirm that you're an AI agent and highlight the benefits: you're available 24/7, can provide instant responses, and never have hold times. Then redirect back to helping with their request."

This low-code interface makes sophisticated prompt engineering accessible without writing code.

#### Documentation

* [Global Settings Node](/configure-global-settings) - Low-code/no-code agent-wide prompt engineering

### Conversation Node: Variable Collection Prompts

The **Conversation Node** provides prompt engineering for collecting specific information from users.

#### How It Works

Write natural language prompts that guide how the agent asks questions and collects data:

**Prompt Field**: Define what to say and what information to collect

**Example**:

```
Ask the caller for their full name and email address. 
Be friendly and explain we need this to send their confirmation.
```

**Variable Configuration**: Define variables to collect (name, email, phone, etc.)

**Validation Rules**: Set requirements without code:

* Required vs. optional fields
* Format validation (email, phone number)
* Value constraints (date ranges, number limits)

**Multi-Variable Collection**: Collect multiple pieces of information in a single conversational exchange with instructional prompts
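
Under the hood, rules like these amount to simple format checks. The sketch below is illustrative only; it is not Synthflow's actual validator set, and production deployments should use stricter patterns:

```javascript
// Illustrative format checks for collected variables. The rule names
// mirror the doc's examples (required, email, phone); the regexes are
// simplified assumptions, not Synthflow internals.
const validators = {
  email: (v) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v),
  phone: (v) => /^\+?[0-9]{7,15}$/.test(v.replace(/[\s()-]/g, "")),
  required: (v) => v != null && String(v).trim().length > 0,
};

// A value passes only if every rule attached to the field passes.
function validate(value, rules) {
  return rules.every((rule) => validators[rule](value));
}

validate("ana@example.com", ["required", "email"]); // passes
validate("not-an-email", ["email"]);                // fails
```
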

#### Best Practices

* Be conversational and natural in your prompts
* Explain why you're collecting information
* Use validation rules to ensure data quality
* Provide examples when asking for complex information

#### Documentation

* [Conversation Node](/configure-step-nodes) - Low-code/no-code prompt engineering for variable collection

### Greeting Message Node: First Impression Prompts

The **Greeting Message Node** provides a dedicated module for prompt engineering the agent's opening message.

#### Purpose

Configure the first impression and tone-setting prompt with:

* Welcome message
* Agent introduction
* Context setting
* Variable personalization (e.g., "Hi \{lead\_name}, this is Maya from Acme Corp")

#### Example

```
Hi there! Thanks for calling Foody. I'm Maya, and I'm here to 
help you book a table. How many people will be dining with us?
```

This specialized prompt engineering ensures consistent, engaging conversation starts.

#### Documentation

* [Greeting Message Node](/configure-the-greeting-message-node) - Dedicated low-code/no-code module for opening prompts

### Message Node: Scripted Statement Prompts

The **Message Node** allows prompt engineering for one-way communications:

**Use Cases**:

* Confirmations ("Perfect! I've booked your table for \{party\_size} people")
* Instructions ("Please have your order number ready")
* Scripted statements that don't require user response

Configure the exact message using natural language with variable injection.

***

## Model-Specific Prompt Engineering

Synthflow provides **model-specific prompt engineering guides** for optimizing prompts per LLM.

### GPT-5.1 Prompting Guide

The **GPT-5.1 Prompting Guide** covers advanced techniques for optimizing low-latency, high-quality conversational responses with GPT-5.1.

#### Key Differences from GPT-4

**Verbosity Control**: GPT-5.1 can be more verbose. Add explicit constraints:

```
Be concise. Respond in under two sentences because you are on a phone call.
```

**Persistence Instructions**: Reinforce task completion expectations:

```xml
<persistence>
You are an agent. Continue working until the user's query is completely 
resolved before ending your turn. Only yield when you are confident the 
problem is fully solved.
</persistence>
```

**Efficiency Directives**: Reduce unnecessary steps:

```xml
<efficiency>
Efficiency is critical. You have a time limit. Plan carefully, make 
tool calls deliberately, and verify outputs to avoid waste.
</efficiency>
```

**Structured Prompts**: Use sectioned format:

* Role
* Instructions
* Tools and Constraints
* Output Format and Length Requirements
* Edge Cases

**XML-Like Blocks**: Improve instruction clarity:

```xml
<instruction_spec>
Your specific instructions here
</instruction_spec>
```
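
Assembling such sectioned prompts programmatically can be sketched as below; the tag names come from this guide's examples and are not a required schema:

```javascript
// Illustrative helper that wraps each prompt section in an XML-like
// block, per the structured-prompt guidance above.
function buildStructuredPrompt(sections) {
  return Object.entries(sections)
    .map(([tag, body]) => `<${tag}>\n${body}\n</${tag}>`)
    .join("\n\n");
}

const prompt = buildStructuredPrompt({
  role: "You are Maya, a restaurant host.",
  persistence: "Continue until the caller's request is fully resolved.",
  efficiency: "Be concise; respond in under two sentences.",
});
```
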

#### Prompt Optimization Tools

**Repetition Cleanup**: Use GPT-5.1 to check your prompts for:

* Redundancy
* Conflicting instructions
* Simplification opportunities

**Prompt Optimizer**: OpenAI's built-in optimizer helps refine prompts:

```
When asked to optimize prompts, explain which specific phrases should 
be added or removed to elicit the desired behavior.

Prompt: [YOUR PROMPT]
Desired behavior: [WHAT YOU WANT]
Undesired behavior: [WHAT TO AVOID]

What minimal edits would help the model reliably produce the desired behavior?
```

#### Documentation

* [GPT-5.1 Prompting Guide](/gpt-5-2-prompting-guide) - Model-specific prompt engineering techniques

***

## Retrieval-Augmented Generation (RAG): Composite Prompt Engineering

**RAG** represents Synthflow's most sophisticated prompt engineering architecture, combining retrieval with generation for grounded, accurate responses.

### What is RAG?

**Retrieval-Augmented Generation** is a composite prompt engineering architecture that:

1. **Retrieves** relevant information from external knowledge bases
2. **Injects** retrieved content into the LLM prompt
3. **Generates** responses based on both the original prompt and retrieved facts

This approach grounds LLM responses in your specific documents and data, reducing hallucinations and ensuring accuracy.

### How RAG Works in Synthflow

#### Knowledge Base Setup

**Low-Code/No-Code Configuration**:

1. **Create Knowledge Base**: Navigate to Knowledge Bases → New
2. **Add Documents**: Upload PDFs, create text documents, or paste URLs
3. **Set Trigger Conditions**: Define when the agent should search the knowledge base
4. **Attach to Agent**: Link the knowledge base in General Configuration

No coding required—entirely GUI-based setup.

#### Runtime RAG Process

When a user asks a question:

1. **Semantic Search**: System searches knowledge base for relevant content using embeddings
2. **Chunk Retrieval**: Most relevant document chunks are identified
3. **Prompt Injection**: Retrieved information is injected into the LLM prompt:
   ```
   [Original Agent Prompt]

   Relevant Information:
   [Retrieved Chunk 1]
   [Retrieved Chunk 2]
   [Retrieved Chunk 3]

   User Question: [User's question]

   Generate a response using the relevant information above.
   ```
4. **Grounded Generation**: LLM generates response based on retrieved facts
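
Steps 1–3 can be sketched as follows, assuming chunk embeddings are precomputed; the data shapes and function names here are illustrative, not Synthflow's internal representation:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank chunks by similarity to the query embedding, keep the top K,
// and inject them into the prompt in the format shown above.
function buildRagPrompt(basePrompt, question, queryEmbedding, chunks, topK = 3) {
  const ranked = [...chunks]
    .sort((x, y) =>
      cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding))
    .slice(0, topK);
  const context = ranked.map((c) => c.text).join("\n");
  return `${basePrompt}\n\nRelevant Information:\n${context}\n\n` +
         `User Question: ${question}\n\n` +
         `Generate a response using the relevant information above.`;
}
```
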

#### Benefits of RAG Prompt Engineering

**Accuracy**: Responses grounded in your specific documents

* Prevents hallucinations by constraining the LLM to factual content
* Ensures up-to-date information without model retraining

**Source Attribution**: Responses based on verifiable sources

* Agents can reference specific documents or policies
* Audit trails for compliance scenarios

**Dynamic Knowledge**: Update information without changing prompts

* Add new documents to knowledge base
* Agents automatically reference updated content

**Scalability**: Handle large knowledge bases efficiently

* No need to include all information in prompts
* Retrieve only relevant content per query

### Best Practices for RAG

**Document Organization**:

* Keep files focused on single topics
* Split large PDFs into smaller, topic-specific uploads
* Use clear, descriptive filenames

**Trigger Conditions**:

* Be explicit about scenarios that should trigger knowledge base searches
* Example: "When users ask about return policies, refund procedures, or warranty information"

**Content Quality**:

* Write clear, well-structured documents
* Use headings and bullet points
* Avoid embedding prompt instructions in documents (retrieved text is treated as content, not instructions, so they won't work)

**Regular Updates**:

* Review content regularly
* Remove outdated information
* Test agent responses after updates

### Documentation

* [Knowledge Base](/create-a-knowledge-base) - RAG architecture explanation

***

## Pro-Code Prompt Engineering

For developers and advanced users, Synthflow provides **API-based prompt engineering** capabilities.

### Dynamic Prompt Injection: Runtime Customization

**Dynamic Prompt Injection** allows programmatic modification of agent behavior via API requests, enabling context-aware prompt engineering at scale.

#### How It Works

Use the Synthflow API to inject custom prompts when initiating calls:

```http
POST /api/v1/calls
{
  "agent_id": "agent_123",
  "phone_number": "+1234567890",
  "dynamic_prompt": "This is a VIP customer. Prioritize their requests and offer premium support options. Their account value is $50,000 and they've been a customer for 5 years."
}
```

The injected prompt is added to the agent's base prompt for that specific conversation.

#### Use Cases

**Personalization at Scale**:

* Inject customer-specific context from CRM
* Customize agent behavior based on customer tier
* Add account history or preferences dynamically

**A/B Testing**:

* Test different conversation strategies programmatically
* Rotate prompts for experimentation
* Measure outcomes across variants

**Context-Aware Customization**:

* Adjust prompts based on time of day
* Modify behavior for different customer segments
* Apply regional or language-specific instructions

**Dynamic Business Rules**:

* Inject current promotions or offers
* Update pricing information per call
* Apply real-time business logic

#### Example: VIP Customer Handling

```javascript
// Retrieve customer data from CRM
const customer = await crm.getCustomer(phoneNumber);

// Inject VIP handling if applicable
let dynamicPrompt = "";
if (customer.tier === "VIP") {
  dynamicPrompt = `
    This is a VIP customer with lifetime value of $${customer.ltv}.
    - Offer expedited service options
    - Mention their loyalty rewards balance: ${customer.rewardsPoints} points
    - Prioritize their requests and be extra attentive
    - Offer to connect them directly with a senior specialist if needed
  `;
}

// Initiate call with dynamic prompt
await synthflow.createCall({
  agent_id: "support_agent",
  phone_number: customer.phone,
  dynamic_prompt: dynamicPrompt
});
```

#### Best Practices

**Keep Injected Prompts Focused**: Add only relevant, call-specific context

**Combine with Base Prompts**: Dynamic prompts augment, not replace, base configuration

**Test Thoroughly**: Ensure injected prompts don't conflict with base instructions

**Monitor Performance**: Track outcomes across different prompt variations

### Documentation

* [Dynamic Prompt Injection](/dynamic-prompt-injection) - API-based runtime prompt engineering

***

## GenAI Content Guardrails

Synthflow provides multiple **content guardrails** to ensure LLM outputs remain appropriate and on-brand.

### Filter Words: Rule-Based Content Blocking

**Filter Words** provide a simple, rule-based mechanism for blocking specific terms the agent should never speak.

#### How It Works

**Configuration**: Low-code interface in General Configuration

1. Add terms to block list
2. System prevents LLM from speaking those terms
3. Works across all conversations automatically

**Use Cases**:

* Sensitive information placeholders (e.g., "INSERT\_API\_KEY")
* Inappropriate language
* Competitor names
* Confidential terms
* Placeholder text that shouldn't reach customers

**Example**:

```
Blocked terms:
- [CONFIDENTIAL]
- [PLACEHOLDER]
- CompetitorName
- internal_code_name
```

This rule-based guardrail acts as a safety net, catching unwanted content before it's spoken.
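
Conceptually, such a rule-based filter is a case-insensitive term match applied before text reaches the caller. A minimal sketch, not Synthflow's implementation:

```javascript
// Escape regex metacharacters so blocked terms are matched literally.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Returns a predicate that is true only when the text contains
// none of the blocked terms (case-insensitive).
function makeFilter(blockedTerms) {
  const pattern = new RegExp(blockedTerms.map(escapeRegExp).join("|"), "i");
  return (text) => !pattern.test(text);
}

const allowed = makeFilter(["[CONFIDENTIAL]", "CompetitorName"]);
allowed("Our plan beats CompetitorName");  // blocked
allowed("Happy to help with your order");  // passes
```
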

### LLM as Judge: Quality Evaluation

**Analyze Calls** uses an "LLM as Judge" system to evaluate conversation quality against defined criteria.

#### How It Works

After calls complete, a separate LLM evaluates the conversation against criteria such as:

* **Persona consistency**: Did the agent maintain its defined personality?
* **Style adherence**: Did responses match the specified tone?
* **Step compliance**: Did the agent follow the conversation flow?
* **Repetition avoidance**: Were responses varied and natural?
* **Objection handling**: Were concerns addressed appropriately?
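
Conceptually, the judge pass builds an evaluation prompt from the transcript and criteria, then parses a structured verdict from a second model. A sketch assuming you supply the actual model call; `buildJudgePrompt` and `parseVerdict` are illustrative names, not Synthflow APIs:

```javascript
// Build the evaluation prompt: ask the judge model to score each
// criterion 1-5 and return JSON only.
function buildJudgePrompt(transcript, criteria) {
  return [
    "You are evaluating a voice-agent conversation.",
    "Score each criterion from 1-5 and return JSON only:",
    JSON.stringify(Object.fromEntries(criteria.map((c) => [c, "1-5"]))),
    "",
    "Transcript:",
    transcript,
  ].join("\n");
}

// Parse the judge's raw output; flag the call if any criterion
// scores below 3, or if the output is unparseable.
function parseVerdict(raw, criteria) {
  try {
    const scores = JSON.parse(raw);
    const flagged = criteria.some((c) => Number(scores[c]) < 3);
    return { scores, flagged };
  } catch (err) {
    return { scores: null, flagged: true }; // unparseable → needs review
  }
}
```
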

#### Benefits

**Quality Assurance**: Detect conversational anomalies and prompt compliance issues

**Continuous Improvement**: Identify patterns in agent performance

**Guardrail Mechanism**: Flag conversations that deviate from expectations

**Prompt Optimization**: Understand which prompts produce the best results

This meta-evaluation approach helps ensure prompt engineering efforts produce desired outcomes.

### Webhooks Security: Preventing Prompt Injection Attacks

**HMAC signature validation** for webhooks provides security guardrails against malicious data injection.

#### How It Works

**Challenge**: Webhooks could be exploited to inject malicious prompts via spoofed payloads

**Solution**: HMAC (Hash-based Message Authentication Code) validation ensures:

* Webhook payloads are authentic
* Data hasn't been tampered with
* Requests come from legitimate sources

**Implementation**:

```javascript
const crypto = require('crypto');

function validateWebhook(payload, signature, secret) {
  const hmac = crypto.createHmac('sha256', secret);
  const digest = hmac.update(payload).digest('hex');
  return digest === signature;
}

// Reject invalid webhooks before processing
if (!validateWebhook(req.body, req.headers['x-signature'], SECRET)) {
  return res.status(401).send('Invalid signature');
}
```

This security guardrail protects LLM-based systems from prompt injection attacks via webhooks.

### Documentation

* [General Configuration](/general-configuration) - Filter Words configuration
* [Analyze Calls](/logs) - LLM as Judge quality evaluation
* [Webhooks Security](/webhooks-security) - HMAC validation for prompt injection prevention

***

## Prompt Engineering Best Practices

### 1. Start with Structure

Use a consistent prompt structure:

* **Role**: Who the agent is
* **Context**: Company and business information
* **Goal**: What the agent should accomplish
* **Instructions**: How to conduct conversations
* **Edge Cases**: Special scenarios and objections

### 2. Be Specific and Concise

* Clear instructions prevent unpredictable behavior
* Keep prompts under 300 words when possible
* Focus on essentials, avoid fluff
* Use bullet points for readability

### 3. Provide Examples

Show the LLM what good looks like:

* Example conversations
* Sample responses
* Desired tone and style

### 4. Use Variables for Personalization

Reference collected information with `{variable_name}` syntax:

```
Hi {customer_name}, I see you're calling about order #{order_id}.
```
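
This substitution can be sketched in a few lines; the helper below is illustrative, not a Synthflow API:

```javascript
// Minimal template filler: replaces {variable_name} tokens with values.
// Unknown variables are left intact so missing data stays visible in QA.
function fillTemplate(template, variables) {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(variables, name)
      ? String(variables[name])
      : match
  );
}

const line = fillTemplate(
  "Hi {customer_name}, I see you're calling about order #{order_id}.",
  { customer_name: "Ana", order_id: "4821" }
);
// line: "Hi Ana, I see you're calling about order #4821."
```
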

### 5. Test and Iterate

* Review actual conversations
* Identify areas for improvement
* Refine prompts based on real performance
* Use Simulations for regression testing

### 6. Layer Your Prompts

Combine multiple prompt engineering techniques:

* Base personality in Global Settings
* Node-specific prompts in Conversation nodes
* RAG for factual information
* Dynamic injection for runtime context
* Filter Words for content control

### 7. Use Model-Specific Guidance

Different models respond differently to prompts:

* Follow GPT-5.1 specific guidelines for that model
* Adjust verbosity instructions per model
* Test prompts when switching models

### 8. Implement Guardrails

Protect against unwanted outputs:

* Use Filter Words for blocked terms
* Implement LLM as Judge for quality checks
* Secure webhooks against injection attacks
* Set up escalation triggers for edge cases

***

## Summary

Synthflow provides comprehensive **LLM prompt engineering support** across multiple interfaces and skill levels:

### Low-Code/No-Code Modules

| Module                    | Purpose                      | Interface                     |
| ------------------------- | ---------------------------- | ----------------------------- |
| **Single-Prompt Agents**  | Complete agent configuration | Natural language prompt       |
| **Global Settings Node**  | Agent-wide personality       | Structured forms              |
| **Conversation Node**     | Variable collection          | Prompt fields with validation |
| **Greeting Message Node** | Opening message              | Text field with variables     |
| **Co-Pilot**              | Auto-generation              | GenAI-assisted                |

### RAG Architecture

| Component               | Purpose              | Configuration |
| ----------------------- | -------------------- | ------------- |
| **Knowledge Base**      | Document storage     | GUI upload    |
| **Semantic Search**     | Content retrieval    | Automatic     |
| **Prompt Injection**    | Context augmentation | Automatic     |
| **Grounded Generation** | Factual responses    | LLM synthesis |

### Pro-Code Capabilities

| Feature                      | Purpose               | Interface              |
| ---------------------------- | --------------------- | ---------------------- |
| **Dynamic Prompt Injection** | Runtime customization | API                    |
| **Webhooks**                 | Data integration      | API with HMAC security |
| **Platform API**             | Programmatic control  | REST API               |

### Content Guardrails

| Guardrail           | Purpose              | Type          |
| ------------------- | -------------------- | ------------- |
| **Filter Words**    | Block specific terms | Rule-based    |
| **LLM as Judge**    | Quality evaluation   | ML-based      |
| **HMAC Validation** | Security             | Cryptographic |

### Key Benefits

* ✅ **Accessibility**: No-code interfaces for non-technical users
* ✅ **Flexibility**: Multiple prompt engineering approaches
* ✅ **RAG Support**: Grounded, factual responses
* ✅ **Guardrails**: Content control and security
* ✅ **Model-Specific**: Optimized guidance per LLM
* ✅ **Runtime Customization**: Dynamic, context-aware prompts
* ✅ **Quality Assurance**: Built-in evaluation tools

Whether you're writing your first prompt or implementing sophisticated RAG architectures with dynamic runtime injection, Synthflow provides the tools and support you need for effective LLM prompt engineering.

### Complete Documentation

* [Single-Prompt Agents](/about-prompting) - No-code prompt engineering
* [GPT-5.1 Prompting Guide](/gpt-5-2-prompting-guide) - Model-specific optimization
* [Global Settings Node](/configure-global-settings) - Agent-wide prompts
* [Conversation Node](/configure-step-nodes) - Variable collection prompts
* [Greeting Message Node](/configure-the-greeting-message-node) - Opening message prompts
* [Knowledge Base](/create-a-knowledge-base) - RAG architecture and setup
* [Dynamic Prompt Injection](/dynamic-prompt-injection) - Runtime API customization
* [General Configuration](/general-configuration) - Filter Words guardrails
* [Analyze Calls](/logs) - LLM as Judge evaluation
* [Webhooks Security](/webhooks-security) - Security guardrails