Custom Prompt Engineering
Forge is powerful because it injects the right data into the right model — but the final output depends entirely on how the prompt is written. Prompt engineering is where you define how an agent thinks, what it focuses on, and how it speaks.
This page explains how to write and customize prompt templates inside Forge, so your agents generate better responses, tailored to your goals.
What Is a Prompt Template?
A prompt template is a structured block of text sent to the language model. It includes:
Real-time on-chain context (e.g. LP size, wallet behavior)
A system instruction (what to do with the data)
A formatting preference (style, tone, output type)
Agents use these templates to speak in different voices depending on who’s asking — traders, analysts, devs, researchers.
Prompt Structure
Every prompt in Forge follows a consistent structure:
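As a minimal sketch, a template that combines the three parts above might look like this. The section names, field names, and `buildPrompt` helper are illustrative assumptions, not Forge's exact template format:

```typescript
// Illustrative three-part prompt: on-chain context, system instruction, format preference.
// All names here are assumptions for the sketch, not Forge's actual API.
interface OnChainContext {
  lpSizeSol: number;
  walletBuys: number;
  deployerRisk: string;
}

function buildPrompt(ctx: OnChainContext): string {
  return [
    "CONTEXT (real-time on-chain data):",
    `- LP size: ${ctx.lpSizeSol} SOL`,
    `- Recent wallet buys: ${ctx.walletBuys}`,
    `- Deployer risk: ${ctx.deployerRisk}`,
    "",
    "INSTRUCTION:",
    "Summarize this token's safety based on the context above.",
    "",
    "FORMAT:",
    "Three short bullet points, casual trader tone.",
  ].join("\n");
}
```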
This allows the language model to reason based on clear, factual context with a specific communication style.
Customizing a Prompt
Inside your agent’s logic, you can rewrite the prompt template:
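One way to sketch this customization, assuming a simple string-based template (the `customizePrompt` helper and its option names are hypothetical, not Forge's API):

```typescript
// Hypothetical customization hook: append a format directive to a base prompt.
// Tone and output-type options mirror the lists below; names are illustrative.
type Tone = "formal" | "casual" | "trader-style";
type OutputType = "summary" | "full breakdown" | "warning alert";

function customizePrompt(basePrompt: string, tone: Tone, output: OutputType): string {
  return `${basePrompt}\n\nFORMAT:\nRespond as a ${output} in a ${tone} tone.`;
}
```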
You can include:
Context formatting (bullet points, JSON, paragraphs)
Output type (summary, full breakdown, warning alert)
Tone (formal, casual, trader-style)
Tips for Better Prompts
Be specific — define exactly what the model should analyze
Avoid ambiguity — structure data into clean sections
Limit scope — include only relevant facts, avoid overload
Use natural instructions — talk to the model like a smart intern
Examples:
✅ “Summarize this token’s safety based on deployer risk, LP status, and wallet buys.”
❌ “Tell me everything about this.”
Multi-Agent Prompt Chains
Each agent in Forge can run its own prompt, but sometimes one agent’s output becomes another agent’s input.
For example:
TokenAgent scans LP + metadata
WalletAgent adds deployer history
QueryHandler merges both into one composed prompt
You can engineer this merge behavior and even apply a final instruction like:
“Merge all findings into a single reply for a degen trader on mobile.”
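The merge step above can be sketched as a small composer function. The `composePrompt` helper and section labels are assumptions for illustration, not Forge's actual QueryHandler implementation:

```typescript
// Sketch: combine two agents' findings plus a final instruction into one prompt.
// Function name and labels are illustrative, not Forge's real API.
function composePrompt(
  tokenFindings: string,
  walletFindings: string,
  finalInstruction: string
): string {
  return [
    "TOKEN AGENT FINDINGS:",
    tokenFindings,
    "",
    "WALLET AGENT FINDINGS:",
    walletFindings,
    "",
    "INSTRUCTION:",
    finalInstruction,
  ].join("\n");
}

const merged = composePrompt(
  "LP: 40 SOL, locked. Metadata verified.",
  "Deployer launched 3 prior tokens; 1 rugged.",
  "Merge all findings into a single reply for a degen trader on mobile."
);
```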
Prompt Debugging
To troubleshoot or refine prompt output:
Log the full prompt in dev mode
Test the same prompt in the OpenAI Playground or Claude
Adjust tone, remove redundant data, or refine instructions
Compare output between different model backends
Small tweaks in phrasing can drastically improve results.
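For the first debugging step, a dev-mode logger could look like the sketch below. The `logPrompt` function and its `devMode` flag are assumptions, not a built-in Forge facility:

```typescript
// Minimal dev-mode prompt logger: prints the full prompt only when devMode is on.
// Returns the header line when it logs, null when silent (handy for testing).
function logPrompt(agentName: string, prompt: string, devMode: boolean): string | null {
  if (!devMode) return null;
  const header = `--- ${agentName} prompt (${prompt.length} chars) ---`;
  console.log(header);
  console.log(prompt);
  return header;
}
```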