
Custom Prompt Engineering

Forge is powerful because it injects the right data into the right model — but the final output depends entirely on how the prompt is written. Prompt engineering is where you define how an agent thinks, what it focuses on, and how it speaks.

This page explains how to write and customize prompt templates inside Forge, so your agents generate better responses, tailored to your goals.


What Is a Prompt Template?

A prompt template is a structured block of text sent to the language model. It includes:

  • Real-time on-chain context (e.g. LP size, wallet behavior)

  • A system instruction (what to do with the data)

  • A formatting preference (style, tone, output type)

Agents use these templates to speak in different voices depending on who’s asking — traders, analysts, devs, researchers.
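As a sketch, the three parts above can be modeled as a simple structure. The interface and function names here are illustrative assumptions, not Forge's actual API:

```typescript
// Hypothetical shape of a prompt template (names are illustrative).
interface PromptTemplate {
  context: string;     // real-time on-chain data (LP size, wallet behavior)
  instruction: string; // what the model should do with the data
  format: string;      // style, tone, output type
}

// Assemble the three parts into the final text sent to the model.
function renderPrompt(t: PromptTemplate): string {
  return [t.context, `Instruction:\n${t.instruction}`, `Format:\n${t.format}`].join("\n\n");
}

const prompt = renderPrompt({
  context: "Token Context:\n- LP: 18 SOL, unlocked",
  instruction: "Summarize the risk profile in plain language.",
  format: "Short paragraphs, casual trader tone.",
});
```

Keeping the three parts separate like this makes it easy to swap tone or output format without touching the on-chain context.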


Prompt Structure

Every prompt in Forge follows a consistent structure:

```yaml
Token Context:
- Name: $DUCK
- LP: 18 SOL, unlocked
- Ownership: Not renounced
- Deployer: 7AuCty3w..., 3 rugs in last 24h

Wallet Context:
- Known sniper 8fjNqk... bought 5% supply
- 2 large sells within first 3 minutes

Instruction:
Summarize risk profile in plain language. Include deployer risk, LP safety, and buyer pattern. Respond as if explaining to a cautious trader.
```

This allows the language model to reason based on clear, factual context with a specific communication style.


Customizing a Prompt

Inside your agent’s logic, you can rewrite the prompt template:

```ts
prompt: (ctx) => `
Token ${ctx.name} launched ${ctx.minutesAgo} minutes ago with ${ctx.lp} SOL liquidity.
Deployer wallet ${ctx.deployer} has launched ${ctx.previousLaunches.length} tokens before, ${ctx.rugCount} of which rugged.

Buyers include ${ctx.sniperCount} sniper wallets.

Give a clear risk assessment and explain red flags to a human reader.
`
```

You can include:

  • Context formatting (bullet points, JSON, paragraphs)

  • Output type (summary, full breakdown, warning alert)

  • Tone (formal, casual, trader-style)
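One way to wire these options in is to parameterize the template function by tone. This is a minimal sketch; the `Tone` type and hint strings are assumptions for illustration, not part of Forge:

```typescript
// Illustrative tone parameter for a prompt template.
type Tone = "formal" | "casual" | "trader";

const toneHints: Record<Tone, string> = {
  formal: "Respond in precise, formal language.",
  casual: "Keep it conversational.",
  trader: "Use trader shorthand and be blunt about risk.",
};

// Template function: same context, different voice per caller.
const riskPrompt = (ctx: { name: string; lp: number }, tone: Tone) => `
Token ${ctx.name} has ${ctx.lp} SOL liquidity.
Give a risk assessment. ${toneHints[tone]}
`.trim();
```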


Tips for Better Prompts

  • Be specific — define exactly what the model should analyze

  • Avoid ambiguity — structure data into clean sections

  • Limit scope — include only relevant facts, avoid overload

  • Use natural instructions — talk to the model like a smart intern

Examples:

✅ “Summarize this token’s safety based on deployer risk, LP status, and wallet buys.”

❌ “Tell me everything about this.”


Multi-Agent Prompt Chains

Each agent in Forge can run its own prompt, but sometimes one agent’s output becomes another agent’s input.

For example:

  1. TokenAgent scans LP + metadata

  2. WalletAgent adds deployer history

  3. QueryHandler merges both into one composed prompt

You can engineer this merge behavior and even apply a final instruction like:

“Merge all findings into a single reply for a degen trader on mobile.”
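The merge step above can be sketched as a small composer that concatenates each agent's findings and appends the final instruction. The agent names come from the steps listed; the types and functions are illustrative assumptions, not Forge's actual API:

```typescript
// Hypothetical output shape produced by each agent in the chain.
type AgentOutput = { source: string; findings: string };

// Merge every agent's findings into one composed prompt, then
// append a final instruction that governs the combined reply.
function composePrompt(outputs: AgentOutput[], finalInstruction: string): string {
  const sections = outputs
    .map((o) => `${o.source} findings:\n${o.findings}`)
    .join("\n\n");
  return `${sections}\n\nInstruction:\n${finalInstruction}`;
}

const merged = composePrompt(
  [
    { source: "TokenAgent", findings: "LP: 18 SOL, unlocked" },
    { source: "WalletAgent", findings: "Deployer rugged 3 tokens in 24h" },
  ],
  "Merge all findings into a single reply for a degen trader on mobile."
);
```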


Prompt Debugging

To troubleshoot or refine prompt output:

  • Log the full prompt in dev mode

  • Test same prompt in OpenAI Playground or Claude

  • Adjust tone, remove redundant data, or refine instructions

  • Compare output between different model backends

Small tweaks in phrasing can drastically improve results.
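For the first debugging step, a pass-through logger makes the exact prompt text visible before it reaches the model. This is a minimal sketch; the flag and function names are assumptions, not Forge's built-in tooling:

```typescript
// Illustrative dev-mode prompt logger.
const DEV_MODE = true; // toggle off in production

function logPromptIfDev(prompt: string): string {
  if (DEV_MODE) {
    // Print the exact text sent to the model so it can be pasted
    // into the OpenAI Playground or Claude for comparison.
    console.log(`--- PROMPT ---\n${prompt}\n--- END ---`);
  }
  return prompt; // pass through unchanged
}
```

Logging the final rendered prompt, rather than the template, catches interpolation bugs like missing or malformed context fields.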
