Agent Personalities and Prompt Logic
At first glance, Forge feels like a single unified assistant. But under the surface, each agent has its own personality, tone, and logic for how it processes data and answers questions. This design choice is intentional. It makes responses more accurate, more useful, and more human.
This page explains how agent personalities work, how prompt logic is structured, and how both can be customized depending on who’s using Forge.
What Is an Agent Personality?
An agent personality is a set of traits that shape:
How the agent interprets data
How it speaks to the user
What kind of assumptions it makes
How it frames its answers
For example:
A TokenAgent may act like a cold risk auditor — precise, structured, and cautious.
A WalletAgent may act like a detective — connecting dots, identifying patterns, and flagging suspicious activity.
A SniperAgent may act like a sentry — fast alerts, short and blunt.
Each personality is tuned to match the kind of task the agent is doing and the user’s expectations.
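One way to picture a personality is as a small bundle of settings attached to each agent. The sketch below is purely illustrative; Forge does not publish its internal types, so every name here is an assumption:

```python
from dataclasses import dataclass

# Hypothetical sketch of an agent personality. Field names are
# illustrative, not Forge's actual schema.
@dataclass
class AgentPersonality:
    name: str
    tone: str         # how the agent speaks to the user
    assumptions: str  # what it takes for granted about the data
    framing: str      # how it structures its answers

# Example matching the "cold risk auditor" description above.
TOKEN_AGENT = AgentPersonality(
    name="TokenAgent",
    tone="precise, structured, cautious",
    assumptions="treat every token as risky until proven otherwise",
    framing="risk-audit checklist",
)
```

Keeping these traits in data rather than scattered through prompt strings is what makes them easy to tune per agent.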
Prompt Templates
Behind every Forge response is a prompt template — a structured format that includes:
Context (what just happened on-chain)
Instruction (what the agent is supposed to do)
Voice (how the answer should sound)
Filters (what to include or leave out)
An example prompt for the TokenAgent combines these four elements (context, instruction, voice, and filters) into a single block of text.
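As a hedged sketch of how such a prompt might be assembled (the field contents and template layout are assumptions, not Forge's real format):

```python
# Illustrative only: Forge's actual template is not documented here.
context = "Token XYZ deployed 2 hours ago; LP unlocked; deployer sold 15%."
instruction = "Summarize the risk profile of this token."
voice = "Precise, structured, and cautious. No trading advice."
filters = "Exclude unrelated wallet activity."

# The four fields are joined into one block of text for the model.
prompt = f"""Context: {context}
Instruction: {instruction}
Voice: {voice}
Filters: {filters}"""
```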
The model receives this as a single input and generates a focused answer.
Tones and Styles
Forge supports multiple response tones depending on the user's settings or the agent type:
Formal – for analysts and dashboards
Casual – for degen traders and fast checks
Alert-style – short, emoji-tagged, fast warnings
Long-form – multi-paragraph breakdowns with context
For example, the same context can return:
Formal:
"This token exhibits several red flags including an unlocked LP, a deployer with multiple past rugs, and early sell activity. Caution is advised."
Casual:
"Looks sketchy. LP not burned, same guy rugged 4 tokens last week. Big sells already hitting."
Alert-style:
"⚠️ Unlocked LP, 4x rug deployer, $20k in, early sells. High risk."
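One plausible way to implement tone switching is to map each tone to formatting rules and fold them into the prompt's voice field. The mapping below is a sketch under that assumption; none of these names come from Forge itself:

```python
# Hypothetical tone settings; limits and styles are illustrative.
TONES = {
    "formal": {"max_sentences": 6, "emoji": False, "style": "full sentences"},
    "casual": {"max_sentences": 3, "emoji": False, "style": "slang allowed"},
    "alert":  {"max_sentences": 1, "emoji": True,  "style": "short fragments"},
}

def voice_instruction(tone: str) -> str:
    """Turn a tone setting into a voice line for the prompt template."""
    rules = TONES[tone]
    emoji = "use warning emoji" if rules["emoji"] else "no emoji"
    return (f"Answer in at most {rules['max_sentences']} sentence(s), "
            f"{rules['style']}, {emoji}.")
```

The same context then produces different outputs simply because the voice line changed, which is why one agent can serve analysts and degen traders alike.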
Customizing Prompt Logic
Admins or self-hosted users can change how prompts are built by modifying:
What data points are included
How much weight is given to each red flag
What format the prompt uses (list, paragraph, JSON block, etc.)
The instruction sentence (e.g. "Summarize risk" vs. "Give trading advice")
This means Forge can be tailored to different users:
Traders who want quick reads
Analysts who want breakdowns
Developers who want structured data
You don’t need to touch the language model — just change the prompt logic, and the behavior changes instantly.
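A minimal sketch of config-driven prompt building, assuming a simple dictionary config (the keys, weights, and data-point names are invented for illustration):

```python
# Hypothetical prompt config: data points, red-flag weights, and the
# instruction sentence are all editable without touching the model.
prompt_config = {
    "data_points": ["lp_status", "deployer_history", "early_sells"],
    "red_flag_weights": {"lp_status": 3, "deployer_history": 2, "early_sells": 1},
    "instruction": "Summarize risk",
}

def build_prompt(config: dict, data: dict) -> str:
    # Order red flags by weight so the model sees the heaviest first.
    points = sorted(config["data_points"],
                    key=lambda p: -config["red_flag_weights"].get(p, 0))
    lines = [f"- {p}: {data[p]}" for p in points if p in data]
    return config["instruction"] + ":\n" + "\n".join(lines)
```

Swapping the instruction sentence or the weights changes behavior immediately, with no retraining or model changes.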
Multi-Agent Prompt Strategy
For complex questions, Forge may run multiple agents and merge their responses. When this happens, each agent uses its own prompt logic, but the final output is smoothed into one answer.
You can still view each agent’s reply separately, especially if you want raw data or deeper insight into how the model reasoned.
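The flow above might be sketched as follows. This is an assumption about the orchestration, not Forge's actual pipeline; in practice the merge step would likely re-prompt the model rather than concatenate text:

```python
# Sketch only: the agent callables and the merge step are placeholders.
def run_agents(question: str, agents: dict):
    # Each agent answers with its own prompt logic and personality.
    replies = {name: agent(question) for name, agent in agents.items()}
    # Naive merge for illustration; a real system would smooth this
    # into one answer with another model call.
    merged = " ".join(replies.values())
    # Per-agent replies stay available for raw-data inspection.
    return merged, replies
```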
Prompt Tokens and Cost Control
In self-hosted Forge, you can control how much text is sent to the model by:
Limiting the number of wallets or tokens included
Compressing historical context
Removing optional metadata
This helps reduce token usage and latency if you’re running Forge with your own OpenAI or Anthropic (Claude) API key.
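The trimming steps above can be sketched as a simple pre-processing pass. The function and field names are assumptions for illustration:

```python
# Hedged sketch: trim on-chain context before it reaches the model.
def trim_context(wallets: list, max_wallets: int = 10,
                 keep_metadata: bool = False) -> list:
    # Limit the number of wallets included in the prompt.
    trimmed = wallets[:max_wallets]
    if not keep_metadata:
        # Drop optional metadata to save prompt tokens.
        trimmed = [{k: v for k, v in w.items() if k != "metadata"}
                   for w in trimmed]
    return trimmed
```

Fewer input tokens means lower per-request cost and faster responses, which matters most when every query goes through your own API key.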