AI Query Handling
Forge isn’t just about raw data. It’s about how that data gets turned into language — answers that feel like they’re coming from a real analyst, not a robot. This is where the AI query handling layer comes in.
It’s the part of the system that takes the prompt built by MCP and shaped by agents, runs it through a language model, and delivers a response that makes sense, sounds human, and actually helps the user make a decision.
🧠 What Happens When You Ask a Question
Let’s say a user types:
“Is $SLAP safe?”
Here’s the full path from input to response:
Intent is detected – This is a token safety query
MCP builds the prompt – It injects current context: deployer, LP, volume, wallet flows
Agent executes its logic – TokenAgent filters risk signals
Prompt is formatted – The final prompt might say:
"The user is asking if token $SLAP is safe. It launched 3 minutes ago with 20 SOL LP, no ownership renounced, no burn, deployer has rugged 2 tokens before, volume spiked to $90k in 2 minutes. What’s the risk profile?"
Prompt is sent to the model – This could be GPT-4, Claude, or another LLM
Response is returned – Forge formats it and sends it back in chat:
“$SLAP shows clear risk signals. LP is unlocked, deployer history is suspicious, and sell pressure is already building. Trade with caution.”
This process usually takes 1 to 2 seconds.
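The steps above can be sketched as a small pipeline. Everything here is illustrative: the function names, context fields, and stand-in model are assumptions for the sketch, not Forge's actual API.

```python
# Illustrative sketch of the query path described above.
# All names and fields are assumptions, not Forge's real interface.

def detect_intent(question: str) -> str:
    """Step 1: classify the question into an intent."""
    if "safe" in question.lower():
        return "token_safety"
    return "general"

def build_prompt(intent: str, context: dict) -> str:
    """Steps 2-4: inject live context and format the final prompt."""
    facts = ", ".join(f"{k}: {v}" for k, v in context.items())
    return (f"The user is asking a {intent} question. "
            f"Context: {facts}. What's the risk profile?")

def handle_query(question: str, context: dict, model) -> str:
    """Steps 5-6: send the prompt to the model and return its reply."""
    intent = detect_intent(question)
    prompt = build_prompt(intent, context)
    return model(prompt)

# A stand-in "model" so the sketch runs without an API key.
reply = handle_query(
    "Is $SLAP safe?",
    {"LP": "20 SOL, unlocked", "deployer": "2 prior rugs"},
    model=lambda p: f"Risk assessment based on: {p}",
)
print(reply)
```

In a real deployment the lambda would be replaced by an actual LLM call; the shape of the pipeline stays the same.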
🤖 Language Model Layer
Forge is model-agnostic. It’s designed to run with:
OpenAI (GPT-4 or GPT-3.5)
Anthropic (Claude)
Local models (for private deployments)
Any future LLM that supports plain text prompts
This means the “brain” of Forge can evolve as better models come out.
But what makes Forge different isn’t the model — it’s how it gives the model the right context, phrased the right way, every time.
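One way to picture "model-agnostic" is a single backend interface with swappable implementations. This is a minimal sketch under that assumption; the class and method names are invented, and the real API calls are stubbed out.

```python
# Sketch of a model-agnostic backend layer (names are hypothetical).
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a plain-text prompt, return plain text."""

class OpenAIBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # In a real deployment this would call the OpenAI API.
        return f"[gpt] {prompt[:40]}"

class LocalBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # A locally hosted model for private deployments.
        return f"[local] {prompt[:40]}"

def answer(prompt: str, backend: LLMBackend) -> str:
    # The rest of the system never needs to know which model is behind it.
    return backend.complete(prompt)

print(answer("Is $SLAP safe?", LocalBackend()))
```

Swapping GPT-4 for Claude or a local model is then a one-line change at the call site, which is what lets the "brain" evolve without touching the rest of the stack.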
📄 Prompt Construction
A Forge prompt is not a giant wall of text. It’s engineered to be:
Minimal – only relevant info is injected
Structured – context is labeled and grouped
Guided – a final instruction is included to tell the AI what kind of response to give
This structure makes responses consistent, fast, and deeply informed.
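For illustration, a prompt following that minimal/structured/guided pattern might be assembled like this. The section labels and instruction text are invented for the example; they are not Forge's actual template.

```python
# Hypothetical labeled-context prompt following the pattern described above:
# minimal (only relevant facts), structured (labeled sections), and guided
# (a closing instruction). Labels are invented for illustration.
sections = {
    "TOKEN": "$SLAP, launched 3 minutes ago",
    "LIQUIDITY": "20 SOL LP, not burned, ownership not renounced",
    "DEPLOYER": "2 previous rugs",
    "VOLUME": "$90k in 2 minutes",
}
instruction = "Give a concise risk profile and a one-line recommendation."

prompt = "\n".join(f"{label}: {value}" for label, value in sections.items())
prompt += f"\n\nINSTRUCTION: {instruction}"
print(prompt)
```

Labeled sections keep the context easy for the model to parse, and the trailing instruction pins down the response format.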
🧠 Multi-Agent Query Resolution
Sometimes, Forge splits the question across multiple agents, each with their own micro-prompt and reply. The query handler then stitches those together into one smooth answer.
For example:
“Who deployed this and is it safe?”
WalletAgent checks deployer history
TokenAgent checks LP and contract setup
LPAgent checks liquidity burn status
Each returns a part, and the query handler assembles the final response with proper phrasing.
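A minimal sketch of that fan-out-and-stitch flow, using the agent names from the example above. The dispatch logic and the canned replies are assumptions for illustration, not Forge's implementation.

```python
# Sketch of multi-agent query resolution: each agent answers its slice,
# then the query handler stitches one reply together.
# Agent replies are hard-coded here purely for illustration.

def wallet_agent(query: str) -> str:
    return "Deployer has rugged 2 tokens before."

def token_agent(query: str) -> str:
    return "LP is unlocked and ownership is not renounced."

def lp_agent(query: str) -> str:
    return "Liquidity has not been burned."

AGENTS = [wallet_agent, token_agent, lp_agent]

def resolve(query: str) -> str:
    parts = [agent(query) for agent in AGENTS]
    return " ".join(parts)

print(resolve("Who deployed this and is it safe?"))
```

In practice each agent would run its own micro-prompt against live data, and the handler would smooth the joined parts into natural phrasing rather than simple concatenation.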
🧪 Response Types
Forge can return answers in many formats depending on context:
Plain text (chat-style)
Risk score breakdowns
Tables or bullet points
Alerts or warnings
Interactive follow-ups (e.g. “Trace wallets involved?”)
This flexibility is what makes it feel more like talking to a human — because the output matches the user’s intent, not just the data.
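Matching output to intent can be as simple as a format selector in front of the renderer. The heuristics below are invented for the sketch; a real system would likely reuse the intent classifier.

```python
# Sketch: pick a response format from the query's wording
# (keyword heuristics invented for illustration).

def choose_format(question: str) -> str:
    q = question.lower()
    if "score" in q or "breakdown" in q:
        return "risk_breakdown"
    if "compare" in q or "list" in q:
        return "table"
    if "alert" in q or "warn" in q:
        return "alert"
    return "plain_text"

print(choose_format("Give me a risk score breakdown for $SLAP"))
print(choose_format("Is $SLAP safe?"))
```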
🔧 Configurable Behaviors
Admins or devs can customize:
Prompt tone (casual, formal, technical)
Detail level (summary or deep-dive)
Response types (compact for mobile, full for analysts)
Follow-up suggestions or automated triggers
This makes Forge usable by traders, teams, devs, or researchers — all from the same core system.
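Those knobs can be modeled as a small settings object passed into the query handler. The field names and defaults here are assumptions, not Forge's actual configuration schema.

```python
# Sketch of configurable query behaviors as a settings object
# (field names and defaults are assumed for illustration).
from dataclasses import dataclass

@dataclass
class QueryConfig:
    tone: str = "casual"            # casual | formal | technical
    detail: str = "summary"         # summary | deep_dive
    layout: str = "full"            # compact (mobile) | full (analysts)
    suggest_followups: bool = True  # offer follow-up actions after replies

analyst = QueryConfig(tone="technical", detail="deep_dive")
mobile = QueryConfig(layout="compact", suggest_followups=False)
print(analyst)
print(mobile)
```

One core system, different profiles per audience: a trader's mobile config and an analyst's deep-dive config differ only in these settings.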
📌 Why It Matters
AI without context is just guessing. Blockchain data without interpretation is just noise. Forge’s query handler fuses both, giving you instant feedback that actually helps.
You don’t have to know what to ask, how to phrase it, or where to click. You just ask. Forge does the rest.