
AI Query Handling

Forge isn’t just about raw data. It’s about how that data gets turned into language — answers that feel like they’re coming from a real analyst, not a robot. This is where the AI query handling layer comes in.

It’s the part of the system that takes the prompt built by MCP and shaped by the agents, then delivers a response that makes sense, sounds human, and actually helps the user make a decision.


🧠 What Happens When You Ask a Question

Let’s say a user types:

“Is $SLAP safe?”

Here’s the full path from input to response:

  1. Intent is detected – This is a token safety query

  2. MCP builds the prompt – It injects current context: deployer, LP, volume, wallet flows

  3. Agent executes its logic – TokenAgent filters risk signals

  4. Prompt is formatted – The final prompt might say:

    "The user is asking if token $SLAP is safe. It launched 3 minutes ago with 20 SOL LP, no ownership renounced, no burn, deployer has rugged 2 tokens before, volume spiked to $90k in 2 minutes. What’s the risk profile?"

  5. Prompt sent to model – This could be GPT-4, Claude, or another LLM

  6. Response is returned – Forge formats it and sends it back in chat:

    “$SLAP shows clear risk signals. LP is unlocked, deployer history is suspicious, and sell pressure is already building. Trade with caution.”

This process usually takes 1 to 2 seconds.
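The six-step path above can be sketched in code. This is a hypothetical, heavily simplified mock — every name here (`detect_intent`, `build_context`, `filter_risk`, `handle_query`) and the hard-coded context are illustrative stand-ins, not Forge's actual API:

```python
# Minimal sketch of the query path. All names and data are illustrative.

def detect_intent(text: str) -> str:
    # 1. Intent detection: a real system would use a classifier here.
    return "token_safety" if "safe" in text.lower() else "general"

def build_context(intent: str) -> dict:
    # 2. MCP injects current on-chain context (hard-coded for the sketch).
    return {"lp": "20 SOL, unlocked", "deployer_rugs": 2, "age_min": 3}

def filter_risk(ctx: dict) -> list[str]:
    # 3. Agent logic: surface only the relevant risk signals.
    signals = []
    if "unlocked" in ctx["lp"]:
        signals.append("LP is unlocked")
    if ctx["deployer_rugs"] > 0:
        signals.append(f"deployer rugged {ctx['deployer_rugs']} tokens before")
    return signals

def format_prompt(query: str, ctx: dict, signals: list[str]) -> str:
    # 4. The final prompt the model sees.
    return (f"The user asks: {query}. Context: {ctx}. "
            f"Risk signals: {signals}. What's the risk profile?")

def handle_query(text: str, llm=lambda p: "High risk.") -> str:
    # 5-6. Send the prompt to the model and return the chat-ready reply.
    ctx = build_context(detect_intent(text))
    return llm(format_prompt(text, ctx, filter_risk(ctx)))
```

The point of the sketch is the separation of steps: intent, context, agent logic, and prompt formatting are distinct stages, so each can change without touching the others.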


🤖 Language Model Layer

Forge is model-agnostic. It’s designed to run with:

  • OpenAI (GPT-4 or GPT-3.5)

  • Anthropic (Claude)

  • Local models (for private deployments)

  • Any future LLM that supports plain text prompts

This means the “brain” of Forge can evolve as better models come out.

But what makes Forge different isn’t the model — it’s how it gives the model the right context, phrased the right way, every time.
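Model-agnosticism boils down to a narrow interface: any backend that accepts a plain-text prompt and returns text can be swapped in. A possible shape (the `TextModel` protocol and `EchoModel` stand-in are assumptions for illustration, not Forge's real types):

```python
from typing import Protocol

class TextModel(Protocol):
    """Anything that turns a plain-text prompt into a text reply."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in for GPT-4, Claude, or a local model."""
    def complete(self, prompt: str) -> str:
        return f"[model reply to {len(prompt)} chars of prompt]"

def ask(model: TextModel, prompt: str) -> str:
    # The rest of the system depends only on this narrow interface,
    # so the "brain" can be upgraded without touching anything else.
    return model.complete(prompt)
```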


📄 Prompt Construction

A Forge prompt is not a giant wall of text. It’s engineered to be:

  • Minimal – only relevant info is injected

  • Structured – context is labeled and grouped

  • Guided – a final instruction is included to tell the AI what kind of response to give

Example:

```yaml
Token Context:
- Name: SLAP
- Launched: 3 minutes ago
- LP: 20 SOL, unlocked
- Ownership: Not renounced
- Deployer: History of 2 rugs

Wallet Context:
- Early buys from sniper wallet cluster
- Sell activity already started

User Query:
“Is $SLAP safe?”

Instruction:
Analyze and return a human-readable risk profile.
```

This structure makes responses consistent, fast, and deeply informed.
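A builder for that labeled, grouped structure could look like the following sketch. The section labels and instruction text come from the example above; the `build_prompt` function itself is hypothetical:

```python
def build_prompt(sections: dict[str, list[str]], query: str, instruction: str) -> str:
    # Each context group becomes a labeled block of "- " bullet lines.
    parts = []
    for label, lines in sections.items():
        parts.append(f"{label}:")
        parts.extend(f"- {line}" for line in lines)
        parts.append("")
    # The query and a final guiding instruction close out the prompt.
    parts += [f"User Query:\n“{query}”", "", f"Instruction:\n{instruction}"]
    return "\n".join(parts)

prompt = build_prompt(
    {"Token Context": ["Name: SLAP", "LP: 20 SOL, unlocked"],
     "Wallet Context": ["Sell activity already started"]},
    query="Is $SLAP safe?",
    instruction="Analyze and return a human-readable risk profile.",
)
```

Because only the relevant fields are injected per section, the prompt stays minimal while the labels keep it unambiguous for the model.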


🧠 Multi-Agent Query Resolution

Sometimes, Forge splits the question across multiple agents, each with their own micro-prompt and reply. The query handler then stitches those together into one smooth answer.

For example:

“Who deployed this and is it safe?”

  • WalletAgent checks deployer history

  • TokenAgent checks LP and contract setup

  • LPAgent checks liquidity burn status

Each returns a part, and the query handler assembles the final response with proper phrasing.
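A minimal sketch of that split-and-stitch step, assuming each agent exposes a simple callable (the agent names match the example; the canned replies and dispatch logic are illustrative only):

```python
# Each agent answers only its slice of the question.
AGENTS = {
    "WalletAgent": lambda q: "Deployer has rugged 2 tokens before.",
    "TokenAgent":  lambda q: "Ownership not renounced; LP is unlocked.",
    "LPAgent":     lambda q: "Liquidity has not been burned.",
}

def resolve(query: str) -> str:
    # Fan the query out, then stitch the partial replies into one answer.
    parts = [agent(query) for agent in AGENTS.values()]
    return " ".join(parts)
```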


🧪 Response Types

Forge can return answers in many formats depending on context:

  • Plain text (chat-style)

  • Risk score breakdowns

  • Tables or bullet points

  • Alerts or warnings

  • Interactive follow-ups (e.g. “Trace wallets involved?”)

This flexibility is what makes it feel more like talking to a human — because the output matches the user’s intent, not just the data.
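One way to match the output format to intent is a simple dispatch on the query; the mapping below is purely illustrative:

```python
def pick_format(query: str) -> str:
    # Route to the response format that fits what the user asked for.
    q = query.lower()
    if "alert" in q or "warn" in q:
        return "alert"
    if "breakdown" in q or "score" in q:
        return "risk_score_table"
    return "chat_text"  # default: plain chat-style reply
```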


🔧 Configurable Behaviors

Admins or devs can customize:

  • Prompt tone (casual, formal, technical)

  • Detail level (summary or deep-dive)

  • Response types (compact for mobile, full for analysts)

  • Follow-up suggestions or automated triggers

This makes Forge usable by traders, teams, devs, or researchers — all from the same core system.
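The settings above might be modeled as a small config object that folds into the prompt's final instruction line. Field names and values here are assumptions, not Forge's documented settings:

```python
from dataclasses import dataclass

@dataclass
class QueryConfig:
    tone: str = "casual"      # casual | formal | technical
    detail: str = "summary"   # summary | deep-dive
    layout: str = "compact"   # compact (mobile) | full (analysts)
    follow_ups: bool = True   # suggest next questions automatically

def instruction_for(cfg: QueryConfig) -> str:
    # The config shapes the instruction appended to every prompt.
    extra = " Suggest a follow-up question." if cfg.follow_ups else ""
    return f"Answer in a {cfg.tone} tone, {cfg.detail} detail, {cfg.layout} layout.{extra}"
```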

📌 Why It Matters

AI without context is just guessing. Blockchain data without interpretation is just noise. Forge’s query handler fuses both, giving you instant feedback that actually helps.

You don’t have to know what to ask, how to phrase it, or where to click. You just ask. Forge does the rest.
