
Model Context Protocol (MCP)

Forge would not work without context. Asking “is this token safe” or “who’s buying this meme” means nothing if the AI doesn’t know what you’re pointing at, what’s happening on-chain right now, or what kind of response you expect. That’s where the Model Context Protocol comes in.

MCP is the foundation that turns Forge from a generic chatbot into a specialized on-chain assistant. It’s the layer that injects real-time data, filters intent, and builds structured inputs for the AI to reason with. Without MCP, Forge would just guess. With MCP, it becomes sharp, focused, and useful.


🔍 What MCP Does

When a user asks a question in Forge, MCP kicks in instantly. It:

  • Identifies what type of question is being asked

  • Checks what on-chain data is relevant

  • Gathers facts from Forge’s indexers and agents

  • Builds a prompt with the current context injected

  • Sends that structured input to the AI model

It’s not just about having data. It’s about knowing which data matters, and how to phrase it so the model gives the right kind of response.
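The first of those steps, working out what type of question is being asked, can be pictured as a small classifier. The categories and keyword rules below are illustrative assumptions, not Forge's actual classifier:

```python
# Hypothetical sketch of MCP's first step: tagging a user message
# with an intent so the right on-chain data can be fetched.
INTENT_RULES = {
    "token_safety": ("safe", "rug", "honeypot", "risk"),
    "wallet_analysis": ("wallet", "holder", "sniper"),
    "lp_status": ("lp", "liquidity", "burned", "locked"),
}

def classify_intent(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENT_RULES.items():
        if any(word in text for word in keywords):
            return intent
    return "general"  # fall back to a generic prompt template
```

With rules like these, `classify_intent("Is $DUCK safe?")` would tag the message as a `token_safety` request, which then decides which indexers get queried.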


🧱 Why MCP Is Needed

AI models like GPT can sound smart, but they don’t “know” anything about the blockchain unless you tell them exactly what to look at. They have no memory of block height, LP status, or sniper wallet movements.

Forge solves that by using MCP to inject context like:

  • “Token X just launched 2 minutes ago with 30 SOL LP”

  • “Deployer has created 4 tokens this week, all rugged”

  • “Wallet Y bought at launch and sold in 45 seconds”

By writing this into the prompt behind the scenes, the model no longer has to hallucinate. It now works with clean, factual, up-to-the-second context.
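Mechanically, that injection can be as simple as prepending verified facts to the model prompt. A minimal sketch, where the template wording and function name are assumptions:

```python
def inject_context(question: str, facts: list[str]) -> str:
    """Prepend verified on-chain facts so the model reasons over
    real data instead of guessing. Template wording is illustrative."""
    context_block = "\n".join(f"- {fact}" for fact in facts)
    return (
        "You are an on-chain analyst. Answer using ONLY these facts:\n"
        f"{context_block}\n\n"
        f"User question: {question}"
    )
```

The user never sees this assembled prompt; only the model does, which is why the reply reads like the AI "knew" the chain state all along.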


🧠 Context Types

MCP supports several types of context injection depending on what the user is asking:

  1. Token Context – Market cap, LP size, age, deployer reputation, holder count

  2. Wallet Context – Balance, buy/sell behavior, known sniper or not, related wallets

  3. LP Context – Burned or not, depth, lock duration, LP-to-MC ratio

  4. Telegram Context – What group a deployer or buyer recently joined

  5. Historical Context – Previous events tied to the address or token

Each type is optional, and only injected when relevant. This keeps the AI focused and avoids cluttering the prompt with noise.
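One way to model "optional, only injected when relevant" is a set of typed context objects that are skipped when absent. The class names and fields here are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TokenContext:
    market_cap: float
    lp_size_sol: int
    age_minutes: int
    holder_count: int

    def render(self) -> str:
        return (f"Token: {self.lp_size_sol} SOL LP, "
                f"{self.holder_count} holders, {self.age_minutes} min old")

@dataclass
class WalletContext:
    address: str
    is_known_sniper: bool

    def render(self) -> str:
        flag = "known sniper" if self.is_known_sniper else "no sniper history"
        return f"Wallet {self.address}: {flag}"

def build_context(*parts) -> str:
    # Only relevant context is rendered: None entries are skipped,
    # keeping the prompt free of noise.
    return "\n".join(p.render() for p in parts if p is not None)
```

A token-safety question would populate `TokenContext` but leave wallet or Telegram context as `None`, so nothing irrelevant reaches the model.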


🧩 MCP Pipeline: Start to Finish

Here’s what happens from the moment a user types a question:

  1. Message Received – User types something like “Is $DUCK safe?”

  2. Intent Classification – MCP tags this as a token safety request

  3. Target Identification – It looks at recent token launches and matches $DUCK to its address and data

  4. Context Injection – Pulls from the indexer: LP info, deployer, buys/sells, prior launches

  5. Prompt Assembly – Builds a structured input like: “User is asking about token $DUCK, which launched 3 mins ago with 20 SOL LP. Creator has deployed 3 tokens before, 2 of which had sudden LP burns. Wallets A, B, and C bought in. Here’s the context…”

  6. Model Execution – The prompt is sent to the AI model

  7. Response Returned – Forge returns a clean, structured reply in chat, like: “$DUCK appears high risk. LP is not burned and deployer has a 66% rug rate.”

This whole flow happens in less than a second.
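Stitched together, the steps above reduce to a short orchestration function. Everything below, including the stub indexer and model, is a hypothetical sketch of that flow, not Forge's implementation:

```python
# Hypothetical end-to-end sketch of the pipeline. The classifier,
# indexer, and model here are stand-in stubs.

def classify(message: str) -> str:
    return "token_safety" if "safe" in message.lower() else "general"

def resolve_target(message: str) -> str:
    # Step 3: match a $TICKER mention in the message (stubbed lookup).
    for word in message.split():
        if word.startswith("$"):
            return word.strip("?.,!")
    return ""

def handle_message(message: str, indexer, model) -> str:
    intent = classify(message)              # step 2
    target = resolve_target(message)        # step 3
    facts = indexer(intent, target)         # step 4: pull indexed data
    prompt = (f"Context:\n{facts}\n\n"      # step 5: prompt assembly
              f"User question: {message}")
    return model(prompt)                    # steps 6-7

# Stub indexer and model to show the wiring end to end.
fake_indexer = lambda intent, target: f"{target}: LP not burned, deployer rug rate 66%"
fake_model = lambda prompt: "High risk." if "not burned" in prompt else "Unclear."

reply = handle_message("Is $DUCK safe?", fake_indexer, fake_model)
```

Because every stage is a plain function call over already-indexed data, the sub-second latency claim comes down to how fast the indexer lookup and model call are.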


🛠 How It’s Built

MCP is written as a protocol layer that sits between:

  • The user chat interface

  • The Solana indexers and agent system

  • The AI language model

It can run with different backends, not just GPT. The important part is that it knows how to inject structured data into a natural-language prompt.

It’s also modular. Developers can build their own MCP extensions with:

  • New data sources (e.g. price feeds, news APIs)

  • Custom prompt logic (e.g. compress data for faster responses)

  • Specialized templates (e.g. red flag reports, sniper profiles)
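A developer-facing extension point could be as small as registering a callable per data source. This registry pattern is an assumption about how such a modular layer might look, not a documented Forge API:

```python
# Hypothetical extension registry: each extension is a function that
# takes a query target and returns a line of context.
EXTENSIONS = {}

def register(name: str):
    def wrap(fn):
        EXTENSIONS[name] = fn
        return fn
    return wrap

@register("price_feed")
def price_feed(target: str) -> str:
    # A real extension would call an external API here.
    return f"{target} price feed: (stubbed)"

def gather(target: str) -> list[str]:
    return [fn(target) for fn in EXTENSIONS.values()]
```

New data sources then plug in without touching the core pipeline; MCP simply calls `gather()` and folds the extra lines into the prompt.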


🤖 Human-Like, But Informed

Most AI assistants in crypto are still guessing. They project false confidence because they’re built without structured context.

Forge changes that. MCP is the reason Forge sounds like an on-chain researcher. It gives the AI the context a human would Google, check, cross-reference, and summarize — but it happens instantly, every time.

