Core Architecture
Forge is designed to feel like you're chatting with an AI analyst, but behind the scenes, it’s an interconnected system that combines real-time Solana data, modular agent logic, and the Model Context Protocol (MCP) to deliver responses that are accurate, fast, and actionable.
This page explains how the entire system is structured, how the components talk to each other, and why this setup allows Forge to do things that typical AI tools or analytics dashboards can’t.
🧩 Components Overview
At its core, Forge is built around four architectural pillars:
Model Context Protocol (MCP)
Agent System
Real-Time On-Chain Indexer
Prompt Resolver and Execution Engine
Each of these components plays a key role in turning user questions into real insights.
1. Model Context Protocol (MCP)
MCP is the messaging layer that powers Forge’s intelligence. Think of it as the translator between user intent and on-chain execution.
Every question goes through context injection using MCP.
Forge evaluates who is asking, what they’re asking, and what type of data is needed.
Based on that, it builds a custom prompt containing the real-time context from Solana, and routes it to the correct agent.
This is how Forge stays relevant to the latest blocks, wallet states, memecoin launches, and sniper movements.
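To make that concrete, here is a minimal sketch of what MCP-style context injection and routing could look like. Every name below (the intent types, the keyword classifier, injectContext) is an illustrative assumption, not Forge’s actual API.

```typescript
// Illustrative sketch of context injection and agent routing.
// All names here are assumptions for explanation, not Forge's real API.

type Intent = "token_analysis" | "wallet_analysis" | "liquidity" | "general";

interface UserQuery {
  userId: string;
  text: string;
}

interface ResolvedContext {
  intent: Intent;
  facts: Record<string, unknown>; // real-time Solana context for this intent
  targetAgent: string;            // which agent receives the enriched prompt
}

// Rough keyword-based classification; a production system would likely use
// the model itself or a trained classifier.
function classifyIntent(text: string): Intent {
  if (/token|volume|mcap/i.test(text)) return "token_analysis";
  if (/wallet|holder|sniper/i.test(text)) return "wallet_analysis";
  if (/\blp\b|liquidity|pool/i.test(text)) return "liquidity";
  return "general";
}

// Build the context object that gets attached to the prompt and routed onward.
async function injectContext(
  query: UserQuery,
  fetchFacts: (intent: Intent) => Promise<Record<string, unknown>>,
): Promise<ResolvedContext> {
  const intent = classifyIntent(query.text);
  const facts = await fetchFacts(intent); // e.g. latest blocks, LP state, deployer history
  return { intent, facts, targetAgent: `${intent}_agent` };
}
```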
2. Agent System
Forge runs on modular agents, each acting like a specialized on-chain analyst. Some focus on wallets; others on liquidity, tokens, or even Telegram activity.
Each agent has:
Its own prompt memory
Defined skills (e.g. LP tracking, buy/sell monitoring, wallet scoring)
Ability to listen to events and update its internal state
Agents are stateless by default, but can be extended to persist memory across conversations.
Forge chooses which agent(s) to engage based on the user’s question — and sometimes runs them in parallel for composite queries.
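The sketch below shows one way such an agent could be shaped. The interface, event kinds, and the example liquidity agent are assumptions made for illustration; Forge’s actual agent contracts are internal.

```typescript
// Hypothetical agent shape: skills, optional prompt memory, event handling.

interface OnChainEvent {
  kind: "token_deploy" | "lp_change" | "wallet_transfer";
  payload: Record<string, unknown>;
}

interface Agent {
  name: string;
  skills: string[];                   // e.g. "lp_tracking", "wallet_scoring"
  promptMemory: string[];             // stays empty for stateless agents
  onEvent(event: OnChainEvent): void; // update internal state from indexer events
  answer(task: string, facts: Record<string, unknown>): Promise<string>;
}

// A minimal liquidity-focused agent. It is stateless by default: promptMemory
// only fills up if a persistence layer is wired in.
const liquidityAgent: Agent = {
  name: "liquidity",
  skills: ["lp_tracking", "buy_sell_monitoring"],
  promptMemory: [],
  onEvent(event) {
    if (event.kind === "lp_change") {
      // react to LP creation/destruction as events stream in
    }
  },
  async answer(task, facts) {
    // placeholder: a real agent would assemble a prompt and call the model
    return `Task: ${task}\nFacts: ${JSON.stringify(facts)}`;
  },
};

// Composite queries can fan out to several agents in parallel.
async function runInParallel(agents: Agent[], task: string): Promise<string[]> {
  return Promise.all(agents.map((a) => a.answer(task, {})));
}
```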
3. Real-Time Solana Indexer
Forge doesn’t rely on slow APIs or outdated snapshots.
Instead, it runs a custom real-time Solana indexer that tracks:
Token deployments
Wallet movements
LP creation/destruction
Market caps and volume spikes
Telegram-linked deployer actions
This indexer ensures that every piece of context given to the AI is accurate up to the most recent block.
It’s also what lets Forge answer questions like:
“What wallets are buying tokens with a 1-minute lifespan and over 1000 SOL LP?”
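Answering that kind of question amounts to a filter over the indexer’s live view of tokens. The record shape and field names below are assumptions about that view, and “a 1-minute lifespan” is interpreted here as tokens deployed less than a minute ago.

```typescript
// Assumed shape of an indexed token; Forge's real schema is internal.
interface IndexedToken {
  mint: string;
  deployedAt: number;     // unix ms of the deployment transaction
  lpSol: number;          // SOL currently sitting in the liquidity pool
  recentBuyers: string[]; // wallets that bought within the indexer's window
}

// "Wallets buying tokens under a minute old with over 1000 SOL LP" becomes
// a plain filter over the indexer's up-to-the-latest-block view.
function buyersOfYoungHighLpTokens(tokens: IndexedToken[], nowMs: number): string[] {
  const wallets = new Set<string>();
  for (const t of tokens) {
    const ageMs = nowMs - t.deployedAt;
    if (ageMs <= 60_000 && t.lpSol > 1_000) {
      for (const w of t.recentBuyers) wallets.add(w);
    }
  }
  return [...wallets];
}
```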
4. Prompt Resolver & Execution Engine
Once the agent(s) and data are selected, Forge compiles everything into a final structured prompt.
It includes:
Latest relevant facts
Summarized history or metadata
A task objective (what the user wants to know)
Reasoning logic (e.g. highlight anomalies, find connections, detect intent)
The engine then routes this prompt to the language model, which interprets the data and responds with a clear explanation.
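As a rough illustration, the compiled prompt can be modeled as those four parts joined into one string. The StructuredPrompt type and section labels below are assumptions; the exact format Forge sends to the model is internal.

```typescript
// Hypothetical structure for the final prompt handed to the model.
interface StructuredPrompt {
  facts: string[];     // latest relevant on-chain facts
  history: string;     // summarized conversation history or metadata
  objective: string;   // what the user wants to know
  reasoning: string[]; // e.g. "highlight anomalies", "detect intent"
}

function compilePrompt(p: StructuredPrompt): string {
  return [
    "Facts:",
    ...p.facts.map((f) => `- ${f}`),
    "History:",
    p.history,
    "Objective:",
    p.objective,
    "Reasoning rules:",
    ...p.reasoning.map((r) => `- ${r}`),
  ].join("\n");
}
```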
🔁 Data Flow (End-to-End)
Here’s a simplified walkthrough of a Forge interaction:
User: “Why did this token just spike in volume?”
MCP: Classifies intent as token analysis, selects the correct agent
Agent: Pulls LP info, wallet flow, and deployer history from the indexer
Prompt Builder: Injects real-time facts + analysis rules into a clean prompt
Model: Returns an answer explaining the event like an analyst
Forge Chat: Displays the response and offers a “Trace Wallets Involved” button
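Put together, the whole interaction can be pictured as one pipeline. Every dependency below is passed in as a function because the concrete pieces (classifier, indexer access, model client) are Forge-internal; the names are illustrative only.

```typescript
// End-to-end sketch of the steps above, with Forge-internal pieces injected
// so the flow itself stays visible.
async function handleQuestion(
  question: string,
  deps: {
    classify: (q: string) => Promise<{ intent: string; agent: string }>; // MCP
    pullFacts: (agent: string) => Promise<string[]>;                     // agent + indexer
    buildPrompt: (facts: string[], objective: string) => string;         // prompt builder
    callModel: (prompt: string) => Promise<string>;                      // language model
  },
): Promise<{ answer: string; followUps: string[] }> {
  const { agent } = await deps.classify(question);
  const facts = await deps.pullFacts(agent);
  const prompt = deps.buildPrompt(facts, question);
  const answer = await deps.callModel(prompt);
  return { answer, followUps: ["Trace Wallets Involved"] }; // surfaced as chat buttons
}
```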
🧱 Modular by Design
Each part of Forge can be extended or swapped:
Developers can create new agents (e.g. NFT mint tracker)
You can inject third-party APIs (e.g. sentiment feeds)
The model layer is abstracted — Forge can run on OpenAI, Claude, or even local models
This modularity means Forge isn’t just a product. It’s a framework.
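For example, keeping the model layer behind a small interface is what makes it swappable. The interface below is an assumption about how such an abstraction could look, not Forge’s actual code.

```typescript
// A thin provider interface so agents and the prompt resolver never depend
// on a specific model vendor.
interface ModelProvider {
  name: "openai" | "claude" | "local";
  complete(prompt: string): Promise<string>;
}

async function answerWith(provider: ModelProvider, prompt: string): Promise<string> {
  return provider.complete(prompt);
}

// Swapping models is then a one-line change at the call site:
// answerWith(localProvider, prompt) instead of answerWith(openaiProvider, prompt).
```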
🛠 Why This Architecture Matters
Most analytics dashboards are limited to predefined queries. Most chat AIs know nothing about Solana. Forge bridges the two.
Because of this architecture:
Forge feels like a crypto-native analyst, not a chatbot
It can answer complex, multi-step questions
It’s fast, accurate, and always up to date