Beginner

Learn what the Model Context Protocol is and how gengine implements it to connect AI models to the Unreal Editor.

Understanding MCP

The Model Context Protocol (MCP) is the open standard that makes gengine possible. Understanding it helps you get the most out of the tool and diagnose problems when they occur.

What is MCP?

The Model Context Protocol is an open specification created by Anthropic that defines a standard way for AI language models to interact with external tools and data sources. Think of it as a universal adapter: any AI model that speaks MCP can use any tool that implements MCP — without custom integration code for each pair.

Before MCP, connecting an AI to a tool required bespoke integration work for every combination of model and tool. MCP solves this by defining:

  • How tools advertise their capabilities (tool definitions with JSON Schema parameters)
  • How AI models request tool execution (structured tool call messages)
  • How tools return results (structured response messages)
  • How servers expose resources and prompt templates (optional extensions)
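The first item above, a tool definition, is just structured data. Here is a sketch of a minimal one using the field names the MCP specification defines (name, description, inputSchema); the tool itself is invented for illustration and is not one of gengine's real tools:

```typescript
// A minimal MCP tool definition: a name, a description the AI reads, and a
// JSON Schema for the parameters. Invented for illustration; not a real
// gengine tool.
const exampleTool = {
  name: "example_echo",
  description: "Echoes back the provided message.",
  inputSchema: {
    type: "object",
    properties: {
      message: { type: "string", description: "Text to echo back" },
    },
    required: ["message"],
  },
};
```

Because the schema travels with the definition, a client can validate a model's tool call before anything reaches the server.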

gengine is an MCP server. The AI side (Claude, GPT-4o, local models served through Ollama, and others) connects as an MCP client. The MCP Bridge in gengine's Resources/mcp-bridge/ directory handles the translation between these two sides.


Note: MCP is an open specification. The full spec is available at modelcontextprotocol.io. gengine implements MCP version 1.0 with extensions for streaming and async task management.

MCP primitives: tools, resources, prompts

MCP defines three core primitives that servers can expose:

Tools

Tools are callable functions. Each tool has a name, a description, and a JSON Schema that defines its parameters. When an AI decides to use a tool, it emits a tool call with the tool's name and a parameters object that matches the schema.

gengine's tools are named unreal_world, unreal_assets, unreal_blueprints, unreal_animation, unreal_character, unreal_input_materials, unreal_status, and unreal_get_ue_context. Each tool accepts an operation parameter that selects one of its sub-operations.
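Concretely, invoking one of these sub-operations means emitting a tools/call message whose arguments carry the operation field alongside a params object. A sketch, with the spawn parameters invented for illustration (the real parameter names come from unreal_world's schema) and the JSON-RPC envelope fields elided as in the handshake examples later on this page:

```typescript
// Sketch of the message an MCP client emits to invoke a gengine tool.
// gengine tools take an "operation" field plus a "params" object holding
// that operation's arguments.
function makeToolCall(
  tool: string,
  operation: string,
  params: Record<string, unknown>,
) {
  return {
    method: "tools/call",
    params: {
      name: tool,
      arguments: { operation, params },
    },
  };
}

// Hypothetical spawn call; the parameter names are placeholders, not the
// authoritative unreal_world schema.
const call = makeToolCall("unreal_world", "spawn_actor", {
  actorClass: "PointLight",
});
```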

Resources

Resources are read-only data that the server exposes for AI models to inspect. In gengine, resources include the current level's actor list, the asset registry index, and the Output Log. The AI can request these resources to build context before deciding what tools to call.
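Reading a resource is a small request of its own. A sketch, assuming a made-up URI; the actual URI scheme is whatever gengine advertises, which a client discovers from the server's resources/list reply:

```typescript
// Sketch of a resource read request. The "gengine://" URI is hypothetical;
// real URIs come from the server's "resources/list" response.
function makeResourceRead(uri: string) {
  return { method: "resources/read", params: { uri } };
}

const readActors = makeResourceRead("gengine://level/actors");
```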

Prompts

Prompts are reusable message templates. gengine ships with prompt templates for common workflows like "populate a level with foliage" or "audit Blueprint compile errors". You can access them via the slash-command menu in the chat input.

gengine's MCP implementation

gengine wraps 97 Unreal Editor operations behind 8 MCP tools. This design makes tool discovery manageable — the AI loads 8 tool definitions rather than 97 separate function signatures — while keeping each operation fully typed with its own parameter schema.

[
  "unreal_world",          // actors, level, console, viewport
  "unreal_assets",         // content browser, asset management
  "unreal_blueprints",     // Blueprint graph editing
  "unreal_animation",      // animation blueprints and state machines
  "unreal_character",      // characters, movement, stats
  "unreal_input_materials",// input actions, mappings, materials
  "unreal_status",         // plugin and server health
  "unreal_get_ue_context"  // UE 5.7 API documentation
]

Each tool's definition includes descriptions of every supported operation and their parameters. When the AI calls a tool, gengine's C++ plugin routes the call to the appropriate handler based on the operation field.
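The routing step can be sketched as a handler lookup keyed on the operation field. The handlers below are stand-ins, not gengine's real C++ implementations:

```typescript
// Sketch of operation-based routing: look up a handler by the "operation"
// field and pass it the params. Handlers are illustrative stand-ins.
type Handler = (params: Record<string, unknown>) => string;

const worldHandlers: Record<string, Handler> = {
  spawn_actor: (p) => `spawned ${p.actorClass}`,
  get_level_actors: () => "listed actors",
};

function routeOperation(
  operation: string,
  params: Record<string, unknown>,
): string {
  const handler = worldHandlers[operation];
  if (!handler) {
    // Unknown operations surface as errors rather than silent no-ops.
    throw new Error(`unknown operation: ${operation}`);
  }
  return handler(params);
}
```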

Tip: You can see the full tool definitions by opening the Tools tab in the Command Center. Each entry shows the operation name, parameter schema, and an example call. This is the same schema the AI receives during initialization.

The skill tree: 6 domains

The 97 operations are organized into six functional domains, each mapped to one MCP tool. This is gengine's "skill tree":

unreal_world          → 9 operations
  spawn_actor, move_actor, delete_actors, set_property,
  get_level_actors, open_level, run_console_command,
  capture_viewport, get_output_log

unreal_assets         → 12 operations
  search, dependencies, referencers, get_info, list,
  set_property, save, create_blueprint, duplicate,
  rename, delete, move

unreal_blueprints     → 10 operations
  list, inspect, get_graph, get_events, create,
  add_variable, add_function, add_node, connect_pins,
  set_pin_value

unreal_animation      → 8 operations
  get_info, create_state_machine, add_state,
  add_transition, set_state_animation,
  set_transition_duration, batch, validate_blueprint

unreal_character      → 7 operations
  list_characters, get_character_info, set_movement_params,
  create_data_asset, update_stats, create_stats_table,
  batch_update

unreal_input_materials → 6 operations
  create_input_action, create_mapping_context, add_mapping,
  create_material_instance, set_material_parameters,
  get_material_info

Plus unreal_status (server health, version, connected clients) and unreal_get_ue_context (in-context UE 5.7 API documentation lookups).
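One detail worth noticing in the tree: operation names are scoped to their tool. set_property, for instance, appears under both unreal_world and unreal_assets, so an operation name alone does not identify a handler; a caller always pairs it with the tool name. A sketch of a lookup that respects this, with only a few operations per tool shown:

```typescript
// Partial skill tree as data (a few operations per tool; not exhaustive).
const skillTree: Record<string, string[]> = {
  unreal_world: ["spawn_actor", "set_property", "get_level_actors"],
  unreal_assets: ["search", "set_property", "save"],
  unreal_blueprints: ["list", "inspect", "connect_pins"],
};

// Operation names are scoped per tool, so a lookup can return several tools.
function toolsFor(operation: string): string[] {
  return Object.entries(skillTree)
    .filter(([, ops]) => ops.includes(operation))
    .map(([tool]) => tool);
}
```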

Tool discovery: how AI models find operations

When an AI client connects to gengine's MCP server, the first thing it does is perform the MCP initialize handshake. The server responds with its capabilities, and the client then requests the full list of tool definitions:

→ Client sends:
{
  "method": "initialize",
  "params": {
    "protocolVersion": "1.0",
    "clientInfo": { "name": "claude-cli", "version": "1.5.0" }
  }
}

← Server responds:
{
  "protocolVersion": "1.0",
  "serverInfo": { "name": "gengine", "version": "2.1.0" },
  "capabilities": { "tools": {}, "resources": {}, "prompts": {} }
}

→ Client then calls:  { "method": "tools/list" }

← Server responds with all 8 tool definitions, each containing:
   - name (e.g., "unreal_world")
   - description (plain text for the AI)
   - inputSchema (JSON Schema for the "operation" and "params" fields)

This means the AI always has an up-to-date picture of what gengine can do. If you install a plugin update that adds new operations, the AI will discover them automatically on the next session without any configuration changes.
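After receiving the tools/list result, a client typically indexes the definitions by name so it can hand each schema to the model and check later calls against it. A minimal sketch; the two entries are abbreviated stand-ins with empty schemas, not gengine's real definitions:

```typescript
// Index the tools/list result by name for later lookup and validation.
interface ToolDef {
  name: string;
  description: string;
  inputSchema: object;
}

function indexTools(listResult: { tools: ToolDef[] }): Map<string, ToolDef> {
  return new Map(
    listResult.tools.map((t) => [t.name, t] as [string, ToolDef]),
  );
}

// Abbreviated stand-ins for two of the eight definitions.
const toolIndex = indexTools({
  tools: [
    { name: "unreal_world", description: "actors, level, console, viewport", inputSchema: {} },
    { name: "unreal_status", description: "plugin and server health", inputSchema: {} },
  ],
});
```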

Transport: stdio and HTTP

MCP supports multiple transport mechanisms. gengine uses two:

stdio (for Claude CLI)

When you use gengine with the Claude CLI or Claude Desktop, MCP messages travel over standard input/output. The Claude app launches the gengine MCP bridge as a child process and communicates via its stdin/stdout pipes. This is the simplest setup and requires no network configuration.

{
  "mcpServers": {
    "gengine": {
      "command": "node",
      "args": ["/path/to/Plugins/gengine/Resources/mcp-bridge/dist/index.js"],
      "env": {
        "GENGINE_PORT": "8080"
      }
    }
  }
}

HTTP + SSE (for OpenAI and web clients)

For OpenAI-compatible clients and the gengine Web Chat Panel, MCP messages travel over HTTP with Server-Sent Events for streaming. The MCP bridge listens on localhost:3000 (configurable) and exposes a REST endpoint that any HTTP-capable AI client can call.

This transport is handled by the GengineMCPStreamable module and requires no additional configuration — it starts automatically when the plugin loads.

Note: Both transports connect to the same underlying gengine REST API on port 8080. The bridge handles translation so you can switch between Claude CLI and OpenAI without touching any Unreal code.
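On the HTTP transport, each MCP message arrives as a Server-Sent Events frame: events are separated by a blank line, and lines beginning with data: carry the payload. A minimal sketch of how a client turns the stream into JSON messages; the frame handling follows the standard SSE format, and the payloads below are invented rather than captured gengine traffic:

```typescript
// Minimal SSE frame parser. Events are blank-line separated; "data:" lines
// carry the payload, and multiple data lines in one event join with "\n".
function parseSSE(stream: string): unknown[] {
  return stream
    .split("\n\n")
    .map((block) =>
      block
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice("data:".length).trim())
        .join("\n"),
    )
    .filter((data) => data.length > 0)
    .map((data) => JSON.parse(data));
}

// Illustrative stream of two events.
const events = parseSSE('data: {"method":"tools/list"}\n\ndata: {"ok":true}\n\n');
```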