Beginner

Understand the tool call lifecycle in gengine — from typing a message to seeing the result in your level.

Your First Tool Call

Every message you send triggers a chain of events: the AI reads your intent, selects the right MCP operation, extracts parameters, executes the call in the editor, and returns the result. This page walks through that lifecycle in detail.

What happens when you send a message

When you press Enter in the chat, gengine sends your message to the configured AI provider along with the full list of available MCP tools. The AI reads both your message and the tool definitions, then decides which operations to call and in what order.

This happens through a mechanism called tool use (also called function calling). Instead of just generating text, the AI can emit structured tool call objects that gengine intercepts and executes. The AI never directly touches your editor — it issues requests through the MCP protocol, and the plugin executes them on the game thread.

You type:  "Spawn a directional light"
     │
     ▼
AI receives: your message + tool definitions for all 97 operations
     │
     ▼
AI emits:  tool_call { name: "unreal_world", operation: "spawn_actor",
                        params: { class: "DirectionalLight", location: [0,0,200] } }
     │
     ▼
MCP Bridge: forwards call to gengine REST API at localhost:8080
     │
     ▼
Plugin:    spawns DirectionalLight actor on the game thread
     │
     ▼
Plugin:    returns { success: true, actor_name: "DirectionalLight_0", location: [0,0,200] }
     │
     ▼
AI receives result and writes a reply: "Done! I spawned a Directional Light at (0, 0, 200)."

The whole cycle — message to visible result — typically takes 2–5 seconds depending on your AI provider's response time and the complexity of the operation.
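
The loop above can be sketched as a minimal dispatch table. Everything here is illustrative: the handler function, the registry, and the exact result fields are assumptions for the example, not gengine's real internals — only the tool call shape follows the diagram above.

```python
# Illustrative sketch of the message -> tool call -> result loop.
# Handler names and registry structure are assumptions, not gengine internals.

def spawn_actor(params):
    # Stand-in for the plugin spawning an actor on the editor's game thread.
    return {"success": True,
            "actor_name": params["class"] + "_0",
            "location": params["location"]}

# Registry mapping (tool name, operation) to a handler, as a bridge might.
HANDLERS = {("unreal_world", "spawn_actor"): spawn_actor}

def dispatch(tool_call):
    """Execute one structured tool call emitted by the AI and return its result."""
    handler = HANDLERS.get((tool_call["name"], tool_call["operation"]))
    if handler is None:
        return {"success": False, "error": "unknown tool or operation"}
    return handler(tool_call["params"])

# The AI emits this after reading "Spawn a directional light":
call = {"name": "unreal_world", "operation": "spawn_actor",
        "params": {"class": "DirectionalLight", "location": [0, 0, 200]}}
result = dispatch(call)
print(result["actor_name"])  # DirectionalLight_0
```

The key design point is that the AI only ever produces the `call` dictionary; executing it is entirely the bridge and plugin's job.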

The tool call lifecycle

Each tool call goes through five stages. Understanding them helps you diagnose problems and write better prompts.

1. Intent recognition

The AI reads your message and maps it to one or more gengine operations. For clear requests ("spawn a point light at 0,0,100") this is straightforward. For ambiguous requests ("add some lighting") the AI may ask a clarifying question or make a reasonable assumption and explain it.

2. Parameter extraction

The AI extracts the parameters each operation needs from your message, its conversation history, and any asset mentions you included. If a required parameter is missing and cannot be inferred, the AI asks for it.

3. Validation

Before execution, the gengine plugin validates every parameter. Path traversal attempts, invalid actor class names, non-finite numeric values, and dangerous console commands are all blocked at this layer. Validation errors are returned to the AI, which explains the problem to you in plain language.
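
A validation layer of this kind can be sketched as below. The two checks shown (path traversal, non-finite numbers) come straight from the list above; the function name, error strings, and parameter shape are hypothetical, and the real plugin's checks are more extensive.

```python
import math

def validate_spawn_params(params):
    """Reject unsafe or malformed parameters before execution.

    Simplified stand-in for the plugin's validation layer (illustrative only).
    Returns a list of error strings; an empty list means the call may proceed.
    """
    errors = []
    # Block path traversal attempts in any string parameter.
    for key, value in params.items():
        if isinstance(value, str) and ".." in value:
            errors.append(f"{key}: path traversal is not allowed")
    # Reject non-finite numeric values (NaN, +/-inf) in the location.
    for axis in params.get("location", []):
        if not math.isfinite(axis):
            errors.append("location: values must be finite")
            break
    return errors

assert validate_spawn_params({"class": "PointLight", "location": [0, 0, 100]}) == []
bad = validate_spawn_params({"class": "../Engine/Secret",
                             "location": [0, 0, float("nan")]})
assert len(bad) == 2  # one path-traversal error, one non-finite error
```

Returning the errors as data, rather than raising, matches the flow described above: the list goes back to the AI, which rephrases it for you.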

4. Execution

The validated call runs on the editor's game thread. Unreal's editor APIs are called directly — this is the same code path as if you had clicked through the UI manually.

5. Result

The plugin returns a structured JSON result to the AI. Success results include the created or modified objects. Error results include a human-readable reason. The AI incorporates the result into its reply.

Example: spawning an actor

Here is the complete JSON for a spawn_actor tool call, exactly as the MCP protocol sends it from the AI to the gengine plugin:

{
  "tool": "unreal_world",
  "parameters": {
    "operation": "spawn_actor",
    "params": {
      "class": "PointLight",
      "location": { "x": 0, "y": 0, "z": 100 },
      "rotation": { "pitch": 0, "yaw": 0, "roll": 0 },
      "label": "MainFillLight"
    }
  }
}

And the response the plugin sends back:

{
  "success": true,
  "actor_name": "MainFillLight",
  "actor_class": "PointLight",
  "location": { "x": 0.0, "y": 0.0, "z": 100.0 },
  "world": "PersistentLevel"
}

The AI receives this result and writes its reply. You see something like: "Done! I spawned a Point Light named MainFillLight at (0, 0, 100) in the Persistent Level."
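
The step from raw result to readable reply can be sketched as a small formatting function. This is purely illustrative of how the result fields map onto the sentence you see; the AI composes its reply freely rather than from a template.

```python
def summarize_spawn(result):
    """Turn a spawn_actor success result into a one-line reply,
    roughly the way the AI paraphrases raw JSON (illustrative only)."""
    loc = result["location"]
    return (f"Done! I spawned a {result['actor_class']} named "
            f"{result['actor_name']} at ({loc['x']:g}, {loc['y']:g}, {loc['z']:g}).")

result = {"success": True, "actor_name": "MainFillLight",
          "actor_class": "PointLight",
          "location": {"x": 0.0, "y": 0.0, "z": 100.0},
          "world": "PersistentLevel"}
print(summarize_spawn(result))
# Done! I spawned a PointLight named MainFillLight at (0, 0, 100).
```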

Example: searching assets

Asset searches use the unreal_assets tool with the search operation. You can filter by name, type, path prefix, or any combination:

{
  "tool": "unreal_assets",
  "parameters": {
    "operation": "search",
    "params": {
      "query": "Rock",
      "asset_type": "StaticMesh",
      "path": "/Game/Environment/",
      "limit": 20
    }
  }
}

The plugin responds with the matching assets:

{
  "results": [
    {
      "name": "SM_Rock_Large",
      "path": "/Game/Environment/Rocks/SM_Rock_Large",
      "type": "StaticMesh",
      "size_kb": 2048
    },
    {
      "name": "SM_Rock_Small",
      "path": "/Game/Environment/Rocks/SM_Rock_Small",
      "type": "StaticMesh",
      "size_kb": 512
    }
  ],
  "total": 2
}

The AI can chain this result into a follow-up operation — for example, spawning all found meshes or reporting their file sizes to you.
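
Chaining of this kind can be sketched as: take each search hit and build a follow-up spawn_actor payload from it. The payload shape follows the spawn example earlier on this page; the grid spacing and helper name are invented for illustration.

```python
# Search response shape taken from the example above.
search_result = {
    "results": [
        {"name": "SM_Rock_Large", "path": "/Game/Environment/Rocks/SM_Rock_Large",
         "type": "StaticMesh", "size_kb": 2048},
        {"name": "SM_Rock_Small", "path": "/Game/Environment/Rocks/SM_Rock_Small",
         "type": "StaticMesh", "size_kb": 512},
    ],
    "total": 2,
}

def spawn_calls_for(search_result, spacing=300):
    """Build one spawn_actor call per found mesh, spaced along X.

    Illustrative sketch of chaining a search result into follow-up
    operations; the spacing value is arbitrary.
    """
    calls = []
    for i, asset in enumerate(search_result["results"]):
        calls.append({
            "tool": "unreal_world",
            "parameters": {
                "operation": "spawn_actor",
                "params": {"class": asset["path"],
                           "location": {"x": i * spacing, "y": 0, "z": 0},
                           "label": asset["name"]},
            },
        })
    return calls

calls = spawn_calls_for(search_result)
print(len(calls), calls[1]["parameters"]["params"]["label"])
# 2 SM_Rock_Small
```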

Example: inspecting a Blueprint

Blueprint inspection uses unreal_blueprints / inspect. This returns variables, functions, components, and parent class information:

{
  "tool": "unreal_blueprints",
  "parameters": {
    "operation": "inspect",
    "params": {
      "asset_path": "/Game/Blueprints/BP_Enemy_Goblin"
    }
  }
}

A typical response:

{
  "name": "BP_Enemy_Goblin",
  "parent_class": "ACharacter",
  "variables": [
    { "name": "Health", "type": "float", "default": 100.0 },
    { "name": "MoveSpeed", "type": "float", "default": 300.0 },
    { "name": "DropTable", "type": "UDataTable*", "default": null }
  ],
  "functions": [
    { "name": "OnDeath", "inputs": [], "outputs": [] },
    { "name": "TakeDamage", "inputs": ["float Damage"], "outputs": ["float Remaining"] }
  ],
  "components": [
    { "name": "CapsuleComponent", "type": "UCapsuleComponent" },
    { "name": "SkeletalMesh", "type": "USkeletalMeshComponent" }
  ]
}

With this information, the AI can answer questions like "what variables does BP_Enemy_Goblin expose?" or follow up with operations like setting a variable's default value.
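
Answering that kind of question is a matter of walking the inspect result. The sketch below is illustrative only; the JSON shape is copied from the example above, and the formatting choices are invented.

```python
# Inspect response shape taken from the example above (variables only).
inspect_result = {
    "name": "BP_Enemy_Goblin",
    "parent_class": "ACharacter",
    "variables": [
        {"name": "Health", "type": "float", "default": 100.0},
        {"name": "MoveSpeed", "type": "float", "default": 300.0},
        {"name": "DropTable", "type": "UDataTable*", "default": None},
    ],
}

def describe_variables(result):
    """Summarize a Blueprint's exposed variables from an inspect result
    (illustrative of how the AI reads the JSON, not its actual logic)."""
    lines = [f"{result['name']} (parent: {result['parent_class']}) exposes:"]
    for var in result["variables"]:
        default = "unset" if var["default"] is None else var["default"]
        lines.append(f"  {var['name']}: {var['type']} = {default}")
    return "\n".join(lines)

print(describe_variables(inspect_result))
```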

Reading tool results in the chat

By default, the Chat tab shows AI replies in natural language — the JSON results are translated into readable summaries. If you want to see the raw tool call data, toggle Show Tool Details in the chat settings (gear icon > Display).

When Show Tool Details is on, each AI reply expands to show:

  • The tool name and operation that was called
  • The exact parameters that were sent
  • The execution time in milliseconds
  • The raw JSON result from the plugin

The Activity tab always shows the raw feed regardless of this setting, making it a reliable place to inspect calls without cluttering the chat.

Tip: If a tool call produces an unexpected result, copy the raw JSON from the Activity tab and paste it into your message with "Why did this return X instead of Y?" — the AI can reason about its own results and suggest a corrected approach.