How gengine's task queue works, which operations can run in parallel, and how to coordinate subagents without hitting queue limits.
# Parallel Execution
gengine processes MCP tool calls through an async task queue on the Unreal game thread. Understanding the queue's concurrency model lets you write AI workflows that run fast without hitting timeouts or causing state corruption.
## How the Task Queue Works
Every tool call submitted to gengine enters a task queue before touching the editor. The queue dispatches tasks to the game thread — the only thread allowed to modify editor state in Unreal Engine.
```cpp
// From MCPTaskQueue.cpp — task dispatch loop
void FMCPTaskQueue::Tick(float DeltaTime)
{
    int32 ActiveCount = 0;
    for (const auto& Task : ActiveTasks)
    {
        if (Task->GetState() == EMCPTaskState::Running)
            ++ActiveCount;
    }

    while (ActiveCount < MaxConcurrentTasks && !PendingTasks.IsEmpty())
    {
        TSharedPtr<FMCPTask> Task;
        PendingTasks.Dequeue(Task);
        Task->Execute();
        ActiveTasks.Add(Task);
        ++ActiveCount;
    }
}
```
Key properties of the queue:
- Max 4 concurrent tasks at any time. A 5th task waits in the pending queue.
- Pending tasks time out after 30 seconds if they never reach the front of the queue.
- Each task is atomic — it either completes or fails; there is no partial rollback.
- Task states: Queued → Running → Completed / Failed
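The dispatch behavior above can be sketched as a small TypeScript model (for illustration only — this is not gengine source): each tick counts running tasks, then promotes pending tasks until all 4 slots are occupied.

```typescript
// Illustrative model of the dispatch loop, not gengine code.
type TaskState = "Queued" | "Running" | "Completed" | "Failed";

interface Task {
  state: TaskState;
  execute(): void; // transitions Queued -> Running
}

const MAX_CONCURRENT_TASKS = 4;

function tick(active: Task[], pending: Task[]): void {
  // Count tasks still occupying a slot.
  let activeCount = active.filter((t) => t.state === "Running").length;

  // Promote pending tasks until the 4 slots are full or the queue drains.
  while (activeCount < MAX_CONCURRENT_TASKS && pending.length > 0) {
    const task = pending.shift()!;
    task.execute();
    active.push(task);
    ++activeCount;
  }
}
```

Submitting 6 tasks against this model leaves 4 running and 2 pending — the 5th and 6th wait until a slot frees up on a later tick.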
## Concurrency Limits
| Scenario | Limit | Reason |
|---|---|---|
| Concurrent MCP tasks | 4 | Unreal game thread task queue capacity |
| Subagents in a coordinated team | 3 | Leaves 1 slot for the lead agent's own calls |
| Parallel calls to the same actor | 1 | Race condition on actor state |
| Parallel calls to the same asset | 1 | Asset serialization conflict |
| Read-only operations | No practical limit* | Do not modify state |
*Read-only operations still count against the 4-task limit. However, because they complete quickly, they rarely cause queuing delays in practice.
## Read-Only Operations: Freely Parallel

Operations annotated `readOnlyHint` do not modify editor state. They can be parallelized freely, including against each other and against in-flight modifying operations targeting different objects.
```js
// All readOnlyHint — safe to fire simultaneously
parallel([
  { tool: "unreal_world", operation: "get_level_actors", params: { class_filter: "PointLight" } },
  { tool: "unreal_assets", operation: "search", params: { query: "BP_Enemy" } },
  { tool: "unreal_character", operation: "list_characters", params: {} },
  { tool: "unreal_world", operation: "get_output_log", params: { lines: 50 } },
])
```
Read-only operations across all 6 domain tools:
| Tool | Read-Only Operations |
|---|---|
| unreal_world | get_level_actors, get_output_log, capture_viewport |
| unreal_assets | search, get_info, list, dependencies, referencers |
| unreal_blueprints | list, inspect, get_graph, get_events |
| unreal_animation | get_info |
| unreal_character | list_characters, get_character_info |
| unreal_input_materials | get_material_info |
## Modifying Operations: Parallel on Different Objects Only
Operations that write to editor state can be parallelized — but only when each concurrent call targets a different object. Two calls targeting the same actor or asset must run sequentially.
```js
// SAFE — each call targets a different actor
parallel([
  set_property({ actor: "PointLight_01", property: "Intensity", value: 5000 }),
  set_property({ actor: "PointLight_02", property: "Intensity", value: 3000 }),
  set_property({ actor: "PointLight_03", property: "Intensity", value: 1000 }),
])
```

```js
// UNSAFE — two calls targeting the same actor, race condition
parallel([
  set_property({ actor: "PointLight_01", property: "Intensity", value: 5000 }),
  set_property({ actor: "PointLight_01", property: "AttenuationRadius", value: 800 }),
  // ^^^ These two must be sequential, not parallel
])
```

```js
// SAFE — different asset paths
parallel([
  set_material_parameters({ path: "/Game/Materials/MI_Rock_Mossy", scalars: { Roughness: 0.8 } }),
  set_material_parameters({ path: "/Game/Materials/MI_Rock_Dry", scalars: { Roughness: 0.95 } }),
  set_material_parameters({ path: "/Game/Materials/MI_Rock_Snow", scalars: { Roughness: 0.6 } }),
])
```
## Sequential-Only Operations
Three operations must always run one at a time and never in parallel with any other operation:
| Operation | Reason |
|---|---|
| open_level | Changes the entire editor world state |
| delete_actors | Iterates and removes actors — concurrent modification causes crashes |
| run_console_command | Commands can affect global state in unpredictable ways |
```js
// CORRECT — sequential
await open_level({ action: "open", path: "/Game/Maps/L02_Village" })
await get_level_actors({}) // Now safe to read the new level

// INCORRECT — never do this
parallel([
  open_level({ action: "open", path: "/Game/Maps/L02_Village" }),
  get_level_actors({}), // Race: level may not be loaded yet
])
```
## Subagent Coordination
When coordinating multiple AI subagents to work in parallel, cap at 3 subagents. This leaves 1 task queue slot for the lead agent's own calls, preventing the queue from filling completely and causing timeouts.
```
Lead agent (1 queue slot reserved)
├── Subagent A — handles world/actor operations (1 queue slot)
├── Subagent B — handles asset operations (1 queue slot)
└── Subagent C — handles Blueprint operations (1 queue slot)
```
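One way to enforce the 3-subagent cap on the client side is a small counting semaphore. This is a sketch, not part of gengine — `runSubagent` and the work callbacks are hypothetical names:

```typescript
// A counting semaphore that admits at most 3 concurrent workers,
// leaving the 4th gengine queue slot free for the lead agent.
class Semaphore {
  private waiters: (() => void)[] = [];
  constructor(private slots: number) {}

  async acquire(): Promise<void> {
    if (this.slots > 0) {
      this.slots--;
      return;
    }
    // No free slot: park until release() hands one over.
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next();   // transfer the slot directly to a waiter
    else this.slots++;
  }
}

const subagentSlots = new Semaphore(3); // 4 queue slots minus 1 for the lead

async function runSubagent<T>(work: () => Promise<T>): Promise<T> {
  await subagentSlots.acquire();
  try {
    return await work();
  } finally {
    subagentSlots.release();
  }
}
```

With this wrapper, submitting 6 subagent jobs never puts more than 3 in flight at once, so the lead agent's own calls cannot be starved out of the queue.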
### Partition work by domain
Assign each subagent a distinct domain to minimize coordination overhead and eliminate object conflicts:
```js
// Good partition — each subagent owns a domain
SubagentA.task = "Spawn and position all lights in L01_Dungeon"
SubagentB.task = "Create material instances for all environment meshes"
SubagentC.task = "Add HitReact and Death states to all enemy Animation Blueprints"

// Bad partition — multiple subagents touching the same actors
SubagentA.task = "Set Intensity on all point lights"
SubagentB.task = "Set AttenuationRadius on all point lights" // Conflict!
```
### Fan-out pattern
The lead agent reads shared state, distributes independent work to subagents, then collects results.
```js
// Lead agent: read once
const actors = await get_level_actors({ class_filter: "PointLight" })

// Split into 3 chunks for 3 subagents
const chunks = splitIntoChunks(actors, 3)

// Fan out — subagents work in parallel on different objects
await parallel([
  SubagentA.processActors(chunks[0]),
  SubagentB.processActors(chunks[1]),
  SubagentC.processActors(chunks[2]),
])

// Lead agent: verify
await get_output_log({ lines: 50, filter: "Error" })
```
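The `splitIntoChunks` helper in the fan-out example is a stand-in, not a gengine API. One plausible implementation deals items round-robin, so no subagent receives more than one extra item:

```typescript
// Deal items round-robin into n chunks of near-equal size.
// Hypothetical helper — not provided by gengine.
function splitIntoChunks<T>(items: T[], n: number): T[][] {
  const chunks: T[][] = Array.from({ length: n }, () => []);
  items.forEach((item, i) => chunks[i % n].push(item));
  return chunks;
}
```

For 7 actors and 3 subagents this yields chunks of sizes 3, 2, and 2, keeping the per-subagent workload balanced.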
## Timeout Handling
If a task waits in the queue for more than 30 seconds without executing, it times out with:
```json
{
  "error": "task_timeout",
  "message": "Task waited 30s in queue without executing. The queue may be saturated.",
  "queued_at": "2025-01-15T14:32:07Z"
}
```
If you receive this error:
- Check the Tasks tab in the Command Center — look for stuck Running tasks.
- Reduce parallelism — you may be submitting more than 4 concurrent tasks.
- Check for sequential-only operations (open_level, delete_actors) that are blocking the queue.
- Restart the MCP bridge if tasks appear stuck indefinitely.
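On the client side, you can also surface saturation earlier than the server's 30-second limit by racing each call against your own timer. A sketch — `withTimeout` is not a gengine API, and the wrapped promise would be whatever your MCP client call returns:

```typescript
// Reject if the wrapped call does not settle within `ms` milliseconds.
// Hypothetical client-side helper, not part of gengine.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`task_timeout: no response within ${ms} ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

Setting the client timeout slightly above 30 seconds lets the server's own `task_timeout` arrive first when the queue is merely saturated, while still catching a bridge that has stopped responding entirely.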
## Practical Patterns
### Batch reads before writes
```js
// Phase 1: Parallel reads (all readOnlyHint)
const [actors, assets, log] = await parallel([
  get_level_actors({ class_filter: "StaticMeshActor" }),
  search({ query: "SM_Rock", asset_type: "StaticMesh" }),
  get_output_log({ lines: 100 }),
])

// Phase 2: Sequential writes using data from reads
for (const actor of actors.actors) {
  await set_property({ actor: actor.name, property: "CastShadow", value: false })
}
```
### Parallel Blueprint population
```js
// Create 3 Blueprints sequentially (they don't exist yet)
const bps = ["/Game/BP_A", "/Game/BP_B", "/Game/BP_C"]
for (const bp of bps) {
  await create({ path: bp, parent_class: "/Script/Engine.Actor" })
}

// Add variables in parallel — different Blueprints, no conflict
await parallel([
  add_variable({ path: "/Game/BP_A", name: "Health", type: "float", default_value: 100 }),
  add_variable({ path: "/Game/BP_B", name: "Health", type: "float", default_value: 200 }),
  add_variable({ path: "/Game/BP_C", name: "Health", type: "float", default_value: 50 }),
])
```
### Rate-aware batching
When making more than 4 calls, group them into batches of 4 and await each batch before starting the next:
```js
const allActors = await get_level_actors({})
const batches = chunkArray(allActors.actors, 4) // split into groups of 4
for (const batch of batches) {
  await parallel(
    batch.map(actor => move_actor({ name: actor.name, location: offsetLocation(actor.location) }))
  )
}
```