MCP Server

Model Context Protocol integration that lets external AI agents access CCO tools via JSON-RPC over stdio.

What is MCP?

The Model Context Protocol (MCP) enables external AI agents to access CCO's tools via JSON-RPC over stdio. This follows the architecture described in arxiv:2602.20478 - "Codified Context: Infrastructure for AI Agents in a Complex Codebase".

MCP provides a standardized way for AI agents and assistants to interact with CCO's capabilities without direct CLI access. The protocol is designed for:

  • External agent integration - Cursor, OpenClaw, and other AI assistants can invoke CCO tools
  • Deep context retrieval - Query memory and context without token overhead
  • Surgical edits - Precise file modifications within bounded scope
  • Workflow automation - Programmatic access to CCO capabilities

ℹ️ MCP is particularly useful when external agents need to work alongside CCO on complex tasks, providing a bridge between agent orchestration and CCO's memory and context systems.

Running the MCP Server

Start the MCP server to expose CCO tools to external agents:

```bash
python -m codified_orchestrator.mcp.server
```

The server reads JSON-RPC requests from stdin and writes responses to stdout. It runs indefinitely until EOF or a fatal error.

⚠️ The MCP server must be run from within a CCO-managed workspace, or with the --workspace flag specifying the target repository.

JSON-RPC Protocol

The MCP server implements JSON-RPC 2.0 over stdio. All requests and responses use JSON encoding.

Request Format

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_query",
    "arguments": {
      "query": "architecture decisions",
      "limit": 5
    }
  }
}
```
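
For scripted clients, this envelope can be built with a small helper. A minimal sketch (make_request is an illustrative name, not part of CCO):

```python
import json
from itertools import count

# JSON-RPC ids must be unique per request within a session.
_ids = count(1)

def make_request(method, params=None):
    """Build a JSON-RPC 2.0 request envelope with an auto-incrementing id."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params or {},
    })

req = make_request("tools/call", {
    "name": "memory_query",
    "arguments": {"query": "architecture decisions", "limit": 5},
})
```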

Response Format

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "success": true,
    "data": {
      "insights": [
        {
          "id": 42,
          "content": "AOMA is the primary memory store aligned with arxiv:2602.20478",
          "status": "active",
          "relevance_score": 0.92
        }
      ],
      "raw_count": 3
    }
  }
}
```

Error Format

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid parameters: 'query' is required",
    "data": null
  }
}
```
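
The code values follow the standard JSON-RPC 2.0 error codes. A client can map them to readable names before surfacing them; a minimal sketch (describe_error is illustrative, not a CCO API):

```python
# Standard JSON-RPC 2.0 error codes; servers may additionally use
# implementation-defined codes in the -32000..-32099 range.
JSONRPC_ERRORS = {
    -32700: "Parse error",
    -32600: "Invalid Request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32603: "Internal error",
}

def describe_error(error):
    """Render a JSON-RPC error object as a one-line message."""
    name = JSONRPC_ERRORS.get(error["code"], "Server error")
    return f"{name} ({error['code']}): {error['message']}"
```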

Available MCP Tools

AOMA Memory Tools

Long-term memory storage and retrieval

| Tool | Parameters | Description |
| --- | --- | --- |
| memory_ingest | text, source_type, source_ref | Add information to the AOMA memory store. Consolidation happens asynchronously. |
| memory_query | query, limit, use_llm | Query the memory store with natural language. Returns distilled insights. |
| memory_list | limit, offset, status | List memory records with optional filtering by status. |
| memory_consolidate | (none) | Trigger memory consolidation to convert raw memories to active insights. |
| memory_status | (none) | Get memory system status including statistics and database path. |
| memory_delete | memory_id | Archive a memory record by ID (safe delete). |
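
Each tool is invoked through the tools/call method shown earlier. For example, ingesting a memory (the argument values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "memory_ingest",
    "arguments": {
      "text": "Decided to keep AOMA as the primary memory store",
      "source_type": "note",
      "source_ref": "adr-review"
    }
  }
}
```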

Context Search Tools

Search .context/*.md files and AOMA memory

| Tool | Parameters | Description |
| --- | --- | --- |
| search_context_docs | query, workspace | Full-text search of .context/*.md files in the workspace. |
| search_docs_by_pattern | pattern, workspace, case_sensitive | Regex pattern search across context documents. |
| find_relevant_context | query, memory_limit, context_file_limit, use_llm | Search both AOMA memory and context files, merging by relevance. |

Subsystem Tools

Agent and subsystem discovery

| Tool | Parameters | Description |
| --- | --- | --- |
| list_subsystems | workspace | Scan the workspace and list all available subsystems/agents. |
| suggest_agent | task, workspace | Suggest the best agent/role for a given task based on trigger patterns. |

MCP Resources

MCP resources provide read-only access to CCO state:

| Resource URI | Type | Description |
| --- | --- | --- |
| memory://status | JSON | Current AOMA memory system status including statistics, model info, and database path. |
| memory://project | JSON | Current active project key/namespace in the memory system. |

Example: List Resources

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/list",
  "params": {}
}
```
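
A typical reply lists the two resources from the table above. The result shape follows the MCP resources/list format; the name values here are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "resources": [
      {"uri": "memory://status", "name": "Memory status", "mimeType": "application/json"},
      {"uri": "memory://project", "name": "Active project", "mimeType": "application/json"}
    ]
  }
}
```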

Example: Read Resource

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": {
    "uri": "memory://status"
  }
}
```

MCP Prompts

MCP prompts provide predefined workflows for common tasks:

| Prompt | Parameters | Description |
| --- | --- | --- |
| context_search | query, workspace | Search for relevant context documents matching a natural language query. |
| agent_selection | task, workspace | Select the best agent for a task based on trigger pattern matching. |
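
Prompts are fetched with the standard MCP prompts/get method; the argument values below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "prompts/get",
  "params": {
    "name": "context_search",
    "arguments": {
      "query": "memory consolidation design",
      "workspace": "/path/to/repo"
    }
  }
}
```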

MCP in CCO Workflow

MCP tools are designed for integration at key points in the CCO workflow:

When to Use MCP Tools

External Agent Integration

When external agents (Cursor, OpenClaw, etc.) need to work alongside CCO:

  • Agents can query CCO memory without token overhead via memory_query
  • Context can be retrieved surgically via find_relevant_context
  • Agent selection can be delegated to suggest_agent

Deep Context Retrieval

When full context files are too large for token budgets:

  • Use search_context_docs for targeted searches
  • Use search_docs_by_pattern for regex-based discovery
  • Results are filtered by relevance before being returned

Memory Operations

For programmatic memory management:

  • Batch ingest via memory_ingest
  • Monitor memory health via memory_status
  • Trigger consolidation during idle periods

💡 Pro tip: Use MCP in a cci-contract workflow. Open a contract before external agent work, use MCP tools for context retrieval, and close the contract as the acceptance gate.

Example Integration

Here's a complete example of using MCP tools from an external agent:

```python
import subprocess
import json

def call_mcp(method, params=None):
    """Call the MCP server and return the parsed result."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params or {}
    }

    # Note: this spawns a fresh server process per request; the server
    # exits once stdin reaches EOF.
    result = subprocess.run(
        ["python", "-m", "codified_orchestrator.mcp.server"],
        input=json.dumps(request),
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        raise RuntimeError(f"MCP server failed: {result.stderr}")

    response = json.loads(result.stdout)
    if "error" in response:
        raise RuntimeError(f"MCP error: {response['error']}")
    return response.get("result")

# Query memory for relevant context
context = call_mcp("tools/call", {
    "name": "find_relevant_context",
    "arguments": {
        "query": "architecture decisions for memory system",
        "memory_limit": 5,
        "context_file_limit": 3
    }
})

# List available subsystems
subsystems = call_mcp("tools/call", {
    "name": "list_subsystems",
    "arguments": {
        "workspace": "/path/to/repo"
    }
})

# Ingest a new memory
call_mcp("tools/call", {
    "name": "memory_ingest",
    "arguments": {
        "text": "External agent completed feature X",
        "source_type": "openclaude",
        "source_ref": "feature-x-completion"
    }
})
```

Security Considerations

  • Workspace boundaries - MCP tools respect repo-root confinement
  • No file mutation - MCP tools do not modify files directly
  • Safe deletes - memory_delete performs logical archive, not physical delete
  • Audit trail - All MCP operations are logged to run artifacts

See Also