
Universal App Launcher: Build Once, Use Everywhere

Every AI app that wires directly to multiple tools through bespoke integrations ends up with brittle prompts and duplicated glue code. MCP (Model Context Protocol) solves this with a standard client–server contract: each Host ships one MCP Client, and each capability lives behind an MCP Server. New pairings require zero new glue, which means safer tool use, portable integrations, and faster iteration.


MCP Overview


An architecture diagram depicting the infrastructure prior to MCP implementation, highlighting Model A's connection to a database and filesystem, Model B's interaction with a web API and vector database, and Model C's connection to a payment API.
Diagram illustrating a system architecture 'After MCP,' showing integration between Models A, B, and C with the MCP hub. This central processing unit connects to various components including a database, filesystem, web API, vector database, and payment API, streamlining operations and data flow.

Three Key Roles

  • Host: Your user‑facing app (e.g., IDE, chat UI). Owns UI, memory, and permissions.

  • Client: Runs inside the Host. Speaks the MCP protocol, discovers/negotiates capabilities, streams calls and results.

  • Server: External process (local or remote) that exposes capabilities: Tools, Resources, and Prompts.


Capability Types

  • Tools → do things (code execution, DB query, file I/O, web calls). Side‑effects allowed.

  • Resources → read things (knowledge/doc chunks, files). Read‑only.

  • Prompts → shape behavior (prebuilt system/workflow templates the Host can fetch).


Core Flows (Step‑by‑Step)


1) Tool Invocation Flow

User → Host → MCP Client → (list tools) → choose tool → call → Server → result → Host → User


Algorithmic Flow (tool selection + execution):

  • Parse intent from the user message.

  • Rank tools using: name/description BM25 + lightweight embedding similarity.

  • Fill arguments by schema‑guided extraction (pydantic/json‑schema hints).

  • Safety gate: Host checks first‑use permissions & scopes (e.g., file write, network spend).

  • Execute tool; stream partial output (SSE/stdio) to Client.

  • Reflect & respond: LLM summarizes result; optionally chain another tool.
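The loop above can be sketched as follows. This is a minimal illustration, not the MCP wire protocol: the `Tool` shape, `APPROVED_SCOPES` set, and lexical-overlap ranking are stand-ins for a real registry, the Host's consent store, and BM25 + embeddings.

```python
# Minimal sketch of the select → gate → execute loop.
# Tool, APPROVED_SCOPES, and the scoring are illustrative stand-ins.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    description: str
    scopes: set = field(default_factory=set)  # e.g. {"fs.write", "net"}
    fn: callable = None

APPROVED_SCOPES = {"net"}  # scopes the user has already consented to

def invoke(tools, query, args):
    # 1) Rank: crude token overlap stands in for BM25 + embedding similarity.
    def score(t):
        q = set(query.lower().split())
        d = set((t.name + " " + t.description).lower().split())
        return len(q & d)
    tool = max(tools, key=score)
    # 2) Safety gate: block any scope the user has not approved yet.
    if not tool.scopes <= APPROVED_SCOPES:
        return {"status": "needs_consent", "tool": tool.name}
    # 3) Execute; the LLM then summarizes the result (or chains another tool).
    return {"status": "ok", "tool": tool.name, "result": tool.fn(**args)}

tools = [
    Tool("get_weather", "current weather for a location", {"net"},
         lambda location: {"location": location, "temp_c": 23.5}),
    Tool("write_file", "write text to a file", {"fs.write"},
         lambda path, text: len(text)),
]
print(invoke(tools, "what's the weather in Paris?", {"location": "Paris"}))
```

Note how the consent check happens in the Host before any network hop, which is exactly where MCP places the trust boundary.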


2) Resource Retrieval Flow

Host → MCP Client → resources.list → resources.get(uri) → Server → content (read‑only)


Algorithmic Flow:

  • Host decides which URIs are allowed (whitelist/tenant boundary).

  • Resolve URI template (e.g., file://, db://, kb://), enforce size/PII limits.

  • Return typed payload (text/binary pointers) with provenance metadata.
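The Host-side checks can be sketched like this; the allowlist prefixes, size cap, and `Resource` payload shape are illustrative assumptions, not anything MCP mandates.

```python
# Sketch: URI allowlisting + size cap before returning resource content.
# ALLOWED_PREFIXES and MAX_BYTES are illustrative policy knobs.
from dataclasses import dataclass

ALLOWED_PREFIXES = ("file://claims/", "kb://public/")
MAX_BYTES = 1_000_000

@dataclass
class Resource:
    uri: str
    content: bytes
    provenance: dict  # where the content came from, for audit trails

def get_resource(uri: str, fetch) -> Resource:
    # Tenant boundary: refuse URIs outside the allowlist.
    if not uri.startswith(ALLOWED_PREFIXES):
        raise PermissionError(f"URI not allowed: {uri}")
    data = fetch(uri)
    # Size limit: refuse oversized payloads rather than truncating silently.
    if len(data) > MAX_BYTES:
        raise ValueError(f"resource too large: {len(data)} bytes")
    return Resource(uri, data, {"source": uri, "bytes": len(data)})

r = get_resource("kb://public/faq.txt", lambda u: b"Q: ...\nA: ...")
print(r.provenance["source"])
```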


3) Prompt Template Flow

Host → MCP Client → prompts.list → prompts.get(name) → messages[] → LLM


Algorithmic Flow:

  • Pick template (e.g., code_review), insert variables (repo, diff, guidelines).

  • Prepend to conversation as system/tool messages.

  • LLM continues with consistent behavior.
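The template flow can be sketched as below; the template text, variable names, and message shape are made up for illustration.

```python
# Sketch: fetch a named template, fill variables, prepend as a system message.
# The code_review template and its variables are illustrative.
TEMPLATES = {
    "code_review": (
        "You are reviewing {repo}. Apply these guidelines: {guidelines}.\n"
        "Diff under review:\n{diff}"
    ),
}

def build_messages(name: str, variables: dict, conversation: list) -> list:
    system = TEMPLATES[name].format(**variables)
    # Prepend so the LLM sees consistent instructions before the user turns.
    return [{"role": "system", "content": system}] + conversation

msgs = build_messages(
    "code_review",
    {"repo": "acme/api", "guidelines": "PEP 8", "diff": "- x=1\n+ x = 1"},
    [{"role": "user", "content": "Review this change."}],
)
print(msgs[0]["role"])  # system
```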


Reference Implementations


A. Minimal MCP Server (Python, fastmcp)

from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel

mcp = FastMCP("example-server")


class WeatherIn(BaseModel):
    location: str


@mcp.tool()
def get_weather(args: WeatherIn) -> dict:
    # call your provider here (mocked)
    return {"location": args.location, "temp_c": 23.5, "conditions": "Sunny"}


@mcp.resource("file://{path}")
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()


if __name__ == "__main__":
    mcp.run(transport="stdio")  # or transport="sse" for HTTP/SSE

B. Host‑side Client (Function‑Calling Agent)

from llama_index.core.agent import FunctionCallingAgent
from llama_index.tools.mcp import MCPToolBox  # wraps MCP tools for the agent

mcp_tools = MCPToolBox.from_server(name="example-server")
agent = FunctionCallingAgent(
    tools=mcp_tools.all(),
    system_prompt="Use tools before answering.",
)

while True:
    q = input("User: ")
    print(agent.chat(q))

Safety checklist: first‑tool‑use approval, per‑tool scopes, outbound rate limits, and result size caps.


Complex Business Scenarios


1) Insurance: “Smart Claims Adjuster”


Goal: Cut claim cycle time while enforcing policy rules and fraud checks.


Servers & Tools

  • policy_rules.query(policy_id, clause) – read‑only rules (Resource + Tool combo)

  • claim_db.get(claim_id) / claim_db.update_status(claim_id, status)

  • pricing.compute_payout(loss_event, policy_attrs)

  • fraud.score(claim_payload)


Process Flow

  • Intake: User uploads claim PDF → Host extracts text → stores as Resource (file://claims/123.pdf).

  • Triage: Client calls fraud.score → if score > threshold, flag & request docs.

  • Eligibility: Client fetches policy clauses via policy_rules.query and compares with loss event.

  • Payout: Call pricing.compute_payout → return explanation & breakdown.

  • Action: claim_db.update_status with audit trail (tool calls + resource URIs).


    Diagram

    [User Upload] → [Extract Text] → [fraud.score] → (ok?) → [policy_rules.query]
    → [pricing.compute_payout] → [claim_db.update_status] → [Response]


Algorithmic Flow (eligibility + payout):

  • Extract (peril, date, location, amount) from claim.

  • Match peril to covered clauses using fuzzy match over clause taxonomy.

  • Apply deductible and sub‑limits; cap by coverage limit.

  • Adjust by fraud/override rules; produce explainable ledger (line‑items).
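The payout arithmetic above can be sketched as follows; the function signature, field names, and ledger format are illustrative assumptions, not a real `pricing` API.

```python
# Sketch: apply deductible and sub-limit, cap by coverage limit, and
# emit an explainable line-item ledger. All names/values are illustrative.
def compute_payout(claim_amount, deductible, sub_limit, coverage_limit):
    ledger = [("claimed", claim_amount)]
    amount = max(claim_amount - deductible, 0)   # deductible first
    ledger.append(("after_deductible", amount))
    amount = min(amount, sub_limit)              # peril-specific sub-limit
    ledger.append(("after_sub_limit", amount))
    amount = min(amount, coverage_limit)         # overall coverage cap
    ledger.append(("payout", amount))
    return amount, ledger

payout, ledger = compute_payout(
    claim_amount=12_000, deductible=500, sub_limit=10_000, coverage_limit=50_000
)
print(payout)  # 10000 (11,500 after deductible, capped by the 10,000 sub-limit)
```

Returning the ledger alongside the number is what makes the result "explainable": every adjustment is a line item the adjuster can audit.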


Why MCP helps: Each step is a tool/resource with narrow scope; Host gates risky calls (DB writes), while logic stays portable across UIs (helpdesk, portal, mobile).


2) Supply Chain: “Control‑Tower Replenishment”


Goal: Maintain target service level with minimal working capital.

Servers & Tools

  • erp.inventory(site, sku) (Resource)

  • forecast.demand(sku, horizon) (Tool)

  • optimizer.reorder_point(mu, sigma, z, lead_time) (Tool)

  • procure.create_po(site, sku, qty) (Tool with approval)


Process Flow

  • Planner asks: “Do we need to reorder SKU‑A for DC‑West?”

  • Client fetches inventory + open POs via resource URIs.

  • Client calls forecast.demand → returns mean (μ) and std (σ).

  • Compute reorder point (ROP = μ·L + z·σ·√L).


  • If on_hand + on_order < ROP, propose PO quantity Q = μ·(L+T) − (on_hand + on_order).

  • Host asks human to approve → calls procure.create_po.


Diagram

[Planner Q] → [erp.inventory] + [forecast.demand] → [optimizer.reorder_point]  → (ROP check) → [procure.create_po]*approval* → [Confirmation]


Algorithmic Flow (service‑level policy):

  • Pick z from service level (e.g., 95% ⇒ z≈1.65).

  • Use stochastic lead time L if variance known; otherwise conservative buffer.

  • Add MOQ and case‑pack rounding before creating PO.
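The policy above can be sketched as follows, using the ROP and Q formulas from the process flow; the MOQ and case-pack numbers are illustrative.

```python
# Sketch: reorder-point check with service-level z, MOQ, and case-pack rounding.
# mu/sigma are per-period demand stats; L is lead time, T the review period
# (both in periods). All numbers below are illustrative.
import math

def propose_order(mu, sigma, z, L, T, on_hand, on_order, moq=50, case_pack=12):
    rop = mu * L + z * sigma * math.sqrt(L)      # ROP = μ·L + z·σ·√L
    position = on_hand + on_order                # inventory position
    if position >= rop:
        return 0                                 # no order needed
    q = mu * (L + T) - position                  # Q = μ·(L+T) − position
    q = max(q, moq)                              # minimum order quantity
    return math.ceil(q / case_pack) * case_pack  # round up to full case packs

qty = propose_order(mu=100, sigma=30, z=1.65, L=4, T=1, on_hand=180, on_order=100)
print(qty)  # 228: raw Q is 220, rounded up to the next multiple of 12
```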


Why MCP helps: Clean separation of forecasting/optimization tools, strict approval on PO creation, and auditability of every decision.


Engineering Details


Transport & Streaming

  • stdio for simple local CLIs; SSE (Server‑Sent Events) for HTTP streaming to multiple Hosts.

  • Stream tool logs and partial results for better UX and debuggability.


Schemas & Argument Filling

  • Define tool inputs with typed schemas (pydantic/JSON Schema). The Client can auto‑build arguments and validate before network hop.
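For illustration, the validation step can be sketched with a tiny stdlib-only checker standing in for pydantic/JSON Schema; the schema and helper below are illustrative, not a real validator.

```python
# Sketch: schema-guided argument validation before the network hop.
# A minimal stdlib-only checker stands in for pydantic/JSON Schema.
SCHEMA = {  # illustrative tool-input schema
    "type": "object",
    "required": ["location"],
    "properties": {"location": {"type": "string"}},
}

PY_TYPES = {"string": str, "number": (int, float), "object": dict}

def validate(args: dict, schema: dict) -> list:
    errors = []
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required field: {name}")
    for name, spec in schema.get("properties", {}).items():
        if name in args and not isinstance(args[name], PY_TYPES[spec["type"]]):
            errors.append(f"{name}: expected {spec['type']}")
    return errors

print(validate({"location": "Paris"}, SCHEMA))  # []
print(validate({"location": 42}, SCHEMA))       # ['location: expected string']
```

Validating before the call fails fast in the Client instead of burning a round trip to the Server.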


Ranking Tools (lightweight but effective)

score(tool) = w1·BM25(query, tool.docstring)
            + w2·sim(emb(query), emb(tool.name + docstring))
            + w3·scope_bonus(tool.scope ∈ allowed)
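A sketch of that weighted combination; plain token overlap (Jaccard) stands in for both BM25 and embedding similarity, and the weights are illustrative.

```python
# Sketch: weighted tool scoring. Jaccard token overlap stands in for
# both BM25 and embedding similarity; weights w1-w3 are illustrative.
def overlap(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def score(query, tool, allowed_scopes, w1=0.5, w2=0.3, w3=0.2):
    lexical = overlap(query, tool["docstring"])                        # ~BM25
    semantic = overlap(query, tool["name"] + " " + tool["docstring"])  # ~emb sim
    bonus = 1.0 if tool["scope"] in allowed_scopes else 0.0            # scope_bonus
    return w1 * lexical + w2 * semantic + w3 * bonus

tool = {"name": "get_weather", "docstring": "current weather for a location",
        "scope": "net"}
print(round(score("weather in london", tool, {"net"}), 3))
```

The scope bonus nudges selection toward tools the user has already approved, so high-risk tools don't win ties by description alone.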

Guardrails

  • Per‑tool allow/deny prompts, rate‑limits, spend caps.

  • First‑use consent dialog for high‑risk tools (filesystem writes, payments).

  • Result filters: redaction/PII stripping on Resource reads.


Observability

  • Log: tool name, args hash, duration, exit code, bytes read/written, and caller trace.

  • Correlate logs with conversation IDs for audits.


Packaging & Deployment

  • Ship Servers as Docker images or single‑file CLIs.

  • Version capabilities (tools.get_meta()) so Hosts can adapt to changes.


Step‑by‑Step: Your First MCP Project

  • Create a server (Python): install fastmcp, scaffold tools/resources.

  • Pick a Host: Cursor or Claude Desktop → add your server in settings.

  • Wrap tools for your agent runtime (LlamaIndex / LangChain / custom).

  • Add safety: scopes, consent prompts, and budget guards.

  • Ship: package with Docker or pipx; document URIs and tool semantics.

