
Case Studies

10.1 Case 1: E-Commerce Customer Service MCP Server

Scenario: Turn Claude/ChatGPT into your customer service assistant, answering customer questions about orders, shipping, and returns.

Tool Design:
| Tool | Function |
|------|----------|
| get_order_status | Query order status by order ID |
| track_shipment | Query shipping tracking information |
| check_return_eligibility | Check whether return conditions are met |
| create_return_request | Create a return request |
| get_faq | Get answers to frequently asked questions |
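Of the tools above, check_return_eligibility carries the most decision logic. A minimal sketch of its core check, assuming a 30-day return window and a non-returnable category list (both are illustrative policy assumptions, not rules from this book):

```typescript
// Hypothetical order shape; field names are illustrative.
interface Order {
  id: string;
  deliveredAt: Date | null; // null = not yet delivered
  category: string;         // e.g. "apparel", "perishable"
}

const RETURN_WINDOW_DAYS = 30;                  // assumed policy
const NON_RETURNABLE = new Set(["perishable"]); // assumed policy

// Core decision behind a check_return_eligibility tool.
function isReturnEligible(order: Order, now: Date = new Date()): boolean {
  if (!order.deliveredAt) return false;                 // nothing delivered yet
  if (NON_RETURNABLE.has(order.category)) return false; // category excluded
  const ageDays = (now.getTime() - order.deliveredAt.getTime()) / 86_400_000;
  return ageDays <= RETURN_WINDOW_DAYS;
}
```

A `server.tool(...)` registration would wrap this check and format the boolean into a text reply for the model.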
Resource Design:
| Resource URI | Content | Annotations |
|--------------|---------|-------------|
| store://policies/return | Return policy | audience: ["assistant"], priority: 0.9 |
| store://policies/shipping | Shipping information | audience: ["assistant"], priority: 0.8 |
| store://faq/common | Frequently asked questions | audience: ["assistant", "user"], priority: 0.7 |
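The audience and priority annotations are hints to the client, which decides how to use them. A sketch of one plausible client-side selection policy (the filtering logic is illustrative; MCP leaves the exact policy to the client):

```typescript
// Resource annotations as listed in the table above.
interface ResourceMeta {
  uri: string;
  audience: ("assistant" | "user")[];
  priority: number; // 0..1, higher = more important
}

// Pick resources relevant to a given audience, most important first.
function selectResources(
  all: ResourceMeta[],
  audience: "assistant" | "user",
  minPriority = 0
): ResourceMeta[] {
  return all
    .filter(r => r.audience.includes(audience) && r.priority >= minPriority)
    .sort((a, b) => b.priority - a.priority);
}
```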
Prompt Design:

```typescript
server.prompt(
  "customer_service_mode",
  "Activate customer service assistant mode, loading store policies and FAQ",
  [],
  async () => ({
    messages: [{
      role: "assistant",
      content: {
        type: "text",
        text: "You are a professional customer service assistant. Answer questions based on store policies. Suggest transferring to a human agent for complaints you cannot handle."
      }
    }]
  })
);
```
Elicitation in Action: When processing a return request, use Elicitation to confirm the return reason and refund method with the user:

```typescript
server.tool("create_return_request", "Create a return request", {
  orderId: { type: "string" }
}, async ({ orderId }, { requestElicitation }) => {
  // Request user confirmation via Elicitation
  const confirmation = await requestElicitation({
    message: "Please confirm the return details",
    requestedSchema: {
      type: "object",
      properties: {
        reason: { type: "string", enum: ["Quality issue", "Wrong size", "Changed mind", "Other"] },
        refundMethod: { type: "string", enum: ["Original payment method", "Store credit"] }
      },
      required: ["reason", "refundMethod"]
    }
  });
  // Create return...
  return {
    content: [{ type: "text", text: `Return request created. Refund method: ${confirmation.refundMethod}` }]
  };
});
```
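The handler above reads the confirmed fields directly, which assumes the user accepted. In the MCP elicitation spec, the result carries an `action` of `accept`, `decline`, or `cancel`, with the submitted fields present only on accept. A hedged sketch of handling all three outcomes (the reply strings and field names are illustrative):

```typescript
// Elicitation result shape per the MCP spec: content only on "accept".
interface ElicitResult<T> {
  action: "accept" | "decline" | "cancel";
  content?: T;
}

interface ReturnDetails { reason: string; refundMethod: string; }

// Turn an elicitation result into a tool reply, covering all three actions.
function returnRequestReply(result: ElicitResult<ReturnDetails>): string {
  if (result.action === "accept") {
    return `Return request created. Refund method: ${result.content!.refundMethod}`;
  }
  if (result.action === "decline") {
    return "Return request not created: the user declined to confirm.";
  }
  return "Return request cancelled by the user.";
}
```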
Example Conversation:
User: Where is my order ORD-20260410-001?
AI: [calls get_order_status] Your order has been shipped, tracking number SF1234567890.
    [calls track_shipment] Latest status: April 10, 15:23 - Arrived at the
    regional distribution center. Estimated delivery: tomorrow.
Value: 24/7 automated responses, reducing customer service workload by over 60%.

10.2 Case 2: B2B Supply Chain Query

Scenario: Enable procurement staff to quickly query supplier inventory and quotes through an AI assistant.

Tool Design:
```typescript
server.tool("query_supplier_inventory", "Query supplier inventory and pricing", {
  productCode: { type: "string", description: "Product code" },
  quantity: { type: "number", description: "Required quantity" },
  supplierId: { type: "string", description: "Supplier ID" }
}, async ({ productCode, quantity, supplierId }) => {
  const quote = await supplierApi.getQuote(supplierId, productCode, quantity);
  const data = {
    supplier: quote.supplierName,
    product: quote.productName,
    unitPrice: quote.unitPrice,
    totalPrice: quote.totalPrice,
    currency: quote.currency,
    leadTime: quote.leadTimeDays,
    moq: quote.minimumOrderQuantity,
    inStock: quote.availableQuantity
  };
  return {
    content: [{
      type: "text",
      text: JSON.stringify(data, null, 2)
    }],
    structuredContent: data
  };
});
```

```typescript
server.tool("compare_suppliers", "Compare quotes from multiple suppliers", {
  productCode: { type: "string", description: "Product code" },
  quantity: { type: "number", description: "Required quantity" },
  supplierIds: { type: "string", description: "Supplier ID list, comma-separated" }
}, async ({ productCode, quantity, supplierIds }) => {
  const ids = supplierIds.split(",").map(s => s.trim());
  const quotes = await Promise.all(
    ids.map(id => supplierApi.getQuote(id, productCode, quantity))
  );
  const comparison = quotes
    .sort((a, b) => a.unitPrice - b.unitPrice)
    .map((q, i) => ({
      rank: i + 1,
      supplier: q.supplierName,
      unitPrice: q.unitPrice,
      leadTime: q.leadTimeDays,
      moq: q.minimumOrderQuantity
    }));
  return {
    content: [{
      type: "text",
      text: JSON.stringify(comparison, null, 2)
    }]
  };
});
```
Sampling in Action: Use Sampling to request the Host’s LLM to generate procurement recommendations:
```typescript
server.tool("procurement_advice", "Generate procurement advice based on quote data", {
  productCode: { type: "string" },
  quantity: { type: "number" }
}, async ({ productCode, quantity }, { requestSampling }) => {
  const quotes = await supplierApi.getAllQuotes(productCode, quantity);
  // Request LLM analysis via Sampling
  const analysis = await requestSampling({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Analyze the following supplier quotes and recommend the optimal procurement plan:\n${JSON.stringify(quotes, null, 2)}`
      }
    }],
    maxTokens: 500,
    modelPreferences: {
      intelligencePriority: 0.8,
      costPriority: 0.2
    }
  });
  return {
    content: [{ type: "text", text: analysis.content.text }]
  };
});
```

10.3 Case 3: Multi-Store Management

Scenario: Manage multiple e-commerce stores (Shopify + WooCommerce + self-hosted) through MCP with a unified interface.

Tool Design:
| Tool | Function |
|------|----------|
| list_stores | List all stores |
| get_store_stats | Get sales data for a specific store |
| sync_inventory | Sync inventory across stores |
| compare_performance | Compare performance metrics across stores |
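A unified interface over heterogeneous platforms usually means one adapter per platform behind a shared type. A minimal sketch of that pattern (interface and field names are illustrative, not from any platform SDK):

```typescript
// Normalized stats every platform adapter must produce.
interface StoreStats { orders: number; revenue: number; conversionRate: number; }

// One adapter per connected platform.
interface StoreAdapter {
  id: string;
  platform: "Shopify" | "WooCommerce" | "Custom";
  getStats(period: "day" | "week" | "month"): Promise<StoreStats>;
}

// get_store_stats can then dispatch by store id with no platform-specific code.
async function getStoreStats(
  adapters: StoreAdapter[],
  storeId: string,
  period: "day" | "week" | "month"
): Promise<StoreStats> {
  const adapter = adapters.find(a => a.id === storeId);
  if (!adapter) throw new Error(`Unknown store: ${storeId}`);
  return adapter.getStats(period);
}
```

compare_performance falls out of the same shape: call `getStats` on every adapter and diff the normalized results.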
Resources: Expose platform API configurations as resources:

```typescript
server.resource(
  "stores://platforms",
  "Connected e-commerce platforms",
  "application/json",
  async () => {
    return JSON.stringify([
      { id: "shopify-main", platform: "Shopify", name: "Main Store", status: "active" },
      { id: "woo-1", platform: "WooCommerce", name: "Secondary Store", status: "active" },
      { id: "self-hosted", platform: "Custom", name: "Self-hosted Store", status: "active" }
    ], null, 2);
  }
);
```
Example Conversation:
User: Compare the sales performance of my three stores this week.
AI: [calls get_store_stats x 3]

    This week's sales comparison:
    | Store            | Orders | Revenue   | Conversion |
    |------------------|--------|-----------|------------|
    | Shopify Main     | 142    | $8,940    | 3.2%       |
    | WooCommerce      | 68     | $4,210    | 2.8%       |
    | Self-hosted      | 35     | $2,860    | 4.1%       |

    The self-hosted store has the highest conversion rate but the least traffic.
    Consider applying Shopify's traffic strategies to the self-hosted store.

10.4 Case 4: OTR Trust Assessment Integration

Scenario: An AI agent verifies merchant OTR trust scores before making recommendations, ensuring only trustworthy merchants are suggested.

Tool Design:
```typescript
server.tool("verify_merchant", "Verify merchant trustworthiness", {
  domain: { type: "string", description: "Merchant domain" }
}, async ({ domain }) => {
  const otrResponse = await fetch(
    `https://${domain}/.well-known/otr/verify`
  );

  if (!otrResponse.ok) {
    return {
      content: [{
        type: "text",
        text: `${domain} has not deployed the OTR protocol. Unable to verify trust. Exercise caution.`
      }],
      isError: false  // This is not an error, just missing OTR data
    };
  }

  const otr = await otrResponse.json();
  return {
    content: [{
      type: "text",
      text: JSON.stringify({
        domain: otr.domain,
        trustScore: otr.trustScore,
        badges: otr.badges,
        dimensions: {
          identity: otr.dimensions?.I,
          security: otr.dimensions?.S,
          transparency: otr.dimensions?.T,
          reputation: otr.dimensions?.R,
          durability: otr.dimensions?.D,
          financial: otr.dimensions?.F
        },
        recommendation: otr.trustScore >= 70
          ? "Good trust score. Safe for transactions."
          : otr.trustScore >= 40
          ? "Average trust score. Recommended to learn more before deciding."
          : "Low trust score. Exercise caution."
      }, null, 2)
    }]
  };
});
```
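The usage scenario below reports per-dimension letter grades. One plausible score-to-grade banding, alongside the recommendation thresholds used in the tool above (the grade thresholds here are assumptions for illustration; the OTR spec may define its own banding):

```typescript
// Map a 0-100 dimension score to a letter grade.
// Thresholds are illustrative assumptions, not from the OTR spec.
function gradeForScore(score: number): "A" | "B" | "C" | "D" {
  if (score >= 80) return "A";
  if (score >= 60) return "B";
  if (score >= 40) return "C";
  return "D";
}

// Overall recommendation banding, mirroring the verify_merchant tool above.
function recommendationFor(trustScore: number): string {
  if (trustScore >= 70) return "Good trust score. Safe for transactions.";
  if (trustScore >= 40) return "Average trust score. Recommended to learn more before deciding.";
  return "Low trust score. Exercise caution.";
}
```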
Usage Scenario: An AI shopping assistant automatically verifies merchant trust when recommending products:
User: Help me find a reliable outdoor gear store.
AI: [calls search_products to search outdoor gear]
    [calls verify_merchant for outdoorgear.com]
    [calls verify_merchant for hikestore.com]

    Two OTR-verified outdoor gear stores recommended:

    1. outdoorgear.com - OTR Trust Score 82/100
       Six dimensions: Identity A / Security A / Transparency B / Reputation B / Durability A / Financial B

    2. hikestore.com - OTR Trust Score 71/100
       Six dimensions: Identity B / Security B / Transparency B / Reputation C / Durability B / Financial C

    Both have passed basic trust verification. outdoorgear.com has the higher overall score.

10.5 Quick Start Path

If this is your first time building an MCP Server:
  1. Hour 1: Read Chapters 1-4 of this book to understand MCP architecture and the development workflow
  2. Hour 2: Run the example code from Chapter 4 using MCP Inspector
  3. Hours 3-4: Build your own commerce Server based on the templates in Chapter 5
  4. Hour 5: Test in Claude Desktop, confirm tools work properly
  5. Beyond: Expand tools, add Resources and Prompts, deploy remotely as needed

AI Prompt: Quickly Build an MCP Server

```
Help me build an MCP Server.

I have read the MCP protocol documentation (modelcontextprotocol.io)
and now need your help with the implementation.

Requirements:
- Language: TypeScript
- Protocol version: 2025-11-25
- Tool list:
  1. [tool name]: [functionality], input [parameters], returns [data]
  2. [tool name]: [functionality], input [parameters], returns [data]
- Resources: [data resources to expose]
- Prompts: [predefined prompt templates]
- Data source: [database connection / API endpoint]
- Transport: [stdio / Streamable HTTP]

Please provide:
1. Complete runnable code (with outputSchema and structuredContent)
2. package.json
3. tsconfig.json
4. Claude Desktop configuration file
5. Testing commands
```

Congratulations on completing Book 5. You now have comprehensive knowledge of the MCP protocol and can build your own MCP Server to enable AI agents to interact with your systems. Recommended next steps:

- Recommended Next: SEO for AI — make your site and products discoverable by AI agents
- More Case Studies: OTR Cases | UCP Cases | SEO Cases