A working manual on the Model Context Protocol, built around your seven shipped MCP servers. Designed so a recruiter cannot trip you. Read once cold, return as reference.
Real spec, real code from your repos, recruiter-grade phrasing. No filler.
Memorize this. Everything below is the proof.
Most explanations stop at the analogy. A recruiter will push past the analogy. Here is what you say next.
MCP (Model Context Protocol) is an open specification, released by Anthropic in November 2024, that defines how an AI application gives a language model access to external context and tools. It is transport-agnostic, schema-driven, and built on JSON-RPC 2.0.
Before MCP, "tool use" meant every chat app and every IDE built one-off integrations to every API. Slack to ChatGPT, Slack to Claude, Slack to Cursor, Gmail to ChatGPT, Gmail to Claude, and so on. That is M models times N tools. Each integration broke whenever either side changed. None were reusable.
With MCP, the tool side is built once and runs in every host: japan-ux-mcp works in Claude, Cursor, Windsurf, Zed, Cline, and VS Code unchanged. That turns M×N integrations into M+N.

MCP servers expose three primitive types. The distinction is not academic — it controls who decides when each one is used.
A tool is a function the model can choose to invoke during a turn. The server describes its name, description, and input schema. The host shows the model what tools exist, the model emits a structured call, the host runs it, the result goes back.
Example from your code: create_invoice in xendit-mcp creates a real Xendit payment invoice. generate_jp_form in japan-ux-mcp returns Japanese form markup. rip_transcript in rippr fetches a YouTube transcript and saves it.
A resource is a piece of context exposed at a URI like japan-ux://typography-guide. The host (Claude Desktop, Cursor) decides whether to load it into the conversation. The model does not call resources directly the way it calls tools. The user or app says "include this."
Example from your code: japan-ux-mcp exposes nine resources — keigo guide, era calendar, color guide, layout guide, trust checklist. Pure reference data the model reads to do its job better.
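On the wire, a resource is just a URI plus metadata in a resources/list response. A minimal sketch, with field names following the MCP spec but the specific URIs, titles, and descriptions illustrative rather than copied from the repo:

```typescript
// Hypothetical resources/list result. The field names (uri, name,
// description, mimeType) follow the MCP spec; the entries are illustrative.
const resourcesListResult = {
  resources: [
    {
      uri: "japan-ux://typography-guide",
      name: "typography-guide",
      description: "Japanese typography reference (line height, line-break rules)",
      mimeType: "text/markdown",
    },
    {
      uri: "japan-ux://era-calendar",
      name: "era-calendar",
      description: "Gregorian-to-era (Reiwa/Heisei) conversion table",
      mimeType: "text/markdown",
    },
  ],
};

// The host reads this list and decides what to load into context;
// the model never "calls" a resource the way it calls a tool.
console.log(resourcesListResult.resources.map((r) => r.uri));
```

The key design point survives even in a sketch this small: the descriptor carries no executable handler, only an address and a description, because loading it is the host's decision.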
A prompt is a parametrized message template the user can pick from a slash menu. The server defines the template. The user fills in the arguments. The host expands the prompt into the conversation.
Example from your code: japan-ux-mcp ships japan_form, japan_audit, japan_keigo. paypay-mcp ships accept_single_payment, refund_last_payment, debug_stuck_payment. These appear as slash commands in the host UI.
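A prompt has two halves: a descriptor the host shows in its slash menu, and a server-side expansion that turns the user's arguments into real messages. A sketch of both, assuming a hypothetical argument shape (the real japan_form arguments may differ):

```typescript
// Hypothetical prompt descriptor: the shape follows the MCP spec,
// but the argument names are illustrative, not taken from japan-ux-mcp.
const japanFormPrompt = {
  name: "japan_form",
  description: "Generate a Japanese-localized form",
  arguments: [
    { name: "fields", description: "Comma-separated field list", required: true },
  ],
};

// On prompts/get, the server expands the template into conversation messages.
function expandPrompt(args: { fields: string }) {
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Build a Japanese signup form with these fields: ${args.fields}. Use sei/mei order and furigana inputs.`,
        },
      },
    ],
  };
}

const expanded = expandPrompt({ fields: "name, email, phone" });
console.log(expanded.messages[0].content.text);
```

Note who decides what here: the user picks the prompt and supplies the arguments, the server owns the wording of the expansion. That is the opposite control flow from a tool.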
Three structural reasons. Memorize them in order — they build on each other.
OpenAI's Plugins (2023) had a head start and died because every plugin worked only inside ChatGPT. Anthropic learned. MCP was launched as open, with reference SDKs in TypeScript and Python on day one. Within twelve months, OpenAI shipped MCP support in the Agents SDK, Microsoft added it to Copilot Studio, Google adopted it for Gemini. The cost of being incompatible became too high. That's how protocols win.
By 2025, frontier models could one-shot most tasks if the relevant context was in front of them. The hard part stopped being intelligence and became access — to your codebase, your calendar, your CRM, your Figma file, your team's wiki. MCP is the only credible standard for that access layer. You see this in the listings: 5,000+ public MCP servers on Glama, Smithery, and Lobehub by early 2026.
An agent is, structurally, a model that takes a goal and runs a loop of tool calls until the goal is met. Every leg of that loop is an MCP-shaped operation. Whoever provides clean, secure, well-described MCP servers becomes the distribution layer for agent capability. That is why every infrastructure company — Cloudflare, Vercel, Stripe, Notion, Linear, GitHub — has shipped or is shipping an MCP server. They don't want to be a feature of the agent; they want to be the substrate.
If you confuse host, client, and server in an interview, you lose. Drill this until it's automatic.
Two official transports: stdio for local subprocesses (the host launches your server and talks over stdin/stdout), and Streamable HTTP for remote servers (the host POSTs JSON-RPC to your URL). Most npm-published MCPs ship as stdio because installing is just npx.

"So an MCP server is like a REST API?" No. It's stateful. The server keeps a session per client connection, supports server-initiated notifications (tool list changed, resource updated), and can even ask the host's LLM for completions (sampling). That's a real protocol, not an HTTP endpoint.
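The statefulness shows up directly in the JSON-RPC 2.0 framing: requests carry an id and expect a response, while server-initiated updates like tools/list_changed are notifications with no id at all. A minimal sketch of the two envelope shapes:

```typescript
// JSON-RPC 2.0 distinguishes requests (carry an "id", expect a response)
// from notifications ("fire and forget", no "id"). Server-initiated updates
// such as notifications/tools/list_changed are notifications.
function makeRequest(id: number, method: string, params?: object) {
  return { jsonrpc: "2.0" as const, id, method, ...(params ? { params } : {}) };
}

function makeNotification(method: string, params?: object) {
  return { jsonrpc: "2.0" as const, method, ...(params ? { params } : {}) };
}

const listTools = makeRequest(2, "tools/list");
const toolsChanged = makeNotification("notifications/tools/list_changed");

console.log("id" in listTools);    // true: requests carry an id
console.log("id" in toolsChanged); // false: notifications do not
```

A plain REST endpoint has no channel for the second shape; the server would have nowhere to send it. That asymmetry is the one-sentence proof that MCP is a protocol, not an API.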
If you can walk through these five steps without notes, you can answer any "how does it work" question.
Host launches the server (for stdio, that's a child process). The client sends an initialize request announcing its protocol version and capabilities. The server replies with its own protocol version, server info, and the capabilities it supports.
// client → server
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "roots": {}, "sampling": {} },
    "clientInfo": { "name": "claude-code", "version": "1.x" }
  }
}
Why it matters: capabilities is how each side says "I support tools, but not sampling" or "I support resources with subscriptions." A correct server checks these flags before sending optional features.
Once initialized, the client asks tools/list, resources/list, prompts/list. The server returns descriptors with names, human descriptions, and JSON Schema for inputs. Those descriptions go straight to the LLM as system context.
// client → server
{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }

// server → client
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [{
      "name": "generate_jp_form",
      "description": "Outputs Japanese form markup with sei/mei order, furigana...",
      "inputSchema": { "type": "object", "properties": { "fields": {...} } }
    }]
  }
}
Why your tool description matters: the model never reads your code. It reads this string. Every tool description in your repos is what an LLM uses to decide whether to call it. That's why japan-ux-mcp ships descriptions in English and Japanese.
User asks: "Build me a Japanese signup form with name, email, and phone." The host serializes the available tools into the LLM call. The LLM produces a structured tool-use response naming generate_jp_form with arguments. This is model-internal. MCP is not invoked yet.
The host receives the tool-use intent and now needs to actually run the tool. That's where the client makes its second request to the server.
The client sends a tools/call request. The server validates the arguments against its schema, executes the handler, and returns a result with one or more content blocks (text, image, audio, or resource references).
// client → server
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "generate_jp_form",
    "arguments": { "fields": ["name", "email", "phone"], "framework": "react" }
  }
}

// server → client
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "<form><label>姓..." }],
    "isError": false
  }
}
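Between those two messages, the server validates, executes, and wraps. A hand-rolled sketch of that step — the real TypeScript SDK does schema validation for you, and the markup here is illustrative, not what generate_jp_form actually returns:

```typescript
// Minimal sketch of the server side of tools/call: validate arguments
// against the declared inputSchema, run the handler, wrap the result in
// content blocks. Hand-rolled here to show the shape; the SDK automates it.
type ToolResult = { content: { type: "text"; text: string }[]; isError: boolean };

function handleGenerateJpForm(args: unknown): ToolResult {
  const fields = (args as { fields?: unknown } | null)?.fields;
  if (!Array.isArray(fields) || !fields.every((f) => typeof f === "string")) {
    return {
      // Tool-level failures go back as results with isError: true,
      // not as JSON-RPC protocol errors, so the model can read and recover.
      content: [{ type: "text", text: "Invalid arguments: 'fields' must be an array of strings" }],
      isError: true,
    };
  }
  // Illustrative output only; the real tool returns full localized markup.
  const markup = fields.map((f) => `<label>${f}</label>`).join("");
  return { content: [{ type: "text", text: `<form>${markup}</form>` }], isError: false };
}

console.log(handleGenerateJpForm({ fields: ["name", "email"] }).content[0].text);
```

The isError flag is worth calling out in an interview: it keeps bad tool runs inside the conversation, where the model can see the failure and retry, instead of killing the session with a protocol error.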
Critical detail: the host shows the user this tool call before running it (in most safety-conscious hosts). The user can approve, deny, or always-allow. That's the layer where prompt-injection defense lives — your paypay-mcp goes further by gating refunds and cancels behind environment variables.
The host injects the tool result back into the conversation as a "tool result" message. The model sees it, decides whether the goal is met, and either answers the user or calls another tool. That loop — model, tool call, result, model — is what an agent is.
MCP doesn't define the loop. The loop is the host's job. MCP defines what each leg of the loop looks like on the wire.
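That division of labor can be sketched in a few lines. Everything below is a stand-in: `callModel` represents the host's LLM API and `callToolOverMcp` its MCP client; only the shape of the loop is the point.

```typescript
// The agent loop lives in the host, not in MCP. The model either answers
// or asks for a tool; the MCP leg is the single line marked below.
type ModelTurn =
  | { type: "answer"; text: string }
  | { type: "tool_call"; name: string; arguments: object };

function runAgentLoop(
  goal: string,
  callModel: (history: string[]) => ModelTurn,
  callToolOverMcp: (name: string, args: object) => string,
): string {
  const history: string[] = [goal];
  for (let step = 0; step < 10; step++) { // cap the loop; real hosts do too
    const turn = callModel(history);
    if (turn.type === "answer") return turn.text;
    const result = callToolOverMcp(turn.name, turn.arguments); // the MCP leg
    history.push(`tool ${turn.name} → ${result}`);
  }
  return "step limit reached";
}

// Fake model: calls one tool, then answers with the tool's output.
const answer = runAgentLoop(
  "build a form",
  (h) => h.length === 1
    ? { type: "tool_call", name: "generate_jp_form", arguments: { fields: ["name"] } }
    : { type: "answer", text: h[h.length - 1] },
  (name) => `${name} ok`,
);
console.log(answer);
```

Notice what MCP owns in this sketch: exactly one line. The termination condition, the step cap, the history format, the approval UX are all host decisions, which is why different hosts feel so different while speaking the same protocol.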
Bonus: servers can also send notifications without being asked — for example, notifications/tools/list_changed when a server hot-reloads its tools. Stateful protocol, not an HTTP endpoint.
Work through each one. Memorize the talk-track at the bottom of every card — that is the line you say when a recruiter asks "tell me about one you built."
Most designers stop at Figma. The ones who ship MCP servers stop being optional.
japan-ux-mcp doesn't just document Japanese forms — it teaches every Claude, Cursor, and Copilot user how to build them. That's leverage no Figma file matches.

npx japan-ux-mcp. A recruiter at Stripe or Notion or Figma can install your work in 30 seconds and watch it run inside their own AI workflow. That converts faster than any case study.

The manifesto, short version: the next decade of design is not Figma to Storybook to React. It is research to ontology to tool surface. The designer who can encode their domain knowledge as something an AI agent can call becomes the most valuable designer in the room. You already do that. The job is to know it cold so you can sell it cold.
Stop framing yourself as a designer who happens to code. Frame yourself as a designer who builds the surfaces AI agents work through. That is a bigger job and a rarer one.
If you can answer eight cleanly, you can hold your own. Below five, re-read the relevant section. The questions are written in the voice an actual senior recruiter or staff engineer would use.
Six concrete moves. Roughly in the order you should do them. Times are honest.
Open modelcontextprotocol.io/specification and read every page. Initialize, lifecycle, tools, resources, prompts, sampling, roots, transports. Then read the changelog for the last two protocol versions.
Why: you have shipped seven servers without doing this end to end. Doing it once moves you from "knows the pattern" to "could explain why a design choice was made". That's the difference between mid and senior in interviews.
All seven of your MCPs ship as stdio. Pick one (paypay is already partway there) and add a Streamable HTTP server entry point with bearer-token auth. Deploy to Cloudflare Workers or a small VPS.
Why: stdio is the easy mode. Remote MCP is where the field is moving (ChatGPT Apps SDK, Claude Connectors, Cursor Cloud). One remote-deployed server makes you a meaningfully different candidate.
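The auth gate is the part worth sketching, because it sits in front of everything else. All names here are illustrative: a real deployment would load the secret from the environment and hand authorized requests to the SDK's Streamable HTTP transport rather than returning a bare status code.

```typescript
// Sketch of a bearer-token gate in front of a remote (Streamable HTTP)
// MCP server: reject the request before any JSON-RPC is parsed.
const EXPECTED_TOKEN = "example-token"; // stand-in: read from env in real code

function authorize(headers: Record<string, string | undefined>): boolean {
  const auth = headers["authorization"] ?? "";
  return auth === `Bearer ${EXPECTED_TOKEN}`;
}

// Returns an HTTP status: 401 for a missing or wrong token, otherwise 200.
// In real code the 200 branch passes the POST body to the MCP transport.
function handleRequest(headers: Record<string, string | undefined>): number {
  if (!authorize(headers)) return 401;
  return 200;
}

console.log(handleRequest({ authorization: "Bearer example-token" })); // 200
console.log(handleRequest({}));                                        // 401
```

Checking the header before touching the protocol layer matters: an unauthenticated caller should never reach your tool handlers, your session state, or even your JSON parser.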
Sampling is when the server calls back to the host's LLM. Most MCPs never use it. Add it to japan-ux-mcp — for example, have audit_japan_ux ask the host to summarize the issues in the user's voice, instead of returning a static report.
Why: 95% of public MCPs ignore sampling. Implementing it once means you can talk about the second-order MCP feature most engineers haven't touched. Senior signal.
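What makes sampling distinctive is the direction: the server sends the request and the client answers. The method name and parameter shape below follow the MCP spec; the prompt text and id are illustrative.

```typescript
// A sampling/createMessage request travels server → client, the reverse of
// every other example in this doc. The host's LLM generates the completion;
// the server never holds an API key.
const samplingRequest = {
  jsonrpc: "2.0" as const,
  id: 12,
  method: "sampling/createMessage",
  params: {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "Summarize these UX audit findings in the user's own voice: ...",
        },
      },
    ],
    maxTokens: 400,
  },
};

console.log(samplingRequest.method);
```

The host is free to show this request to the user for approval before running the model, which is the same human-in-the-loop layer that guards tool calls, just pointed the other way.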
A two-page doc. What can an attacker do if they control a tool input that the model passes through? What are your gates (refunds-disabled-by-default, sandbox-by-default, idempotency keys)? What still leaks?
Why: every recruiter at a payments, fintech, or infra company will ask about safety. Having a written threat model means you don't fumble. Publish it as a SECURITY.md on the repo.
japan-ux-mcp is already on Glama and Lobehub. Push for editorial placement. Write a launch post on the MCP subreddit and Hacker News. Get it to 50+ npm weekly downloads. Hit one of the curated registries' featured lists.
Why: "shipped" is good. "Shipped and used by other people" is much better. Distribution proof shortens every interview by 10 minutes.
Examples: an MCP that audits a Figma file for accessibility against WCAG 2.2. An MCP that returns the Material You / iOS HIG / shadcn equivalents of a given component. An MCP that scores a UI screenshot against a brand's design tokens. Pick one. Ship in two weeks.
Why: the seven you have are useful but most are not designer-native (transcripts, payments, search). One that only a designer could conceive of becomes the "this is who I am" repo at the top of your GitHub.
Don't try to do everything at once. Tier the bets so the critical-now stuff actually gets done.
The summary. You already have the rare thing — shipped MCP infra and a designer's eye. The work now is not "build more". The work is knowing what you built well enough to defend it under pressure, then putting one or two pieces in places where the right people see them.
The protocol is going to be obvious in two years. Your unfair advantage is being early and being a designer at the same time. Don't waste either.