FIELD MANUAL · COMPILED APRIL 2026 · FOR MARSEL

What MCP actually is, and why a designer who ships it has leverage.

A working manual on the Model Context Protocol, built around your seven shipped MCP servers. Designed so a recruiter cannot trip you. Read once cold, return as reference.

7 · MCPs you have shipped
10 · Tools in your biggest server
2024 · Year MCP was open-sourced
3 · Primitives you need to know
METHOD

How this manual was built.

Real spec, real code from your repos, recruiter-grade phrasing. No filler.

SOURCE 1
Your seven MCP repos. japan-ux, rippr, paypay-mcp, rakuten-mcp, xendit-mcp, japan-mcp-servers (line + freee + rakuten), pdf-it. Read in full: package.json, README, PRD, source where useful.
SOURCE 2
The MCP specification. JSON-RPC 2.0 framing, capability negotiation, the three primitives, transports (stdio, Streamable HTTP). The terms here match what an interviewer expects to hear back.
SOURCE 3
Recruiter pattern. Drilled against the questions a JP gaishikei or AU big-tech recruiter actually asks: "what is MCP", "why didn't you just write a library", "how do you stop tools from leaking secrets", "what's a tool versus a resource".
FORMAT
Definitions first, then mental models, then your code, then the recruiter answers. One quiz at the end so you know if you actually have it.
YOUR JOB
Read the section. For each shipped MCP, rehearse the talk-track aloud once. Do the quiz. If you fail any question, return to that section and re-read.
ANSWER

The 60-second answer if a recruiter asks "what is MCP".

Memorize this. Everything below is the proof.

ONE LINE
MCP is an open standard for how an AI model talks to external tools and data. Anthropic introduced it in November 2024. Since then OpenAI, Google, and Microsoft have adopted it. It is the integration layer for the agent era.
ANALOGY
It is USB-C for AI. Before MCP, every assistant built its own bespoke integration to every tool. That's an M times N explosion. With MCP, every model speaks one protocol and every tool exposes one protocol. The integration cost collapses to M plus N.
SHAPE
A client-server protocol over JSON-RPC 2.0. Servers expose three things: Tools (actions the model can call), Resources (data the app can read), Prompts (templates the user can pick). Transports are stdio for local processes and Streamable HTTP for remote.
WHY YOU
"I'm a product designer, but I've shipped seven MCP servers to npm. They wrap things I actually use. japan-ux-mcp encodes Japanese UX conventions so AI generates correct forms. rippr gives Claude access to YouTube transcripts. paypay-mcp wraps a real payment API with safety gates. The reason I built these is the same reason I think MCP wins: it lets a designer extend AI's capability without asking an engineer."
WHY IT MATTERS
Because the next decade of software is agents calling tools. Whoever owns that interface owns where value gets made. MCP is becoming that interface.
The model is the engine. The context window is the cabin. MCP is every road, port, and gas station the engine can reach. — how to think about it
WHAT

What MCP actually is, in three layers.

Most explanations stop at the analogy. A recruiter will push past the analogy. Here is what you say next.

Layer 1 · Definition

MCP (Model Context Protocol) is an open specification, released by Anthropic in November 2024, that defines how an AI application gives a language model access to external context and tools. It is transport-agnostic, schema-driven, and built on JSON-RPC 2.0.

Layer 2 · The problem it solves

Before MCP, "tool use" meant every chat app and every IDE built one-off integrations to every API. Slack to ChatGPT, Slack to Claude, Slack to Cursor, Gmail to ChatGPT, Gmail to Claude, and so on. That is M models times N tools. Each integration broke whenever either side changed. None were reusable.

PRE-MCP · M × N
  • Every host builds its own tool plugins
  • Every tool team writes one integration per host
  • No shared schemas. No discovery
  • Auth and transport reinvented per integration
  • Cost grows multiplicatively
  • OpenAI Plugins, ChatGPT actions, Claude tool-use SDK all incompatible
Result: Most useful integrations never get built
WITH MCP · M + N
  • Every host implements one client
  • Every tool publishes one server
  • Shared JSON-RPC schemas, shared discovery
  • Transport options are baked in (stdio, HTTP)
  • Cost grows additively
  • Your japan-ux-mcp works in Claude, Cursor, Windsurf, Zed, Cline, VS Code unchanged
Result: Long tail of integrations becomes feasible
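The cost collapse is plain arithmetic. A quick sketch with illustrative numbers (the host and tool counts are made up, not a census of the ecosystem):

```typescript
// Integration count before and after a shared protocol.
// Pre-MCP: every host needs its own bespoke adapter for every tool.
// With MCP: each host ships one client, each tool ships one server.
function preMcpIntegrations(hosts: number, tools: number): number {
  return hosts * tools; // one adapter per (host, tool) pair
}

function withMcpIntegrations(hosts: number, tools: number): number {
  return hosts + tools; // one client per host, one server per tool
}

// 10 hosts and 500 tools: 5,000 bespoke adapters versus 510 implementations.
console.log(preMcpIntegrations(10, 500));
console.log(withMcpIntegrations(10, 500));
```

Note the asymmetry in who pays: pre-MCP, the cost lands on whoever wants the integration; post-MCP, each side pays once and every pairing comes for free.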

Layer 3 · The three primitives

MCP servers expose three primitive types. The distinction is not academic — it controls who decides when each one is used.

Tools — model-controlled actions

DECIDED BY · The model
SHAPE · Function with a JSON Schema
CAN HAVE · Side effects

A tool is a function the model can choose to invoke during a turn. The server describes its name, description, and input schema. The host shows the model what tools exist, the model emits a structured call, the host runs it, the result goes back.

Example from your code: create_invoice in xendit-mcp creates a real Xendit payment invoice. generate_jp_form in japan-ux-mcp returns Japanese form markup. rip_transcript in rippr fetches a YouTube transcript and saves it.

Mental model: If the LLM should be able to do something, it's a tool. Tools are how an agent acts on the world.

Resources — application-controlled data

DECIDED BY · The host app or user
SHAPE · Read-only data at a URI
CAN HAVE · No side effects

A resource is a piece of context exposed at a URI like japan-ux://typography-guide. The host (Claude Desktop, Cursor) decides whether to load it into the conversation. The model does not call resources directly the way it calls tools. The user or app says "include this."

Example from your code: japan-ux-mcp exposes nine resources — keigo guide, era calendar, color guide, layout guide, trust checklist. Pure reference data the model reads to do its job better.

Mental model: If it's something the user or app should attach to context, like a doc or a file, it's a resource. Tools do, resources describe.

Prompts — user-triggered templates

DECIDED BY · The user
SHAPE · Templated message with arguments
CAN HAVE · No execution

A prompt is a parametrized message template the user can pick from a slash menu. The server defines the template. The user fills in the arguments. The host expands the prompt into the conversation.

Example from your code: japan-ux-mcp ships japan_form, japan_audit, japan_keigo. paypay-mcp ships accept_single_payment, refund_last_payment, debug_stuck_payment. These appear as slash commands in the host UI.

Mental model: Prompts are reusable jumpstarts the user invokes. Tools are runtime; resources are reference; prompts are starter messages.
Tools act. Resources describe. Prompts start. If you can keep that triangle straight, you can answer 80 percent of architecture questions. — rule of thumb
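One way to keep the triangle automatic is to model who controls each primitive as a type. A hypothetical sketch (the field names here are mnemonic, not the spec's wire format):

```typescript
// Who decides when each MCP primitive is used — the distinction that matters.
type Primitive =
  | { kind: "tool"; decidedBy: "model"; mayHaveSideEffects: true }     // acts
  | { kind: "resource"; decidedBy: "host"; mayHaveSideEffects: false } // describes
  | { kind: "prompt"; decidedBy: "user"; mayHaveSideEffects: false };  // starts

// Examples mapped from the servers above.
const createInvoice: Primitive = { kind: "tool", decidedBy: "model", mayHaveSideEffects: true };
const keigoGuide: Primitive = { kind: "resource", decidedBy: "host", mayHaveSideEffects: false };
const japanForm: Primitive = { kind: "prompt", decidedBy: "user", mayHaveSideEffects: false };
```

If you find yourself writing a "tool" that only returns static reference text, it probably wants to be a resource; if a "resource" mutates anything, it is a tool wearing the wrong hat.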
WHY

Why MCP is the layer that wins.

Three structural reasons. Memorize them in order — they build on each other.

1 · Standards beat features in late-stage platforms

OpenAI's Plugins (2023) had a head start and died in part because every plugin worked only inside ChatGPT. Anthropic learned from that. MCP launched as an open spec with reference SDKs in TypeScript and Python on day one. Within twelve months, OpenAI shipped MCP support in the Agents SDK, Microsoft added it to Copilot Studio, and Google adopted it for Gemini. The cost of being incompatible became too high. That is how protocols win.

2 · The bottleneck moved from "can the model do it" to "can the model reach it"

By 2025, frontier models could one-shot most tasks if the relevant context was in front of them. The hard part stopped being intelligence and became access — to your codebase, your calendar, your CRM, your Figma file, your team's wiki. MCP is the only credible standard for that access layer. You see this in the listings: 5,000+ public MCP servers on Glama, Smithery, and Lobehub by early 2026.

3 · Agents are tool-call sequences, and MCP is the tool-call substrate

An agent is, structurally, a model that takes a goal and runs a loop of tool calls until the goal is met. Every leg of that loop is an MCP-shaped operation. Whoever provides clean, secure, well-described MCP servers becomes the distribution layer for agent capability. That is why every infrastructure company — Cloudflare, Vercel, Stripe, Notion, Linear, GitHub — has shipped or is shipping an MCP server. They don't want to be a feature of the agent; they want to be the substrate.

The OS layer of agents is being decided right now. MCP is winning that vote. A designer who ships even one server is no longer a passenger. — the strategic argument
ARCHITECTURE

Host, Client, Server. Drawn so it sticks.

If you confuse host, client, and server in an interview, you lose. Drill this until it's automatic.

HOST APPLICATION (Claude Desktop, Cursor, Zed, VS Code)
  · LLM (Claude / GPT / Gemini) — decides which tool to call
  · MCP CLIENT, one per server — speaks JSON-RPC
      ↕ stdio for local processes · Streamable HTTP for remote · always JSON-RPC 2.0
MCP SERVER A · japan-ux-mcp    MCP SERVER B · paypay-mcp    MCP SERVER C · rippr
Host owns the LLM. One client per server. Each server runs as its own process.

Definitions, drilled

HOST
The application the user actually sees. Claude Desktop, Cursor, Zed, Claude Code, VS Code with Copilot. The host owns the LLM and the conversation. It decides which servers to launch, when to ask the user for tool approval, and how to render results.
CLIENT
An object inside the host, one per connected server. The client speaks the MCP wire protocol. It does the JSON-RPC framing, lifecycle handshake, and capability negotiation. Most developers never write a client — they import one from the SDK.
SERVER
The process you actually build. It declares its tools, resources, and prompts and waits for requests. Servers are the focus of the ecosystem because they expose new capability. Your seven MCPs are seven servers.
TRANSPORT
How bytes move between client and server. Two real options today: stdio for local subprocesses (host launches your server, talks over stdin/stdout), and Streamable HTTP for remote (host POSTs JSON-RPC to your URL). Most npm-published MCPs ship as stdio because installing is just npx.
JSON-RPC 2.0
The wire format. Every message is a request, response, or notification with a method name and JSON params. MCP didn't invent a new protocol — it just chose JSON-RPC, which has been stable since 2010. Recruiter signal: knowing this means you've actually read the spec.
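The framing is small enough to sketch in a few lines. A minimal, stdlib-only illustration of the three message shapes (this is not the SDK, just the wire grammar):

```typescript
// JSON-RPC 2.0 as MCP uses it: requests carry an id, notifications do not,
// and responses echo the id of the request they answer.
type JsonRpcRequest = { jsonrpc: "2.0"; id: number; method: string; params?: object };
type JsonRpcNotification = { jsonrpc: "2.0"; method: string; params?: object };
type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number;
  result?: object;
  error?: { code: number; message: string };
};

let nextId = 1;
function request(method: string, params?: object): JsonRpcRequest {
  return { jsonrpc: "2.0", id: nextId++, method, ...(params ? { params } : {}) };
}

function respond(req: JsonRpcRequest, result: object): JsonRpcResponse {
  return { jsonrpc: "2.0", id: req.id, result };
}

const listReq = request("tools/list");
const listRes = respond(listReq, { tools: [] });
// listRes.id === listReq.id — responses are matched to requests by id,
// which is what lets several requests be in flight on one connection.
```

The notification shape (no `id`, so no reply expected) is what carries server-initiated traffic like tool-list-changed events.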

Recruiter trap to avoid

"So an MCP server is like a REST API?" No. It's stateful. The server keeps a session per client connection, supports server-initiated notifications (tool list changed, resource updated), and even lets the server ask the host's LLM for completions (sampling). That's a real protocol, not an HTTP endpoint.

LIFECYCLE

What happens between "user asks" and "tool runs".

If you can walk through these five steps without notes, you can answer any "how does it work" question.

01
Initialize · the handshake
client → server → client

Host launches the server (for stdio, that's a child process). The client sends an initialize request announcing its protocol version and capabilities. The server replies with its own protocol version, server info, and the capabilities it supports.

Wire shape

// client → server
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "roots": {}, "sampling": {} },
    "clientInfo": { "name": "claude-code", "version": "1.x" }
  }
}

Why it matters: capabilities is how each side says "I support tools, but not sampling" or "I support resources with subscriptions." A correct server checks these flags before sending optional features.
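A correct server gates optional traffic on what the client declared. A hedged sketch of that check (the capability shape is simplified; presence of the key, even as an empty object, signals support):

```typescript
// After initialize, each side knows what the other supports.
// A server should never send sampling requests to a client
// that did not declare the sampling capability.
type ClientCapabilities = { roots?: object; sampling?: object };

function canUseSampling(caps: ClientCapabilities): boolean {
  return caps.sampling !== undefined;
}

const negotiated: ClientCapabilities = { roots: {}, sampling: {} };
console.log(canUseSampling(negotiated));   // this client supports sampling
console.log(canUseSampling({ roots: {} })); // this one does not — skip the feature
```

This is the interview-grade detail: capability negotiation is why an MCP server can run unchanged in hosts with very different feature sets.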

02
List tools, resources, prompts
discovery

Once initialized, the client asks tools/list, resources/list, prompts/list. The server returns descriptors with names, human descriptions, and JSON Schema for inputs. Those descriptions go straight to the LLM as system context.

// client → server
{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }

// server → client
{
  "jsonrpc": "2.0", "id": 2,
  "result": {
    "tools": [{
      "name": "generate_jp_form",
      "description": "Outputs Japanese form markup with sei/mei order, furigana...",
      "inputSchema": { "type": "object", "properties": { "fields": {...} } }
    }]
  }
}

Why your tool description matters: the model never reads your code. It reads this string. Every tool description in your repos is what an LLM uses to decide whether to call it. That's why japan-ux-mcp ships descriptions in English and Japanese.

03
Model decides to call a tool
happens entirely inside the host

User asks: "Build me a Japanese signup form with name, email, and phone." The host serializes the available tools into the LLM call. The LLM produces a structured tool-use response naming generate_jp_form with arguments. This is model-internal. MCP is not invoked yet.

The host receives the tool-use intent and now needs to actually run the tool. That's where the client makes its second request to the server.

04
tools/call · the actual invocation
the moment of truth

The client sends a tools/call request. The server validates the arguments against its schema, executes the handler, and returns a result with one or more content blocks (text, image, audio, or resource references).

// client → server
{
  "jsonrpc": "2.0", "id": 7,
  "method": "tools/call",
  "params": {
    "name": "generate_jp_form",
    "arguments": { "fields": ["name", "email", "phone"], "framework": "react" }
  }
}

// server → client
{
  "jsonrpc": "2.0", "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "<form><label>姓..." }],
    "isError": false
  }
}

Critical detail: the host shows the user this tool call before running it (in most safety-conscious hosts). The user can approve, deny, or always-allow. That's the layer where prompt-injection defense lives — your paypay-mcp goes further by gating refunds and cancels behind environment variables.
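Server-side, tools/call reduces to: look up the handler, validate the arguments against the declared schema, run, wrap the output in content blocks. A stripped-down sketch (the real SDKs do this plumbing for you; the validation here is a stand-in for a full JSON Schema check):

```typescript
// Minimal tools/call dispatcher: validate, execute, wrap.
type ToolResult = { content: { type: "text"; text: string }[]; isError: boolean };
type Tool = {
  requiredArgs: string[]; // stand-in for a real JSON Schema validation
  handler: (args: Record<string, unknown>) => string;
};

const tools: Record<string, Tool> = {
  generate_jp_form: {
    requiredArgs: ["fields"],
    handler: (args) => `<form><!-- ${(args.fields as string[]).join(", ")} --></form>`,
  },
};

function callTool(name: string, args: Record<string, unknown>): ToolResult {
  const tool = tools[name];
  if (!tool) {
    return { content: [{ type: "text", text: `Unknown tool: ${name}` }], isError: true };
  }
  const missing = tool.requiredArgs.filter((k) => !(k in args));
  if (missing.length > 0) {
    return { content: [{ type: "text", text: `Missing args: ${missing.join(", ")}` }], isError: true };
  }
  return { content: [{ type: "text", text: tool.handler(args) }], isError: false };
}
```

Note that failures come back as `isError: true` results rather than protocol errors — the model can read them and retry with corrected arguments, which is exactly what you want in an agent loop.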

05
Result returns to the model, loop continues
agent loop until done

The host injects the tool result back into the conversation as a "tool result" message. The model sees it, decides whether the goal is met, and either answers the user or calls another tool. That loop — model, tool call, result, model — is what an agent is.

MCP doesn't define the loop. The loop is the host's job. MCP defines what each leg of the loop looks like on the wire.
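The loop the host runs fits in a dozen lines. A toy version with a stubbed model, purely illustrative — real hosts add approval UI, streaming, and token budgets:

```typescript
// A toy agent loop: ask the model, run any tool call it emits,
// feed the result back, stop when the model answers directly.
type ModelTurn = { toolCall?: { name: string; args: string }; answer?: string };

function runAgent(
  model: (history: string[]) => ModelTurn,
  runTool: (name: string, args: string) => string,
  maxSteps = 5,
): string {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const turn = model(history);
    if (turn.answer !== undefined) return turn.answer; // goal met, loop ends
    if (turn.toolCall) {
      // Each leg here is a tools/call on the wire — MCP's territory.
      history.push(runTool(turn.toolCall.name, turn.toolCall.args));
    }
  }
  return "step limit reached";
}

// Stub model: calls one tool, then answers with its result.
const answer = runAgent(
  (history) =>
    history.length === 0
      ? { toolCall: { name: "echo", args: "hi" } }
      : { answer: history[0] },
  (_name, args) => `tool said: ${args}`,
);
console.log(answer); // "tool said: hi"
```

The `maxSteps` cap is the host's policy, not the protocol's — another concrete example of "MCP defines the legs, the host defines the loop."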

Bonus: servers can also send notifications without being asked — for example, notifications/tools/list_changed when a server hot-reloads its tools. Stateful protocol, not an HTTP endpoint.

MY MCPS

Your seven shipped MCPs, with the recruiter answer for each.

For each of the seven, memorize its talk-track. That is the line you say when a recruiter asks "tell me about one you built."

DESIGNER

Why a product designer who ships MCPs has uncommon leverage.

Most designers stop at Figma. The ones who ship MCP servers stop being optional.

REASON 01
You become the agent's senses, not its decoration
A designer who only ships screens is downstream. A designer who ships an MCP server shapes what the AI can perceive and do. japan-ux-mcp doesn't just look at Japanese forms — it teaches every Claude/Cursor/Copilot user how to build them. That's leverage no Figma file matches.
Recruiter signal: shipped craft, not slides
REASON 02
It collapses the distance between research and code
You catalogued 47 prefectures, 30 keigo patterns, 25 seasonal events. Pre-MCP, that lives in a Notion doc nobody reads. Post-MCP, it's executable. Your research becomes the thing the model uses. Designers who can encode their craft as a tool surface are 10x more useful than designers who can only describe it.
Skill bundle: domain depth + shipping
REASON 03
Distribution is one command
A designer's portfolio used to be a deck. Now it can be npx japan-ux-mcp. A recruiter at Stripe or Notion or Figma can install your work in 30 seconds and watch it run inside their own AI workflow. That converts faster than any case study.
Cost to install: 30 seconds
REASON 04
You sit at the table where AI safety gets decided
Every money-moving tool, every destructive action, every PII surface flows through a server author's hands. You decided to disable refunds by default in paypay-mcp. You decided to refuse live keys in xendit-mcp. Those are product-design decisions with consequences. That's senior work.
Senior signal: defaults that protect users
REASON 05
Bilingual tool descriptions are a job qualification
paypay-mcp ships English and Japanese tool descriptions. That maps directly to N2 readiness, Japanese-market product work, and the "design for global teams" line every gaishikei recruiter wants to hear. The MCP is the proof.
JP-market positioning: baked in
REASON 06
You stop competing with junior designers
When the resume bullet is "shipped seven MCP servers, one with 10 tools and bilingual descriptions, currently listed on Glama and Lobehub," you're not in the same pool as someone with a Figma case study. You're in the pool labeled "designer who can also ship infra." That pool is small and well-paid.
Compensation impact: 30-50% premium

The manifesto, short version. The next decade of design is not Figma to Storybook to React. It is research to ontology to tool surface. The designer who can encode their domain knowledge as something an AI agent can call becomes the most valuable designer in the room. You already do that. The job is to know it cold so you can sell it cold.

A designer who ships an MCP server is no longer downstream of engineering. They are upstream of the model.

Stop framing yourself as a designer who happens to code. Frame yourself as a designer who builds the surfaces AI agents work through. That is a bigger job and a rarer one.

QUIZ

Recruiter pressure-test. 10 questions.

If you can answer eight cleanly, you can hold your own. Below five, re-read the relevant section. The questions are written in the voice an actual senior recruiter or staff engineer would use.

DEEPER

How to go from "shipped seven" to "MCP is one of my professional axes".

Six concrete moves. Roughly in the order you should do them. Times are honest.

01

Read the spec end to end. Once.

Open modelcontextprotocol.io/specification and read every page. Initialize, lifecycle, tools, resources, prompts, sampling, roots, transports. Then read the changelog for the last two protocol versions.

Why: you have shipped seven servers without doing this end to end. Doing it once moves you from "knows the pattern" to "could explain why a design choice was made". That's the difference between mid and senior in interviews.

3 hrs · One sitting
02

Add Streamable HTTP transport to one of your servers

All seven of your MCPs ship as stdio. Pick one (paypay is already partway there) and add a Streamable HTTP server entry point with bearer-token auth. Deploy to Cloudflare Workers or a small VPS.

Why: stdio is the easy mode. Remote MCP is where the field is moving (ChatGPT Apps SDK, Claude Connectors, Cursor Cloud). One remote-deployed server makes you a meaningfully different candidate.

1 weekend · Real deploy
03

Implement sampling in one server

Sampling is when the server calls back to the host's LLM. Most MCPs never use it. Add it to japan-ux-mcp — for example, have audit_japan_ux ask the host to summarize the issues in the user's voice, instead of returning a static report.

Why: 95% of public MCPs ignore sampling. Implementing it once means you can talk about the second-order MCP feature most engineers haven't touched. Senior signal.

1 day · Niche but high-signal
04

Write a prompt-injection threat model for paypay-mcp

A two-page doc. What can an attacker do if they control a tool input that the model passes through? What are your gates (refunds-disabled-by-default, sandbox-by-default, idempotency keys)? What still leaks?

Why: every recruiter at a payments, fintech, or infra company will ask about safety. Having a written threat model means you don't fumble. Publish it as a SECURITY.md on the repo.

3 hrs · Recruiter killer
05

Get one MCP into a public registry's "featured" tier

japan-ux-mcp is already on Glama and Lobehub. Push for editorial placement. Write a launch post on the MCP subreddit and Hacker News. Get it to 50+ npm weekly downloads. Hit one of the curated registries' featured lists.

Why: "shipped" is good. "Shipped and used by other people" is much better. Distribution proof shortens every interview by 10 minutes.

2 weeks · Promotion work
06

Build a non-obvious MCP that only a designer would notice

Examples: an MCP that audits a Figma file for accessibility against WCAG 2.2. An MCP that returns the Material You / iOS HIG / shadcn equivalents of a given component. An MCP that scores a UI screenshot against a brand's design tokens. Pick one. Ship in two weeks.

Why: the seven you have are useful but most are not designer-native (transcripts, payments, search). One that only a designer could conceive of becomes the "this is who I am" repo at the top of your GitHub.

2 weeks · Identity piece
ARM UP

Preparing for the agentic future, by horizon.

Don't try to do everything at once. Tier the bets so the critical-now stuff actually gets done.

NOW · NEXT 60 DAYS
Lock down the recruiter-proof story.
  • Memorize the 60-second answer from the top of this manual. Say it aloud, on camera, until it doesn't sound rehearsed.
  • Pass this quiz at 9/10 on a cold attempt, two weeks apart. If you can't, you don't know it yet.
  • Add a one-line README banner to your top three MCP repos: "Built by a product designer in Tokyo. Installable in 30 seconds."
  • Pin your three best MCPs on github.com/mrslbt with hand-written descriptions, not the default repo blurbs.
  • Make a 90-second screen recording of japan-ux-mcp working inside Claude Code. Embed in your portfolio site.
THIS QUARTER · 3 MONTHS
Move from local-only to remote-deployed.
  • Ship one Streamable HTTP MCP behind a real domain. Cloudflare Workers + Durable Objects is the cheapest path.
  • Implement OAuth on one server. Claude Connectors, ChatGPT Apps, and Cursor all want this. Most MCPs don't have it. Yours will.
  • Add sampling and roots to japan-ux-mcp. Write a short blog post: "What changed when I added sampling."
  • Open one PR to the official MCP TypeScript SDK or modelcontextprotocol/servers. A documentation fix is fine. The point is "I'm in the contributor list."
THIS YEAR · 12 MONTHS
Become legible as the JP designer who builds agent infra.
  • Speak once at MCP Tokyo, AI Engineer Tokyo, or a smaller Japan-tech meetup. Topic: "Encoding cultural design rules as MCP servers."
  • Write three posts on Zenn or your own blog with bilingual versions. Topics: keigo as ontology, why Japanese forms break Western-trained AIs, MCP safety for payment APIs.
  • Hit 1k weekly downloads across your npm-published MCPs combined.
  • Land one paid client engagement where the deliverable is "a custom MCP for their product." Even at low rate, the proof is bigger than the money.
2027–2028 · 24 MONTHS
Position for what comes after the protocol war.
  • Watch what stabilizes: by 2027 there will be a clear answer on remote MCP auth, on agent-to-agent (A2A) protocols, on MCP versus alternative substrates. Don't bet on any single one — bet on being conversant in all of them.
  • Build at the agent-orchestration layer. Tools that compose MCP servers. Personal agents that route across your seven servers automatically. Workflow engines that let a non-coder wire MCPs together. That's where the next pull happens.
  • Decide on a market: JP local, JP gaishikei, AU, SG. By 2028, "designer with shipped MCP infra" will be a normal thing. The differentiator becomes which market you own. Pick one.
  • Avoid the trap of "MCP everywhere". Some problems aren't MCP problems. Don't solve every domain with a server. Some problems want a CLI, some want a dashboard, some want a Figma plugin. Designers who only have a hammer become a tax.

The summary. You already have the rare thing — shipped MCP infra and a designer's eye. The work now is not "build more". The work is knowing what you built well enough to defend it under pressure, then putting one or two pieces in places where the right people see them.

Seven MCPs is a portfolio. Knowing them cold is a job. Putting two of them where Stripe, Notion, or Figma's hiring managers can find them is a career.

The protocol is going to be obvious in two years. Your unfair advantage is being early and being a designer at the same time. Don't waste either.