
Choosing the Right Projection

The library-vs-service question has been around for decades. Import the code directly or call it over a network boundary? The tradeoffs are well-understood: libraries are fast and tightly coupled; services are slower and loosely coupled. There’s been a healthy conversation about when microservices are worth the overhead and when a monolith is the right starting point [1].

What’s changed is the caller landscape. It used to be applications calling other applications. Now it’s AI agents calling tools, humans calling CLIs, applications importing libraries, and remote services calling APIs — all reaching the same underlying capability. The question isn’t library or service anymore. It’s: which integration point is right for this caller in this context?

We’ve been working through this with our own tools. We don’t have a complete framework — the landscape is still forming. But we’ve made enough concrete decisions to share what we’ve found.

MCP-to-Library, Not MCP-to-MCP

LangLearn is an orchestrator that composes three independent components: langlearn-tts for speech synthesis, langlearn-anki for flashcard generation, and langlearn-imagegen for visual assets. Each component follows the universal access pattern — library, CLI, MCP server, and REST from a single codebase.

The orchestrator imports the libraries. It doesn’t call the MCP servers.

This was a deliberate decision. All four components run on the same machine, in the same process. Calling langlearn-tts over MCP would mean: serialize the request to JSON, send it over stdio, deserialize it in the MCP server, call the library function, serialize the response, send it back, deserialize it in the orchestrator. For a function that takes a string and returns an audio file path, that’s pure overhead.

The library import skips all of it. Same function call, no transport layer, no serialization boundary. When the caller and the capability share a process, the library is the right projection.
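The overhead is easy to see in miniature. Below is a toy sketch (names like `synthesize` are ours, standing in for langlearn-tts, not the real API) that contrasts the direct library call with the serialize/transport/deserialize round-trip an in-process MCP call would impose:

```python
import json

# Hypothetical core library function -- stands in for langlearn-tts.
def synthesize(text: str) -> str:
    """Return the path of a (pretend) rendered audio file."""
    return f"/tmp/audio/{hash(text) & 0xFFFF}.mp3"

# Library projection: one function call, no boundary.
direct = synthesize("hola")

# The same call routed through an MCP-style stdio boundary.
# Every hop below is pure overhead when caller and capability
# already share a process.
request = json.dumps({"tool": "synthesize", "args": {"text": "hola"}})  # serialize request
parsed = json.loads(request)                                            # deserialize in server
result = synthesize(**parsed["args"])                                   # the actual work
response = json.dumps({"path": result})                                 # serialize response
via_mcp = json.loads(response)["path"]                                  # deserialize in caller

assert direct == via_mcp  # same answer, five extra steps
```

Both paths produce the same audio path; the second just pays four serialization hops to get it.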

But the MCP servers still exist. When a human uses langlearn-tts from Claude Code, they reach it through MCP. When langlearn-tts runs as a standalone Claude Desktop extension, it’s an MCP server. The projection that’s wrong for tight composition is right for agent integration. Both projections exist because different callers need different things.

Quarry: Local Deployment, and a Path to Remote

Quarry is a semantic search tool. Locally, it indexes your files — PDFs, source code, spreadsheets, 30+ formats — and runs entirely offline. No API keys, no cloud dependency. The embedding model downloads once and everything stays on your machine.

We haven’t built a hosted version. But the library doesn’t know whether it’s running on a laptop or in a container, and we think the same search() function could back a remote service — one with access to licensed content like market research databases or industry reports. If we did build it, we’d expect the projection to change (local CLI to remote REST API) while the core search logic stays the same. The business model would follow from the deployment: local is free because you bring your own data; remote would be paid because the operator brings licensed content.
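A minimal sketch of that deployment-neutral core, using stdlib only and hypothetical names (`search`, `cli`, `rest_handler` are illustrations, not Quarry's actual API): the same function sits behind a local CLI wrapper and the handler a remote REST framework would mount.

```python
import argparse
from typing import List

# Hypothetical core -- stands in for Quarry's search logic.
# It has no idea which projection is calling it.
def search(query: str, top_k: int = 5) -> List[str]:
    corpus = ["local notes on MCP", "market sizing report", "embedding model card"]
    return [doc for doc in corpus if any(w in doc for w in query.split())][:top_k]

# Local projection: a thin CLI wrapper, free, user's own data.
def cli(argv: List[str]) -> List[str]:
    parser = argparse.ArgumentParser(prog="quarry")
    parser.add_argument("query")
    parser.add_argument("--top-k", type=int, default=5)
    args = parser.parse_args(argv)
    return search(args.query, args.top_k)

# Remote projection: the handler a REST framework would mount.
# Auth and metering would wrap here; search() itself is untouched.
def rest_handler(payload: dict) -> dict:
    return {"results": search(payload["query"], payload.get("top_k", 5))}

assert cli(["MCP"]) == rest_handler({"query": "MCP"})["results"]
```

The hypothesis is exactly this shape: swapping `cli` for `rest_handler` is a deployment decision, and the business model hangs off the wrapper, not the core.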

This is how SaaS has always worked. What the universal access pattern adds is that the transition should be a deployment decision rather than an architectural rebuild. We haven’t proven that with Quarry yet — it’s a hypothesis, not a case study.

The Decision Framework

When we’re deciding how one component should reach another, we ask three questions:

Where does the caller run? If it’s the same process, import the library. If it’s the same machine but a different process (Claude Code spawning a tool), MCP over stdio. If it’s a different machine, REST or MCP over Streamable HTTP.

What is the caller? In our stack, AI agents integrated with MCP-aware hosts (Anthropic’s Claude Code [2] and Claude Desktop) reach for MCP. Applications speak REST or import directly. Humans, shell scripts, and CI pipelines speak CLI. Matching the projection to the caller means no one is forced through an interface designed for someone else.

What are the economics? A free local tool and a paid remote service can share the same library. The projection and deployment determine who pays for what. Local projections (CLI, stdio MCP, library import) run on the user’s hardware at the user’s cost. Remote projections (REST, Streamable HTTP MCP) run on the operator’s infrastructure and can be metered.

These questions aren’t always independent. A paid service needs authentication, which means the REST or remote MCP projection, which means a deployment pipeline — each decision constrains the next. When questions conflict — say, a CI pipeline (CLI caller) running on a remote host (REST by location) — location tends to win in our experience, because the transport constraint is harder to work around than the interface preference. But we’ve only encountered a few of these conflicts so far.
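The first two questions can be caricatured as a lookup. This is a toy encoding, not a real API: the names are ours, and real decisions weigh the economics too. Location is checked first, matching the observation that transport constraints tend to win over interface preferences.

```python
# Hypothetical helper encoding the decision framework's first two questions.
def pick_projection(location: str, caller: str) -> str:
    if location == "same_process":
        return "library import"          # no transport, no serialization
    if location == "same_machine":
        return "MCP over stdio" if caller == "agent" else "CLI"
    # Different machine: remote projections, which can be metered.
    return "MCP over Streamable HTTP" if caller == "agent" else "REST"

assert pick_projection("same_process", "application") == "library import"
assert pick_projection("same_machine", "agent") == "MCP over stdio"
# The conflict from the text: a CLI-style caller on a remote host -- location wins.
assert pick_projection("different_machine", "ci_pipeline") == "REST"
```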

Starting from the caller and working outward has helped us avoid overbuilding. We scaffold all four projection surfaces from the start with punt init (see the companion post), but the scaffolding is thin — the real work is in the library, and each projection only gets fleshed out when a caller actually needs it.

What We Don’t Know Yet

The agent landscape is moving fast. Anthropic’s MCP [3] is less than two years old. Streamable HTTP transport replaced SSE in 2025. The patterns we’re describing work for our current tools, but we’re a small team with a handful of projects. We don’t know how this scales to large systems with dozens of capabilities composed across trust boundaries. We don’t know how the protocol will evolve.

We believe building the library first and projecting it outward gives flexibility — if the landscape shifts, the library stays the same and the projections adapt. But we haven’t yet had to absorb a major protocol change with these tools. That claim is still a prediction based on the architecture, not a proven outcome.

That’s the practical value. Not a grand theory — just a set of decisions that have worked, shared in case they’re useful to others building for the same landscape.

References

  1. DHH. “The Majestic Monolith.” Signal v. Noise, 2016. m.signalvnoise.com
  2. Anthropic. “Claude Code.” 2024–present. docs.anthropic.com
  3. Anthropic. “Model Context Protocol.” 2024–present. modelcontextprotocol.io