# It’s not about the API, it’s about the design <sup>Date: May 27, 2025</sup>

<iframe src="https://www.youtube.com/embed/eeOANluSqAE"> </iframe>

*Notes from David Gomes’ [presentation](https://youtu.be/eeOANluSqAE) at Microsoft Build 2025*

When I first encountered the Model Context Protocol (MCP), I’ll admit I was a bit skeptical. As a developer, I thought I’d rather just call an API directly. MCP seemed like unnecessary complexity when I could already access the functionality I needed through existing endpoints. But this presentation from [David Gomes](https://neon.tech) at Microsoft Build 2025 showed me I was missing the bigger picture. The real benefit of MCP comes from standardizing and distributing the design: the prompts, the tool descriptions, the curation.

## Why auto-generation falls short

David talked about the pitfalls of MCP auto-generation, and why you can’t simply map an existing API and its specification to an MCP server. Here’s the thing: REST APIs aren’t designed for LLMs—they’re designed for developers. Models have an amazing ability to understand code, learn by example, and then generate new code—even for internal or undocumented APIs they weren’t trained on. However, dumping a full API spec into the context window is wasteful—and models often perform worse when presented with more choices. This means we need to curate not just what functionality to expose, but how to present it in a way that maximizes the LLM’s ability to use it effectively.

## Designing for the LLM

MCP is fundamentally about *designing* functionality for the LLM, not just exposing it. This means combining prompt-engineering principles with thoughtful curation of specific features. David’s recommendations reinforce this approach:

- **Descriptive Guidance:** Provide clear descriptions that explain not only how to use tools, but when to use them. This contextual guidance helps LLMs make better decisions about tool selection and usage patterns.
- **LLM-First Language:** Write descriptions and documentation for the LLM audience, not for human developers or end users. This might feel counterintuitive, but LLMs parse and understand information differently than humans do. This can mean using prompting techniques like XML tagging, YAML structure, or other ways of clarifying the language structure for the model.
- **Validation Through Evals:** Implement evaluation frameworks to ensure LLMs are calling the right tools for the right jobs. Without this feedback loop, you’re optimizing blind.

Really, this guidance applies to all tool/function calls. And if you are designing the application as well, it can be helpful to add additional controls and guidance through the system prompt. The sketches below show what this looks like in practice.
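To make the first two points concrete, here is a minimal sketch of a hand-designed tool, assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`). The server, the `search_projects` tool, its description, and the stubbed lookup are all hypothetical; the point is the curated, when-to-use description rather than an auto-generated one-liner.

```typescript
// A minimal sketch of a hand-designed MCP tool, assuming the official
// TypeScript SDK (@modelcontextprotocol/sdk). The tool name, description,
// and handler are hypothetical.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "projects-demo", version: "0.1.0" });

// The description tells the model *when* to reach for this tool, not just
// what it does, and uses light XML structure the model can parse reliably.
server.tool(
  "search_projects",
  [
    "<purpose>Find a project by name or keyword before acting on it.</purpose>",
    "<when_to_use>Use this FIRST whenever the user mentions a project",
    "you have not already resolved to an id in this conversation.</when_to_use>",
    "<when_not_to_use>Do not use this to list every project; ask the user",
    "to narrow the search instead.</when_not_to_use>",
  ].join("\n"),
  {
    query: z.string().describe("Keyword or partial project name, e.g. 'billing'"),
  },
  async ({ query }) => {
    // Stub lookup; a real server would call the underlying API here.
    const matches = [{ id: "proj_123", name: `${query}-service` }];
    return {
      content: [{ type: "text", text: JSON.stringify(matches) }],
    };
  }
);

// Expose the server over stdio so any MCP client can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Contrast that description with an auto-generated `GET /projects — returns a list of projects`: the hand-written version tells the model when to reach for the tool and when to leave it alone, which is exactly the curation an API spec can’t provide.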
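And for the third recommendation, an eval over tool selection can start as a simple loop: feed the model realistic prompts and check which tool it calls. Everything here is a hypothetical sketch; `runModelWithTools` stands in for whatever client you use to request a completion with your tools attached.

```typescript
// A toy tool-selection eval, illustrating the "Validation Through Evals"
// bullet above. All names and cases here are hypothetical.
type ToolCall = { tool: string; args: Record<string, unknown> };

// Each case pairs a realistic user prompt with the tool we expect the
// model to pick for it.
const cases: { prompt: string; expectTool: string }[] = [
  { prompt: "Which project handles billing?", expectTool: "search_projects" },
  { prompt: "Delete the staging database", expectTool: "delete_database" },
];

async function runModelWithTools(prompt: string): Promise<ToolCall> {
  // Placeholder: call your model/provider here and return its first tool call.
  return { tool: "search_projects", args: { query: "billing" } };
}

async function main() {
  let passed = 0;
  for (const c of cases) {
    const call = await runModelWithTools(c.prompt);
    const ok = call.tool === c.expectTool;
    if (ok) passed++;
    console.log(`${ok ? "PASS" : "FAIL"} ${c.prompt} -> ${call.tool}`);
  }
  console.log(`${passed}/${cases.length} tool selections correct`);
}

main().catch(console.error);
```

Even a harness this small turns “the model seems to pick the right tool” into a number you can watch as you rewrite descriptions—which is the feedback loop the next section argues for.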
## The importance of evals

David’s emphasis on LLM evals for MCP development also resonates deeply with my own experience. If you are creating LLM-powered tools and you haven’t read [Hamel](https://hamel.dev/blog/posts/evals/) [Husain’s](https://hamel.dev/blog/posts/llm-judge/) [comprehensive work](https://hamel.dev/blog/posts/field-guide/) on this topic, it should be your next priority. His insights have been foundational to how I approach LLM system design.

A robust eval framework provides the foundation for rapid iteration and improvement of both prompts and MCP structure. Without evals, you are making changes blindly. With them, you can systematically optimize for the behaviors and outcomes you actually want.

Evals become even more critical in MCP development because you’re often one step removed from the end interaction. As an MCP server author, you can’t always control how clients use your tools, but you can use evals to guide their design and encourage optimal usage patterns.

## The bigger picture

MCP represents a shift toward thinking about LLM integration as a design discipline, not just a technical implementation. For developers like me who maybe didn’t initially “get it,” the key insight is that MCP isn’t about replacing direct API access. It’s about creating a better interface layer that understands how LLMs actually work and what they need to be successful.

The future of LLM tools isn’t just about the functionality we expose—it’s about how thoughtfully we can expose it.